| Column | Type | Min length | Max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2301.01996
Non-homogeneous approximation for the kurtosis evolution of shoaling rogue waves
Bathymetric changes have been experimentally shown to affect the occurrence of rogue waves. We recently derived a non-homogeneous correction to the spectral analysis, allowing us to describe the evolution of the rogue wave probability over a shoal. Here, we extend this work to the evolution of the excess kurtosis of the surface elevation, which plays a central role in estimating rare event probabilities. Furthermore, we provide an upper bound to the excess kurtosis. In intermediate and deep water regimes, a shoal does not significantly affect wave steepness or bandwidth, so that the vertical asymmetry between crests and troughs, the excess kurtosis, and the exceedance probability of wave height stay rather constant. In contrast, in shallower water, a sharp increase in wave steepness increases the vertical asymmetry, resulting in a growth of both the tail of the exceedance probability and the excess kurtosis.
Saulo Mendes, Jérôme Kasparian
2023-01-05T10:26:17Z
http://arxiv.org/abs/2301.01996v2
# Non-homogeneous approximation for the kurtosis evolution of shoaling rogue waves

###### Abstract
Bathymetric changes have been experimentally shown to affect the occurrence of rogue waves. In view of the central role played by the excess kurtosis of the surface elevation in estimating rare event probabilities, we translate the recently determined evolution of the rogue wave probability over a shoal into that of the kurtosis. We provide an effective theory that connects the non-homogeneous correction to the spectral analysis and the evolution of excess kurtosis over the shoal, as well as the kurtosis upper bound. In intermediate water depth the vertical asymmetry between crests and troughs is virtually constant throughout the shoal for a given wave steepness and bandwidth, thus not affecting the exceedance probability. Conversely, a sharp increase in steepness increases the asymmetry for waves in intermediate depth. Thus, both the tail of the exceedance probability and the excess kurtosis grow.

## I Introduction

Ocean wave statistics is at the crossroads of ocean engineering and physical oceanography. Ocean engineers are commonly concerned with both short-term and long-term wave statistics [1], while the mechanisms responsible for the formation of extreme waves are the focus in physical oceanography [2]. The unexpected observation of so-called rogue waves (also known as freak waves) over the past decades [3] reignited the cross-disciplinary interest in wave statistics. These waves seemingly "appear from nowhere" [4] and are, by statistical definition, at least twice as tall as the significant wave height. From an engineering perspective, the performance of theoretical probability models at the tail of the distribution measures their practical success. Applying the signal processing methods of Rice [5], the bulk of surface gravity waves was demonstrated to follow a Rayleigh distribution of heights [6]. Nevertheless, the Rayleigh distribution is unsuited to capture the tail of the distribution in real ocean conditions [7; 8]. On the other hand, nonlinear theories and their associated probability distributions are inaccurate in a wide range of real ocean conditions [9; 10]. These difficulties were realized early on, such that the Longuet-Higgins [11] handling of a non-Gaussian distribution of the sea surface elevation through a factorization of skewness and excess kurtosis from the normal distribution has been widely favoured. Moreover, approaches to compute surface elevation, crest and wave height distributions require methodologies that are often computationally burdensome [12]. Naturally, the excess kurtosis became the centre of wave statistics in an attempt to transfer the problem from the probability distribution to the cumulant expansion [13; 14]. The complexity of water wave solutions led to the use of excess kurtosis as an alternative to the evaluation of statistical distributions [15; 16]. To connect both frameworks, we provide an effective theory based on energy density redistribution [17] to describe the evolution of the kurtosis of wave trains travelling over a shoal. Moreover, as a key element for the amplification of rogue wave probability due to a shoal, we obtain an approximation for the vertical asymmetry between crests and troughs as a function of water depth, bandwidth and steepness. 
Rogue waves travelling past a shoal are amplified with little regard for vertical asymmetry variations when \(k_{p}h>0.5\), unless either the spectrum is significantly broad-banded (\(\nu>0.5\)) or the steepness is large (\(\varepsilon>1/10\)). Accordingly, these formulae lead to an upper bound for the excess kurtosis, of paramount importance for naval design purposes. ## II Theoretical considerations We review the main ideas of the theory of non-homogeneous analysis of water waves travelling over a shoal [17]. Given a velocity potential \(\Phi(x,z,t)\) and surface elevation \(\zeta(x,t)\), the average energy density evolving over a shoal described by \(h(x)=h_{0}+x\nabla h\) with finite constant slope \(1/20\leq\mid\!\nabla h|=|h_{f}-h_{0}|/L<1\) (see figure 1) is expressed as: \[\mathscr{E}=\frac{1}{2\lambda}\int_{0}^{\lambda}\Big{\{}\Big{[} \zeta(x,t)+h(x)\Big{]}^{2}-h^{2}(x)+\\ \frac{1}{g}\int_{-h(x)}^{\zeta}\left[\left(\frac{\partial\Phi}{ \partial x}\right)^{2}+\left(\frac{\partial\Phi}{\partial z}\right)^{2} \right]dz\Big{\}}dx\quad, \tag{1}\] with zero-crossing wavelength \(\lambda\) and gravitational acceleration \(g\). An inhomogeneous elevation \(\zeta(x,t)\) perturbs the energy partition and redistributes the wave energy density, thus modifying the probability density of water waves. The spectral analysis consistent with a inhomogeneous energy defines a correction (\(\langle\cdot\rangle_{t}\) stands for temporal average): \[\Gamma(x)\approx\frac{\langle\zeta^{2}(x,t)\rangle_{t}(x)}{\mathscr{E}(x)}\quad, \tag{2}\] dependent on the steepness \(\varepsilon=H_{s}/\lambda\) and depth \(k_{p}h\), with \(H_{s}\) being the significant wave height (the average among the \(1/3\) largest waves). The inhomogeneity of both \(\mathscr{E}(x)\) and \(\langle\zeta^{2}\rangle_{t}(x)\) redistributes energy and transforms the pre-shoal Rayleigh exceedance probability into: \[\mathbb{P}_{\alpha,\Gamma}(H>\alpha H_{s})=\int_{\alpha}^{+\infty}\frac{4 \alpha_{0}}{\Gamma}\,e^{-2\alpha_{0}^{2}/\Gamma}\,d\alpha_{0}=e^{-2\alpha^{2}/ \Gamma}\,. \tag{3}\] For linear waves (\(\varepsilon\ll 1/100\)) \(\Gamma=1\) and we recover the case of a Gaussian sea. The evolution of the exceedance probability \(\mathbb{P}(H>\alpha H_{s})\) in eq. (3) can be generalized to any arbitrary incoming statistics [17]: \[\ln\left(\frac{\mathbb{P}_{\alpha,\,\Gamma_{\Theta}}}{\mathbb{P}_{\alpha}} \right)\approx 2\alpha^{2}\left(1-\frac{1}{\mathfrak{S}^{2}(\alpha)\Gamma_{ \mathfrak{S}}}\right)\quad, \tag{4}\] with the mean vertical asymmetry between crests and troughs (twice the mean ratio of crest to wave heights) of rogue waves expressed as [18], \[\mathfrak{S}(\alpha=2)\approx\frac{2\eta_{s}}{1+\eta_{s}}\Bigg{(}1+\frac{\eta _{s}}{6}\Bigg{)}\,\ \eta_{s}\approx 1+\mu_{3}\, \tag{5}\] obeying \(1\leqslant\mathfrak{S}\leqslant 2\) for all \(\alpha\) and with \(\eta_{s}\) measuring the ratio between mean crests and mean troughs, empirically found to depend on the skewness [18]. When the water depth decreases waves become steeper while the super-harmonic contribution has an increasing share of the wave envelope. The combination of these two effects redistributes the exceedance probability by causing the rise in \(\langle\zeta^{2}\rangle\) to exceed the growth of \(\mathscr{E}\). Such uneven growth explains why a shoal in intermediate water amplifies rogue wave occurrence as compared to deep water [19; 20] while it reduces this occurrence in shallow water [21; 22]. 
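To make eqs. (3)-(5) concrete, the short Python sketch below evaluates the non-homogeneous exceedance probability and the vertical asymmetry. It is only an illustration of the formulae above, assuming NumPy; the values of \(\Gamma\) used in the demo are arbitrary and are not taken from the experiments discussed later.

```python
import numpy as np

def exceedance_probability(alpha, gamma=1.0):
    """Eq. (3): P(H > alpha*Hs) = exp(-2 alpha^2 / Gamma); Gamma = 1 recovers Rayleigh."""
    return np.exp(-2.0 * alpha**2 / gamma)

def vertical_asymmetry(mu3):
    """Eq. (5): mean vertical asymmetry of rogue waves, with eta_s ~ 1 + mu3 (skewness)."""
    eta_s = 1.0 + mu3
    return (2.0 * eta_s / (1.0 + eta_s)) * (1.0 + eta_s / 6.0)

def log_amplification(alpha, gamma, asym):
    """Eq. (4): ln(P_{alpha, Gamma_S} / P_alpha) ~ 2 alpha^2 (1 - 1/(S^2 Gamma))."""
    return 2.0 * alpha**2 * (1.0 - 1.0 / (asym**2 * gamma))

if __name__ == "__main__":
    print(f"S(mu3 = 0) = {vertical_asymmetry(0.0):.3f}")  # lower bound 7/6 for rogue waves
    alpha = 2.0                      # rogue-wave threshold H > 2 Hs
    asym = 6.0 / 5.0                 # empirical asymmetry S(alpha = 2) used later in the paper
    for gamma in (1.00, 1.05, 1.10):  # Gamma - 1 ~ 1e-2 in deep water, up to ~1e-1 over a shoal
        print(f"Gamma = {gamma:.2f}:  P(H > 2 Hs) = {exceedance_probability(alpha, gamma):.2e},  "
              f"ratio to Rayleigh = {np.exp(log_amplification(alpha, gamma, asym)):.1f}")
```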
The linear term in \(\zeta(x,t)\) is the leading-order contribution in deep water and \(\Gamma-1\lesssim 10^{-2}\) is small. Conversely, in intermediate water the sub-harmonic creates significant disturbances in the energy density, increasing \(\Gamma-1\) up to \(10^{-1}\), whereas in shallow water the sub-harmonic diverges and \(\Gamma-1\lesssim 10^{-3}\) becomes small again, reaching even smaller values than in deep water.

## III Kurtosis evolution over a shoal

The probability evolution of eq. (3) depends solely on \(\Gamma\). However, any deviation from a Gaussian distribution may be described by a cumulant expansion [11] which at leading order is expressed as a function of the excess kurtosis \(\mu_{4}\). For the case of an inhomogeneous wave field due to a shoal, there is an excess in kurtosis due to the energy partition. The probability ratio relative to the Rayleigh distribution (implying a pre-shoal \(\mu_{4}=0\)) is computed through the transformation of variables from the wave envelope in Mori and Yasuda [23] into normalized heights, to leading order in \(\mu_{4}\) computed in section 6.2.3 of Mendes [24]: \[\frac{\mathbb{P}_{\alpha,\mu_{4}}}{\mathbb{P}_{\alpha}}\,\approx\,1+\mu_{4}\frac{\alpha^{2}}{2}\left(\alpha^{2}-1\right)+\mu_{3}^{2}\frac{5\alpha^{2}}{18}\left(2\alpha^{4}-6\alpha^{2}-3\right)\,,\quad\forall\,\alpha\geqslant 1. \tag{6}\] Taking into account the theoretical relation \(\mu_{4}\approx 16\mu_{3}^{2}/9\) between kurtosis and skewness confirmed by wave shoaling experiments [25], we rewrite eq. (6): \[\frac{\mathbb{P}_{\alpha,\mu_{4}}}{\mathbb{P}_{\alpha}}\approx 1+\mu_{4}\cdot\frac{\alpha^{2}}{32}\left(10\alpha^{4}-14\alpha^{2}-31\right)\,,\quad\forall\,\alpha\gtrsim 2. \tag{7}\] The kurtosis measures tailedness and it affects the exceedance probability for \(\alpha\gtrsim 1.5\). Eqs. (4) and (7) describe the physical effect of energy redistribution and the associated deviation from a Gaussian sea. They can be matched, yielding a kurtosis \(\mu_{4}(\Gamma,\alpha)\). Therefore, we evaluate both equations at \(\alpha=2\), as the result stays within \(\pm 20\%\) over the range of validity and stability of eq. (7) (\(2\lesssim\alpha\lesssim 3\)), finding: \[\mu_{4}(\Gamma)\approx\frac{1}{9}\left[e^{8\left(1-\frac{1}{\mathfrak{S}^{2}\Gamma}\right)}-1\right]\quad. \tag{8}\] This expression generalizes the result obtained by eqs. 46-47 of Mori and Janssen [16] in the case of a narrow-banded wave train, with less than \(5\%\) deviation as compared to their model with a \((2/3)\alpha^{2}(\alpha^{2}-1)\) polynomial in the counterpart of eq. (7). In the case of a non-Gaussian sea prior to the shoal, the above equation can be corrected according to eqs. (C1,C7b) of Mendes _et al._ [17]. The excess kurtosis of the experiments in Trulsen _et al._ [19] is well described by eq. (8) (see figure 2). Despite the assumption of Gaussianity prior to the shoal, the agreement with the observed kurtosis is reasonable, especially in deeper water. Our model rises slightly earlier than the measured data during shoaling and slightly later during de-shoaling, while both the magnitude of the kurtosis peak and its tendency to decrease towards deeper water are well estimated. In the comparison, we employed the empirical [18] asymmetry \(\mathfrak{S}(\alpha=2)=6/5\). In the next section we validate this approximation for the ranges \(k_{p}h\gtrsim\pi/10\), \(\nu\lesssim 1/2\) (bandwidth) and \(\varepsilon\ll 1/10\) representative of Trulsen _et al._'s experiments.

Figure 1: Sketch of the extreme wave amplification due to a bar [17]. The water column depth evolves as \(h(x)=h_{0}+x\nabla h\) with slope \(\nabla h=(h_{f}-h_{0})/L\). Dashed vertical lines delineate the shoaling and de-shoaling regions as in figure 2.

Figure 2: Kurtosis \(\mu_{4}\) (dots) of Runs 1, 2, 5, and 6 in Trulsen _et al._ [19] versus the model of eq. (8) (dashed). Dashed vertical lines mark the shoaling and de-shoaling zones (see figure 1). Solid curves include the slope effect [26].

## IV Vertical asymmetry in finite depth

Eqs. (4) and (8) highlight the influence of the vertical asymmetry on the evolution of rogue wave occurrence and excess kurtosis over a shoal in intermediate depths. However, the evolution of this asymmetry due to finite depth effects is not well known, except that it is a slowly varying function of the steepness. Following Marthinsen [15] we may consider the skewness to depend solely on depth and steepness, \(\mu_{3}=\mu_{3}(\varepsilon,k_{p}h)\), and consequently identify \(\mathfrak{S}(\mu_{3})=\mathfrak{S}(\varepsilon,k_{p}h)\) for any \(\alpha\). The skewness can be approximated as (see eq. 19 of Tayfun [27], with \(\mu\) denoting steepness and \(\lambda_{3}\) the skewness): \[\mu_{3}(k_{p}h>\pi)\approx 3k_{1}\sigma(1-\nu\sqrt{2}+\nu^{2})\equiv 3k_{1}\sigma\cdot\mathfrak{B}(\nu)\approx\frac{\pi}{\sqrt{2}}\,\varepsilon\,\mathfrak{B}(\nu)\quad, \tag{9}\] where \(H_{s}=\pi\varepsilon/\sqrt{2}k_{p}\) and \(k_{p}\) is the peak wavenumber obtained from the spectral mean wavenumber \(k_{1}\) through \(k_{p}\approx(3/4)k_{1}\) [17]. Figure 3a shows that eq. (9) captures the trend very well in deep water (\(k_{p}h\geqslant 5\)). In fact, in the limit of narrow-banded waves the mean ratio reads \(\langle\mu_{3}/\varepsilon\rangle\approx 2.6\) for \(k_{p}h\geqslant 5\), in good agreement with eq. (9). On the other hand, as the depth decreases to intermediate waters this ratio significantly increases. To account for this finite depth effect, one must include the sub- and super-harmonic coefficients [12]: \[\tilde{\chi}_{0}=\frac{\left[4\left(1+\frac{2k_{p}h}{\sinh\left(2k_{p}h\right)}\right)-2\right]}{\left(1+\frac{2k_{p}h}{\sinh\left(2k_{p}h\right)}\right)^{2}\tanh k_{p}h-4k_{p}h}\quad;\quad\frac{\sqrt{\tilde{\chi}_{1}}}{2}=\frac{3-\tanh^{2}\left(k_{p}h\right)}{2\tanh^{3}\left(k_{p}h\right)}\quad, \tag{10}\] with notation \(\tilde{\chi}_{i}\) from Mendes _et al._ [17]. Combining eq. (9) with eq. 11 of Tayfun and Alkhalidi [12], the finite-depth skewness reads approximately: \[\mu_{3}\approx\frac{\pi\varepsilon}{\sqrt{2}}\,\mathfrak{B}(\nu)\Big{(}\tilde{\chi}_{0}+\frac{\sqrt{\tilde{\chi}_{1}}}{2}\Big{)}\quad. \tag{11}\] Although Tayfun and Alkhalidi's model is a good fit for \(k_{p}h>\pi\), the sum \(\tilde{\chi}_{0}+\sqrt{\tilde{\chi}_{1}}/2\) stays close to unity for \(k_{p}h\geqslant 2\). Hence, the larger values of the ratio \(\mu_{3}/\varepsilon\) in shallower water (\(2\leqslant k_{p}h\leqslant\pi\)) must stem from a dependence of \(\mathfrak{B}(\nu)\) on depth. We therefore seek a function \(\mathfrak{B}(\nu,k_{p}h)=1-\nu\sqrt{2}+f_{k_{p}h}\cdot\nu^{2}\) capable of providing a smooth transition from \(f_{k_{p}h\sim 3}\approx 3.5\) in shallower depths (see figure 3a) to the deep water value \(f_{k_{p}h=\infty}\sim 1\) (see eq. (9)).
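The depth dependence just described is easy to tabulate numerically. The sketch below (NumPy assumed; purely illustrative) implements the coefficients of eq. (10), the finite-depth skewness of eq. (11), and the kurtosis mapping of eq. (8) derived in Section III; the bandwidth function \(\mathfrak{B}\) is left as an argument so that either the deep-water form of eq. (9) or a depth-dependent fit can be plugged in.

```python
import numpy as np

def chi0(kph):
    """Sub-harmonic coefficient of eq. (10)."""
    s = 1.0 + 2.0 * kph / np.sinh(2.0 * kph)
    return (4.0 * s - 2.0) / (s**2 * np.tanh(kph) - 4.0 * kph)

def half_sqrt_chi1(kph):
    """Super-harmonic coefficient sqrt(chi_1)/2 of eq. (10)."""
    t = np.tanh(kph)
    return (3.0 - t**2) / (2.0 * t**3)

def skewness_finite_depth(eps, nu, kph, f=1.0):
    """Eq. (11): mu_3 ~ (pi eps / sqrt(2)) * B(nu) * (chi0 + sqrt(chi1)/2),
    with B(nu, kph) = 1 - sqrt(2) nu + f nu^2; f = 1 recovers the deep-water eq. (9)."""
    B = 1.0 - np.sqrt(2.0) * nu + f * nu**2
    return (np.pi * eps / np.sqrt(2.0)) * B * (chi0(kph) + half_sqrt_chi1(kph))

def excess_kurtosis(gamma, asym):
    """Eq. (8): mu_4 as a function of Gamma and the vertical asymmetry S.
    For a Gaussian sea (Gamma = 1, S = 1) it returns mu_4 = 0."""
    return (np.exp(8.0 * (1.0 - 1.0 / (asym**2 * gamma))) - 1.0) / 9.0
```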
Hence, the vertical asymmetry accounting for depth-induced effects is of the type: \[\mathfrak{S}(\alpha=2)\approx\frac{(2+6\varepsilon_{*})(7+3\varepsilon_{*})}{6(2+3\varepsilon_{*})}\quad,\qquad\varepsilon_{*}\approx\frac{\pi\varepsilon}{3\sqrt{2}}\left[1-\nu\sqrt{2}+f_{k_{p}h}\cdot\nu^{2}\right]\Big{(}\tilde{\chi}_{0}+\frac{\sqrt{\tilde{\chi}_{1}}}{2}\Big{)}. \tag{12}\] Figure 3b provides a contour plot of the empirically fitted model of \(\mathfrak{B}(\nu,k_{p}h)\). Here, \(\varepsilon_{*}\) is the effective steepness and \(f_{k_{p}h}\) is a function of depth that can be obtained through the constraint of eq. (5) applied to eq. (12): \[\lim_{k_{p}h\to 0}\mathfrak{S}(\alpha=2)\approx\lim_{k_{p}h\to 0}\frac{(2+6\varepsilon_{*})(7+3\varepsilon_{*})}{6(2+3\varepsilon_{*})}\leqslant 2\ \therefore\ 9\varepsilon_{*}^{2}+6\varepsilon_{*}-5\leqslant 0\ \therefore\ \varepsilon_{*}\leqslant\frac{\sqrt{6}-1}{3}\quad. \tag{13}\] The function \(\mathfrak{B}(\nu,k_{p}h)\) makes the exceedance probability of rogue waves weakly dependent on the bandwidth. Although the latter is defined in Longuet-Higgins [11] to have a range \(\nu\in[0,\infty)\), seas exceeding \(\nu=1\) account for only 3% of observed stormy states in the North Sea. These very extreme sea conditions are typically short-lived and found for instance in hurricanes. Although bandwidths much larger than \(\nu=1\) can increase the vertical asymmetry by about 5-10%, their short lifespan means that the weighted exceedance probability of rogue waves over a daily forecast deviates only slightly (\(\sim 10\%\)). Accordingly, we may set \(\nu=1\) as the realistic and effective maximum bandwidth to be considered for estimating the exceedance probability. Hence, in the limit of second-order waves (\(k_{p}h\to 1/2\)) we obtain: \[\frac{\pi\varepsilon f_{k_{p}h}}{3\sqrt{2}}\Big{(}\tilde{\chi}_{0}+\frac{\sqrt{\tilde{\chi}_{1}}}{2}\Big{)}<\frac{\sqrt{6}-1}{3}\ \therefore\ f_{k_{p}h}\lesssim\frac{18\sqrt{2}}{\pi}\quad, \tag{14}\] valid for \(\nu\leqslant 1\). Broad-banded waves have an effective steepness of the order of \(\varepsilon f_{k_{p}h}\nu^{2}\). Since finite-depth effects involve the ratio \(\varepsilon/k_{p}h\), which is directly related to \(H_{s}/h\), we expect \(f_{k_{p}h}\sim(k_{p}h)^{-n}\) with \(n\in\mathbb{N}^{*}\). In order to fulfill eqs. (13-14), a sigmoid function with continuous derivative providing the best fit for the North Sea data reads (see figure 4a): \[f_{k_{p}h}\approx\frac{8}{1+7\,\tanh^{2}\left(k_{p}h/7\right)}\quad,\quad\nu\leqslant 1\quad. \tag{15}\]

Figure 3: (a) Ratio of skewness and steepness varying with bandwidth in strongly non-Gaussian (\(\langle\mu_{4}\rangle\approx 0.4\)) North Sea data, with numerical polynomial fit \(\mathfrak{B}(\nu)\approx 1-\nu\sqrt{2}+(7/2)\nu^{2}\) at \(2\leqslant k_{p}h\leqslant\pi\). (b) Contour plot of the same ratio as computed from eq. (11) for the fitted function \(\mathfrak{B}(\nu,k_{p}h)\) in (a).

Figure 4: (a) Finite-depth function \(f_{k_{p}h}\) versus data (circles) from figure 3a. (b) Vertical asymmetry of broad-banded rogue waves (\(\nu=0.5\)) as a function of water depth for different steepness, with the dotted line depicting the empirical mean value \(\mathfrak{S}=6/5\) from Mendes _et al._ [17; 18]. Dashed vertical line marks the limit of validity of second-order theory.

Plugging eq. (15) into eq. 
(12) introduces an approximation for the vertical asymmetry covering the entire range of second-order theory for narrow and broad-banded irregular waves. In fact, figure 3(b) shows that the vertical asymmetry is almost constant for typical values of steepness (\(\varepsilon{\ll}\,1/10\)) in intermediate and deep waters (\(k_{p}h\geqslant\pi/10\)). Conversely, sharp increases in the steepness will induce a few percent increase in the vertical asymmetry in the same regimes (\(k_{p}h\geqslant\pi/10\)). The contour plot in figure 4(b) provides a full description of the variations in asymmetry with depth and steepness. Furthermore, figure 4(a) shows that in shallow depths the vertical asymmetry strongly depends on \(k_{p}h\) while in deep water it tends to saturate. Figure 4(c) also testifies to the role of bandwidth in increasing the asymmetry, albeit sharp changes are restricted to sufficiently broad spectra (\(\nu>1\)). Thus, we conclude that unless the steepness in intermediate water is too large (\(\varepsilon>1/10\)) or the spectrum very broad (\(\nu>1/2\)), sharp variations of the vertical asymmetry occur only in shallow waters (\(k_{p}h<\pi/10\)) and otherwise upholds the approximation \(\mathfrak{S}=6/5\) used above (section III) and Mendes _et al._[17]. Moreover, the special case of narrow-banded (\(\nu=0\)) linear waves (\(\varepsilon\ll 1/10\)) in deep water leads to \(\varepsilon_{*}\to 0\), thus reaching the asymmetry lower bound \(\mathfrak{S}\!=\!7/6\) for rogue waves. This suggests that in intermediate waters narrowing the bandwidth from \(\nu=0.3\) to \(\nu=0\) will have little impact on the amplification of rogue wave statistics due to the negligible change in vertical asymmetry, whereas in shallow water increasing the bandwidth above \(\nu=0.5\) will be significant. From the point of view of the theory in Mendes _et al._[17], the asymmetry approximation of eqs. (12,15) explains why narrow-banded models [29] are successful in predicting rogue wave statistics travelling past a step in a broad-banded irregular wave background in intermediate water. Provided there is no wave breaking (\(H_{s}/h\ll 1\)), the bandwidth effect will play a role in amplifying statistics in shallower depths because of the contributuin of the term \(f_{k_{p}h}\nu^{2}\), as experimentally demonstrated in Doeleman [30]. ## V Upper bound for kurtosis For naval design purposes, the assessment of maximum expected waves over a specific return time is crucial. Typically, ocean structures and vessels must be designed to sustain expected maximum extreme waves over their lifespan [31; 32]. Therefore, we will assess the upper bound for kurtosis over a shoal in the same fashion we obtained eq. (8), but now for the region where there is a balance between the contribution of skewness and kurtosis in the cumulant expansions (atop the shoal). In order to do so, we shall evaluate maxima for the parameters (\(\mathfrak{S},\Gamma\)). Poseessing an in-depth formula for the vertical asymmetry, we can estimate the upper bound (henceforth denoted by \(\infty\)) for kurtosis of wave trains of second-order in steepness. Eqs. (12) and (15) provide the upper bound for the vertical asymmetry of rogue waves: \(\mathfrak{S}_{\infty}(k_{p}h=\infty)\approx 7/5\) in deep water, and \(\mathfrak{S}_{\infty}(k_{p}h=0)\approx 5/3\) in shallow water. Since the steepness growth due to shoaling is limited to \(\varepsilon\leqslant 1/7\) by wave breaking, one finds the bound \(\Gamma_{\infty}-1\lesssim 1/12\) due to eq. 
(3.17) of Mendes _et al._ [17]. Hence, we may approximate: \[1-\frac{1}{\mathfrak{S}_{\infty}^{2}\Gamma_{\infty}}\approx 8(\Gamma_{\infty}-1)\quad. \tag{16}\] Approaching the value \(\Gamma_{\infty}\) atop the shoal, the contribution of the skewness to the amplification of wave statistics increases as compared to the shoaling zone [33], such that the relationship between kurtosis and skewness leading to eq. (7) is modified and now follows \(\mu_{4}\approx\mu_{3}^{2}\). Then, we are able to compute the upper bound for the excess kurtosis under the assumption of pre-shoal Gaussian statistics (see figure 6): \[e^{16\alpha^{2}(\Gamma_{\infty}-1)}\approx 1+\alpha^{2}\left(\alpha^{2}-1\right)\mu_{4}\;\;, \tag{17}\] and therefore, evaluating at \(\alpha=2\), \[\mu_{4\,,\,\infty}\approx\frac{1}{12}\left[e^{64(\Gamma_{\infty}-1)}-1\right]\quad, \tag{18}\] where \(\Gamma_{\infty}\) varies with water depth. According to eq. (18), steep and highly asymmetrical broad-banded waves lead to an upper bound for kurtosis of the order of \(\mu_{4\,,\,\infty}\sim 4\), see figure 6. To give some context, for the special case of a pre-shoal steepness \(\varepsilon=0.023\) as in Run 1 (dashed blue curve of figure 6) of the experiments of Trulsen _et al._ [19], the observed kurtosis was approximately \(\mu_{4}\approx 1\), whereas our results bound the maximum kurtosis for this experimental set-up to twice the observed value (solid blue curve of figure 6). We already showed in Mendes _et al._ [17] that \(\Gamma\) peaks around \(k_{p}h\approx 0.5\), and the excellent agreement with experiments in figure 2 now allows us to assert that the excess kurtosis will also peak in this region. Furthermore, experimental evidence for the kurtosis reaching a maximum in this region has been recently described in Zhang _et al._ [34].

Figure 5: Vertical asymmetry of large and rogue waves as a function of water depth for different steepness, bandwidth and normalized height. The dashed line in panel b represents the Ursell limit for second-order theory.

## VI Conclusions

In this work we have extended the framework in Mendes _et al._ [17] to an effective theory for the evolution of the excess kurtosis of the surface elevation over a shoal of finite and constant steep slope. We find excellent quantitative agreement during and atop the shoal with the experiments in Trulsen _et al._ [19]. Here we have shown that the kurtosis also depends on the inhomogeneities of the energy density over a shoal, whereas the groundwork of Marthinsen [15] computes the excess kurtosis directly from the solution \(\zeta(x,t)\). On the other hand, a computation of the kurtosis from the probability density of \(\zeta(x,t)\) through the non-homogeneous framework will be pursued in a future work with an analytical non-uniform distribution of random phases. Furthermore, we have obtained an approximation for the vertical asymmetry in finite depth as a function of both steepness and bandwidth. This approximation recovers the seminal work of Tayfun [27] for the skewness of the surface elevation in narrow-banded deep water waves. Building on this new approximation, we have demonstrated that the vertical asymmetry varies slowly over a shoal in both deep and intermediate waters. Moreover, we were able to compute the maximum possible excess kurtosis driven by shoaling inhomogeneities.

## VII Acknowledgments

S.M. and J.K. were supported by the Swiss National Science Foundation under grant 200020-175697. We thank Maura Brunetti and Alexis Gomel for fruitful discussions.
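As a closing numerical note on the bound of Section V (illustrative only: the values of \(\Gamma_{\infty}-1\) below are arbitrary points within the admissible range, not the depth-dependent values of figure 6), eq. (18) can be tabulated directly:

```python
import numpy as np

# Upper bound of eq. (18): mu_4,inf = (exp(64 (Gamma_inf - 1)) - 1) / 12.
# The wave-breaking limit eps <= 1/7 caps Gamma_inf - 1 at roughly 1/12;
# the values below are illustrative points inside that range.
for excess in (0.02, 0.04, 0.06):
    mu4_inf = (np.exp(64.0 * excess) - 1.0) / 12.0
    print(f"Gamma_inf - 1 = {excess:.2f}  ->  mu_4 upper bound ~ {mu4_inf:.2f}")
```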
2301.10643
Automatic Locally Robust Estimation with Generated Regressors
Many economic and causal parameters of interest depend on generated regressors. Examples include structural parameters in models with endogenous variables estimated by control functions and in models with sample selection, treatment effect estimation with propensity score matching, and marginal treatment effects. Inference with generated regressors is complicated by the very complex expression for influence functions and asymptotic variances. To address this problem, we propose Automatic Locally Robust/debiased GMM estimators in a general setting with generated regressors. Importantly, we allow for the generated regressors to be generated from machine learners, such as Random Forest, Neural Nets, Boosting, and many others. We use our results to construct novel Doubly Robust and Locally Robust estimators for the Counterfactual Average Structural Function and Average Partial Effects in models with endogeneity and sample selection, respectively. We provide sufficient conditions for the asymptotic normality of our debiased GMM estimators and investigate their finite sample performance through Monte Carlo simulations.
Juan Carlos Escanciano, Telmo Pérez-Izquierdo
2023-01-25T15:26:18Z
http://arxiv.org/abs/2301.10643v2
# Automatic Locally Robust Estimation with Generated Regressors1 ###### Abstract Many economic and causal parameters of interest depend on generated regressors, including structural parameters in models with endogenous variables estimated by control functions and in models with sample selection. Inference with generated regressors is complicated by the very complex expression for influence functions and asymptotic variances. To address this problem, we propose automatic Locally Robust/debiased GMM estimators in a general setting with generated regressors. Importantly, we allow for the generated regressors to be generated from machine learners, such as Random Forest, Neural Nets, Boosting, and many others. We use our results to construct novel Doubly Robust estimators for the Counterfactual Average Structural Function and Average Partial Effects in models with endogeneity and sample selection, respectively. Keywords: Local robustness, orthogonal moments, double robustness, semiparametric estimation, bias, GMM. JEL Classification: C13; C14; C21; D24 Introduction Many economic and causal parameters of interest depend on generated regressors. Leading examples include the Counterfactual Average Structural Function (CASF) in models with endogenous variables estimated by control functions (cf. Blundell and Powell, 2004; Stock, 1989, 1991), and Average Partial Effects (APE) in sample selection models (Das et al., 2003). There are currently no econometric methods for inference on these parameters allowing for generated regressors obtained by machine learning. The goal of this paper is to propose Automatic Locally Robust(ALR)/Debiased estimators of and inference on structural parameters in such models. The paper builds on two different literatures. The first literature is the classical literature on semiparametric estimators with generated regressors, see Ahn and Powell (1993); Heckman et al. (1998); Ichimura and Lee (1991); Imbens and Newey (2009); Newey et al. (1999); Rothe (2009), among others. The asymptotic properties of several estimators within this class is given by Hahn and Ridder (2013, 2019) and Mammen et al. (2012, 2016). With respect to these papers, we allow the second step to be semiparametric or parametric (on top of fully non-parametric). Furthermore, we contribute to this literature by proposing LR automatic estimators. Automatic estimation is very well motivated in this setting because the form of the influence function and asymptotic variances is very complex. The second literature we build on is the more recent literature on LR/Debiased estimators, see Chernozhukov et al. (2018, 2022). With the only exception of Sasaki and Ura (2021), this literature has not considered models with generated regressors. Our results complement the analysis of the Policy Relevant Treatment Effect (PRTE) in Sasaki and Ura (2021) by providing automatic estimation of the influence function. Relative to the automatic LR literature (e.g. Chernozhukov et al., 2022) we innovate in considering a nonlinear setting with an implicit functional (the generated regressor as a conditioning argument) for which an analytic derivative is not available for general machine learners. As an application of our methods we propose novel Automatic Locally Robust (ALR) estimators for the CASF parameter of Blundell and Powell (2004) and for the APE in a sample selection model with a flexible selection equation estimated by machine learning. 
All these examples are characterized by being linear functionals of a second step function satisfying orthogonality conditions involving generated regressors (the control function or the propensity score) from a first step. We show that it is straightforward to construct automatic Double-Robust (DR) estimators that are robust to functional form assumptions for the second step. For instance, a practical approach could be to fit a partially linear specification for the second step, like in Robinson (1988) but with a non-parametric function of the generated regressors. Our results cover this case, in which the second step is semiparametric. The DR estimators are, however, not LR to the generated regressors in general. To construct fully LR estimators we use numerical derivatives to account for the presence of generated regressors. Fortunately, our automatic approach is amenable to any machine learning method for which predictions out of sample are available. Another approach could be to specify a model for the second step for which analytical derivatives are available. We note that the DR moment conditions are robust to this model being misspecified. The rest of the paper is organized as follows. Section 2 introduces the setting and the examples. Section 2.1 finds the influence function of parameters identified by moments with generated regressors. Section 3 gives the general construction of automatic LR moments with generated regressors. In Section 4, we provide the details for Debiased LR GMM estimation. A summary of the estimation algorithm is given in Section 4.2. Section 5 develops some examples. ## 2 Setting and Examples We observe data \(W=(Y,D,Z)\) with cumulative distribution function (cdf) \(F_{0}\). For simplicity, we consider that \(Y\) and \(D\) are one-dimensional. In our setting, there is a first step linking \(D\) with \(Z\). The first step results in a one-dimensional generated regressor \[V\equiv\varphi(D,Z,g_{0}),\] where \(\varphi\) is a known function of observed variables \((D,Z)\) and an unknown parameter \(g_{0}\in\Gamma_{1}\), for \(\Gamma_{1}\) a linear and closed subspace of the Hilbert space \(L_{2}(Z)\) of square-integrable functions of \(Z\).1 The unknown parameter \(g_{0}\) solves the orthogonal moments Footnote 1: _Notation:_ For a (measurable) function \(f(w)\), \(\mathbb{E}[f(W)]\equiv\int f(w)dF_{0}(w)\) denotes expectation w.r.t. the distribution \(F_{0}\). For simplicity of notation, omit that the measure when referring to the \(L_{2}\) Hilbert spaces of measurable functions with finite second moments. This measure is the marginal distribution that \(F_{0}\) induces on some of the components of \(W\). \[\mathbb{E}[\delta_{1}(Z)(D-g_{0}(Z))]=0\text{ for all }\delta_{1}\in\Gamma_{1}. \tag{2.1}\] This setting covers parametric, semiparametric, and non-parametric first steps. For example, when \(\Gamma_{1}=L_{2}(Z)\), we have \(g_{0}(Z)=\mathbb{E}[D|Z]\). Then, there is a second step linking \(Y\) with a component of \((D,Z)\), denoted by \(X\), and the generated regressor \(V\), through the moment restrictions \[\mathbb{E}[\delta_{2}(D,Z)(Y-h_{0}(X,V))]=0\text{ for all }\delta_{2}\in\Gamma_{2}(g_{0}), \tag{2.2}\] where \(\Gamma_{2}(g_{0})\) is a linear and closed subspace of \(L_{2}(D,Z)\). We have that \(h_{0}(X,\varphi(D,Z,g_{0}))\), understood as a function of \((D,Z)\) is an element of \(\Gamma_{2}(g_{0})\). The set \(\Gamma_{2}(g_{0})\) may depend on the fist step parameter \(g_{0}\). 
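As a concrete, purely illustrative instance of this two-step setting, the following Python sketch simulates a control-function style data-generating process and fits both steps with an off-the-shelf machine learner. The DGP, the variable names, and the choice of scikit-learn's RandomForestRegressor are our own illustration (with \(\varphi(D,Z,g)=D-g(Z)\), as in the control-function example below) and are not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000

# --- Simulated data (illustrative DGP, not from the paper) ---
Z = rng.normal(size=(n, 2))                    # exogenous variables / instruments
V0 = rng.normal(scale=0.5, size=n)             # first-step error
D = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] + V0       # endogenous regressor, D = g0(Z) + V
U = 0.7 * V0 + rng.normal(scale=0.5, size=n)   # structural error correlated with V
X = D                                          # here X is simply the endogenous regressor
Y = X + 0.25 * X**2 + U                        # outcome, Y = H(X, U)

# --- First step: g_hat(Z) ~ E[D | Z], eq. (2.1), fitted with an ML learner ---
g_hat = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
g_hat.fit(Z, D)
V_hat = D - g_hat.predict(Z)                   # generated regressor V = phi(D, Z, g) = D - g(Z)

# --- Second step: h_hat(X, V) ~ E[Y | X, V], eq. (2.2) ---
h_hat = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=1)
h_hat.fit(np.column_stack([X, V_hat]), Y)
```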
In some settings, \(\Gamma_{2}(g_{0})\) includes only functions of \(X\) and the generated regressor \(V\). That is, \(\Gamma_{2}(g_{0})\) includes functions with the following shape: \(\delta_{2}(D,Z)=\delta(X,\varphi(D,Z,g_{0}))\) for \(\delta\in\Gamma\), a linear and closed subspace of \(L_{2}(X,V)\). For instance, Hahn and Ridder (2013) and Mammen et al. (2016) consider where the second step is a non-parametric regression of \(Y\) on \((X,V)\). In that case, \(\Gamma_{2}(g)=L_{2}(g)\), with \(L_{2}(g)\equiv\{(d,z)\mapsto\delta(x,\varphi(d,z,g))\colon\delta\in L_{2}(X,V) \}\subseteq L_{2}(D,Z)\). Let \(\Theta\subseteq\mathbb{R}\) denote the space where the structural parameter of interest lies. We have the moment function \(m\colon\mathbb{R}^{\dim(W)}\times L_{2}(Z)\times L_{2}(X,V)\times\Theta\to \mathbb{R}\). The parameter of interest \(\theta_{0}\) is identified in a third step by a GMM moment condition \[\mathbb{E}[m(W,g_{0},h_{0},\theta_{0})]=0.\] Here we assume that \(\theta_{0}\) is identified by these moments, i.e. that \(\theta_{0}\) is the unique solution to \(E[m(W,g_{0},h_{0},\theta)]=0\) over \(\theta\in\Theta\). Extensions of our setting to a larger number of moment conditions, structural parameters, and multiple variables \(D\) and \(Y\) are straightforward. We illustrate the notation and concepts with two general running examples. **Example 1** (Control Function Approach): We observe \(W=(Y,D,Z)\) satisfying the model \(Y=H(X,U)\), for an unknown function \(H\). The main feature of this model is that \(D\), a component of \(X\), may be an endogenous regressor. We assume that the endogenous regressor satisfies \(D=g_{0}(Z)+V\), with \(U\) and \(V\) being unobserved correlated error terms. The function \(g_{0}\) could be identified by a conditional mean restriction, as in equation (2.1). We assume a Control Function approach: where \(U|X,V\sim U|V\), where \(\sim\) denotes equally distributed. Thus, the corresponding \(\varphi\) is \[V\equiv\varphi(X,Z,g_{0})\equiv D-g_{0}(Z).\] As in Blundell and Powell (2004), the Control Function assumption implies \[\mathbb{E}[Y|X=x,V=v] =\mathbb{E}[H(X,U)|X=x,V=v]=\mathbb{E}[H(x,U)|X=x,V=v]\] \[=\mathbb{E}[H(x,U)|V=v]\equiv h_{0}(x,v).\] This defines the second step. In this example, we have that \(\Gamma_{2}(g)=L_{2}(g)\). The Control Function assumption allows us to identify the Average Structural Function (ASF) at a point \(x\in\mathbb{R}^{\dim(X)}\): \[\text{ASF}_{0}(x)\equiv\mathbb{E}[H(x,U)]=\mathbb{E}[\mathbb{E}[H(x,U)|V]]= \mathbb{E}[h_{0}(x,V)].\] Some conditions on the support of the random vectors are needed for the above equation to hold (see Blundell and Powell, 2004; Imbens and Newey, 2009). In this setup, a parameter of interest is the Counterfactual Average Structural Function (CASF) given by \[\theta_{0}=\int\text{ASF}(x^{*})dF^{*}(x^{*}),\] for an counterfactual distribution \(F^{*}\). When \(F^{*}\) is implied by a certain policy, the CASF may be used to measure the effect of the policy (see Blundell and Powell, 2004; Stock, 1989, 1991). By Fubini's Theorem, the CASF can be written as a function of \((g_{0},h_{0})\): \[\theta_{0}=\int\mathbb{E}[h_{0}(x^{*},\varphi(D,Z,g_{0}))]dF^{*}(x^{*})= \mathbb{E}\left[\int h_{0}(x^{*},\varphi(D,Z,g_{0}))dF^{*}(x^{*})\right].\] Hence, the moment function that identifies the CASF is: \[m(w,g,h,\theta)=\int h(x^{*},\varphi(d,z,g))dF^{*}(x^{*})-\theta.\] We note here that the CASF is not covered by the work of Hahn and Ridder (2013, 2019). 
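Given fitted first- and second-step learners (for instance those from the sketch above), a plug-in estimator of the CASF moment just displayed takes only a few lines. This is the naive plug-in \(\hat{\theta}=\frac{1}{n}\sum_{i}\int\hat{h}(x^{*},d_{i}-\hat{g}(z_{i}))\,dF^{*}(x^{*})\), with \(F^{*}\) approximated by a set of counterfactual draws; it is precisely the estimator that the locally robust corrections developed below are designed to debias. The function names and interfaces are illustrative assumptions.

```python
import numpy as np

def casf_plugin(h_hat, g_hat, D, Z, x_star_draws):
    """Naive plug-in estimate of the CASF, theta = E[ integral h(x*, V) dF*(x*) ],
    with V = D - g(Z) (control-function residual) and F* approximated by draws.

    h_hat : fitted second-step learner with .predict on columns (x, v)
    g_hat : fitted first-step learner with .predict on Z
    """
    V_hat = D - g_hat.predict(Z)                          # generated regressor
    thetas = []
    for x_star in np.atleast_1d(x_star_draws):
        XV = np.column_stack([np.full_like(V_hat, x_star), V_hat])
        thetas.append(h_hat.predict(XV))                  # h_hat(x*, V_i) for every i
    # average over both the counterfactual distribution F* and the sample
    return float(np.mean(thetas))

# Example (continuing the simulation sketch above):
# theta_hat = casf_plugin(h_hat, g_hat, D, Z, x_star_draws=rng.normal(size=500))
```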
The key difference is that the functional defining the CASF cannot be written as \(\mathbb{E}[\eta(X,\text{ASF}_{0}(X))]\) for a function \(\eta\) with domain in an Euclidean space. We will propose below a novel Doubly Robust (DR) estimator for the CASF. \(\blacksquare\) Example 2 (Sample Selection Models)We observe \(W=(Y,D,Z)\) following the model \(Y=Y^{*}D\equiv H(X,\varepsilon)D\), where \(X\) is a component of \(Z\), and we do not observe \(Y^{*}\) when \(D=0\). This is a very general setting for sample selection models. We do not know much about the selection, so this is given by \(D=1\left[g_{0}\left(Z\right)-U\geq 0\right]\), where \(U\) is uniformly distributed in \(\left[0,1\right]\). The unobserved errors \(\varepsilon\) and \(U\), though independent of \(Z\), are correlated with each other (selection on unobservables). In this example, \(V=g_{0}\left(Z\right)=\mathbb{E}(D|Z)\). Then, it can be shown that \[\mathbb{E}(Y|Z) =\mathbb{E}(H(X,\varepsilon)1\left[g_{0}\left(Z\right)-U\geq 0 \right]|Z)\] \[=h_{0}(X,V).\] This setting provides a nonparametric extension of the classical model of Heckman (1979), where \(H(X,\varepsilon)=X^{\prime}\beta_{0}+\varepsilon\), \(g_{0}\left(Z\right)=Z^{\prime}\gamma_{0}\), and the joint distribution of \((\varepsilon,U)\) is bivariate Gaussian. As a parameter of interest consider the Average Partial Effects (APE) given, for simplicity of presentation for a one-dimensional continuous regressor, by \[\theta_{0}=\mathbb{E}\left[\frac{\partial h_{0}}{\partial x}(X,V)\right].\] The moment function identifying the APE is \[m(w,g,h,\theta)=\left.\frac{\partial}{\partial s}h(s,g(z))\right|_{s=x}-\theta.\] This parameter is covered by Proposition 5 in Hahn and Ridder (2019). However, the authors do not allow for ML estimators in the first and second steps. Bellow we prose a novel DR estimator for the APE which allows for ML first and second step estimators. \(\blacksquare\) ### Orthogonal Moment Functions with Generated Regressors We follow Chernozhukov et al. (2022a, henceforth, CEINR) for the construction of LR-Debiased-Orthogonal moment functions. Furthermore, we show that the effect of the first and second step estimation can be studied separately. This will allow us to construct separate automatic estimators of the nuisance parameters in first and second step Influence Functions (IF). We begin by introducing some additional concepts and notation. Let \(F\) denote a possible cdf for a data observation \(W\). We denote by \(g(F)\) the probability limit an estimator \(\hat{g}\) of the first step when the true distribution of \(W\) is \(F\), i.e., under general misspecification (see Newey, 1994). Here, \(F\) is unrestricted except for regularity conditions such as existence of \(g(F)\) or the expectation of certain functions of the data. For example, if \(\hat{g}(z)\) is a nonparametric estimator of \(\mathbb{E}[D|Z=z]\) then \(g(F)(z)=E_{F}[D|Z=z]\) is the conditional expectation function when \(F\) is the true distribution of \(W\), denoted by \(E_{F}\), which is well defined under the regularity condition that \(E_{F}[|D|]\) is finite. We assume that \(g(F)\) is identified as the solution in \(g\) to \[E_{F}[\delta_{1}(Z)(D-g(Z))]=0\text{ for all }\delta_{1}\in\Gamma_{1}.\] Hence, we have that \(g(F_{0})=g_{0}\), consistent with \(g_{0}\) being the probability limit of \(\hat{g}\) when \(F_{0}\) is the cdf of \(W\). To study the effect of the second step, suppose that \(W\) is distributed according to \(F\). 
However, the first step parameter is independently fixed to \(g\). Let \(h(F,g)\) be the solution in \(h\) to \[E_{F}\left[\delta_{2}(D,Z)\{Y-h(X,\varphi(D,Z,g))\}\right]=0\text{ for all }\delta_{2}\in\Gamma_{2}(g).\] The solution of the above equation is a function of \((x,v)\): \(h(F,g)(x,v)\). We have that \(h(F_{0},g_{0})=h_{0}\). We may think of the mapping \(h(F,g)\) as the probability limit of an estimator of \(h_{0}\) under the following conditions: (i) the true distribution of \(W\) is \(F\) and (ii) the estimator is built with the first step parameter fixed to \(g\). A feasible estimator \(\hat{h}\) of \(h_{0}\) will, however, rely on the estimator \(\hat{g}\). Therefore, we assume that the probability limit of \(\hat{h}\) under general misspecification is \(h(F,g(F))\). To introduce orthogonal moments, let \(H\) be some alternative distribution that is unrestricted except for regularity conditions, and \(F_{\tau}\equiv(1-\tau)F_{0}+\tau H\) for \(\tau\in[0,1].\) We assume that \(H\) is chosen so that \(g(F_{\tau})\) and \(h(F_{\tau},g(F_{\tau}))\) exist for \(\tau\) small enough, and possibly other regularity conditions are satisfied. The IF that corrects for _both first and second step estimation_, as introduced in CEINR, is the function \(\phi(w,g,h,\alpha,\theta)\) such that \[\frac{d}{d\tau}\mathbb{E}[m(W,g(F_{\tau}),h(F_{\tau},g(F_{\tau})), \theta)]=\int\phi(w,g_{0},h_{0},\alpha_{0},\theta)dH(w), \tag{2.3}\] \[E[\phi(W,g_{0},h_{0},\alpha_{0},\theta)]=0,\text{ and }E[\phi(W,g_{0},h_{0},\alpha_{0},\theta)^{2}]<\infty,\] for all \(H\) and all \(\theta.\) Here \(\alpha\) is an unknown function, additional to \((g,h)\), on which only the IF depends. The "true parameter" \(\alpha_{0}\) is the \(\alpha\) such that equation (2.3) is satisfied. Throughout the paper, \(d/d\tau\) is the derivative from the right (i.e. for non-negative values of \(\tau\)) at \(\tau=0.\) As in the work of von Mises (1947), Hampel (1974), and Huber (1981), this equation is the Gateaux derivative characterization of the IF of the functional \(\bar{m}(g(F),h(F,g(F)),\theta)\), with \[\bar{m}(g,h,\theta)\equiv\mathbb{E}[m(W,g,h,\theta)].\] Orthogonal moment functions can be constructed by adding this IF to the original identifying moment functions to obtain \[\psi(w,g,h,\alpha,\theta)\equiv m(w,g,h,\theta)+\phi(w,g,h,\alpha,\theta). \tag{2.4}\] This vector of moment functions has two key orthogonality properties. First, we have that varying \((g,h)\) away from \((g_{0},h_{0})\) has no effect, locally, on \(\mathbb{E}[\psi(W,g,h,\alpha_{0},\theta)]\). The second property is that varying \(\alpha\) will have no effect, globally, on \(\mathbb{E}[\psi(W,g_{0},h_{0},\alpha,\theta)]\). These properties are shown in great generality in CEINR. The IF in equation (2.3) measures the effect that the first step (estimation of \(g_{0}\)) and the second step (estimation of \(h_{0}\)) will have on the moment condition. We can show that these effects can be studied separately. The following lemma gives the result: **Lemma 2.1**: _Assume that the chain rule can be applied. Then,_ \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{\tau},g(F_{\tau})),\theta)] =\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta)]\] \[+\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g_{0}),\theta)].\] The first derivative in the RHS accounts for the first step. As in Hahn and Ridder (2013), the first step affects the moment condition in two ways. 
We have a _direct impact_ on \(\bar{m}\), which includes the _effect of evaluating_\(h\) on the generated regressor. We also have an _indirect effect_ on the moment that comes from \(g\) affecting estimation of \(h_{0}\) in the second step (through conditioning). This is present in the term \(h(F_{0},g(F_{\tau}))\). The second derivative accounts for the effect of the second step. This effect is independent from the first step and, as such, considers that \(g_{0}\) is known. This is captured by \(h(F_{\tau},g_{0})\). We may then find an IF corresponding to each step: \(\phi_{1}(w,g,\alpha_{1},\theta)\) and \(\phi_{2}(w,h,\alpha_{2},\theta)\), respectively. The IFs satisfy: \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta)] =\int\phi_{1}(w,g_{0},\alpha_{10},\theta)dH(w)\text{ and } \tag{2.5}\] \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g_{0}),\theta)] =\int\phi_{2}(w,h_{0},\alpha_{20},\theta)dH(w), \tag{2.6}\] on top of the zero mean and square integrability conditions (see equation 2.3). We therefore have that the IF accounting for both the first and second step is \(\phi(w,g,h,\alpha,\theta)=\phi_{1}(w,g,\alpha_{1},\theta)+\phi_{2}(w,h,\alpha_ {2},\theta)\), with \(\alpha=(\alpha_{1},\alpha_{2})\). We now provide the orthogonality conditions that will serve as a basis for the automatic estimation of the nuisance parameters \(\alpha_{01}\) and \(\alpha_{02}\). Define the following moment conditions: \(\psi_{1}(w,g,\alpha_{1},\theta)\equiv m(w,g,h(F_{0},g),\theta)+\phi_{1}(w,g, \alpha_{1},\theta)\) for the first step, and \(\psi_{2}(w,h,\alpha_{2},\theta)\equiv m(w,g_{0},h,\theta)+\phi_{2}(w,h,\alpha_{ 2},\theta)\) for the second step. We note here that, in general, \(\psi\neq\psi_{1}+\psi_{2}\). Applying separately Theorem 1 in CEINR to \(\psi_{1}\) and \(\psi_{2}\) one gets \[\frac{d}{d\tau}\mathbb{E}[\psi_{1}(W,g(F_{\tau}),\alpha_{1}(F_{\tau}),\theta)] =0\text{ and }\frac{d}{d\tau}\mathbb{E}[\psi_{2}(W,h(F_{\tau},g_{0}),\alpha_{ 2}(F_{\tau}),\theta)]=0.\] Since \(\Gamma_{1}\) and \(\Gamma_{2}(g_{0})\) are linear, the above equations then mean that, for all \(\theta\in\Theta\), \[\begin{split}\frac{d}{d\tau}\mathbb{E}[\psi_{1}(W,g_{0}+\tau \delta_{1},\alpha_{10},\theta)]&=0\text{ for all }\delta_{1}\in\Gamma_{1}\text{ and }\\ \frac{d}{d\tau}\mathbb{E}[\psi_{2}(W,h_{0}+\tau\delta_{2}, \alpha_{20},\theta)]&=0\text{ for all }\delta_{2}\in\Gamma_{2}(g_{0}).\end{split} \tag{2.7}\] This result comes from applying Theorem 3 in CEINR. Here \(\delta_{1}\) represents a possible direction of deviation of \(g(F)\) from \(g_{0}\). In turn, \(\delta_{2}\) represents a possible deviation of \(h(F,g_{0})\) from \(h_{0}\). The parameter \(\tau\) is the size of a deviation. The innovation with respect to CEINR is that we can compute the IF \(\phi\) by separately studying \(\psi_{1}\) and \(\psi_{2}\), corresponding to the first and second steps, respectively. ## 3 Automatic estimation of the nuisance parameters The debiased moments require a consistent estimator \(\hat{\alpha}\) of the nuisance parameters \(\alpha_{0}\equiv(\alpha_{01},\alpha_{02})\). When the form of \(\alpha_{0}\) is known, one can plug-in nonparametric estimators of the unknown components of \(\alpha_{0}\) to form \(\hat{\alpha}\). In the generated regressors setup, however, the nuisance parameters (specially \(\alpha_{01}\)) have a complex analytical shape (see the result in equation (A.8) in the Appendix, the examples in Section 3.1, and Hahn and Ridder, 2013). 
Therefore, the plug-in estimator for \(\hat{\alpha}\) may behave badly. We propose an alternative approach which uses the orthogonality of \(\psi_{1}\) and \(\psi_{2}\) with respect to \(g\) and \(h\), respectively, to construct estimators of \((\alpha_{10},\alpha_{20})\). This approach does not require to know the form of \(\alpha_{0}\), it is "automatic" in only requiring the orthogonal moment functions and data for construction of \(\hat{\alpha}\). Moreover, an automatic estimator can be constructed separately for each step. For more details, we refer to Section 3.2. To build the automatic estimator, we need two ingredients: (i) a consistent estimator of the linearization of the moment condition and (ii) the shape of the first and second step IFs (up to \(\alpha_{1}\) and \(\alpha_{2}\), respectively). We therefore start by deriving these two ingredients. ### First and Second Step Linearization We start with the linearization of the second step effect. This result is well established in the literature and will follow immediately if \(\bar{m}(g_{0},h,\theta)\) can be linearized in \(h\) (as in Newey, 1994, Equation 4.1). The shape of the influence function can be found by applying the results in Ichimura and Newey (2022). Before introducing the result, we note that throughout this section (i) \(\tau\mapsto h_{\tau}\) denotes a differentiable path, i.e., \(0\mapsto h_{0}\) and \(dh_{\tau}/d\tau\) exists (equivalently for \(g_{\tau}\)) and (ii) \(H\) is regular in the sense that, for \(F_{\tau}\equiv(1-\tau)F_{0}+\tau H\), \(g(F_{\tau})\) is a differentiable path in \(L_{2}(Z)\), and \(h(F_{\tau},g_{0})\) and \(h(F_{0},g(F_{\tau}))\) are differentiable paths in \(L_{2}(X,V)\). **Proposition 3.1**: _Under the following assumption:_ **(A1)**: _There exists a function_ \(D_{2}(w,h)\)_, linear and continuous in_ \(h\)_, such that_ \(d\bar{m}(g_{0},h_{\tau},\theta)/d\tau=d\mathbb{E}[D_{2}(W,h_{\tau})]/d\tau\)_, for every_ \(\theta\in\Theta\)_._ _We have that:_ **(Lin)**: _We can linearize the effect of the second step estimation:_ \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g_{0}),\theta)]=\frac{d}{d\tau} \mathbb{E}[D_{2}(W,h(F_{\tau},g_{0}))].\] **(IF)**: _There exists an_ \(\alpha_{02}\in\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\) _such that the function_ \[\phi_{2}(w,h_{0},\alpha_{02},\theta)=\alpha_{02}(x,\varphi(d,z,g_{0}))\cdot \{y-h_{0}(x,\varphi(d,z,g_{0}))\},\] _satisfies equation (2.5) and is thus the Second Step IF._ We note that, since \(\bar{m}\) is linearized at \((g_{0},h_{0},\theta)\), \(D_{2}\) (and also \(\alpha_{02}\)) may also depend on \((g_{0},h_{0},\theta)\). This is omitted for notational simplicity, but will became relevant to construct feasible automatic estimators (see Section 4). We now find the linearization of \(\bar{m}(g_{0},h,\theta)\) in some examples: **Example 1** (continuing from p. 4): Assumption (A1) is easy to check for the CASF. Since \(m(w,g_{0},h,\theta)\) is already linear, we have that \[D_{2}(w,h)=\int h(x^{*},\varphi(d,z,g_{0}))dF^{*}(x^{*}).\] In this case, we can also compute \(r_{2}\), the Riesz Representer of \(\mathbb{E}[D_{2}(W,h)]\). In the present example, since \(\Gamma_{2}(g_{0})=L_{2}(g_{0})\) (non-parametric regression) and \(r_{2}\in L_{2}(g_{0})\), we have that \(\alpha_{02}=r_{2}\) (see equation (A.2) in the Appendix for the definition of \(\alpha_{02}\)). To find it, we follow Perez-Izquierdo (2022) and assume the existence of densities \(f^{*}\), \(f_{0}^{v}\) and, \(f_{0}^{xv}\) for \(F^{*}\), \(F_{0}^{v}\) and \(F_{0}^{xv}\) respectively. 
Here \(F_{0}^{v}\) and \(F_{0}^{xv}\) denote the distribution under \(F_{0}\) of \(V\) and \((X,V)\), respectively. We then have that \[\mathbb{E}[D_{2}(W,h)] =\int h(x^{*},v)f^{*}(x^{*})f_{0}^{v}(v)dx^{*}dv=\int\frac{f^{*}(x ^{*})f_{0}^{v}(v)}{f_{0}^{xv}(x^{*},v)}h(x^{*},v)f_{0}^{xv}(x^{*},v)dx^{*}dv\] \[=\mathbb{E}[r_{2}(X,V)h(X,V)],\] with \(r_{2}(x,v)\equiv f^{*}(x^{*})f_{0}^{v}(v)/f_{0}^{xv}(x^{*},v)\). Note that, even if we have found the nuisance parameter \(\alpha_{02}=r_{2}\), it has a rather complex shape. It depends on the density of the generated regressor \(V\) and on the joint density of \((X,V)\). These objects are generally hard to estimate and may cause the plug-in estimator for \(r_{2}\) to behave poorly. We advocate automatic estimation (Section 3.2) as a potential solution to this issue. \(\blacksquare\) Example 3(Hahn and Ridder (2013)' Setup)This example discusses the non-parametric setup in Hahn and Ridder (2013, Th. 5). Our theory generalizes their results in two ways: (i) we will allow for a wider range of generated regressors \(\varphi(D,Z,g_{0})\) and (ii) we consider a larger class of moment conditions. The authors focus on the case where there is a function \(\eta\colon\mathbb{R}^{\dim(W)+1}\to\mathbb{R}\) such that \[m(w,g,h,\theta)=\eta(w,h(x,g(z)))-\theta.\] That is, in Hahn and Ridder (2013)'s setup, \((g,h)\) enters the moment condition by the values that the "link" function \(\eta\), with domain in an Euclidean space, takes at \((w,h(x,g(z)))\). Note that they fix \(\varphi(d,z,g)=g(z)\) and that their Theorem 5 covers the fully non-parametric case: \(\Gamma_{1}=L_{2}(Z)\) and \(\Gamma_{2}(g)=\{\delta(x,g(z))\colon\delta\in L_{2}(X,V)\}\) (other results in Hahn and Ridder, 2013, cover parametric first steps, but not the semiparametric case as in equation 2.1). We start by linearizing the moment condition in \(h\). To do it, we assume that \(\eta\) is differentiable w.r.t. \(y\). In that case, as long as we can interchange differentiation and integration: \[\frac{d}{d\tau}\bar{m}(g_{0},h_{\tau},\theta) =\mathbb{E}\left[\frac{d}{d\tau}\eta(W,g_{\tau}(X,g_{0}(Z)))\right]\] \[=\mathbb{E}\left[\frac{\partial\eta}{\partial y}(W,h_{0}(X,g_{0} (Z)))\frac{d}{d\tau}h_{\tau}(X,g_{0}(Z))\right]\] \[=\frac{d}{d\tau}\mathbb{E}\left[\frac{\partial\eta}{\partial y}( W,h_{0}(X,g_{0}(Z)))h_{\tau}(X,g_{0}(Z))\right],\] so that \(D_{2}(w,h)=\partial\eta/\partial y(w,h_{0}(x,g_{0}(z)))\cdot h(x,g_{0}(z))\). In the fully non-parametric case, the second step nuisance parameter is the Riesz Representer of \(\mathbb{E}[D_{2}(W,h)]\). This is given by the expectation of \(\partial\eta/\partial y(W,h_{0}(X,g_{0}(Z)))\) conditional on \((X,V)\). \(\blacksquare\) We now move to linearize the first step effect. Note that if the chain rule can be applied: \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta) =\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h_{0},\theta) \tag{3.1}\] \[+\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta).\] The first derivative in the RHS can be easily analyzed if we linearize \(\bar{m}(g,h_{0},\theta)\) in \(g\) (see Assumption (A2) in Theorem 3.1 below). To study \(d\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta)/d\tau\) we proceed as in Lemma 1 in Hahn and Ridder (2013). Our extension of the lemma to allow for semiparametric second steps is based on Assumption (A3) in Theorem 3.1. The assumption is discussed below. 
Under Assumption (A3), we have that \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta) =-\frac{d}{d\tau}\mathbb{E}[\alpha_{02}(X,V)h_{0}(X,\varphi(D,Z,g( F_{\tau})))]\] \[+\frac{d}{d\tau}\mathbb{E}\left[\alpha_{02}(X,\varphi(D,Z,g(F_{ \tau})))\cdot(Y-h_{0}(X,V))\right].\] Therefore, the remaining step to linearize the moment condition in \(g\) is to linearize the terms \(h_{0}(X,\varphi(D,Z,g(F_{\tau})))\) and \(\alpha_{02}(X,\varphi(D,Z,g(F_{\tau})))\). To achieve this, we require \(h_{0}\), \(\alpha_{0}\), and \(\varphi\) to be differentiable in the appropriate sense (see Assumption (A4) bellow). **THeorem** **3.1**: _Consider that Assumption (A1) holds and:_ **(A2)**: _There exists a function_ \(D_{11}(w,g)\)_, linear and continuous in_ \(g\)_, such that_ \(d\bar{m}(g_{\tau},h_{0},\theta)/d\tau=d\mathbb{E}[D_{11}(W,g_{\tau})]/d\tau\)_, for every_ \(\theta\in\Theta\)_._ **(A3)**: _For every_ \(g\in\Gamma_{1}\) _and_ \(\delta\in L_{2}(X,V)\)_, we have that_ \(\delta(\cdot,\varphi(\cdot,\cdot,g))\in\Gamma_{2}(g)\Leftrightarrow\delta( \cdot,\varphi(\cdot,\cdot,g_{0}))\in\Gamma_{2}(g_{0})\)_._ **(A4)**: \(h_{0}\) _and_ \(\alpha_{02}\) _are differentiable w.r.t._ \(v\)_. Moreover, the function_ \(\varphi(d,z,g)\)_, understood as a mapping from_ \(L_{2}(Z)\) _to_ \(L_{2}(D,Z)\)_, is Hadamard differentiable at_ \(g_{0}\)_, with derivative_ \(D_{\varphi}\)_._ _Then, we have that:_ **(Lin)**: _The function_ \[D_{1}(w,g)\equiv D_{11}(w,g)+\frac{\partial}{\partial v}\left[\alpha_{02}(x,v )(y-h_{0}(x,v))\right]\cdot D_{\varphi}g. \tag{3.2}\] _where the derivative is evaluated at_ \(v=\varphi(d,z,g_{0})\)_, satisfies_ \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta)]=\frac{d}{d \tau}\mathbb{E}[D_{1}(W,g(F_{\tau}))].\] **(IF)**: _There exists an_ \(\alpha_{01}\in\Gamma_{1}\) _such that the function_ \[\phi_{1}(w,g_{0},\alpha_{01},\theta)=\alpha_{01}(z)\cdot\{d-g_{0}(z)\},\] _satisfies equation (2.6) and is thus the First Step IF._ Some comments are in order. Assumption (A3) simply means that the functions in \(\Gamma_{2}(g)\) (at least those that only depend on \((X,V)\)) have the same shape. It does not rule out any relevant case, up to our knowledge. For instance, the general case in which \(\Gamma_{2}(g)=\{(d,z)\mapsto\delta(x,\varphi(d,z,g))\colon\delta\in\Gamma\}\), for \(\Gamma\) a linear subspace of \(L_{2}(X,V)\) satisfies the assumption. When \(\Gamma=L_{2}(X,V)\) (i.e., \(\Gamma_{2}(g)=L_{2}(g)\)), the second step is a non-parametric regression on \(X\) and the generated regressor. One can also take \(\Gamma=\{\beta^{\prime}x+\eta(v)\colon\beta\in\mathbb{R}^{\dim(X)},\eta\in L_ {2}(V)\}\) to specify a partly linear model for the second step (Robinson, 1988). What Assumption (A3) rules out is to specify a partly linear model for some \(g\)'s and a non-parametric regression for others. We also note that Assumption (A3) also covers the case in which \(\Gamma_{2}(g)=L_{2}(D,Z)\), as in Escanciano et al. (2016, 2014). Regarding Assumption (A4), the Haddamard derivative of \(\varphi\) is a linear and continuous map \(D_{\varphi}\colon L_{2}(Z)\to L_{2}(D,Z)\) such that \[\frac{d}{d\tau}\varphi(d,z,g_{\tau})=\frac{d}{d\tau}D_{\varphi}g_{\tau}.\] Usually, either \(\varphi(d,z,g)=g(z)\) (first step prediction) or \(\varphi(d,z,g)=d-g(z)\) (first step residual). In those cases, \(D_{\varphi}g=g\) or \(D_{\varphi}g=-g\), respectively. The linearization of the first step effect is a rather complex function (see its definition in 3.2). 
The first term corresponds to the linearization of the _direct_ effect of \(g\). It is given by \(D_{11}\), the linearization of \(d\bar{m}(g,h_{0},\theta)/d\tau\). The second term corresponds to the _indirect_ effect. Consistent estimation of the second term requires estimators for (i) \(g_{0}\), (ii) \(h_{0}\), (iii) \(\partial h_{0}/\partial v\), (iv) \(\alpha_{02}\), and (v) \(\partial\alpha_{02}/\partial v\). In Section 3.2, we propose an automatic estimator of the second step nuisance parameter, \(\alpha_{02}\). We can then plug it in to construct an automatic estimator of the first step nuisance parameter. An estimator for \(\partial h_{0}/\partial v\) is discussed in Section 4. We conclude the section by finding \(D_{1}\) for several examples:

**Example 1** (continuing from p. 9): The Control Function setup introduced in this paper satisfies Assumption (A3). In addition, as discussed above, our result also covers the setup in which it is assumed that \(U|X,Z\sim U|X,V\sim U|V\) (see Blundell and Powell, 2003, 2004). In that case, since \(h_{0}(X,\varphi(D,Z,g_{0}))=\mathbb{E}[Y|D,Z]\), we would have that \(\Gamma_{2}(g)=L_{2}(D,Z)\) for every \(g\). Moreover, the Control Function approach we follow here uses the residual of the first step to control for potential endogeneity. Thus, \(\varphi(d,z,g)=d-g(z)\) and its linearization is \(D_{\varphi}g=-g\). Provided that \(h_{0}\) is differentiable w.r.t. \(v\) (Assumption (A4)), this allows us to linearize, w.r.t. \(g\), the moment condition defining the CASF. We have that: \[\frac{d}{d\tau}\bar{m}(g_{\tau},h_{0},\theta)=\frac{d}{d\tau}\mathbb{E}\left[\int h_{0}(x^{*},\varphi(D,Z,g_{\tau}))dF^{*}(x^{*})-\theta\right]\] \[=\mathbb{E}\left[\int\frac{d}{d\tau}h_{0}(x^{*},\varphi(D,Z,g_{\tau}))dF^{*}(x^{*})\right]\] \[=\mathbb{E}\left[\int\frac{\partial h_{0}}{\partial v}(x^{*},\varphi(D,Z,g_{0}))\frac{d}{d\tau}\varphi(D,Z,g_{\tau})dF^{*}(x^{*})\right]\] \[=\frac{d}{d\tau}\mathbb{E}\left[-\int\frac{\partial h_{0}}{\partial v}(x^{*},\varphi(D,Z,g_{0}))dF^{*}(x^{*})g_{\tau}(Z)\right].\] This means that the linearization of the moment condition w.r.t. \(g\) is \(D_{11}(w,g)=D_{11}(d,z)g(z)\), with \[D_{11}(d,z)\equiv-\int\frac{\partial h_{0}}{\partial v}(x^{*},d-g_{0}(z))dF^{*}(x^{*}).\] We can now plug in the expression for \(D_{11}\) into equation (3.2), where the linearization of the first step effect is defined. Recall that \(D_{\varphi}g=-g\). Then, for the CASF, equation (3.2) becomes \[D_{1}(w,g)\equiv\left\{D_{11}(d,z)+\frac{\partial}{\partial v}\left[\alpha_{02}(x,v)(y-h_{0}(x,v))\right]\right\}g(z).\] As discussed above, the linearization depends on \(h_{0}\) and \(\alpha_{02}\) and the derivatives of these functions w.r.t. \(v\). It also depends on \(g_{0}\), as \(v\equiv d-g_{0}(z)\). Section 3.2 discusses how to construct an automatic estimator for the second step nuisance parameter \(\alpha_{02}\), which we can later use to compute its derivative. Finding an estimator of the derivative of \(h_{0}\) will depend on the estimator at hand. In Section 4 we propose a numerical derivative approach that works for a variety of second step estimators, such as Random Forest. \(\blacksquare\)

**Example 3** (continuing from p. 10): Theorem 3.1 generalizes Theorem 5 in Hahn and Ridder (2013) to allow for (i) generated regressors given by arbitrary Hadamard differentiable functions \(\varphi\) and (ii) arbitrary functionals \(\bar{m}(g,h,\theta)\) that are Hadamard differentiable w.r.t. \(g\) and \(h\).
We show how the expression for \(D_{1}\) simplifies to that in Hahn and Ridder (2013, Th. 5). We start by linearizing \(\bar{m}(g,h_{0},\theta)\) w.r.t. \(g\). Note that Hahn and Ridder (2013), in the nonparametric case, fix \(\varphi(d,z,g)=g(z)\). Then, \(D_{\varphi}g=g\). On top of \(\eta\) being differentiable w.r.t. \(y\), we require \(h_{0}\) to be differentiable w.r.t. \(v\) (Assumption (A4)). Then: \[\frac{d}{d\tau}\bar{m}(g_{\tau},h_{0},\theta) =\mathbb{E}\left[\frac{\partial\eta}{\partial y}(W,h_{0}(X,g_{0}( Z)))\frac{d}{d\tau}h_{0}(X,g_{\tau}(Z))\right]\] \[=\mathbb{E}\left[\frac{\partial\eta}{\partial y}(W,h_{0}(X,g_{0}( Z)))\frac{\partial h_{0}}{\partial v}(X,g_{0}(Z))\frac{d}{d\tau}g_{\tau}(Z)\right]\] \[=\frac{d}{d\tau}\mathbb{E}\left[\frac{\partial\eta}{\partial y} (W,h_{0}(X,g_{0}(Z)))\frac{\partial h_{0}}{\partial v}(X,g_{0}(Z))g_{\tau}(Z) \right],\] and therefore \(D_{11}(w,g)=\partial\eta/\partial y(w,h_{0}(x,g_{0}(z)))\cdot\partial h_{0}/ \partial v(x,g_{0}(z))\cdot g(z)\). Recall from the previous discussion that the Second Step nuisance parameter satisfies: \[\alpha_{02}(x,v)=\mathbb{E}\left[\left.\frac{\partial\eta}{\partial y}(W,h_{0}( X,g_{0}(Z)))\right|X=x,g_{0}(Z)=v\right].\] So, if we denote \(\xi(w)\equiv\partial\eta/\partial y(w,h_{0}(x,g_{0}(z)))\), equation (3.2) becomes: \[D_{1}(w,g)\equiv\left\{(y-h_{0}(x,v))\cdot\frac{\partial\alpha_{02}}{\partial v }(x,v)+(\xi(w)-\alpha_{02}(x,v))\cdot\frac{\partial h_{0}}{\partial v}(x,v) \right\}g(z),\] where \(v\equiv g_{0}(z)\). This is the result in Hahn and Ridder (2013, Th. 5). Moreover, note that \(\alpha_{02}(x,v)=\mathbb{E}[\xi(W)|X=x,V=v]\). Then, if \(\xi\) is only a function of \((x,v)\), the second term in the above equation cancels out. This is the case in Theorem 2 in Hahn and Ridder (2013). There, \(\eta\colon\mathbb{R}\to\mathbb{R}\), and therefore, \(\xi(w)=\partial\eta/\partial y(h_{0}(x,v))\) is a function of \((x,v)\). \(\blacksquare\) ### Building the automatic estimators Equations (2.7) can be thought of as a population moment condition for \((\alpha_{01},\alpha_{02})\) for each \((\delta_{1},\delta_{2})\in\Gamma_{1}\times\Gamma_{2}(g_{0})\). We start with the procedure to automatically estimate \(\alpha_{02}\), the nuisance parameter of the Second Step IF. We want to stress, nevertheless, that the procedure is quite general. Indeed, we will also apply it, _mutatis mutandis_, to the estimation of the nuisance parameter in the First Step IF. The starting point is to expand the second equation in (2.7). For \(\delta_{2}\in\Gamma_{2}(g_{0})\), \[\frac{d}{d\tau}\bar{m}(g_{0},h_{0}+\tau\delta_{2},\theta)+\frac{d}{d\tau} \mathbb{E}[\phi_{2}(W,h_{0}+\tau\delta_{2},\alpha_{20},\theta)]=0.\] We will now combine the above equation with Proposition 3.1. By continuity and linearity of \(D_{2}\), we have that \[\frac{d}{d\tau}\bar{m}(g_{0},h_{0}+\tau\delta_{2},\theta)=\frac{d}{d\tau} \mathbb{E}[D_{2}(W,h_{0}+\tau\delta_{2})]=\mathbb{E}[D_{2}(W,\delta_{2})],\ \text{for any}\ \delta_{2}\in\Gamma_{2}(g_{0}).\] Moreover, Proposition 3.1 gives us that \(\phi_{2}=\alpha_{02}(y-h_{0})\). Thus, for any \(\delta_{2}\in\Gamma_{2}(g_{0})\), \[\frac{d}{d\tau}\mathbb{E}[\phi_{2}(W,h_{0}+\tau\delta_{2},\alpha_{20},\theta) ]=-\mathbb{E}[\delta_{2}(D,Z)\alpha_{20}(X,V)].\] We note here that a sufficient condition to compute the derivative of \(\phi_{2}\) w.r.t. \(\tau\) is that \(\phi_{2}\) is affine and continuous in \(h\). 
This means that, for each \(\delta_{2}\in\Gamma_{2}(g_{0})\), the second equation in (2.7) leads to a moment condition for \(\alpha_{02}\): \[\mathbb{E}[D_{2}(W,\delta_{2})]-\mathbb{E}[\delta_{2}(D,Z)\alpha_{02}(X,V)]=0,\ \text{for each}\ \delta_{2}\in\Gamma_{2}(g_{0}). \tag{3.3}\] Since \(\mathbb{E}[D_{2}(W,\delta_{2})]\) is a linear functional, we will have a Riesz Representer \(r_{2}\in L_{2}(X,V)\) that expresses the first term above as the \(L_{2}\) scalar product. This means that the above conditions are projection moment conditions. Indeed, they embed the notion that \(\alpha_{02}\) is the projection of \(r_{2}\) onto \(\Gamma_{2}(g_{0})\). However, the usefulness of the conditions in (3.3) is that they do not require finding the Riesz Representer. They are based on a linearization of the moment condition, which is generally easier to find. We now assume that there is a dictionary \((b_{j})_{j=1}^{\infty}\), with \(b_{j}\in\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\), whose closed linear span is \(\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\). That is, any function in \(\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\) can be approximated, in the \(L_{2}\) sense, by a linear combination of the \(b_{j}\)'s. Then, there exists a sequence of real numbers \((\rho_{j})_{j=1}^{\infty}\) such that \(\alpha_{02}=\sum_{j=1}^{\infty}\rho_{j}b_{j}\). Thus, \(\alpha_{02}\) can be approximated by \(\mathbf{b}_{J}^{\prime}\boldsymbol{\rho}_{J}\), where \(\mathbf{b}_{J}=(b_{1},...,b_{J})^{\prime}\) and \(\boldsymbol{\rho}_{J}=(\rho_{1},...,\rho_{J})^{\prime}\). We can now plug in \(\mathbf{b}_{J}^{\prime}\boldsymbol{\rho}_{J}\) into equation (3.3) for \(\delta_{2}=b_{j}\), \(j=1,...,J\). This gives the following \(J\) moment conditions: \[\mathbb{E}[\mathbf{b}_{J}(X,V)\mathbf{b}_{J}(X,V)^{\prime}]\boldsymbol{\rho}_{J}=\mathbb{E}[D_{2}(W,\mathbf{b}_{J})],\] where \(D_{2}(w,\mathbf{b}_{J})\equiv(D_{2}(w,b_{1}),...,D_{2}(w,b_{J}))^{\prime}\). The above moment conditions can be used to construct an OLS-like estimator of \(\boldsymbol{\rho}\). Note, however, that in high dimensional settings \(\mathbb{E}[\mathbf{b}_{J}(X,V)\mathbf{b}_{J}(X,V)^{\prime}]\) may be near singular. Therefore, we rather focus on a regularized estimator for \(\boldsymbol{\rho}\). Note that the moment conditions are the first order conditions of the minimization problem: \[\min_{\boldsymbol{\rho}_{J}\in\mathbb{R}^{J}}\left\{-2\mathbb{E}[D_{2}(W,\mathbf{b}_{J})^{\prime}]\boldsymbol{\rho}_{J}+\boldsymbol{\rho}_{J}^{\prime}\mathbb{E}[\mathbf{b}_{J}(X,V)\mathbf{b}_{J}(X,V)^{\prime}]\boldsymbol{\rho}_{J}\right\}.\] We can regularize the problem by adding a penalty to the above objective function. Let \(\|\boldsymbol{\rho}_{J}\|_{q}\equiv(\sum_{j=1}^{J}|\rho_{j}|^{q})^{1/q}\) for \(q\geq 1\). For a tuning parameter \(\lambda\geq 0\), we can estimate \(\boldsymbol{\rho}\) by minimizing: \[\min_{\boldsymbol{\rho}_{J}\in\mathbb{R}^{J}}\left\{-2\mathbb{E}[D_{2}(W,\mathbf{b}_{J})^{\prime}]\boldsymbol{\rho}_{J}+\boldsymbol{\rho}_{J}^{\prime}\mathbb{E}[\mathbf{b}_{J}(X,V)\mathbf{b}_{J}(X,V)^{\prime}]\boldsymbol{\rho}_{J}+\lambda\|\boldsymbol{\rho}_{J}\|_{q}^{q}\right\}. \tag{3.4}\] For \(q=1\), the above is the Lasso objective function, while \(q=2\) corresponds to Ridge Regression. Additionally, we could consider elastic net type penalties, where \(\lambda(\xi\|\boldsymbol{\rho}_{J}\|_{2}^{2}+(1-\xi)\|\boldsymbol{\rho}_{J}\|_{1})\), for \(\xi\in[0,1]\), is added to the objective function. A minimal numerical sketch of the penalized problem in (3.4) is given below.
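To make the construction concrete, the following is a minimal numerical sketch of the penalized problem in equation (3.4) for the ridge case (\(q=2\)), where the objective is a quadratic and the minimizer solves \((\hat{B}+\lambda I)\boldsymbol{\rho}_{J}=\hat{D}_{2}\). It is specialized to the CASF, for which \(D_{2}(w,b_{j})=\int b_{j}(x^{*},\varphi(d,z,g_{0}))dF^{*}(x^{*})\); the polynomial dictionary, the simulated data and the draws from \(F^{*}\) are illustrative assumptions only.

```python
# Minimal sketch of the ridge-penalized (q = 2) problem in (3.4), specialized to
# the CASF linearization D_2(w, b_j) ~ (1/S) sum_s b_j(X*_s, v).  The dictionary,
# the simulated (X, V) and the draws from F* below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, S, lam = 500, 200, 0.1              # sample size, draws from F*, penalty

X = rng.normal(size=n)                 # toy regressor
V = rng.normal(size=n)                 # toy generated regressor (first-step residual)
X_star = rng.normal(loc=0.5, size=S)   # draws from a counterfactual distribution F*

def b_J(x, v):
    """Polynomial dictionary b_J(x, v) = (1, x, v, xv, x^2, v^2)."""
    return np.stack([np.ones_like(x), x, v, x * v, x**2, v**2], axis=-1)

J = b_J(X, V).shape[-1]

# Sample analogues of E[b_J b_J'] and E[D_2(W, b_J)]
B_hat = b_J(X, V).T @ b_J(X, V) / n
D2_hat = np.mean([b_J(X_star, np.full(S, v)).mean(axis=0) for v in V], axis=0)

# First-order condition of -2 D2'rho + rho'B rho + lam ||rho||^2: (B + lam I) rho = D2
rho_J = np.linalg.solve(B_hat + lam * np.eye(J), D2_hat)

def alpha_02(x, v):
    """Automatic estimate of the second-step nuisance parameter, b_J'rho_J."""
    return b_J(x, v) @ rho_J

print("rho_J:", np.round(rho_J, 3), " alpha_02(0,0):", round(float(alpha_02(0.0, 0.0)), 3))
```

For \(q=1\) (Lasso) or an elastic-net penalty, the same sample objects \(\hat{D}_{2}\) and \(\hat{B}\) would instead be passed to a soft-thresholding or coordinate-descent solver.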
We now propose an automatic estimator of \(\alpha_{01}\), the nuisance parameter of the First Step IF. The procedure is parallel to that proposed above. By Theorem 3.1, we can linearize \(\bar{m}(g,h(F_{0},g),\theta)\) by \(D_{1}(w,g)\) (see equation 3.2). Again, we assume that there is a dictionary \((c_{k})_{k=1}^{\infty}\) that spans \(\Gamma_{1}\). Thus, \(\alpha_{01}=\sum_{k=1}^{\infty}\beta_{k}c_{k}\) for a sequence of real numbers \((\beta_{k})_{k=1}^{\infty}\). We can therefore construct \(K\) moment conditions \[\mathbb{E}[\mathbf{c}_{K}(Z)\mathbf{c}_{K}(Z)^{\prime}]\boldsymbol{\beta}_{K}=\mathbb{E}[D_{1}(W,\mathbf{c}_{K})],\] where \(\mathbf{c}_{K}=(c_{1},...,c_{K})^{\prime}\), \(\boldsymbol{\beta}_{K}=(\beta_{1},...,\beta_{K})^{\prime}\), and \(D_{1}(w,\mathbf{c}_{K})\equiv(D_{1}(w,c_{1}),...,D_{1}(w,c_{K}))^{\prime}\). We use these conditions as a basis to construct the objective function to estimate \(\boldsymbol{\beta}\): \[\min_{\boldsymbol{\beta}_{K}\in\mathbb{R}^{K}}\left\{-2\mathbb{E}[D_{1}(W,\mathbf{c}_{K})^{\prime}]\boldsymbol{\beta}_{K}+\boldsymbol{\beta}_{K}^{\prime}\mathbb{E}[\mathbf{c}_{K}(Z)\mathbf{c}_{K}(Z)^{\prime}]\boldsymbol{\beta}_{K}+\lambda\|\boldsymbol{\beta}_{K}\|_{q}^{q}\right\}, \tag{3.5}\] where the tuning parameter \(\lambda\) may be different from that of the second step. From the above discussion we conclude that automatic estimation of the first and second step nuisance parameters reduces to finding consistent estimators of \(\mathbb{E}[D_{2}(W,\mathbf{b}_{J})]\) and \(\mathbb{E}[D_{1}(W,\mathbf{c}_{K})]\). We note that, in general, both \(D_{2}\) and \(D_{1}\) depend on \((g_{0},h_{0},\theta)\). In the sample moment conditions, these are replaced by cross-fit estimators (Section 4.1). Furthermore, \(D_{1}\) may additionally depend on \(\partial h_{0}/\partial v\), the Second Step nuisance parameter \(\alpha_{02}\), and its derivative \(\partial\alpha_{02}/\partial v\) (see equation 3.2). Estimation of \(\partial h_{0}/\partial v\) is discussed in Section 4.1. Here, we sketch a parsimonious approach to estimate the derivative of \(\alpha_{02}\). Recall that the Second Step nuisance parameter can be approximated by \(\mathbf{b}_{J}^{\prime}\boldsymbol{\rho}_{J}\). We may assume that the atoms \(b_{j}(x,v)\) are differentiable w.r.t. \(v\). We can then replace the nuisance parameter by its approximation \(\mathbf{b}_{J}^{\prime}\boldsymbol{\rho}_{J}\) and its derivative by \((\partial\mathbf{b}_{J}/\partial v)^{\prime}\boldsymbol{\rho}_{J}\) in equation (3.2).

## 4 Estimation

In this section, we build debiased sample moment conditions for GMM estimation of \(\theta\). Debiased sample moments are based on the orthogonal moment function \(\psi\) in equation (2.4). Note that the IF \(\phi\) that corrects for both the first and second step estimation is \(\phi=\phi_{1}+\phi_{2}\), the sum of the First and Second Step IFs. The shape of these functions is given in Theorem 3.1 and Proposition 3.1, respectively. We propose to construct the sample moment conditions using cross-fitting. That is, we split the sample so that \(\psi(W_{i},g,h,\alpha,\theta)\) is averaged over observations \(i\) that are not used to estimate \((g,h,\alpha,\theta)\); a minimal sketch of this sample-splitting scheme is given below.
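As an illustration of the sample-splitting logic only (not of the full debiased estimator), the following minimal sketch partitions the data into \(L\) folds, fits a toy first-step regression on the complement of each fold, and averages a placeholder moment over the held-out observations. The data-generating process and the least-squares "nuisance" fit are assumptions made purely for illustration.

```python
# Minimal sketch of cross-fitting: L folds, nuisances fit on each fold's complement,
# moments averaged over held-out observations.  The toy DGP and the OLS "nuisance"
# fit below are placeholders, not the estimators developed in the paper.
import numpy as np

rng = np.random.default_rng(1)
n, L = 1000, 5
Z = rng.normal(size=n)
D = 0.8 * Z + rng.normal(size=n)               # toy first-step relation E[D|Z]

folds = np.array_split(rng.permutation(n), L)  # index sets I_1, ..., I_L

def fit_g(idx):
    """Toy first step: OLS of D on Z using only the observations in idx."""
    slope, intercept = np.polyfit(Z[idx], D[idx], deg=1)
    return lambda z: slope * z + intercept

psi_pieces = []
for l in range(L):
    held_out = folds[l]
    train = np.concatenate([folds[k] for k in range(L) if k != l])
    g_hat_l = fit_g(train)                      # uses no observation in fold l
    # Placeholder for psi(W_i, g_hat_l, ...), evaluated only on the held-out fold
    psi_pieces.append(D[held_out] - g_hat_l(Z[held_out]))

psi_bar = np.mean(np.concatenate(psi_pieces))
print("cross-fit average of the (placeholder) moment:", round(psi_bar, 4))
```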
Cross-fitting (i) eliminates the "own observation bias", helping remainders to converge faster to zero, and (ii) eliminates the need for Donsker conditions for the estimators of \((g,h,\alpha)\), which is important for first and second step ML estimators (see CEINR; Chernozhukov et al., 2018; Newey and Robins, 2017). We partition the sample \((W_{i})_{i=1}^{n}\) into \(L\) groups \(I_{\ell}\), for \(\ell=1,...,L\). For each group, we have estimators \(\hat{g}_{\ell}\), \(\hat{h}_{\ell}\) and \(\hat{\alpha}_{\ell}=(\hat{\alpha}_{1\ell},\hat{\alpha}_{2\ell})\) that use observations that are not in \(I_{\ell}\). We construct automatic estimators of \(\alpha_{0}\) satisfying this property in Section 4.1. Moreover, for each group, we consider that there is an initial estimator of \(\theta_{0}\), namely \(\tilde{\theta}_{\ell}\), which does not use the observations in \(I_{\ell}\). CEINR propose to choose \(L=5\) for medium-sized datasets and \(L=10\) for small datasets. Following CEINR, debiased sample moment functions are \[\hat{\psi}(\theta)\equiv\hat{m}(\theta)+\hat{\phi}, \tag{4.1}\] with \[\hat{m}(\theta)\equiv\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}m(W_{i},\hat{g}_{\ell},\hat{h}_{\ell},\theta)\text{ and }\hat{\phi}\equiv\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\phi(W_{i},\hat{g}_{\ell},\hat{h}_{\ell},\hat{\alpha}_{\ell},\tilde{\theta}_{\ell}).\] We use these moment functions to construct the debiased GMM estimator: \[\hat{\theta}=\arg\min_{\theta\in\Theta}\hat{\psi}(\theta)^{\prime}\hat{\Upsilon}\hat{\psi}(\theta), \tag{4.2}\] where \(\hat{\Upsilon}\) is a positive semi-definite weighting matrix. As usual in GMM, a choice of \(\hat{\Upsilon}\) that minimizes the asymptotic variance of \(\hat{\theta}\) is \(\hat{\Upsilon}=\hat{\Psi}^{-1}\), for \[\hat{\Psi}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\hat{\psi}_{i\ell}\hat{\psi}_{i\ell}^{\prime},\text{ with }\hat{\psi}_{i\ell}\equiv m(W_{i},\hat{g}_{\ell},\hat{h}_{\ell},\tilde{\theta}_{\ell})+\phi(W_{i},\hat{g}_{\ell},\hat{h}_{\ell},\hat{\alpha}_{\ell},\tilde{\theta}_{\ell}).\] We illustrate the theory with the construction of a debiased GMM estimator for the CASF:

**Example 1** (continuing from p. 12): We focus on the construction of \(\hat{m}(\theta)\). Note that \(\phi_{1}=\alpha_{01}(d-g_{0})\) and \(\phi_{2}=\alpha_{02}(y-h_{0})\) (see Theorem 3.1 and Proposition 3.1, respectively). Thus, finding \(\hat{\phi}\) is straightforward once we have cross-fit estimators for the nuisance parameters (see Section 4.1 for the construction of \(\hat{\alpha}_{1\ell}\) and \(\hat{\alpha}_{2\ell}\)). Recall that the moment function defining the CASF is \[m(w,g,h,\theta)=\int h(x^{*},\varphi(d,z,g))dF^{*}(x^{*})-\theta.\] We take as given that the econometrician has computed cross-fit estimators for the first and second steps: \(\hat{g}_{\ell}\) and \(\hat{h}_{\ell}\). Since the counterfactual distribution \(F^{*}\) is fixed by the econometrician, we propose a numerical integration approach to obtain the debiased sample moments. We consider that the econometrician can sample from \(F^{*}\). Let \((X_{s}^{*})_{s=1}^{S}\) be a sample of size \(S\) from \(F^{*}\). For an observation \(i\in I_{\ell}\), let \(\hat{V}_{i\ell}\equiv\varphi(D_{i},Z_{i},\hat{g}_{\ell})\).
We approximate the value of the moment function \(m(W_{i},\hat{g}_{\ell},\hat{h}_{\ell},\theta)\) by \[\frac{1}{S}\sum_{s=1}^{S}\hat{h}_{\ell}(X_{s}^{*},\hat{V}_{i\ell})-\theta.\] Note that \(S\) may be arbitrarily large (increasing the computational cost), so that the above term is close to \(m(W_{i},\hat{g}_{\ell},\hat{h}_{\ell},\theta)\). Following equations (4.1) and (4.2), the debiased estimator for the CASF is \[\hat{\theta}=\frac{1}{nS}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\sum_{s=1}^{S} \hat{h}_{\ell}(X_{s}^{*},\hat{V}_{i\ell})+\hat{\phi}.\] ### Automatic estimation with cross-fitting Debiased sample moment function require estimators of the nuisance parameters \((\hat{\alpha}_{1\ell},\hat{\alpha}_{2\ell})\) for each group \(I_{\ell}\). These estimators must use only observations not in \(I_{\ell}\). This section is devoted to the construction of automatic estimators satisfying this property. Through the section, we consider that the econometrician has at her disposal first and second step estimators, \(\hat{g}_{\ell\ell^{\prime}}\) and \(\hat{h}_{\ell\ell^{\prime}}\), and an initial estimator, \(\tilde{\theta}_{\ell\ell^{\prime}}\), that use only observations not in \(I_{\ell}\cup I_{\ell^{\prime}}\). The key to automatic estimation of the Second Step nuisance parameter is to find a consistent estimator of the linearization of the moment condition. In this section, we will write \(D_{2}(w,h|g_{0},h_{0},\theta)\) to make explicit that the linearization may depend on \((h_{0},g_{0},\theta)\) (see Examples 1 and 3). For the linearization of the effect of first step estimation, we will write \(D_{1}(w,g|g_{0},h_{0},\alpha_{02},\theta)\), to emphasize that it may also depend on the Second Step nuisance parameter. \(D_{1}\) generally depends also on the derivatives \(\partial h_{0}/\partial v\) and \(\partial\alpha_{02}/\partial v\). We do not make this explicit, but we will address the issue in this section. We start with the automatic estimator for the Second Step nuisance parameter. For each \(\ell\), we provide a sample version of the objective function in (3.4) that uses only observations not in \(I_{\ell}\). Recall that we have a dictionary \((b_{j})_{j=1}^{\infty}\) that spans \(\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\). We estimate \(\mathbb{E}[D_{2}(W,\mathbf{b}_{J})]\) by \[\hat{D}_{2\ell}\equiv\frac{1}{n-n_{\ell}}\sum_{\ell^{\prime}\neq\ell}\sum_{i \in I_{\ell^{\prime}}}D_{2}(W_{i},\mathbf{b}_{J}|\hat{g}_{\ell\ell^{\prime}}, \hat{h}_{\ell\ell^{\prime}},\tilde{\theta}_{\ell\ell^{\prime}}),\] where \(n_{\ell}\) is the number of observations in \(I_{\ell}\). In turn, \(\mathbb{E}[\mathbf{b}_{J}(X,\varphi(D,Z,g_{0}))\mathbf{b}_{J}(X,\varphi(D,Z,g _{0}))^{\prime}]\) is estimated by \[\hat{B}_{\ell}\equiv\frac{1}{n-n_{\ell}}\sum_{\ell^{\prime}\neq\ell}\sum_{i \in I_{\ell^{\prime}}}\mathbf{b}_{J}(X_{i},\varphi(D_{i},Z_{i},\hat{g}_{\ell \ell^{\prime}}))\mathbf{b}_{J}(X_{i},\varphi(D_{i},Z_{i},\hat{g}_{\ell\ell^{ \prime}}))^{\prime}.\] With this, we can build an automatic estimator of the Second Step nuisance parameter that only uses observations not in \(I_{\ell}\). It is given by \(\hat{\alpha}_{2\ell}=\mathbf{b}_{J}^{\prime}\hat{\boldsymbol{\rho}}_{J\ell}\), where \[\hat{\boldsymbol{\rho}}_{J\ell}=\arg\min_{\boldsymbol{\rho}_{J}\in\mathbb{R}^ {J}}\left\{-2\hat{D}_{2\ell}^{\prime}\boldsymbol{\rho}_{J}+\boldsymbol{\rho}_{ J}^{\prime}\hat{B}_{\ell}\boldsymbol{\rho}_{J}+\lambda\|\boldsymbol{\rho}_{J} \|_{q}^{q}\right\}. 
\tag{4.3}\] The tuning parameter \(\lambda\) can be chosen by cross-validation.

**Example 1** (continuing from p. 17): We provide the ingredients to conduct automatic estimation of \(\alpha_{02}\) for the CASF. Recall that the moment condition for the CASF was already linear in \(h\) and hence \[D_{2}(w,h|g_{0},h_{0},\theta)=\int h(x^{*},\varphi(d,z,g_{0}))dF^{*}(x^{*}).\] We follow the same strategy as before and approximate \(D_{2}\) by numerical integration. For a sample \((X_{s}^{*})_{s=1}^{S}\) drawn from \(F^{*}\), we approximate \(D_{2}(W_{i},b_{j}|\hat{g}_{\ell\ell^{\prime}},\hat{h}_{\ell\ell^{\prime}},\tilde{\theta}_{\ell\ell^{\prime}})\) by \[\frac{1}{S}\sum_{s=1}^{S}b_{j}(X_{s}^{*},\varphi(D_{i},Z_{i},\hat{g}_{\ell\ell^{\prime}})),\] for \(j=1,...,J\). This can then be used to construct the objective function to estimate \(\hat{\mathbf{\rho}}_{J\ell}\). \(\quad\blacksquare\)

We now discuss automatic estimation of the First Step nuisance parameter. Again, for each \(\ell\), the goal is to build a sample version of the objective function in (3.5) that uses only observations not in \(I_{\ell}\). The construction is similar to the one above; we focus on the main differences. For a dictionary \((c_{k})_{k=1}^{\infty}\) that spans \(\Gamma_{1}\), we can estimate \(\mathbb{E}[\mathbf{c}_{K}(Z)\mathbf{c}_{K}(Z)^{\prime}]\) by \[\hat{C}_{\ell}\equiv\frac{1}{n-n_{\ell}}\sum_{\ell^{\prime}\neq\ell}\sum_{i\in I_{\ell^{\prime}}}\mathbf{c}_{K}(Z_{i})\mathbf{c}_{K}(Z_{i})^{\prime},\] and \(\mathbb{E}[D_{1}(W,\mathbf{c}_{K})]\) by \[\hat{D}_{1\ell}\equiv\frac{1}{n-n_{\ell}}\sum_{\ell^{\prime}\neq\ell}\sum_{i\in I_{\ell^{\prime}}}D_{1}(W_{i},\mathbf{c}_{K}|\hat{g}_{\ell\ell^{\prime}},\hat{h}_{\ell\ell^{\prime}},\hat{\alpha}_{2\ell\ell^{\prime}},\tilde{\theta}_{\ell\ell^{\prime}}). \tag{4.4}\] The first difference is that \(D_{1}\) depends on \(\alpha_{02}\) on top of \((g_{0},h_{0},\theta)\). We therefore need to plug in an estimator \(\hat{\alpha}_{2\ell\ell^{\prime}}\) that only uses observations not in \(I_{\ell}\cup I_{\ell^{\prime}}\). This estimator can be constructed using the methodology above. The only adjustment needed is that one needs to replace \(I_{\ell}\) by \(I_{\ell}\cup I_{\ell^{\prime}}\) to define \(\hat{D}_{2\ell\ell^{\prime}}\) and \(\hat{B}_{\ell\ell^{\prime}}\). For instance, to construct \(\hat{\alpha}_{2\ell\ell^{\prime}}=\mathbf{b}_{J}^{\prime}\hat{\mathbf{\rho}}_{J\ell\ell^{\prime}}\), it is simple to define the optimization problem that \(\hat{\mathbf{\rho}}_{J\ell\ell^{\prime}}\) solves. Define \(\bar{L}\equiv\{\ell,\ell^{\prime}\}\) and let \(\hat{g}_{\bar{L}\ell^{\prime\prime}}\), \(\hat{h}_{\bar{L}\ell^{\prime\prime}}\), and \(\tilde{\theta}_{\bar{L}\ell^{\prime\prime}}\) be estimators that only use observations not in \(I_{\ell}\cup I_{\ell^{\prime}}\cup I_{\ell^{\prime\prime}}\).
Then, we can define \[\hat{D}_{2\ell\ell^{\prime}}\equiv\frac{1}{n-n_{\ell}-n_{\ell^{\prime}}}\sum_{\ell^{\prime\prime}\notin\bar{L}}\sum_{i\in I_{\ell^{\prime\prime}}}D_{2}(W_{i},\mathbf{b}_{J}|\hat{g}_{\bar{L}\ell^{\prime\prime}},\hat{h}_{\bar{L}\ell^{\prime\prime}},\tilde{\theta}_{\bar{L}\ell^{\prime\prime}})\text{ and }\] \[\hat{B}_{\ell\ell^{\prime}}\equiv\frac{1}{n-n_{\ell}-n_{\ell^{\prime}}}\sum_{\ell^{\prime\prime}\notin\bar{L}}\sum_{i\in I_{\ell^{\prime\prime}}}\mathbf{b}_{J}(X_{i},\varphi(D_{i},Z_{i},\hat{g}_{\bar{L}\ell^{\prime\prime}}))\mathbf{b}_{J}(X_{i},\varphi(D_{i},Z_{i},\hat{g}_{\bar{L}\ell^{\prime\prime}}))^{\prime}.\] Thus, \(\hat{\mathbf{\rho}}_{J\ell\ell^{\prime}}\) is given by the optimization problem in (4.3) with \(\hat{D}_{2\ell\ell^{\prime}}\) and \(\hat{B}_{\ell\ell^{\prime}}\) replacing \(\hat{D}_{2\ell}\) and \(\hat{B}_{\ell}\), respectively. The most important difference is that \(D_{1}\) generally depends also on the derivatives \(\partial h_{0}/\partial v\) and \(\partial\alpha_{02}/\partial v\). In Section 3.2, we have presented a parsimonious approach to estimate the derivative of \(\alpha_{02}\). Indeed, it is simple to construct an estimator \(\partial\hat{\alpha}_{2\ell\ell^{\prime}}/\partial v\) of the derivative of \(\alpha_{02}\) that uses only observations not in \(I_{\ell}\cup I_{\ell^{\prime}}\). Since we have already estimated \(\hat{\alpha}_{2\ell\ell^{\prime}}=\mathbf{b}_{J}^{\prime}\hat{\mathbf{\rho}}_{J\ell\ell^{\prime}}\), if each \(b_{j}\) is differentiable w.r.t. \(v\), we have that \(\partial\hat{\alpha}_{2\ell\ell^{\prime}}/\partial v\equiv(\partial\mathbf{b}_{J}/\partial v)^{\prime}\hat{\mathbf{\rho}}_{J\ell\ell^{\prime}}\). Estimation of \(\partial h_{0}/\partial v\) may be trickier. It will depend on the shape of the estimator \(\hat{h}_{\ell\ell^{\prime}}\). Note that, since \(h_{0}\in\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\), we may use the dictionary \((b_{j})_{j=1}^{\infty}\) to approximate the parameter. In this case, \(\hat{h}_{\ell\ell^{\prime}}\) will be a Lasso or Ridge Regression estimator and we can estimate the derivative of \(h_{0}\) as we have estimated the derivative of \(\alpha_{02}\). Moreover, estimating \(h_{0}\) is usually a low dimensional problem. Hence, when \(\hat{h}_{\ell\ell^{\prime}}\) is a Kernel or a Local Linear Regression estimator, the derivatives of \(h_{0}\) can be estimated by finding the analytical expression of the derivatives of the Kernel Function. For a general ML estimator \(\hat{h}_{\ell\ell^{\prime}}\) (e.g., Random Forest), we propose a numerical derivative approach to estimate \(\partial h_{0}/\partial v\). Let \(t_{n}\) be a tuning parameter depending on the sample size. We propose to estimate \(\partial h_{0}/\partial v(x,v)\) by \[\frac{\partial\hat{h}_{\ell\ell^{\prime}}}{\partial v}(x,v)\equiv\frac{\hat{h}_{\ell\ell^{\prime}}(x,v+t_{n})-\hat{h}_{\ell\ell^{\prime}}(x,v)}{t_{n}}. \tag{4.5}\] Note that, usually, we need to compute the derivative evaluated at \((X_{i},\varphi(D_{i},Z_{i},\hat{g}_{\ell\ell^{\prime}}))\); a minimal numerical sketch of this derivative estimator is given below. We have now seen all the difficulties in estimating \(\hat{D}_{1\ell}\) in equation (4.4). With these solved, we can proceed to construct an automatic estimator of the First Step nuisance parameter.
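Before turning to the First Step estimator, the following is a minimal numerical sketch of the derivative estimator in equation (4.5), applied to a generic machine-learning fit of the second step. The random-forest regression, the simulated data, and the step size are illustrative assumptions.

```python
# Minimal sketch of the numerical-derivative estimator in equation (4.5):
# a forward difference of a fitted second-step regression h_hat in its v argument.
# The random-forest fit, the simulated data and the step t_n are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=n)
V = rng.normal(size=n)
Y = np.sin(V) + 0.5 * X + 0.1 * rng.normal(size=n)     # true dh0/dv = cos(v)

h_hat = RandomForestRegressor(n_estimators=200, min_samples_leaf=25,
                              random_state=0).fit(np.column_stack([X, V]), Y)

def dh_dv(x, v, t_n=0.25):
    """(h_hat(x, v + t_n) - h_hat(x, v)) / t_n, as in equation (4.5)."""
    base = h_hat.predict(np.column_stack([x, v]))
    shifted = h_hat.predict(np.column_stack([x, v + t_n]))
    return (shifted - base) / t_n

v_grid = np.array([-1.0, 0.0, 1.0])
print("numerical:", np.round(dh_dv(np.zeros_like(v_grid), v_grid), 2),
      " true cos(v):", np.round(np.cos(v_grid), 2))
```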
The First Step estimator is then given by \(\hat{\alpha}_{1\ell}=\mathbf{c}_{K}^{\prime}\hat{\boldsymbol{\beta}}_{K\ell}\), where \[\hat{\boldsymbol{\beta}}_{K\ell}=\arg\min_{\boldsymbol{\beta}_{K}\in\mathbb{R}^{K}}\left\{-2\hat{D}_{1\ell}^{\prime}\boldsymbol{\beta}_{K}+\boldsymbol{\beta}_{K}^{\prime}\hat{C}_{\ell}\boldsymbol{\beta}_{K}+\lambda\|\boldsymbol{\beta}_{K}\|_{q}^{q}\right\}. \tag{4.6}\] We illustrate this procedure by constructing an automatic estimator of the First Step nuisance parameter for the CASF:

**Example 1** (continuing from p. 18): From the previous discussion, we have that: \[D_{1}(w,g)=\left\{D_{11}(d,z)+\frac{\partial}{\partial v}\left[\alpha_{02}(x,v)(y-h_{0}(x,v))\right]\right\}g(z),\text{ with }\] \[D_{11}(d,z)\equiv-\int\frac{\partial h_{0}}{\partial v}(x^{*},d-g_{0}(z))dF^{*}(x^{*}).\] We approximate \(D_{11}\) by numerical integration. Let \((X_{s}^{*})_{s=1}^{S}\) be a sample from \(F^{*}\). To estimate \(\hat{D}_{1\ell}\), we approximate \(D_{11}(D_{i},Z_{i})\), with \(i\in I_{\ell^{\prime}}\), by \[-\frac{1}{S}\sum_{s=1}^{S}\frac{\partial\hat{h}_{\ell\ell^{\prime}}}{\partial v}(X_{s}^{*},D_{i}-\hat{g}_{\ell\ell^{\prime}}(Z_{i})).\] To estimate \(\hat{D}_{1\ell}\), it remains to show how to estimate the second term in the brackets, for an observation \(i\in I_{\ell^{\prime}}\). Define \(\hat{V}_{i\ell\ell^{\prime}}\equiv\varphi(D_{i},Z_{i},\hat{g}_{\ell\ell^{\prime}})=D_{i}-\hat{g}_{\ell\ell^{\prime}}(Z_{i})\). Following the chain rule, we can estimate the second term by \[\left(\frac{\partial\mathbf{b}_{J}}{\partial v}(X_{i},\hat{V}_{i\ell\ell^{\prime}})\right)^{\prime}\hat{\boldsymbol{\rho}}_{J\ell\ell^{\prime}}\cdot(Y_{i}-\hat{h}_{\ell\ell^{\prime}}(X_{i},\hat{V}_{i\ell\ell^{\prime}}))-\mathbf{b}_{J}(X_{i},\hat{V}_{i\ell\ell^{\prime}})^{\prime}\hat{\boldsymbol{\rho}}_{J\ell\ell^{\prime}}\cdot\frac{\partial\hat{h}_{\ell\ell^{\prime}}}{\partial v}(X_{i},\hat{V}_{i\ell\ell^{\prime}}). \tag{4.7}\] Therefore, to estimate \(\hat{D}_{1\ell}\) according to equation (4.4), we have that, for \(i\in I_{\ell^{\prime}}\), \[D_{1}(W_{i},c_{k}|\hat{g}_{\ell\ell^{\prime}},\hat{h}_{\ell\ell^{\prime}},\hat{\alpha}_{2\ell\ell^{\prime}},\tilde{\theta}_{\ell\ell^{\prime}})=c_{k}(Z_{i})\cdot\left\{-\frac{1}{S}\sum_{s=1}^{S}\frac{\partial\hat{h}_{\ell\ell^{\prime}}}{\partial v}(X_{s}^{*},\hat{V}_{i\ell\ell^{\prime}})+\left(\frac{\partial\mathbf{b}_{J}}{\partial v}(X_{i},\hat{V}_{i\ell\ell^{\prime}})\right)^{\prime}\hat{\boldsymbol{\rho}}_{J\ell\ell^{\prime}}\cdot(Y_{i}-\hat{h}_{\ell\ell^{\prime}}(X_{i},\hat{V}_{i\ell\ell^{\prime}}))-\mathbf{b}_{J}(X_{i},\hat{V}_{i\ell\ell^{\prime}})^{\prime}\hat{\boldsymbol{\rho}}_{J\ell\ell^{\prime}}\cdot\frac{\partial\hat{h}_{\ell\ell^{\prime}}}{\partial v}(X_{i},\hat{V}_{i\ell\ell^{\prime}})\right\},\] for each \(k=1,...,K\). This can then be used to construct the objective function to estimate \(\hat{\boldsymbol{\beta}}_{K\ell}\). \(\blacksquare\)

### Estimation Algorithm

Here we provide an overview of our estimation algorithm. The inputs to the algorithm are cross-fit estimators of \(g_{0}\), \(h_{0}\) and \(\theta_{0}\). We note that one must provide a total of \(L\) estimators only using observations not in \(I_{\ell}\), \(L(L-1)/2\) estimators only using observations not in \(I_{\ell}\cup I_{\ell^{\prime}}\), and \(L(L-1)(L-2)/6\) estimators only using observations not in \(I_{\ell}\cup I_{\ell^{\prime}}\cup I_{\ell^{\prime\prime}}\). With these estimators at hand, we follow this algorithm: 1.
Estimate \(\hat{\alpha}_{2\ell\ell^{\prime}}=\mathbf{b}_{J}^{\prime}\hat{\boldsymbol{\rho}}_{J\ell\ell^{\prime}}\), with \(\hat{\boldsymbol{\rho}}_{J\ell\ell^{\prime}}\) satisfying the optimization problem in (4.3), with \(\hat{D}_{2\ell\ell^{\prime}}\) and \(\hat{B}_{\ell\ell^{\prime}}\) replacing \(\hat{D}_{2\ell}\) and \(\hat{B}_{\ell}\), respectively. 2. Estimate \(\partial\hat{h}_{\ell\ell^{\prime}}/\partial v\) by equation (4.5). 3. Use \(\hat{g}_{\ell\ell^{\prime}}\), \(\hat{h}_{\ell\ell^{\prime}}\), \(\partial\hat{h}_{\ell\ell^{\prime}}/\partial v\), \(\hat{\alpha}_{2\ell\ell^{\prime}}\), \(\partial\hat{\alpha}_{2\ell\ell^{\prime}}/\partial v\), and \(\tilde{\theta}_{\ell\ell^{\prime}}\) to construct \(\hat{D}_{1\ell}\) (see equation 3.2). Estimate \(\hat{\alpha}_{1\ell}=\mathbf{c}_{K}^{\prime}\hat{\boldsymbol{\beta}}_{K\ell}\), with \(\hat{\boldsymbol{\beta}}_{K\ell}\) satisfying equation (4.6). 4. Use \(\hat{g}_{\ell\ell^{\prime}}\), \(\hat{h}_{\ell\ell^{\prime}}\), and \(\tilde{\theta}_{\ell\ell^{\prime}}\) to construct \(\hat{D}_{2\ell}\). Estimate \(\hat{\alpha}_{2\ell}=\mathbf{b}_{J}^{\prime}\hat{\boldsymbol{\rho}}_{J\ell}\), with \(\hat{\boldsymbol{\rho}}_{J\ell}\) satisfying equation (4.3). 5. Compute the bias correction term \[\hat{\phi}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\left[\hat{\alpha}_{1\ell}(Z_{i})\cdot(D_{i}-\hat{g}_{\ell}(Z_{i}))+\hat{\alpha}_{2\ell}(X_{i},\hat{V}_{i\ell})\cdot(Y_{i}-\hat{h}_{\ell}(X_{i},\hat{V}_{i\ell}))\right],\] with \(\hat{V}_{i\ell}\equiv\varphi(D_{i},Z_{i},\hat{g}_{\ell})\). 6. Compute the debiased sample moment function in equation (4.1). 7. Construct the LR debiased GMM estimator of \(\theta_{0}\) by solving (4.2).

## 5 Examples

**Example 2** (continuing from p. 5): Let \(\partial h/\partial x(x,v)\) denote the derivative of \(h(x,v)\) w.r.t. its first argument at \((x,v)\). Let \(\partial^{2}h/\partial x\partial v(x,v)\) denote the derivative w.r.t. both arguments at \((x,v)\). For the APE, we have that the moment function is linear in \(h\). Thus: \[D_{2}(w,h|g_{0},h_{0},\theta)=\frac{\partial h}{\partial x}(x,g_{0}(z)),\] where we have already made explicit the dependence of \(D_{2}\) on \((g_{0},h_{0},\theta)\). We can also linearize the moment condition in \(g\) to obtain: \[D_{1}(w,g|g_{0},h_{0},\alpha_{0},\theta)=\left\{D_{11}(d,z)+\frac{\partial}{\partial v}\left[\alpha_{02}(x,v)(y-h_{0}(x,v))\right]\right\}g(z),\text{ with }\] \[D_{11}(d,z)\equiv\frac{\partial^{2}h_{0}}{\partial x\partial v}(x,g_{0}(z)).\] The debiased estimator for the APE is \[\hat{\theta}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\frac{\partial\hat{h}_{\ell}}{\partial x}(X_{i},\hat{g}_{\ell}(Z_{i}))+\hat{\phi},\] where, for the estimator \(\hat{h}_{\ell}\), we can estimate its derivative w.r.t. \(x\) by \[\frac{\partial\hat{h}_{\ell}}{\partial x}(x,v)\equiv\frac{\hat{h}_{\ell}(x+s_{n},v)-\hat{h}_{\ell}(x,v)}{s_{n}},\] for a tuning parameter \(s_{n}\). To construct \(\hat{\phi}\), we need to estimate \(\alpha_{01}\) and \(\alpha_{02}\). We propose automatic estimators for these nuisance parameters. We assume that \(\partial h_{0}/\partial x\) and \(\partial h_{0}/\partial v\) are differentiable, so we can interchange the order of differentiation. Consider a dictionary \((b_{j})_{j=1}^{\infty}\) that is differentiable w.r.t. both \(x\) and \(v\); a minimal sketch of such a dictionary is given below.
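For concreteness, the following is a minimal sketch of such a dictionary, with polynomial atoms whose derivatives with respect to both \(x\) and \(v\) are available in closed form; the particular atoms are an illustrative assumption.

```python
# Minimal sketch of a dictionary (b_j) differentiable w.r.t. both x and v,
# with the analytic derivatives used in the APE example.  The specific
# polynomial atoms are an illustrative assumption.
import numpy as np

def b_J(x, v):
    """Atoms b_j(x, v) = (1, x, v, xv, x^2, v^2)."""
    return np.stack([np.ones_like(x), x, v, x * v, x**2, v**2], axis=-1)

def db_dx(x, v):
    """Closed-form derivative of each atom w.r.t. x."""
    zero = np.zeros_like(x)
    return np.stack([zero, np.ones_like(x), zero, v, 2 * x, zero], axis=-1)

def db_dv(x, v):
    """Closed-form derivative of each atom w.r.t. v."""
    zero = np.zeros_like(x)
    return np.stack([zero, zero, np.ones_like(x), x, zero, 2 * v], axis=-1)

x, v = np.array([0.5, -1.0]), np.array([1.0, 0.2])
print(b_J(x, v).shape, db_dx(x, v).shape, db_dv(x, v).shape)   # (2, 6) each
```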
To estimate \(\hat{D}_{2\ell}\), we can compute \(D_{2}(W_{i},b_{j}|\hat{g}_{\ell\ell^{\prime}},\hat{h}_{\ell\ell^{\prime}},\tilde{\theta}_{\ell\ell^{\prime}})\) for an observation \(i\in I_{\ell^{\prime}}\) by \[\frac{\partial b_{j}}{\partial x}(X_{i},\hat{g}_{\ell\ell^{\prime}}(Z_{i})).\] This derivative can be found analytically for each atom. We can use this to obtain an automatic estimator of \(\alpha_{02}\). To construct \(\hat{D}_{1\ell}\), we need to estimate \(D_{1}(W_{i},c_{k}|\hat{g}_{\ell\ell^{\prime}},\hat{h}_{\ell\ell^{\prime}},\hat{\alpha}_{2\ell\ell^{\prime}},\tilde{\theta}_{\ell\ell^{\prime}})\) for an observation \(i\in I_{\ell^{\prime}}\) and an arbitrary atom \(c_{k}\) in a dictionary. The first term, \(D_{11}(D_{i},Z_{i})\), can be estimated by the cross difference \[\frac{\hat{h}_{\ell\ell^{\prime}}(X_{i}+s_{n},\hat{g}_{\ell\ell^{\prime}}(Z_{i})+t_{n})-\hat{h}_{\ell\ell^{\prime}}(X_{i}+s_{n},\hat{g}_{\ell\ell^{\prime}}(Z_{i}))-\hat{h}_{\ell\ell^{\prime}}(X_{i},\hat{g}_{\ell\ell^{\prime}}(Z_{i})+t_{n})+\hat{h}_{\ell\ell^{\prime}}(X_{i},\hat{g}_{\ell\ell^{\prime}}(Z_{i}))}{t_{n}s_{n}}.\] To estimate the second term we can use equation (4.7), replacing \(\hat{V}_{i\ell\ell^{\prime}}\) by \(\hat{g}_{\ell\ell^{\prime}}(Z_{i})\). These are the ingredients to build an automatic estimator for \(\alpha_{01}\). Estimation of the APE greatly simplifies in a partly linear model; one can exploit the DR property w.r.t. the second step to do so.

## Appendix A Proofs of the results

Proof of Lemma 2.1: Applying the chain rule several times to \(d\bar{m}(g(F_{\tau}),h(F_{\tau},g(F_{\tau})),\theta)/d\tau\), we have that: \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{\tau},g(F_{\tau})),\theta)=\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h_{0},\theta)+\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g(F_{\tau})),\theta).\] Then, using the chain rule again: \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g(F_{\tau})),\theta)=\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta)+\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g_{0}),\theta).\] Combining the above equations leads to: \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{\tau},g(F_{\tau})),\theta)=\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h_{0},\theta)+\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta)+\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g_{0}),\theta). \tag{A.1}\] Now, note that by the chain rule we have that: \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h_{0},\theta)+\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta)=\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta).\] Hence the first two terms in equation (A.1) equal the derivative of \(\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta)\).

Proof of Proposition 3.1: For the (differentiable) path \(\tau\mapsto h(F_{\tau},g_{0})\), Assumption (A1) implies \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g_{0}),\theta)=\frac{d}{d\tau}\mathbb{E}[D_{2}(W,h(F_{\tau},g_{0}))].\] This gives the linearization (Lin). To find the shape of the IF, note that \(\mathbb{E}[D_{2}(W,h)]\) is a linear and continuous functional in \(L_{2}(X,V)\), a Hilbert space of square-integrable functions. Thus, by the Riesz Representation Theorem, there exists an \(r_{2}\) such that \(\mathbb{E}[D_{2}(W,h)]=\mathbb{E}[r_{2}(X,V)h(X,V)]\), with \(V\equiv\varphi(D,Z,g_{0})\). Therefore: \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{\tau},g_{0}),\theta)=\frac{d}{d\tau}\mathbb{E}[r_{2}(X,V)h(F_{\tau},g_{0})(X,V)],\] where \(h(F,g)(x,v)\) denotes \(h(F,g)\) evaluated at \((x,v)\). This is Assumption 1 in Ichimura and Newey (2022).
Since Assumption 2 in that paper is satisfied in our setup, Proposition 1 in Ichimura and Newey (2022) gives: \(\phi_{2}(w,h_{0},\alpha_{02},\theta)=\alpha_{02}(d,z)\{y-h_{0}(x,\varphi(d,z,g_{0} ))\}\). The parameter \(\alpha_{20}\) is the \(L_{2}\)-projection of \(r_{2}\) onto \(\Gamma_{2}(g_{0})\): \[\alpha_{20}=\arg\min_{\alpha\in\Gamma_{2}(g_{0})}\mathbb{E}[(r_{2}(X,\varphi( D,Z,g_{0}))-\alpha(D,Z))^{2}].\] (A.2) We now show that, necessarily, \(\alpha_{02}\in L_{2}(g_{0})\equiv\{(d,z)\mapsto\delta(x,\varphi(d,z,g))\colon \delta\in L_{2}(X,V)\}\). Note that \(r_{2}\in L_{2}(g_{0})\). Moreover, since \(L_{2}(g_{0})\) is a linear and closed subspace of \(L_{2}(D,Z)\), by Luenberger (1997, Th. 1 in Sec. 3.4), for every \(\alpha\in\Gamma_{2}(g_{0})\) we have the decomposition \(\alpha=m+m^{\perp}\), with \(m\in L_{2}(g_{0})\) and \(m^{\perp}\in L_{2}(g_{0})^{\perp}\), the orthogonal complement of \(L_{2}(g_{0})\). Therefore, for every \(\alpha\in\Gamma_{2}(g_{0})\), \[\|r_{2}-\alpha\|^{2}=\|r_{2}-m-m^{\perp}\|^{2}=\|r_{2}-m\|^{2}+\|m^{\perp}\|^{ 2}\geq\|r_{2}-m\|^{2}.\] Note that \(\|\delta\|^{2}=\mathbb{E}[\delta(D,Z)^{2}]\) for every \(\delta\in L_{2}(D,Z)\). The above result uses that \(r_{2}-m\in L_{2}(g_{0})\) and Pitagoras' Theorem (Luenberger, 1997, Lemma 1 in Sec. 3.3). Since equality is achieved when \(m^{\perp}=0\), we have that \(\|r_{2}-\alpha\|^{2}\) is minimized for an \(\alpha\in\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\). This gives Point (IF). \(\blacksquare\) Proof of Theorem 3.1:We compute \(d\bar{m}(g(F_{\tau}),h_{0},\theta)/d\tau\) and \(d\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta)/d\tau\) separatelly and then add them according to equation (3.1). By Assumptions (A1) and (A2), using the Riesz Representation Theorem, we have that for the differentiable paths \(\tau\mapsto g(F_{\tau})\) and \(\tau\mapsto h(F_{0},g(F_{\tau}))\): \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h_{0},\theta)=\frac{d}{d\tau}\mathbb{E}[D_{ 11}(W,g(F_{\tau}))]=\frac{d}{d\tau}\mathbb{E}[r_{1}(Z)g(F_{\tau})(Z)]\] (A.3) and, being \(V\equiv\varphi(D,Z,g_{0})\), \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta)=\frac{d}{d\tau} \mathbb{E}[D_{2}(W,h(F_{0},g(F_{\tau})))]=\frac{d}{d\tau}\mathbb{E}[r_{2}(X,V) h(F_{0},g(F_{\tau}))(X,V)].\] (A.4) In these equations, \(g(F)(z)\) means \(g(F)\) evaluated at \(z\), and \(h(F,g)(x,v)\) means \(h(F,g)\) evaluated at \((x,v)\). We now proceed as in Hahn and Ridder (2013, Lma. 1). For any function \(\delta\in\Gamma_{2}(g(F_{\tau}))\cap L_{2}(g_{0})\), we have that \[\mathbb{E}[\delta(X,\varphi(D,Z,g(F_{\tau})))\cdot\{Y-h(F_{0},g(F_{\tau}))(X, \varphi(D,Z,g(F_{\tau})))\}]=0.\] This is the orthogonality condition that defines \(h(F_{0},g(F_{\tau}))\), as equation (2.2) defines \(h_{0}\). Taking derivatives in the above equation leads to: \[\begin{split}\frac{d}{d\tau}\mathbb{E}[\delta_{2}(X,V)h(F_{0},g( F_{\tau}))(X,V)]&=-\frac{d}{d\tau}\mathbb{E}[\delta_{2}(X,V)h_{0}(X, \varphi(D,Z,g(F_{\tau})))]\\ &+\frac{d}{d\tau}\mathbb{E}\left[\delta_{2}(X,\varphi(D,Z,g(F_{ \tau})))\cdot(Y-h_{0}(X,V))\right].\end{split}\] (A.5) A final step is needed to connect equation (A.4) with the above result. To perform it, we use Assumption (A3) in two directions. First, since \(\alpha_{02}\in\Gamma_{2}(g_{0})\cap L_{2}(g_{0})\), we have that \(\alpha_{02}(\cdot,\varphi(\cdot,\cdot,g(F_{\tau})))\in\Gamma_{2}(g(F_{\tau})) \cap L_{2}(g_{0})\). We can then apply equation (A.5) to \(\alpha_{02}\). 
Moreover, since \(h(F_{0},g(F_{\tau}))(\cdot,\varphi(\cdot,\cdot,g(F_{\tau})))\in\Gamma_{2}(g(F_ {\tau}))\), we also have that \(h(F_{0},g(F_{\tau}))(\cdot,\varphi(\cdot,\cdot,g_{0}))\in\Gamma_{2}(g_{0})\). This means that, in equation (A.4), we can dismiss the component of \(r_{2}\) that is orthogonal to \(\Gamma_{2}(g_{0})\). Then, we can write \(d\mathbb{E}[\alpha_{02}(X,V)h(F_{0},g(F_{\tau}))(X,V)]/d\tau\) as RHS in equation (A.4). Combining this with equation (A.5): \[\begin{split}\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})), \theta)&=\frac{d}{d\tau}\mathbb{E}[\alpha_{02}(X,V)h(F_{0},g(F_{ \tau}))(X,V)]\\ &=-\frac{d}{d\tau}\mathbb{E}[\alpha_{02}(X,V)h_{0}(X,\varphi(D, Z,g(F_{\tau})))]\\ &+\frac{d}{d\tau}\mathbb{E}\left[\alpha_{02}(X,\varphi(D,Z,g(F_{ \tau})))\cdot(Y-h_{0}(X,V))\right].\end{split}\] (A.6) Under Assumption (A4), the term in the second row can be linearized in \(g(F_{\tau})\) as \[\begin{split}\frac{d}{d\tau}\mathbb{E}[\alpha_{02}(X,V)h_{0}(X, \varphi(D,Z,g(F_{\tau})))]&=\mathbb{E}\left[\frac{d}{d\tau}\{ \alpha_{02}(X,V)h_{0}(X,\varphi(D,Z,g(F_{\tau})))\}\right]\\ &=\mathbb{E}\left[\alpha_{02}(X,V)\frac{\partial h_{0}}{\partial v }(X,V)\frac{d}{d\tau}\varphi(D,Z,g(F_{\tau}))\right]\\ &=\mathbb{E}\left[\alpha_{02}(X,V)\frac{\partial h_{0}}{\partial v }(X,V)\frac{d}{d\tau}D_{\varphi}g(F_{\tau})(D,Z)\right]\\ &=\frac{d}{d\tau}\mathbb{E}\left[\alpha_{02}(X,V)\frac{\partial h _{0}}{\partial v}(X,V)D_{\varphi}g(F_{\tau})(D,Z)\right],\end{split}\] where \(D_{\varphi}g(d,z)\) denotes \(D_{\varphi}g\) evaluated at \((d,z)\). We have assumed that derivatives and expectations can be interchanged (we may impose some regularity conditions on \(H\) such that this is possible). We can equivalently linearize the term in the third row of equation (A.6) to get Pluging in these results back in equation (A.6): \[\begin{split}\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})), \theta)&=\frac{d}{d\tau}\mathbb{E}\left[\left\{-\alpha_{02}(X,V) \frac{\partial h_{0}}{\partial v}(X,V)\right.\right.\\ &\left.\left.+(Y-h_{0}(X,V))\frac{\partial\alpha_{02}}{\partial v }(X,V)\right\}D_{\varphi}g(F_{\tau})(D,Z)\right]\\ &=\mathbb{E}\left[\left.\frac{\partial}{\partial v}\left\{\alpha _{02}(X,v)\cdot(Y-h_{0}(X,v))\right\}\right|_{v=V}D_{\varphi}g(F_{\tau})(D,Z) \right].\end{split}\] (A.7) Since \(D_{\varphi}\) is linear in \(g\), the function inside the expectation in the RHS is linear in \(g\). We now use equation (3.1) to combine the results in equations (A.3) and (A.7). This gives: \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta)= \frac{d}{d\tau}\mathbb{E} \bigg{[}D_{11}(W,g(F_{\tau}))\] \[+\frac{\partial}{\partial v}\left\{\alpha_{02}(X,v)\cdot(Y-h_{0}( X,v))\right\}\bigg{|}_{v=V}D_{\varphi}g(F_{\tau})(D,Z)\bigg{]}\,,\] which gives the linearization result of the Theorem (LIN). To find the shape of the IF, note that the adjoint \(D_{\varphi}^{*}\) of \(D_{\varphi}\) is defined by the equation \(\mathbb{E}[\delta(D,Z)D_{\varphi}g(D,Z)]=\mathbb{E}[D_{\varphi}^{*}\delta(Z)g( Z)]\). 
Therefore, by the Law of Iterated Expectations in equation (A.7), noting that \(V\equiv\varphi(D,Z,g_{0})\) is a function of \((D,Z)\): \[\frac{d}{d\tau}\bar{m}(g_{0},h(F_{0},g(F_{\tau})),\theta) =\frac{d}{d\tau}\mathbb{E}\left[\mathbb{E}\left.\left[\frac{ \partial}{\partial v}\left\{\alpha_{02}(X,v)\cdot(Y-h_{0}(X,v))\right\} \right|_{v=V}D_{\varphi}g(F_{\tau})(D,Z)\right|D,Z\right]\right\}\] \[=\mathbb{E}[\nu(D,Z)D_{\varphi}g(F_{\tau})(D,Z)]=\mathbb{E}[D_{ \varphi}^{*}\nu(Z)g(F_{\tau})(Z)],\] with \[\nu(d,z)\equiv\left.\frac{\partial}{\partial v}\left\{\alpha_{02}(x,v)\cdot( \mathbb{E}[Y|D=d,Z=z]-h_{0}(x,v))\right\}\right|_{v=\varphi(d,z,g_{0})}.\] Again, we can use equation (3.1) to combine this last result with that in equation (A.3): \[\frac{d}{d\tau}\bar{m}(g(F_{\tau}),h(F_{0},g(F_{\tau})),\theta)=\frac{d}{d\tau }\mathbb{E}[\{r_{1}(Z)+D_{\varphi}^{*}\nu(Z)\}g(F_{\tau})(Z)].\] This is Assumption 1 in Ichimura and Newey (2022). Since Assumption 2 in that paper is satisfied in our setup, Proposition 1 in Ichimura and Newey (2022) gives the shape of the IF: \(\phi_{1}(w,g_{0},\alpha_{01},\theta)=\alpha_{01}(z)\cdot\{d-g_{0}(z)\}\). The parameter \(\alpha_{10}\) is the \(L_{2}\)-projection: \[\alpha_{10}=\arg\min_{\alpha\in\Gamma_{1}}\mathbb{E}[(\tilde{\nu}(Z)-\alpha(Z ))^{2}],\] (A.8) where \(\tilde{\nu}=r_{1}+D_{\varphi}^{*}\nu\). \(\blacksquare\)
2302.12504
Recovering Sparse and Interpretable Subgroups with Heterogeneous Treatment Effects with Censored Time-to-Event Outcomes
Studies involving both randomized experiments as well as observational data typically involve time-to-event outcomes such as time-to-failure, death or onset of an adverse condition. Such outcomes are typically subject to censoring due to loss of follow-up and established statistical practice involves comparing treatment efficacy in terms of hazard ratios between the treated and control groups. In this paper we propose a statistical approach to recovering sparse phenogroups (or subtypes) that demonstrate differential treatment effects as compared to the study population. Our approach involves modelling the data as a mixture while enforcing parameter shrinkage through structured sparsity regularization. We propose a novel inference procedure for the proposed model and demonstrate its efficacy in recovering sparse phenotypes across large landmark real world clinical studies in cardiovascular health.
Chirag Nagpal, Vedant Sanil, Artur Dubrawski
2023-02-24T08:10:23Z
http://arxiv.org/abs/2302.12504v1
Recovering Sparse and Interpretable Subgroups with Heterogeneous Treatment Effects with Censored Time-to-Event Outcomes ###### Abstract Studies involving both randomized experiments as well as observational data typically involve time-to-event outcomes such as time-to-failure, death or onset of an adverse condition. Such outcomes are typically subject to censoring due to loss of follow-up and established statistical practice involves comparing treatment efficacy in terms of hazard ratios between the treated and control groups. In this paper we propose a statistical approach to recovering sparse phenogroups (or subtypes) that demonstrate differential treatment effects as compared to the study population. Our approach involves modelling the data as a mixture while enforcing parameter shrinkage through structured sparsity regularization. We propose a novel inference procedure for the proposed model and demonstrate its efficacy in recovering sparse phenotypes across large landmark real world clinical studies in cardiovascular health. Time-to-Event, Survival Analysis, Heterogeneous Treatment Effects, Hazard Ratio ## 1 Introduction Data driven decision making across multiple disciplines including healthcare, epidemiology, econometrics and prognostics often involves establishing efficacy of an intervention when outcomes are measured in terms of the time to an adverse event, such as death, failure or onset of a critical condition. Typically the analysis of such studies involves assigning a patient population to two or more different treatment arms often called the 'treated' (or 'exposed') group and the 'control' (or 'placebo') group and observing whether the populations experience an adverse event (for instance death or onset of a disease) over the study period at a rate that is higher (or lower) than for the control group. Efficacy of a treatment is thus established by comparing the relative difference in the rate of event incidence between the two arms called the hazard ratio. However, not all individuals benefit equally from an intervention. Thus, very often potentially beneficial interventions are discarded even though there might exist individuals who benefit, as the population level estimates of treatment efficacy are inconclusive. In this paper we assume that patient responses to an intervention are typically heterogeneous and there exists patient subgroups that are unaffected by (or worse, **harmed**) by the intervention being assessed. The ability to discover or phenotype these patients is thus clinically useful as it would allow for more precise clinical decision making by identifying individuals that actually benefit from the intervention being assessed. Towards this end, we propose **Sparse Cox Subgrouping**, (SCS) a latent variable approach to model patient subgroups that demonstrate heterogeneous effects to an intervention. As opposed to existing literature in modelling heterogeneous treatment effects with censored time-to-event outcomes our approach involves structured regularization of the covariates that assign individuals to subgroups leading to parsimonious models resulting in phenogroups that are interpretable. 
We release a python implementation of the proposed SCS approach as part of the \(\mathsf{auton}\)-survival package (Nagpal et al., 2022b) for survival analysis: [https://autonlab.github.io/auton-survival/](https://autonlab.github.io/auton-survival/) ## 2 Related Work Large studies especially in clinical medicine and epidemiology involve outcomes that are time-to-events such as death, or an adverse clinical condition like stroke or cancer. Treatment efficacy is typically estimated by comparing event rates between the treated and control arms using the Proportional Hazards (Cox, 1972) model and its extensions. Identification of subgroups in such scenarios has been the subject of a large amount of traditional statistical literature. Large number of such approaches involve estimation of the factual and counterfactual outcomes using separate regression models (T-learner) followed by regressing the difference between these estimated potential outcomes. Within this category of approaches, Lipkovich et al. (2011) propose the subgroup identification based on differential effect search (SIDES) algorithm, Su et al. (2009) propose a recursive partitioning method for subgroup discovery, Dusseldorp and Mechelen (2014) propose the qualitative interaction trees (QUINT) algorithm, and Foster et al. (2011) propose the virtual twins (VT) method for subgroup discovery involving decision tree ensembles. We include a parametric version of such an approach as a competing baseline. Identification of heterogeneous treatment effects (HTE) is also of growing interest to the machine learning community with multiple approaches involving deep neural networks with balanced representations (Shalit et al., 2017; Johansson et al., 2020), generative models Louizos et al. (2017) as well as Non-Parametric methods involving random-forests (Wager and Athey, 2018) and Gaussian Processes (Alaa and Van Der Schaar, 2017). There is a growing interest in estimating HTEs from an interpretable and trustworthy standpoint (Lee et al., 2020; Nagpal et al., 2020; Morucci et al., 2020; Wu et al., 2022; Crabbe et al., 2022). Wang and Rudin (2022) propose a sampling based approach to discovering interpretable rule sets demonstrating HTEs. However large part of this work has focused extensively on outcomes that are binary or continuous. The estimation of HTEs in the presence of censored time-to-events has been limited. Xu et al. (2022) explore the problem and describe standard approaches to estimate treatment effect heterogeneity with survival outcomes. They also describe challenges associated with existing risk models when assessing treatment effect heterogeneity in the case of cardiovascular health. There has been some initial attempt to use neural network for causal inference with censored time-to-event outcomes. Curth et al. (2021) propose a discrete time method along with regularization to match the treated and control representations. Chapfuwa et al. (2021)'s approach is related and involves the use of normalizing flows to estimate the potential time-to-event distributions under treatment and control. While our contributions are similar to Nagpal et al. 
(2022a), in that we assume treatment effect heterogeneity through a latent variable model, our contribution differs in that 1) our approach is free of the expensive Monte-Carlo sampling procedure, and 2) our generalized EM inference procedure allows us to naturally incorporate structured sparsity regularization, which helps recover phenogroups that are parsimonious in the features that define the subgroups. Survival and time-to-event outcomes are especially prominent in cardiovascular health. One such area is reducing the combined risk of adverse outcomes from atherosclerotic disease1 (Herrington et al., 2016; Furberg et al., 2002; Group, 2009; Buse et al., 2007). The ability to recover groups with differential benefits from interventions can thus lead to improved patient care through the framing of optimal clinical guidelines. Footnote 1: A class of related clinical conditions from increasing deposits of plaque in the arteries, leading to Stroke, Myocardial Infarction and other Coronary Heart Diseases.

## 3 Proposed Model: Sparse Cox Subgrouping

**Notation** As is standard in survival analysis, we assume that we either observe the true time-to-event or the time of censoring, \(U=\min\{T,C\}\), as indicated by the censoring indicator \(\Delta=\mathbf{1}\{T<C\}\). We thus work with a dataset of right-censored observations in the form of 4-tuples, \(\mathcal{D}=\{(\mathbf{x}_{i},\delta_{i},\mathbf{u}_{i},\mathbf{a}_{i})\}_{i=1}^{n}\), where \(\mathbf{u}_{i}\in\mathbb{R}^{+}\) is the time-to-event or censoring as indicated by \(\delta_{i}\in\{0,1\}\), \(\mathbf{a}_{i}\in\{0,1\}\) is the indicator of treatment assignment, and \(\mathbf{x}_{i}\) are individual covariates that confound the treatment assignment and the outcome.

**Assumption 1** (Independent Censoring): _The time-to-event \(T\) and the censoring distribution \(C\) are independent conditional on the covariates \(X\) and the intervention \(A\)._

**Model** Consider a maximum likelihood approach to modelling the data \(\mathcal{D}\) with the set of parameters \(\mathbf{\Omega}\). Under Assumption 1, the likelihood of the data \(\mathcal{D}\) can be given as, \[\mathcal{L}(\mathbf{\Omega};\mathcal{D})\propto\prod_{i=1}^{|\mathcal{D}|}\mathbf{\lambda}(u_{i}|X=\mathbf{x}_{i},A=\mathbf{a}_{i})^{\delta_{i}}\mathbf{S}(u_{i}|X=\mathbf{x}_{i},A=\mathbf{a}_{i}), \tag{1}\]

Figure 1: Potential outcome distributions under the assumptions of treatment effect heterogeneity. **Case 1**: Amongst the treated population, conditioned on the latent \(Z\), there are two subgroups that **benefit** and are **unaffected** by the intervention. **Case 2**: There is an additional latent subgroup conditioned on which the treated population is **harmed**, with a worse survival rate.

Here, \(\mathbf{\lambda}(t)=\lim_{\Delta t\to 0}\frac{\mathbb{P}(t\leq T<t+\Delta t|T\geq t)}{\Delta t}\) is the hazard rate and \(\mathbf{S}(t)=\mathbb{P}(T>t)\) is the survival rate. A minimal numerical sketch of these likelihood contributions, under an illustrative parametric hazard, is given below.
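As an aside, the following minimal sketch evaluates the likelihood contributions \(\mathbf{\lambda}(u_{i}|\cdot)^{\delta_{i}}\mathbf{S}(u_{i}|\cdot)\) of Equation 1 in log form under a simple exponential (constant-baseline-hazard) specification. The parametric baseline and the simulated data are assumptions made only for illustration; the model developed below treats the baseline hazard non-parametrically.

```python
# Minimal sketch of the right-censored log-likelihood in Equation 1 under an
# illustrative exponential model: lambda(t|x,a) = exp(beta'x + omega*a), so that
# S(t|x,a) = exp(-lambda*t).  The parametric baseline and simulated data are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=(n, 2))                      # covariates
a = rng.integers(0, 2, size=n)                   # treatment assignment
beta, omega = np.array([0.5, -0.3]), -0.7        # illustrative parameters

rate = np.exp(x @ beta + omega * a)              # individual hazard rates
t_event = rng.exponential(1.0 / rate)            # latent event times
t_cens = rng.exponential(2.0, size=n)            # independent censoring times
u = np.minimum(t_event, t_cens)                  # observed time U = min(T, C)
delta = (t_event <= t_cens).astype(float)        # event indicator

def log_likelihood(b, w):
    """sum_i delta_i * log lambda(u_i|x_i,a_i) + log S(u_i|x_i,a_i)."""
    lam = np.exp(x @ b + w * a)
    return np.sum(delta * np.log(lam) - lam * u)

print("log-likelihood at the illustrative parameters:",
      round(log_likelihood(beta, omega), 1))
```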
**Assumption 2** (Ph): _The distribution of the time-to-event \(T\) conditional on the covariates and the treatment assignment obeys proportional hazards._ From Assumption 2 (Proportional Hazards), the hazard rate for an individual with covariates \((X=\mathbf{x})\) under intervention \((A=\mathbf{a})\), under a Cox model with parameters \(\beta\) and treatment effect \(\omega\), is given as \[\mathbf{\lambda}(\mathbf{t}|A=\mathbf{a},X=\mathbf{x})=\mathbf{\lambda}_{0}(t)\exp\big{(}\mathbf{\beta}^{\top}\mathbf{x}+\mathbf{\omega}\cdot\mathbf{a}\big{)}, \tag{2}\] Here, \(\mathbf{\lambda}_{0}(\cdot)\) is an infinite dimensional parameter known as the baseline hazard rate. In practice, in the Cox model the baseline hazard is a nuisance parameter and is estimated non-parametrically. In order to model the heterogeneity of treatment response, we now introduce a latent variable \(Z\in\{0,1,-1\}\) that mediates the treatment response in the model, \[\mathbf{\lambda}(\mathbf{t}|A=\mathbf{a},X=\mathbf{x},Z=\mathbf{k})=\mathbf{\lambda}_{0}(t)\exp(\mathbf{\beta}^{\top}\mathbf{x})\exp(\mathbf{\omega})^{\mathbf{k}\mathbf{a}},\] \[\text{and,}\ \ \mathbb{P}(Z=\mathbf{k}|X=\mathbf{x})=\frac{\exp(\mathbf{\theta}_{k}^{\top}\mathbf{x})}{\sum_{j}\exp(\mathbf{\theta}_{j}^{\top}\mathbf{x})}. \tag{3}\] Here, \(\mathbf{\omega}\in\mathbb{R}\) is the treatment effect, and \(\mathbf{\theta}\in\mathbb{R}^{k\times d}\) is the set of parameters that mediate assignment to the latent group \(Z\) conditioned on the confounding features \(\mathbf{x}\). Note that the above choice of parameterization naturally enforces the requirements of the model as in Figure 1. Consider the following scenarios: **Case 1**: The study population consists of two sub-strata, i.e. \(Z\in\{0,+1\}\): a subgroup that benefits from treatment and one that is unaffected by it. **Case 2**: The study population consists of three sub-strata, i.e. \(Z\in\{0,+1,-1\}\): subgroups that benefit from, are harmed by, or are unaffected by treatment. Following from Equations 1 & 3, the complete likelihood of the data \(\mathcal{D}\) under this model is, \[\mathcal{L}(\mathbf{\Omega};\mathcal{D})=\prod_{i=1}^{|\mathcal{D}|}\sum_{k\in\mathcal{Z}}\bigg{(}\mathbf{\lambda}_{0}(u_{i})\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{k})\bigg{)}^{\delta_{i}}\mathbf{S}_{0}(u_{i})^{\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{k})}\mathbb{P}(Z=k|X=\mathbf{x}_{i})\] \[\text{where,}\ \ln\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{k})=\mathbf{\beta}^{\top}\mathbf{x}+\mathbf{k}\cdot\mathbf{a}\cdot\mathbf{\omega}\ \text{and}\ \ln\mathbf{S}_{0}(\cdot)=-\mathbf{\Lambda}_{0}(\cdot), \tag{4}\] Note that \(\mathbf{\Lambda}_{0}(t)=\int_{0}^{t}\mathbf{\lambda}_{0}(s)\,ds\) is the infinite dimensional cumulative hazard and is inferred when learning the model. We will notate the set of all learnable parameters as \(\mathbf{\Omega}=\{\mathbf{\theta},\mathbf{\beta},\mathbf{\omega},\mathbf{\Lambda}_{0}\}\). **Shrinkage** In retrospective analyses aimed at recovering treatment effect heterogeneity, a natural requirement is parsimony of the recovered subgroups in terms of the covariates, which promotes model interpretability. Such parsimony can be naturally enforced through appropriate sparsity-promoting shrinkage on the coefficients. We want to recover phenogroups that are 'sparse' in \(\mathbf{\theta}\). We enforce sparsity in the parameters of the latent \(Z\) gating function via a group \(\ell_{1}\) (Lasso) penalty. A minimal numerical sketch of the gating and relative-hazard components in Equations 3 and 4 is given below.
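As a concrete illustration of the gating probabilities and the latent-mediated relative hazard in Equations 3 and 4, the following minimal sketch evaluates \(\mathbb{P}(Z=k|X=\mathbf{x})\) and \(\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{k})\) at illustrative parameter values; the covariates and parameters are assumptions, not fitted quantities.

```python
# Minimal sketch of the gating model P(Z=k|X=x) and the latent-mediated relative
# hazard ln h(x, a, k) = beta'x + k*a*omega from Equations 3-4.  Parameter values
# and covariates are illustrative assumptions.
import numpy as np

groups = (0, 1, -1)                 # with omega < 0: k=0 unaffected, k=+1 benefits, k=-1 harmed
d = 3
rng = np.random.default_rng(4)
x = rng.normal(size=d)

beta = np.array([0.2, -0.1, 0.4])   # shared log-hazard coefficients
omega = -0.8                        # treatment effect (log hazard ratio)
theta = {0: np.zeros(d),            # gating parameters; Z=0 fixed at 0 for identifiability
         1: np.array([0.5, 0.0, -0.3]),
         -1: np.array([-0.2, 0.7, 0.1])}

def gating(x):
    """Softmax P(Z = k | X = x) over the latent groups."""
    scores = np.array([theta[k] @ x for k in groups])
    scores -= scores.max()          # numerical stability
    p = np.exp(scores)
    return dict(zip(groups, p / p.sum()))

def log_h(x, a, k):
    """ln h(x, a, k) = beta'x + k * a * omega (Equation 4)."""
    return beta @ x + k * a * omega

print("P(Z=k|x):", {k: round(v, 3) for k, v in gating(x).items()})
print("relative hazard under treatment:",
      {k: round(float(np.exp(log_h(x, 1, k))), 3) for k in groups})
```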
The final loss function to be optimized, including the group sparsity regularization term, is \[\mathcal{L}(\mathbf{\Omega};\mathcal{D})+ \mathbf{\epsilon}\cdot\mathcal{R}(\mathbf{\theta})\ \text{where,}\ \mathcal{R}(\mathbf{\theta})=\sum_{d}\sqrt{\sum_{k\in \mathcal{Z}}\big{(}\mathbf{\theta}_{d}^{k}\big{)}^{2}}\] \[\text{and}\ \mathbf{\epsilon}>0\ \text{is the strength of the shrinkage parameter}. \tag{5}\] **Identifiability** Further, to ensure identifiability we restrict the gating parameters for \((Z=0)\) to be \(0\). Thus \(\mathbf{\theta}_{1}=0\). **Inference** We will present a variation of the **Expectation Maximization** algorithm to infer the parameters in Equation 3. Our approach differs from Nagpal et al. (2022a, 2021) in that it does not require stochastic Monte-Carlo sampling. Further, our generalized EM inference allows for incorporation of the structured sparsity in the **M-Step**. **A Semi-Parametric \(Q(\cdot)\)** Note that the likelihood in Equation 3 is semi-parametric and consists of parametric components and the infinite dimensional base hazard \(\mathbf{\Lambda}(\cdot)\). We define the \(Q(\cdot)\) as: \[Q(\mathbf{\Omega};\mathcal{D})=\sum_{i=1}^{n}\sum_{k\in\mathcal{Z}}\mathbf{\gamma}_{i}^ {k}\bigg{(}\ln\mathbf{p}_{\mathbf{\theta}}(Z=k|X=\mathbf{x}_{i})+\ln\mathbf{p}_{\mathbf{w},\mathbf{ \beta},\mathbf{\Lambda}}(T|Z=k,X=\mathbf{x}_{i})\bigg{)}+\mathcal{R}(\mathbf{\theta})\] **The E-Step** requires computation of the posterior counts \(\mathbf{\gamma}:=\mathbf{p}(Z=k|T,X=\mathbf{x},A=\mathbf{a})\). **Result 1** (Posterior Counts): _The posterior counts \(\mathbf{\gamma}\) for the latent \(Z\) are estimated as,_ \[\mathbf{\gamma}^{k} =\widehat{\mathbb{P}}(Z=k|X=\mathbf{x},A=\mathbf{a},\mathbf{u})\] \[=\frac{\mathbb{P}(\mathbf{u}|Z=\mathbf{k},X=\mathbf{x},A=\mathbf{a})\mathbb{P}(Z= \mathbf{k}|X=\mathbf{x})}{\sum_{k}\mathbb{P}(\mathbf{u}|Z=\mathbf{k},X=\mathbf{x},A=\mathbf{a})\mathbb{ P}(Z=\mathbf{k}|X=\mathbf{x})}\] \[=\frac{\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{k})^{\delta_{i}}\widehat{\mathbf{S}}_ {0}(\mathbf{u})^{\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{k})}\exp(\mathbf{\theta}_{\mathbf{k}}^{\top}\mathbf{x})}{\sum_{j\in\mathcal{Z}}\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{j})^{\delta_{i}}\widehat{ \mathbf{S}}_{0}(\mathbf{u})^{\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{j})}\exp(\mathbf{\theta}_{\mathbf{j}}^{ \top}\mathbf{x})}. \tag{6}\] For a full discussion of the derivation of the \(Q(\cdot)\) and the posterior counts, please refer to Appendix B.
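A minimal Python sketch of this E-Step: given the current per-individual log relative hazards \(\ln\mathbf{h}(\mathbf{x}_i,\mathbf{a}_i,k)\), the estimated baseline survival \(\widehat{\mathbf{S}}_0(u_i)\), and the gating logits \(\mathbf{\theta}_k^{\top}\mathbf{x}_i\), the posterior counts of Equation 6 follow by normalizing across the latent groups; the toy inputs at the bottom are illustrative only.

```python
import numpy as np

def e_step_posteriors(log_h, S0_u, delta, gating_logits):
    """Posterior counts gamma_i^k of Equation 6.
    log_h: (n, K) array of ln h(x_i, a_i, k); S0_u: (n,) baseline survival at u_i;
    delta: (n,) event indicators; gating_logits: (n, K) array of theta_k^T x_i."""
    h = np.exp(log_h)
    # log of h^delta * S0^h * exp(theta_k^T x), one column per latent group k
    log_num = delta[:, None] * log_h + h * np.log(S0_u)[:, None] + gating_logits
    log_num -= log_num.max(axis=1, keepdims=True)   # stabilize before exponentiating
    gamma = np.exp(log_num)
    return gamma / gamma.sum(axis=1, keepdims=True)

# toy example with n = 4 individuals and K = 2 latent groups
rng = np.random.default_rng(1)
gamma = e_step_posteriors(log_h=rng.normal(size=(4, 2)),
                          S0_u=np.array([0.9, 0.7, 0.5, 0.3]),
                          delta=np.array([1.0, 0.0, 1.0, 0.0]),
                          gating_logits=rng.normal(size=(4, 2)))
print(gamma.sum(axis=1))  # each row of posterior counts sums to one
```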
**The M-Step** involves maximizing the \(Q(\cdot)\) function. Rewriting the \(Q(\cdot)\) as a sum of two terms, \[Q(\mathbf{\Omega})=\underbrace{\sum_{i=1}^{n}\sum_{k\in\mathcal{Z}}\mathbf{\gamma}_{i}^ {k}\ln\mathbf{p}_{\mathbf{w},\mathbf{\beta},\mathbf{\Lambda}_{0}}(T|Z=k,X=\mathbf{x}_{i},A=\mathbf{a} _{i})}_{\mathbf{A}(\mathbf{w},\mathbf{\beta},\mathbf{\Lambda}_{0})}+\underbrace{\sum_{i=1}^{n }\sum_{k\in\mathcal{Z}}\mathbf{\gamma}_{i}^{k}\ln\mathbf{p}_{\mathbf{\theta}}(Z=k|X=\mathbf{ x}_{i})+\mathcal{R}(\mathbf{\theta})}_{\mathbf{B}(\mathbf{\theta})}\] **Result 2** (Weighted Cox model): _The term \(\mathbf{A}\) can be rewritten as a weighted Cox model and thus optimized using the corresponding 'partial likelihood',_ **Updates for \(\{\mathbf{\beta},\mathbf{\omega}\}\)**: The partial-likelihood, \(\mathcal{PL}(\cdot)\), under sampling weights (Binder, 1992) is \[\mathcal{PL}(\mathbf{\Omega};\mathcal{D})=\sum_{i=1,\delta_{i}=1}^{n}\sum_{k\in \mathcal{Z}}\mathbf{\gamma}_{i}^{k}\bigg{(}\ln\mathbf{h}_{k}(\mathbf{x}_{i},\mathbf{a}_{i}, \mathbf{k})-\ln\sum_{j\in\text{RiskSet}(u_{i})}\sum_{k\in\mathcal{Z}}\mathbf{\gamma}_ {j}^{k}\mathbf{h}_{k}(\mathbf{x}_{j},\mathbf{a}_{j},\mathbf{k})\bigg{)} \tag{7}\] Here \(\text{RiskSet}(\cdot)\) is the _'risk set'_, or the set of all individuals who have not yet experienced the event by the corresponding time, i.e. \(\text{RiskSet}(t):=\{i:u_{i}>t\}\). \(\mathcal{PL}(\cdot)\) is convex in \(\{\mathbf{\beta},\mathbf{\omega}\}\) and we update these with a gradient step. **Updates for \(\mathbf{\Lambda}_{0}\)**: The base hazard \(\mathbf{\Lambda}_{0}\) is updated using a weighted Breslow's estimate (Breslow, 1972; Lin, 2007), treating the posterior counts \(\mathbf{\gamma}\) as sampling weights (Chen, 2009), \[\widehat{\mathbf{\Lambda}}_{0}(t)^{+}=\sum\limits_{i=1}^{n}\sum\limits_{k\in \mathcal{Z}}\mathbf{1}\{u_{i}<t\}\frac{\mathbf{\gamma}_{i}^{k}\cdot\delta_{i}}{\sum \limits_{j\in\text{RiskSet}(u_{i})}\sum\limits_{k\in\mathcal{Z}}\mathbf{\gamma}_ {j}^{k}\mathbf{h}_{k}(\mathbf{x}_{j},\mathbf{a}_{j},\mathbf{k})} \tag{8}\] Term \(\mathbf{B}\) is a function of the gating parameters \(\mathbf{\theta}\) that determine the latent assignment \(Z\), along with the sparsity regularization. We update \(\mathbf{B}\) using a Proximal Gradient update, as is the case with Iterative Soft Thresholding (ISTA) for group sparse \(\ell_{1}\) regression. **Updates for \(\mathbf{\theta}\)**: The proximal update for \(\mathbf{\theta}\) including the group regularization (Friedman et al., 2010) term \(\mathcal{R}(\cdot)\) is, \[\widehat{\mathbf{\theta}}^{+}=\mathsf{prox}_{\eta\epsilon}\bigg{(}\mathbf{\theta}- \frac{d}{d\mathbf{\theta}}\mathbf{B}(\mathbf{\theta})\bigg{)},\quad\text{where }\mathsf{ prox}_{\eta\epsilon}(\mathbf{y})=\frac{\mathbf{y}}{||\mathbf{y}||_{2}}\mathrm{max}\{0,|| \mathbf{y}||_{2}-\eta\epsilon\}. \tag{9}\] Altogether, the inference procedure is described in Algorithm 1. ``` while\(\mathsf{<not}\) converged>do for\(b\in\{1,2,...,B\}\)do E-Step \(\mathbf{\gamma}_{i}^{k}=\frac{\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{k})^{\delta_{i}}\widehat{ \mathbf{S}}_{0}(\mathbf{u})^{h(\mathbf{x},\mathbf{a},\mathbf{k})}\exp(\mathbf{\theta}_{\mathbf{k}}^{T}\mathbf{ x})}{\sum_{j\in\mathcal{Z}}\mathbf{h}(\mathbf{x},\mathbf{a},\mathbf{j})^{\delta_{i}}\widehat{ \mathbf{S}}_{0}(\mathbf{u})^{h(\mathbf{x},\mathbf{a},\mathbf{j})}\exp(\mathbf{\theta}_{j}^{T}\mathbf{x})}\) \(\triangleright\) Compute posterior counts (Equation 6). 
M-Step \(\mathbf{\widehat{\beta}}^{+}\leftarrow\widehat{\mathbf{\beta}}-\eta\cdot\nabla_{\mathbf{ \beta}}\mathcal{P}\mathcal{L}(\mathbf{\beta},\mathbf{w};\mathcal{D})\) \(\widehat{\mathbf{w}}^{+}\leftarrow\widehat{\mathbf{w}}-\eta\cdot\nabla_{\mathbf{w}} \mathcal{P}\mathcal{L}(\mathbf{\beta},\mathbf{w};\mathcal{D})\)\(\triangleright\) Gradient descent update. \(\widehat{\mathbf{\Lambda}}_{0}(t)^{+}\leftarrow\sum\limits_{i=1}^{n}\sum\nolimits_{k \in\mathcal{Z}}\mathbf{1}\{u_{i}<t\}\frac{\mathbf{\gamma}_{i}^{k}\cdot\delta_{i}}{ \sum\limits_{j\in\text{RiskSet}(u_{i})}\sum\limits_{k\in\mathcal{Z}}\mathbf{ \gamma}_{j}^{k}\mathbf{h}_{k}(\mathbf{x}_{j},\mathbf{a}_{j},\mathbf{k})}\)\(\triangleright\)Breslow (1972)'s estimator. \(\widehat{\mathbf{\theta}}^{+}\leftarrow\widehat{\mathbf{\theta}}-\eta\cdot\nabla_{\theta} \mathbf{B}(\theta)\)\(\triangleright\) Update \(\mathbf{\theta}\) with gradient of \(\widehat{Q}\). \(\widehat{\mathbf{\theta}}^{+}\leftarrow\mathsf{prox}_{\epsilon\eta}(\widehat{\mathbf{ \theta}})\)\(\triangleright\) Proximal update. end for end while Return: learnt parameters \(\mathbf{\Omega}\); ``` **Algorithm 1** Parameter Learning for SCS with a Generalized EM **Input:** Training set, \(\mathcal{D}=\{(\mathbf{x}_{i},u_{i},a_{i},\delta_{i})_{i=1}^{n}\}\); maximum EM iterations, \(B\); step size \(\eta\) ## 4 Experiments In this section we describe the experiments conducted to benchmark the performance of SCS against alternative models for heterogeneous treatment effect estimation on multiple studies, including a synthetic dataset and multiple large landmark clinical trials for cardiovascular diseases. ### Simulation In this section we first describe the performance of the proposed Sparse Cox Subgrouping approach on a synthetic dataset designed to demonstrate heterogeneous treatment effects. We randomly assign individuals to the treated or control group. The latent variable \(Z\) is drawn from a uniform categorical distribution that determines the subgroup, \[A\sim\mathrm{Bernoulli}(\nicefrac{{1}}{{2}}),\quad Z\sim\mathrm{Categorical}( \nicefrac{{1}}{{3}})\] Conditioned on \(Z\) we sample \(X_{1:2}\sim\mathrm{Normal}(\mathbf{\mu}_{z},\mathbf{\sigma}_{z})\) as in Figure 2, which determine the conditional Hazard Ratios \(\text{HR}(k)\), and randomly sample noisy covariates \(X_{3:6}\sim\text{Uniform}(-1,1)\). The true time-to-event \(T\) and censoring times \(C\) are then sampled as, \[T|(X=\mathbf{x},A=\mathbf{a},Z=\mathbf{k})\sim\mathrm{Gompertz}(\beta=1,\eta=0.25\cdot\text{ HR}(k)^{\mathbf{a}}),\quad C|T\sim\text{Uniform}(0,T)\] Finally we sample the censoring indicator \(\Delta\sim\mathrm{Bernoulli}(0.8)\) and set the observed time-to-event, \[U=T\text{ if }\Delta=1\text{, else we set }U=C.\] Figure 2: a) Population level Kaplan-Meier Estimates of the Survival Distribution stratified by the treatment assignment. b) Distribution of the Latent \(Z\) in \(X\) and the recovered decision boundary by SCS. c) Receiver Operator Characteristics of SCS in recovering the true phenotype. Figure 3: The phenotypes recovered with Sparse Cox Subgrouping on the Synthetic Data. As expected, the recovered phenotypes conform to the modelling assumptions as in Figure 4. Figure 2 presents the ROC curves for SCS's ability to identify the groups with enhanced and diminished treatment effects respectively. In Figure 3 we present Kaplan-Meier estimators of the Time-to-Event distributions conditioned on the predicted \(Z\) by SCS. 
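For concreteness, the sketch below generates data from this synthetic process in Python; the cluster means, spread, and per-group hazard ratios \(\text{HR}(k)\) are illustrative assumptions (the paper specifies them only through Figure 2), and the Gompertz sampler assumes one common \((\beta,\eta)\) parameterization of the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def sample_gompertz(beta, eta, size, rng):
    """Inverse-transform sampling assuming S(t) = exp(-(eta/beta)*(exp(beta*t) - 1));
    this (beta, eta) convention is an assumption about the parameterization used above."""
    u = rng.uniform(size=size)
    return np.log(1.0 - (beta / eta) * np.log(u)) / beta

a = rng.binomial(1, 0.5, size=n)                        # treatment assignment A
z = rng.integers(0, 3, size=n)                          # latent subgroup Z, uniform over 3 values
mu = np.array([[-1.0, -1.0], [0.0, 1.0], [1.0, -1.0]])  # assumed cluster means (not from the paper)
x12 = mu[z] + 0.3 * rng.normal(size=(n, 2))             # informative covariates X_{1:2}
x36 = rng.uniform(-1.0, 1.0, size=(n, 4))               # noisy covariates X_{3:6}
hr = np.array([0.5, 1.0, 2.0])[z]                       # assumed HR(k): benefit / neutral / harm

t = sample_gompertz(beta=1.0, eta=0.25 * hr**a, size=n, rng=rng)  # event times T
c = rng.uniform(0.0, t)                                 # censoring times C | T ~ Uniform(0, T)
delta = rng.binomial(1, 0.8, size=n)                    # censoring indicator Delta
u = np.where(delta == 1, t, c)                          # observed times U
X = np.hstack([x12, x36])                               # design matrix passed to the model
print("event rate:", delta.mean())
```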
Clearly, SCS is able to identify the phenogroups corresponding to differential benefits. ### Recovering subgroups demonstrating Heterogeneous Treatment Effects from Landmark studies of Cardiovascular Health #### Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack [20] The ALLHAT study was a large randomized experiment conducted to assess the efficacy of multiple classes of blood pressure lowering medicines for patients with hypertension in reducing the risk of adverse cardiovascular conditions. We considered a subset of patients from the original **ALLHAT** study who were randomized to receive either Amlodipine (a calcium channel blocker) or Lisinopril (an Angiotensin-converting enzyme inhibitor). Overall, Amlodipine was found to be more efficacious than Lisinopril in reducing the combined risk of cardiovascular disease. Figure 4: Event-free Kaplan-Meier survival curves stratified by the treatment assignment and summary statistics for the **ALLHAT** and **BARI2D** studies. (Combined CVD: Coronary Heart Disease, Stroke, other treated angina, fatal or non-fatal Heart Failure, and Peripheral Arterial Disease.) #### Bypass Angioplasty Revascularization Investigation in Type II Diabetes Group (2009) Diabetic patients have traditionally been known to be at higher risk of cardiovascular disease; however, the appropriate intervention for diabetics with ischemic heart disease, between surgical coronary revascularization and management with medical therapy, is widely debated. **BARI2D** was a large landmark experiment conducted to assess the relative efficacy of these two possible medical interventions. Overall, **BARI2D** was inconclusive in establishing the appropriate therapy, Coronary Revascularization or medical management, for patients with Type-II Diabetes. Figure 4 presents the event-free survival rates as well as the summary statistics for the studies. In our experiments, we included a large set of confounders collected at the baseline visit of the patients, which we utilize to train the proposed model. A full list of these features is in Appendix A. ### Baselines #### Cox PH with \(\ell_{1}\) Regularized Treatment Interaction (cox-int) We include treatment effect heterogeneity via interaction terms that model the time-to-event distribution using a proportional hazards model as in Kehl and Ulm (2006). Thus, \[\mathbf{\lambda}(t|X=\mathbf{x},A=\mathbf{a})=\mathbf{\lambda}_{0}(t)\exp\big{(}\mathbf{\beta}^{ \top}\mathbf{x}+\mathbf{a}\cdot\mathbf{\theta}^{\top}\mathbf{x}\big{)} \tag{10}\] The interaction effects \(\mathbf{\theta}\) are regularized with a lasso penalty in order to recover a sparse phenotyping rule defined as \(G(\mathbf{x})=\mathbf{\theta}^{\top}\mathbf{x}\). #### Binary Classifier with \(\ell_{1}\) Regularized Treatment Interaction (bin-int) Instead of modelling the time-to-event distribution, we directly model the thresholded survival outcomes \(Y=\mathbf{1}\{T<t\}\) at a five-year time horizon using a log-linear parameterization with a logit link function. As compared to cox-int, this model ignores the data-points that were right-censored prior to the thresholded time-to-event; however, it is not sensitive to the strong assumption of Proportional Hazards. 
\[\mathbb{P}(T>t|X=\mathbf{x},A=\mathbf{a})=\sigma(\mathbf{\beta}^{\top}\mathbf{x}+ \mathbf{\beta}_{0}+\mathbf{a}\cdot\mathbf{\theta}^{\top}\mathbf{x}),\] \[\text{and, }\sigma(\cdot)\text{ is the logistic link function.} \tag{11}\] #### Cox PH T-Learner with \(\ell_{1}\) Regularized Logistic Regression (cox-ltr) We train two separate Cox Regression models on the treated and control arms (T-Learner) to estimate the potential outcomes under treatment \((A=1)\) and control \((A=0)\). Motivated by the _'Virtual Twins'_ approach as in Foster et al. (2011), a logistic regression with an \(\ell_{1}\) penalty is trained to estimate if the risk of the potential outcome under treatment is higher than under control. This logistic regression is then employed as the phenotyping function \(G(\cdot)\) and is given as, \[G(\mathbf{x}) =\mathbb{E}[\mathbf{1}\{f_{1}(\mathbf{x},t)>f_{0}(\mathbf{x},t)\}|X=\mathbf{x}]\] \[\text{where, }f_{\mathbf{a}}(\mathbf{x},t) =\mathbb{P}(T>t|\text{do}(A=\mathbf{a}),X=\mathbf{x}). \tag{12}\] The above models involving sparse \(\ell_{1}\) regularization were trained with the glmnet (Friedman et al., 2009) package in R. #### The ACC/AHA Long term Atherosclerotic Cardiovascular Risk Estimate 2 Footnote 2: [https://tools.acc.org/ascvd-risk-estimator-plus/](https://tools.acc.org/ascvd-risk-estimator-plus/) The American College of Cardiology and the American Heart Association model for the estimation of Atherosclerotic disease risk (Goff Jr et al., 2014) involves pooling data from multiple observational cohorts of patients, followed by modelling the 10-year risk of an adverse cardiovascular condition including death from coronary heart disease, Non-Fatal Myocardial Infarction or Non-fatal Stroke. While the risk model was originally developed to assess factual risk in the observational sense, in practice it is also employed to assess risk when making counterfactual decisions. ### Results and Discussion #### Protocol We compare the performance of SCS and the corresponding competing methods in the recovery of subgroups with enhanced (or diminished) treatment effects. For each of these studies we stratify the study population into equal-sized sets for training and validation while preserving the proportion of individuals that were assigned to treatment and experienced the outcome in the follow-up period. The models were trained on the training set and validated on the held-out test set. For each of the methods we experiment with models that do not enforce any sparsity (\(\mathbf{\epsilon}=0\)) as well as tune the level of sparsity to recover phenotyping functions that involve \(5\) and \(10\) features. The subgroup sizes are varied by controlling the threshold at which the individual is assigned to a group. Finally, the treatment effect is compared in terms of Hazard Ratios, Risk Differences, as well as Restricted Mean Survival Time over a 5 Year event period. #### Results We present the results of SCS versus the baselines in terms of Hazard Ratios on the **ALLHAT** and **BARI2D** datasets in Figures 5 and 6. In the case of **ALLHAT**, SCS consistently recovered phenogroups with more pronounced (or diminished) treatment effects. On external validation on the held-out dataset, we found a subgroup of patients that had similar outcomes whether assigned to Lisinopril or Amlodipine, whereas the other subgroup clearly identified patients that were harmed with Lisinopril. The group harmed with Lisinopril had higher Diastolic BP. 
On the other hand, patients with lower kidney function did not seem to benefit from Amlodipine. In the case of **BARI2D**, SCS recovered phenogroups that were both harmed by as well as benefitted from medical therapy alone without revascularization. The patients who were harmed by medical therapy were typically older; on the other hand, the patients who benefitted primarily included patients who were otherwise assigned to receive PCI instead of CABG revascularization, suggesting PCI to be harmful for diabetic patients. Tables 3 and 4 present the features that were selected by the proposed model for the studies. Additionally, we also report tabulated results involving metrics like risk difference and restricted mean survival time in Appendix C. Figure 5: Conditional Average Treatment Effect in Hazard Ratio versus subgroup size for the latent phenogroups extracted from the **ALLHAT** study. Figure 6: Conditional Average Treatment Effect in Hazard Ratio versus subgroup size for the latent phenogroups extracted from the **BARI2D** study. ## 5 Concluding Remarks We presented Sparse Cox Subgrouping (SCS), a latent variable approach to recover subgroups of patients that respond differentially to an intervention in the presence of censored time-to-event outcomes. As compared to alternative approaches to learning parsimonious hypotheses in such settings, our proposed model recovered hypotheses with more pronounced treatment effects, which we validated on multiple studies for cardiovascular health. While powerful in its ability to recover parsimonious subgroups, there exist limitations in SCS in its current form. The model is sensitive to proportional hazards and may be ill-specified when the proportional hazards assumptions are violated, as is evident in many real world clinical studies (Maron et al., 2018; Brethauer et al., 2022). Another limitation is that SCS in its current form looks at only a single endpoint (typically death, or a composite of multiple adverse outcomes). In practice, however, real world studies typically involve multiple end-points. We envision that extensions of SCS would allow patient subgrouping across multiple endpoints, leading to the discovery of actionable sub-populations that similarly benefit from the intervention under assessment.
2308.07634
Dark energy in conformal Killing gravity
The Friedmann equation, augmented with an additional term that effectively takes on the role of dark energy, is demonstrated to be an exact solution to the recently proposed gravitational theory named "conformal Killing gravity." This theory does not explicitly incorporate dark energy. This finding suggests that there is no necessity to postulate the existence of dark energy as an independent physical entity. The dark energy derived from this theory is characterized by a specific equation of state parameter, denoted as $\omega$, which is uniquely determined to be $-5/3$. If this effective dark energy is present, typically around 5% of the total energy density at the present time, and under the assumption of density parameters for matter and the cosmological constant, $\Omega_{\rm m}\sim 0.25$ and $\Omega_\Lambda \sim 0.7$, respectively, the expansion of the universe at low redshifts ($z < 1.5$) can exceed expectations, while the expansion at $z > 1.5$ remains unchanged. This offers a potential solution to the Hubble tension problem. Alternatively, effective dark energy could be a dominant component in the present-day universe. In this scenario, there is also the potential to address the Hubble tension, and furthermore, it resolves the coincidence problem associated with the cosmological constant.
Junpei Harada
2023-08-15T08:31:15Z
http://arxiv.org/abs/2308.07634v2
# Dark energy in conformal Killing gravity ###### Abstract The Friedmann equation, enriched by an additional term that effectively takes on the role of specific dark energy, is demonstrated to serve as an exact solution within the recently proposed gravitational theory named "conformal Killing gravity". This theory does not explicitly incorporate dark energy. This finding suggests that there's no necessity to postulate the existence of dark energy as an independent physical entity. The dark energy effectively arising from this theory is characterized by a specific equation of state parameter, denoted as \(\omega\), which is uniquely determined to be \(-5/3\), classifying it as phantom energy. If this effective dark energy is present in a moderate amount, typically around \(5\%\) of the total energy density at the present time, and under the assumption of density parameters for matter and the cosmological constant, \(\Omega_{\rm m}\sim 0.25\) and \(\Omega_{\Lambda}\sim 0.7\), respectively, the expansion of the universe at low redshifts (\(z<1.5\)) can exceed expectations, while the expansion at \(z>1.5\) remains unchanged. This holds the potential to address the Hubble tension problem. ## I Introduction Recently, within the framework of the gravitational theory proposed by [1] and referred to as "conformal Killing gravity" [2], the equation of motion governing the cosmological scale factor has been extended from the Friedmann equation [1; 2]: \[2\left(\frac{\dot{a}(t)}{a(t)}\right)^{2}-\frac{\ddot{a}(t)}{a(t)}=\frac{4\pi G }{3}(5\rho(t)+3p(t))-\frac{2k}{a^{2}(t)}+\frac{\Lambda}{3}. \tag{1}\] Here, \(a(t)\) represents the cosmological scale factor, the dots denote the time derivative, \(\rho(t)\) and \(p(t)\) stand for the energy density and pressure, respectively, and \(k\) is a constant representing the curvature of three-dimensional space. The cosmological constant \(\Lambda\) in Eq. (1) is obtained as an integration constant [1; 2]. Equation (1) has been independently derived through two distinct methods, as presented in Ref. [1] and in Ref. [2]. Equation (1) exhibits an intriguing feature [1; 2]: Despite the absence of negative pressure or the cosmological constant, the universe described by Eq. (1) undergoes a transition from decelerating to accelerating expansion. To illustrate this cosmological transition, a solution for the scale factor \(a(t)\) was derived within a matter-dominated universe [1]. The same solution was also obtained through a different approach [2]. This solution explicitly describes the transition from deceleration to acceleration. Importantly, this was achieved without the need for negative pressure or a positive cosmological constant \(\Lambda\). In contrast to the previous study [1], which focused solely on a matter-dominated universe as the simplest scenario, this work removes such limitations. Instead, we consider various components of the universe, including matter (m), radiation (r), curvature (_k_), and the cosmological constant (\(\Lambda\)). Throughout this study, we do not explicitly assume the existence of dark energy. We demonstrate that the Friedmann equation, enriched with an additional term that effectively takes on the role of specific dark energy, emerges as an exact solution to Eq. (1). This suggests that the gravitational interactions described by Eq. (1) naturally and effectively generate a particular type of dark energy without introducing it as a distinct physical entity. 
The dark energy derived from this approach possesses an equation of state parameter, \(\omega=p/\rho\), which is uniquely determined to be \(-5/3\). Just a few days ago, Mantica and Molinari reported the same result [2]. If this effective dark energy constitutes approximately \(5\%\) of the total energy density at the present time, and given the density parameters for matter and the cosmological constant, \(\Omega_{\rm m}\sim 0.25\) and \(\Omega_{\Lambda}\sim 0.7\) respectively, the expansion of the universe at low redshifts (\(z<1.5\)) can exceed expectations, while the expansion at \(z>1.5\) remains unaffected. The luminosity distances also remain unchanged, ensuring successful explanations of observed supernovae, similar to the \(\Lambda\)CDM (Lambda Cold Dark Matter) model. As a result, this has the potential to address the Hubble tension [3; 4; 5], a serious unsolved problem in the standard \(\Lambda\)CDM cosmology. This paper is organized as follows. Section II presents the derivation of a particular form of dark energy as a solution to Eq. (1). In Section III, we explore the potential of effective dark energy to address the problem of the Hubble tension. Finally, in Section IV, we provide a summary and conclusions. ## II Effective dark energy We assume that the universe consists of matter (m) and radiation (r), and do not assume the presence of dark energy. Therefore, the energy density \(\rho\) and pressure \(p\) in Eq. (1) are given by \(\rho=\rho_{\rm m}+\rho_{\rm r}\) and \(p=\rho_{\rm r}/3\) respectively. In this case, Eq. (1) takes the form of \[2\left(\frac{\dot{a}}{a}\right)^{2}-\frac{\ddot{a}}{a}=\frac{4\pi G}{3}(5\rho_{ \rm m}+6\rho_{\rm r})-\frac{2k}{a^{2}}+\frac{\Lambda}{3}. \tag{2}\] Here, the energy density \(\rho_{\rm m}\) and \(\rho_{\rm r}\) as functions of \(a\) can be derived from the conservation law, \(\nabla_{\mu}{T^{\mu}}_{\nu}=0\), \[0=-\nabla_{\mu}{T^{\mu}}_{0}=\dot{\rho}+3\frac{\dot{a}}{a}(\rho+p). \tag{3}\] Assuming \(p=\omega\rho\) with \(\omega\) time-independent, Eq. (3) gives \(\rho\propto a^{-3(1+\omega)}\). This provides expressions for matter (\(\omega=0\)) and radiation (\(\omega=1/3\)) as follows: \[\rho_{\rm m}(t) = \rho_{\rm m,0}\left(\frac{a(t)}{a_{0}}\right)^{-3}, \tag{4}\] \[\rho_{\rm r}(t) = \rho_{\rm r,0}\left(\frac{a(t)}{a_{0}}\right)^{-4}, \tag{5}\] where \(\rho_{\rm m,0}\) and \(\rho_{\rm r,0}\) represent the density for matter (m) and radiation (r) at the present time, respectively. The \(a_{0}\) represents the scale factor at the present time. Using the Hubble parameter and its time derivative, \[H\equiv\frac{\dot{a}}{a},\quad\dot{H}=\frac{\ddot{a}}{a}-H^{2}, \tag{6}\] the left-hand side of Eq. (2) can be expressed as \[2\left(\frac{\dot{a}}{a}\right)^{2}-\frac{\ddot{a}}{a}=H^{2}-\dot{H}. \tag{7}\] Substituting Eq. (7) into Eq. (2) and dividing Eq. (2) by \(H_{0}^{2}\), we find that Eq. (2) can be expressed as \[\frac{H^{2}-\dot{H}}{H_{0}^{2}}= \frac{5}{2}\Omega_{\rm m}\left(\frac{a}{a_{0}}\right)^{-3}+3\Omega _{\rm r}\left(\frac{a}{a_{0}}\right)^{-4} \tag{8}\] \[+2\Omega_{k}\left(\frac{a}{a_{0}}\right)^{-2}+\Omega_{\Lambda}.\] Here, the density parameters \(\Omega\)'s are defined as follows: \[\Omega_{\rm m}\equiv\frac{\rho_{\rm m,0}}{\rho_{\rm c}},\ \Omega_{\rm r}\equiv\frac{\rho_{\rm r,0}}{\rho_{\rm c}},\ \Omega_{k}\equiv-\frac{k}{a_{0}^{2}H_{0}^{2}},\ \Omega_{\Lambda}\equiv\frac{\Lambda}{3H_{0}^{2}}, \tag{9}\] and the critical density is defined as \(\rho_{\rm c}\equiv 3H_{0}^{2}/8\pi G\). 
Since we do not assume the presence of dark energy, Eq. (8) does not involve dark energy. Thus, Eq. (8) includes only four components: \(\Omega_{\rm m}\), \(\Omega_{\rm r}\), \(\Omega_{k}\), and \(\Omega_{\Lambda}\). Note that these four density parameters do not necessarily satisfy the relation, \(\Omega_{\rm m}+\Omega_{\rm r}+\Omega_{k}+\Omega_{\Lambda}=1\), which represents the Friedmann equation at the present time. Instead, they satisfy the following relation: \[2+q_{0}=\frac{5}{2}\Omega_{\rm m}+3\Omega_{\rm r}+2\Omega_{k}+\Omega_{\Lambda}. \tag{10}\] Here, the deceleration parameter \(q\) is defined by \[q\equiv-\frac{\ddot{a}a}{\dot{a}^{2}}=-\frac{\ddot{a}}{aH^{2}}=-\frac{\dot{H} }{H^{2}}-1, \tag{11}\] and \(q_{0}\) represents its present value. Equation (10) can be derived as follows. Using Eq. (11), the left-hand side of Eq. (8) can be expressed as \[\frac{H^{2}-\dot{H}}{H_{0}^{2}}=(2+q)\left(\frac{H}{H_{0}}\right)^{2}. \tag{12}\] Substituting Eq. (12) into the left-hand side of Eq. (8) and then taking the present value, we obtain Eq. (10). Equation (10) indicates that \(q_{0}\) can take a negative value even when \(\Omega_{\Lambda}=0\). For instance, in the case where \(\Omega_{\rm r}=\Omega_{k}=\Omega_{\Lambda}=0\), \(q_{0}\) becomes negative if \(\Omega_{\rm m}<4/5=0.8\) (This outcome is consistent with a previous study [1]). Consequently, within the cosmological framework described by Eq. (8), the present-day expansion of the universe can be accelerating (\(q_{0}<0\)), all without the need for negative pressure or a cosmological constant. In a previous study [1], a solution for the scale factor \(a(t)\) was derived by assuming a matter-dominated universe with \(\Omega_{\rm r}=\Omega_{k}=\Omega_{\Lambda}=0\). Notably, this same solution was recently derived in another study [2]. This solution represents the transition from decelerating to accelerating expansion. In the following, we will explicitly elucidate the mechanisms enabling such acceleration. It's worth noting that Eq. (8) includes a time derivative term \(\dot{H}\), which makes it into a differential equation for the Hubble parameter \(H\). By solving Eq. (8) with respect to \(H\) rather than to the scale factor \(a\), we obtain \[\left(\frac{H}{H_{0}}\right)^{2}= \Omega_{\rm m}\left(\frac{a}{a_{0}}\right)^{-3}+\Omega_{\rm r} \left(\frac{a}{a_{0}}\right)^{-4}+\Omega_{k}\left(\frac{a}{a_{0}}\right)^{-2} \tag{13}\] \[+ \Omega_{\Lambda}+(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}- \Omega_{\Lambda})\left(\frac{a}{a_{0}}\right)^{2}.\] In this equation, \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}\) represents an integration constant that has been determined to ensure the validity of Eq. (13) at the present time. Intriguingly, Eq. (13) constitutes an exact solution to Eq. (8). This can be verified easily as follows: Differentiating Eq. (13) with respect to \(t\), and then dividing the result by \(-2H\), we obtain \[-\frac{\dot{H}}{H_{0}^{2}}= \frac{3}{2}\Omega_{\rm m}\left(\frac{a}{a_{0}}\right)^{-3}+2\Omega _{\rm r}\left(\frac{a}{a_{0}}\right)^{-4}+\Omega_{k}\left(\frac{a}{a_{0}} \right)^{-2} \tag{14}\] \[-(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}) \left(\frac{a}{a_{0}}\right)^{2}.\] We find that the sum of Eqs. (13) and (14) is equal to Eq. (8). Thus, Eq. (13) is an exact solution to Eq. (8). 
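This check can also be performed symbolically. The short Python/SymPy sketch below uses \(x=a/a_{0}\) together with \(\dot{x}=xH\), so that \(\dot{H}=(H_{0}^{2}/2)\,x\,f^{\prime}(x)\) whenever \((H/H_{0})^{2}=f(x)\), and confirms that the difference between the two sides of Eq. (8) vanishes identically.

```python
import sympy as sp

# x = a/a0; the Omegas are the density parameters defined in Eq. (9)
x, Om, Or, Ok, OL = sp.symbols('x Omega_m Omega_r Omega_k Omega_Lambda', positive=True)
Oe = 1 - Om - Or - Ok - OL          # coefficient of the extra (a/a0)^2 term in Eq. (13)

# Eq. (13): (H/H0)^2 = f(x)
f = Om*x**-3 + Or*x**-4 + Ok*x**-2 + OL + Oe*x**2

# Since dx/dt = x*H, one has Hdot = (H0^2/2)*x*f'(x), hence (H^2 - Hdot)/H0^2 = f - x*f'/2
lhs = f - x*sp.diff(f, x)/2

# Right-hand side of Eq. (8)
rhs = sp.Rational(5, 2)*Om*x**-3 + 3*Or*x**-4 + 2*Ok*x**-2 + OL

print(sp.simplify(lhs - rhs))       # prints 0: Eq. (13) solves Eq. (8) identically
```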
Conversely, we can readily observe the following: Equation (13) can be identified as the Friedmann equation, wherein an extra term, \((1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda})(a/a_{0})^{2}\), comes into play. The exponent "2" in this term is uniquely determined as a consequence of Eq. (8). Referring to Eq. (3), we deduce that \(2=-3(1+\omega)\), signifying that the equation of state parameter \(\omega\) equals \(-5/3\). Consequently, the supplementary term, \((1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda})(a/a_{0})^{2}\), effectively takes on the role of dark energy with \(\omega=-5/3\). Its density parameter at the present time is given by \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}\). Hence, we deduce that the constituent serving as dark energy inherently arises from Eq. (8). For practical cosmological applications, we can readily begin with Eq. (13). The last term in Eq. (13) vanishes as \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}\to 0\). Consequently, as \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}\) comes exceedingly close to zero, Eq. (13) reduces to the standard Friedmann equation with no dark energy. However, if \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}\) assumes a small yet nonzero value, it results in deviations from the standard Friedmann cosmology. In Section III, we will explore the cosmological implications of having a small value of \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}\). ## III Cosmological Implications It is convenient to express Eq. (13) in terms of the redshift as \[\left(\frac{H(z)}{H_{0}}\right)^{2}= \Omega_{\rm m}(1+z)^{3}+\Omega_{\rm r}(1+z)^{4}+\Omega_{k}(1+z)^{ 2}+\Omega_{\Lambda} \tag{15}\] \[+(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda})(1 +z)^{-2},\] where \(a(t)/a_{0}=1/(1+z)\). It is also convenient to consider the quantity, \(H(z)/(1+z)=\dot{a}(t)/a_{0}\), and its derivative. From Eq. (15), we obtain \[\frac{H(z)}{1+z}=H_{0}\sqrt{\Omega_{\rm m}(1+z)+\Omega_{\rm r}(1+z)^{2}+\Omega _{k}+\Omega_{\Lambda}(1+z)^{-2}+(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}- \Omega_{\Lambda})(1+z)^{-4}}, \tag{16}\] and its derivative with respect to \(z\) as \[\frac{d}{dz}\left(\frac{H(z)}{1+z}\right)=H_{0}\frac{\frac{1}{2}\Omega_{\rm m }+\Omega_{\rm r}(1+z)-\Omega_{\Lambda}(1+z)^{-3}-2(1-\Omega_{\rm m}-\Omega_{ \rm r}-\Omega_{k}-\Omega_{\Lambda})(1+z)^{-5}}{\sqrt{\Omega_{\rm m}(1+z)+ \Omega_{\rm r}(1+z)^{2}+\Omega_{k}+\Omega_{\Lambda}(1+z)^{-2}+(1-\Omega_{\rm m }-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda})(1+z)^{-4}}}. \tag{17}\] Here, the left-hand side of Eq. (17) is calculated as \[\frac{d}{dz}\left(\frac{H(z)}{1+z}\right)=\frac{\dot{H}(1+z)/\dot{z}-H}{(1+z)^ {2}}, \tag{18}\] where dot denotes the time derivative. Substituting \(\dot{z}=-(1+z)H\) into Eq. (18) and using Eq. (11), we obtain a useful formula: \[\frac{d}{dz}\left(\frac{H(z)}{1+z}\right)=\frac{H(z)q(z)}{(1+z)^{2}}. \tag{19}\] Here, \(q\) represents a deceleration parameter defined by Eq. (11). From Eq. (19), we can see that the derivative, \(d(H(z)/(1+z))/dz\), approaches \(H_{0}q_{0}\) as \(z\to 0\). Equation (19) indicates that when we plot \(H(z)/(1+z)\) as a function of \(z\), the slope is positive for decelerating expansion (\(H>0,q>0\)), and negative for accelerating expansion (\(H>0,q<0\)). The point where the transition from decelerating to accelerating expansion occurs corresponds to Eq. (19) becoming zero. 
Although this can be easily deduced from the relation \(H(z)/(1+z)=\dot{a}/a_{0}\), Eqs. (16)-(19) are convenient because they are expressed in terms of \(\Omega\)'s and \(z\). Substituting Eqs. (16) and (17) into Eq. (19), we can express the deceleration parameter \(q(z)\) in terms of \(\Omega\)'s as follows: \[q(z)=\frac{\frac{1}{2}\Omega_{\rm m}(1+z)^{3}+\Omega_{\rm r}(1+z)^{4}-\Omega_ {\Lambda}-2(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda})(1+z) ^{-2}}{\Omega_{\rm m}(1+z)^{3}+\Omega_{\rm r}(1+z)^{4}+\Omega_{k}(1+z)^{2}+ \Omega_{\Lambda}+(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda} )(1+z)^{-2}}. \tag{20}\] The present deceleration parameter, denoted as \(q_{0}\), can be obtained by substituting \(z=0\) into Eq. (20): \[q_{0} = \frac{1}{2}\Omega_{\rm m}+\Omega_{\rm r}-\Omega_{\Lambda}-2(1- \Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}) \tag{21}\] \[= \frac{5}{2}\Omega_{\rm m}+3\Omega_{\rm r}+2\Omega_{k}+\Omega_{ \Lambda}-2.\] This is consistent with the previous result, Eq. (10). The transition redshift, represented as \(z_{q}\), is defined as the redshift at which the universe undergoes a transition from decelerating to accelerating expansion. It is determined by the condition \(q(z=z_{q})=0\). Equivalently, it is determined by the vanishing of Eq. (17). Substituting \(z=z_{q}\) into Eq. (20), or equivalently into Eq. (17), we obtain the condition that determines \(z_{q}\) as \[\frac{1}{2}\Omega_{\rm m}(1+z_{q})^{3}+\Omega_{\rm r}(1+z_{q})^{4}-\Omega_{\Lambda }-2(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda})(1+z_{q})^{-2}=0. \tag{22}\] Figure 1 illustrates \(H(z)/(1+z)\) as a function of redshift for three cosmological models. This figure demonstrates that if the effective dark energy exists in a modest quantity, typically \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}=0.05\), then it has the potential to resolve the Hubble tension problem. As mentioned in the caption of Fig. 1, we observe that if the effective dark energy exists in a small quantity, \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}=0.05\), it leads to a change of approximately \(20\%\) in the values of the present deceleration parameter \(q_{0}\) and the transition redshift \(z_{q}\). This change can be considered relatively large. Consequently, precise measurements of these values may help determine whether \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}\) is zero or nonzero. Finally, using Eq. (15) and following Ref. [8], we can derive the expression for the luminosity distance \(d_{\rm L}(z)\) of an observed source: \[d_{\rm L}(z)=\frac{1+z}{H_{0}\sqrt{\Omega_{k}}}\sinh\left[\sqrt{\Omega_{k}} \int_{\frac{1}{1+z}}^{1}\frac{dx}{x^{2}\sqrt{\Omega_{\rm m}x^{-3}+\Omega_{\rm r }x^{-4}+\Omega_{k}x^{-2}+\Omega_{\Lambda}+(1-\Omega_{\rm m}-\Omega_{\rm r}- \Omega_{k}-\Omega_{\Lambda})x^{2}}}\right], \tag{23}\] which can be used for any \(\Omega_{k}\). For \(\Omega_{k}=\Omega_{\rm r}=0\), the expression is given by \[d_{\rm L}(z)=\frac{1+z}{H_{0}}\int_{\frac{1}{1+z}}^{1}\frac{dx}{x^{2}\sqrt{ \Omega_{\rm m}x^{-3}+\Omega_{\Lambda}+(1-\Omega_{\rm m}-\Omega_{\Lambda})x^{2 }}}, \tag{24}\] where the \(\Omega\)'s satisfy the relation, \(5\Omega_{\rm m}/2+\Omega_{\Lambda}=2+q_{0}\). Figure 1: The \(H(z)/(1+z)\) (in km s\({}^{-1}\) Mpc\({}^{-1}\)) is plotted as a function of the redshift for three cosmological models. The lower solid curve (blue) and the upper dotted curve (orange) correspond to the case with no dark energy: \(\Omega_{\rm m}=0.3\), \(\Omega_{\Lambda}=0.7\), \(\Omega_{k}=\Omega_{\rm r}=0\), and \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}=0\). The Hubble constant is assumed to be \(H_{0}=67\) km s\({}^{-1}\) Mpc\({}^{-1}\) for the lower solid curve, and \(H_{0}=73\) km s\({}^{-1}\) Mpc\({}^{-1}\) for the upper dotted curve, respectively. The middle dashed curve (green) represents the case including the effective dark energy at 5%: \(\Omega_{\rm m}=0.25\), \(\Omega_{\Lambda}=0.7\), \(\Omega_{k}=\Omega_{\rm r}=0\), \(1-\Omega_{\rm m}-\Omega_{\rm r}-\Omega_{k}-\Omega_{\Lambda}=0.05\), and the Hubble constant is assumed to be \(H_{0}=73\) km s\({}^{-1}\) Mpc\({}^{-1}\). The blue point with bar at \(z=0\) represents \(H_{0}=73.0\pm 1.0\) km s\({}^{-1}\) Mpc\({}^{-1}\) obtained from the local distance measurements [6], while the red point with bar at \(z=0\) represents \(H_{0}=67.4\pm 0.5\) km s\({}^{-1}\) Mpc\({}^{-1}\) obtained from the Planck CMB data [7]. The slope of the curves at \(z=0\) represents \(H_{0}q_{0}\). Using Eq. (21), we obtain \(q_{0}=-0.55\) for the lower solid curve and the upper dotted curve, and \(q_{0}=-0.675\) for the middle dashed curve. The three points on the curves represent the cosmological transition point from decelerating to accelerating expansion. Using Eq. (22), we obtain \(z_{q}\simeq 0.67\) for the lower solid curve and the upper dotted curve, and \(z_{q}\simeq 0.80\) for the middle dashed curve. The age of the universe in Gyr is 14.1 for the solid curve, 12.9 for the dotted curve, and 13.6 for the dashed curve, respectively.
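The numbers quoted in this caption, as well as the quantity plotted in Fig. 2, can be reproduced with a few lines of Python; the sketch below evaluates Eqs. (15), (20), (22) and (24) for the middle dashed model (5% effective dark energy).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def E2(z, Om, OL, Ok=0.0, Or=0.0):
    """(H(z)/H0)^2 from Eq. (15), including the effective dark-energy term."""
    Oe = 1.0 - Om - Or - Ok - OL
    return Om*(1 + z)**3 + Or*(1 + z)**4 + Ok*(1 + z)**2 + OL + Oe*(1 + z)**-2

def q(z, Om, OL, Ok=0.0, Or=0.0):
    """Deceleration parameter q(z) from Eq. (20); its zero gives the transition redshift of Eq. (22)."""
    Oe = 1.0 - Om - Or - Ok - OL
    num = 0.5*Om*(1 + z)**3 + Or*(1 + z)**4 - OL - 2*Oe*(1 + z)**-2
    return num / E2(z, Om, OL, Ok, Or)

def H0_dL(z, Om, OL):
    """Hubble constant-free luminosity distance H0*d_L(z) from Eq. (24) (flat case, c = 1 units)."""
    Oe = 1.0 - Om - OL
    integrand = lambda x: 1.0 / (x**2 * np.sqrt(Om*x**-3 + OL + Oe*x**2))
    return (1 + z) * quad(integrand, 1.0/(1 + z), 1.0)[0]

# Middle dashed curve of Fig. 1: 5% effective dark energy, H0 = 73 km/s/Mpc
Om, OL, H0 = 0.25, 0.70, 73.0
print("q0  =", q(0.0, Om, OL))                                 # -0.675, as in the caption
print("z_q =", brentq(lambda z: q(z, Om, OL), 0.1, 3.0))       # ~0.80, as in the caption
print("H(z)/(1+z) at z = 1:", H0*np.sqrt(E2(1.0, Om, OL))/2)   # in km/s/Mpc
print("log10(H0 dL) at z = 1:", np.log10(H0_dL(1.0, Om, OL)))  # quantity plotted in Fig. 2
```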
Figure 2 illustrates the Hubble constant-free luminosity distance, \(\log_{10}(H_{0}d_{\rm L}(z))\), as a function of redshift for three cosmological models. This figure demonstrates that a small quantity of the effective dark energy has a negligible impact on the luminosity distance. Therefore, the success of the \(\Lambda\)CDM model in explaining luminosity distances, and consequently the observations of supernovae, remains unaffected even in the presence of a modest quantity of the effective dark energy. ## IV Summary and Conclusions In this study, we have demonstrated that the Friedmann equation, enhanced by an additional term that effectively takes on the role of a specific type of dark energy, exactly solves the recently proposed gravitational field equation (2), which does not explicitly include dark energy. Consequently, there is no need to assume the existence of dark energy as a separate physical entity. The effective dark energy derived using this approach is characterized by a unique equation of state parameter, \(\omega=-5/3\), which is solely determined by the gravitational field equation. As depicted in Fig. 1, our findings demonstrate that when the effective dark energy is present in a moderate amount, typically around \(5\%\) of the total energy density, it holds the potential to resolve the Hubble tension issue. Furthermore, we have observed that this relatively small portion (\(\sim 5\%\)) of effective dark energy leads to a notable alteration of approximately \(20\%\) in both the current deceleration parameter \(q_{0}\) and the transition redshift \(z_{q}\). Precisely determining these parameters could help in distinguishing whether the energy density of effective dark energy is zero or nonzero. Additionally, as shown in Fig. 
2, our research has revealed that the effective dark energy has minimal influence on luminosity distances, thereby leaving the explanation of observed supernova data unaffected. _Note added:_ While completing this paper, the author received a paper by C. A. Mantica and L. G. Molinari [2] which also reports that an equation of state parameter for the additional component is determined to be \(-5/3\). ###### Acknowledgements. This work was supported by JSPS KAKENHI Grant No. JP22K03599.
2305.07810
Depth Dependence of $μ$P Learning Rates in ReLU MLPs
In this short note we consider random fully connected ReLU networks of width $n$ and depth $L$ equipped with a mean-field weight initialization. Our purpose is to study the dependence on $n$ and $L$ of the maximal update ($\mu$P) learning rate, the largest learning rate for which the mean squared change in pre-activations after one step of gradient descent remains uniformly bounded at large $n,L$. As in prior work on $\mu$P by Yang et al., we find that this maximal update learning rate is independent of $n$ for all but the first and last layer weights. However, we find that it has a non-trivial dependence on $L$, scaling like $L^{-3/2}$.
Samy Jelassi, Boris Hanin, Ziwei Ji, Sashank J. Reddi, Srinadh Bhojanapalli, Sanjiv Kumar
2023-05-13T01:10:49Z
http://arxiv.org/abs/2305.07810v1
# Depth Dependence of \(\mu\)P Learning Rates in ReLU MLPs ###### Abstract In this short note we consider random fully connected ReLU networks of width \(n\) and depth \(L\) equipped with a mean-field weight initialization. Our purpose is to study the dependence on \(n\) and \(L\) of the maximal update (\(\mu\)P) learning rate, the largest learning rate for which the mean squared change in pre-activations after one step of gradient descent remains uniformly bounded at large \(n,L\). As in prior work on \(\mu\)P [9], we find that this maximal update learning rate is independent of \(n\) for all but the first and last layer weights. However, we find that it has a non-trivial dependence on \(L\), scaling like \(L^{-3/2}\). ## 1 Introduction Using a neural network requires many choices. Even after fixing an architecture, one must still specify an initialization scheme, learning rate (schedule), batch size, data augmentation, regularization strength, and so on. Moreover, model performance is often highly sensitive to the setting of these hyperparameters, and yet exhaustive grid-search-type approaches are computationally expensive. It is therefore important to develop theoretically grounded principles for reducing the cost of hyperparameter tuning. In this short note we focus specifically on the question of how to select learning rates in a principled way. More precisely, our purpose is to generalize the _maximal update_ (\(\mu\)P) approach of [9] to setting learning rates so as to take into account network depth. ### Overview of \(\mu\)P Approach to Learning Rates We study learning rates in the simple setting of depth \(L\) fully connected neural networks with ReLU activations and a uniform value \(n\) for the input dimension and the hidden layer widths.1 In such a network, by definition, each input \(x\in\mathbb{R}^{n}\) produces an output \(z^{(L+1)}(x)\in\mathbb{R}^{n}\) through a sequence of pre-activations \(z^{(\ell)}(x)\in\mathbb{R}^{n}\) given by Footnote 1: Our computations readily generalize to the case of variable layer widths. Indeed, we carry out the proof of Theorem 1.1 in this context. \[z^{(\ell+1)}(x)=\begin{cases}W^{(\ell+1)}\sigma\left(z^{(\ell)}(x)\right),& \ell\geq 1\\ W^{(1)}x,&\ell=0\end{cases},\qquad\sigma(t):=\max\left\{0,t\right\}. \tag{1.1}\] Selecting learning rates cannot be done independently of an initialization scheme. As in [9], we draw random weights for the network (1.1) from the so-called mean-field initialization \[W_{ij}^{(\ell)}\sim\begin{cases}\mathcal{N}\left(0,2/n\right),&\ell=1,\ldots,L \\ \mathcal{N}\left(0,1/n^{2}\right),&\ell=L+1\end{cases}. \tag{1.2}\] The factor of two in the variance of the hidden layer weights corresponds to the well-known He initialization [3], which ensures that the expected squared activations neither grow nor decay with depth: \[\mathbb{E}\left[\left|\left|z^{(\ell)}(x)\right|\right|^{2}\right]=\left| \left|x\right|\right|^{2},\quad\forall\ell=1,\ldots,L. \tag{1.3}\] The much smaller variance of the weights in the final layer distinguishes the initialization scheme (1.2) from the so-called NTK initialization [4]. The difference is twofold. First, when \(n\) is large the network output \(z^{(L+1)}(x)\) is close to zero. However, crucially, the parameter gradients \(\nabla_{\theta}z^{(L+1)}(x)\) remain non-zero. Second, even in the infinite width limit \(n\to\infty\), networks trained by gradient descent are capable of feature learning [6, 7, 8, 9]. 
This is in contrast to the setting where the final layer weight variance scales like \(1/n\), which corresponds to the kernel regime in which neural networks trained by SGD with a small learning rate on a mean squared error loss converge to linear models and hence cannot learn data-adaptive features [1, 4, 5]. A key contribution of [9] is that the initialization (1.2) not only leads to feature learning at large \(n\) but also allows for zero-shot learning rate transfer with respect to variable width. This means that, empirically, for a fixed depth \(L\) the learning rate at small \(n\) that leads to the smallest training loss after one epoch is close to constant as one varies \(n\). Hence, in practice, one may do logarithmic grid search for good learning rates in relatively small models (with small \(n\)) and then simply re-use the best learning rate for wider networks. ### Main Result: Extending the \(\mu\)P Heuristic to Deeper Networks Instead of studying directly the training loss after one epoch, [9] introduces what we will refer to here as the _maximal update heuristic_, which says that a good learning rate is one that corresponds to the largest change in hidden layer pre-activations after one step of GD that does not lead to a divergence at large \(n\). More precisely, the relation (1.3) shows that the \(i\)-th neuron pre-activation in layer \(\ell\) corresponding to an input \(x\) satisfies \[\mathbb{E}\left[\left(z_{i}^{(\ell)}(x)\right)^{2}\right]=\frac{1}{n}\left| \left|x\right|\right|^{2},\qquad i=1,\ldots,n,\quad\ell=1,\ldots,L,\] with the average being over initialization. To study the change in neuron pre-activations under GD we consider a batch \(\mathcal{B}=\{(x,y)\}\) of size \(1\) and the associated mean-squared error \[\mathcal{L}_{\mathcal{B}}(\theta):=\frac{1}{2}\left|\left|z^{(L+1)}(x;\theta )-y\right|\right|^{2},\] where we've emphasized the dependence of the network output \(z^{(L+1)}(x;\theta)\) on the network weights \(\theta\). Let us denote by \[\Delta^{\mathcal{B}}z_{i}^{(\ell)}(x)=\text{change in }z_{i}^{(\ell)}(x) \text{ after first step of GD on }\mathcal{L}_{\mathcal{B}}.\] The maximal update heuristic then asks that we set the learning rate \(\eta\) so that \[\mu\text{P learning rate }\eta^{*}:=\text{learning rate for which }\mathbb{E}\left[\left(\Delta^{\mathcal{B}}z_{i}^{(\ell)}(x) \right)^{2}\right]=1, \tag{1.4}\] where the average is over initialization. A priori, \(\eta^{*}\) depends on both the network width \(n\) and the depth \(L\). The article [9] shows that \(\eta^{*}\) does not depend on \(n\) and hence can be estimated accurately at small \(n\). In this article, we take up the question of how \(\eta^{*}\) depends on depth. The following theorem shows that \(\eta^{*}\) is not depth-independent: **Theorem 1.1**.: _For each \(c_{1}>0\) there exist \(c_{2},c_{3}>0\) with the following property. Fix a network width \(n\) and depth \(L\) so that \(L/n<c_{1}\). 
Then,_ \[\sup_{n\geq 1}\left|\mathbb{E}\left[\left(\Delta^{\mathcal{B}}z_{i}^{(\ell)}(x )\right)^{2}\right]-c_{2}\eta^{2}\ell^{3}\right|\leq c_{3}\eta^{2}\ell^{2}, \tag{1.5}\] _where \(\mathcal{B}=\{(x,y)\}\) is any batch of size one consisting of a normalized datapoint \((x,y)\) sampled independent of network weights and biases with:_ \[\mathbb{E}\left[\frac{1}{n}\left|\left|x\right|\right|^{2}\right]=1,\qquad \mathbb{E}\left[\left|\left|y\right|\right|^{2}\right]=1.\] Theorem 1.1 shows that the \(\mu\text{P}\) heuristic (1.4) dictates that \[\eta^{*}(L)=\text{const}\cdot L^{-3/2}.\] ## 2 Proof of Theorem 1.1 ### Notation and Problem Setting We prove a slightly more general result than Theorem 1.1 in two senses. First, we allow for variable widths: \[n_{\ell}=\text{width of layer }\ell=0,\ldots,L+1\] Second, we will also allow for parameter-dependent learning rates: \[\eta_{\mu}=\text{ learning rate used for parameter }\mu.\] At the end we will restrict to the case where \(\eta_{\mu}=\eta\) is independent of \(\mu\). Moreover, in order to state our proof most efficiently, we introduce some notation. Namely, we will write \(x_{\alpha}\in\mathbb{R}^{n_{0}}\) for the network input at which we study both the forward and backward pass and will denote for brevity \[z_{i;\alpha}^{(\ell)}:=z_{i}^{(\ell)}(x_{\alpha}),\qquad z_{\alpha}^{(\ell)}:= z^{(\ell)}(x_{\alpha}).\] Thus, the batch loss \(\mathcal{L}_{\mathcal{B}}\) we consider is \[\frac{1}{2}\left|\left|z_{\alpha}^{(L+1)}-y_{\alpha}\right|\right|^{2}.\] Further, we abbreviate \[\Delta z_{i;\alpha}^{(\ell)}:=\Delta^{\mathcal{B}}z_{i;\alpha}^{(\ell)}.\] With this notation, the forward pass now takes the form \[z_{i;\alpha}^{(\ell+1)}=\begin{cases}\sum_{j=1}^{n_{0}}W_{ij}^{(1)}x_{j;\alpha},& \ell=0\\ \sum_{j=1}^{n_{\ell-1}}W_{ij}^{(\ell)}\sigma\left(z_{j;\alpha}^{(\ell)}\right),& \ell=1,\ldots,L\end{cases}\] and the initialization scheme is \[W_{ij}^{(\ell+1)}\sim\begin{cases}\mathcal{N}\left(0,\frac{1}{n_{\ell}^{2}} \right),&\ell=L\\ \mathcal{N}\left(0,\frac{2}{n_{\ell}}\right),&\ell=0,\ldots,L-1\end{cases}.\] ### Proof Details We begin with the following Lemma. **Lemma 2.1**.: _For any depth \(\ell\leq L\), the pre-activation change satisfies_ \[\mathbb{E}[(\Delta z_{i;\alpha}^{(\ell)})^{2}]=A^{(\ell)}+B^{(\ell)},\] _where_ \[A^{(\ell)}:=\mathbb{E}\left[\frac{1}{n_{L}^{2}}\sum_{\mu_{1}, \mu_{2}\leq\ell}\eta_{\mu_{1}}\eta_{\mu_{2}}\partial_{\mu_{1}}z_{1;\alpha}^{( \ell)}\partial_{\mu_{2}}z_{1;\alpha}^{(\ell)}\right. \tag{2.1}\] \[\qquad\qquad\times\left.\frac{1}{n_{L}^{2}}\sum_{j_{1},j_{2}=1}^{ n_{L}}\left\{\partial_{\mu_{1}}z_{j_{1};\alpha}^{(L)}\partial_{\mu_{2}}z_{j_{1}; \alpha}^{(L)}\left(z_{j_{2};\alpha}^{(L)}\right)^{2}+2z_{j_{1};\alpha}^{(L)} \partial_{\mu_{1}}z_{j_{1};\alpha}^{(L)}z_{j_{2};\alpha}^{(L)}\partial_{\mu_ {2}}z_{j_{2};\alpha}^{(L)}\right\}\right],\] \[B^{(\ell)}:=\mathbb{E}\left[\frac{1}{n_{L}}\sum_{\mu_{1},\mu_{2} \leq\ell}\eta_{\mu_{1}}\eta_{\mu_{2}}\partial_{\mu_{1}}z_{1;\alpha}^{(\ell)} \partial_{\mu_{2}}z_{1;\alpha}^{(\ell)}\frac{1}{n_{L}}\sum_{j=1}^{n_{L}} \partial_{\mu_{1}}z_{j;\alpha}^{(L)}\partial_{\mu_{2}}z_{j;\alpha}^{(L)} \right]. \tag{2.2}\] Proof of Lemma 2.1.: We first expand \(\Delta z_{i;\alpha}^{(\ell)}\) by applying the chain rule: \[\Delta z_{i;\alpha}^{(\ell)}=\sum_{\mu\leq\ell}\cdot\partial_{\mu}z_{i;\alpha }^{(\ell)}\Delta\mu, \tag{2.3}\] where \(\Delta\mu\) is the change in \(\mu\) after one step of GD. 
The SGD update satisfies: \[\Delta\mu=-\eta_{\mu}\partial_{\mu}\left\{\frac{1}{2}\left|\left|z_{\alpha}^{ (L+1)}-y_{\alpha}\right|\right|^{2}\right\}=-\eta_{\mu}\sum_{k=1}^{n_{L+1}} \partial_{\mu}z_{k;\alpha}^{(L+1)}\left(z_{k;\alpha}^{(L+1)}-y_{k;\alpha} \right). \tag{2.4}\] We now combine (2.3) and (2.4) to obtain: \[\Delta z_{i;\alpha}^{(\ell)}=\sum_{\mu\leq\ell}\sum_{k=1}^{n_{L+1}}\eta_{\mu} \partial_{\mu}z_{i;\alpha}^{(\ell)}\partial_{\mu}z_{k;\alpha}^{(L+1)}\left(y_ {k;\alpha}-z_{k;\alpha}^{(L+1)}\right). \tag{2.5}\] Using (2.5), we obtain \[\mathbb{E}\left[\left(\Delta z_{i;\alpha}^{(\ell)}\right)^{2}\right]= \mathbb{E}\left[\left(\sum_{\mu\leq\ell}\eta_{\mu}\partial_{\mu}z_ {1;\alpha}^{(\ell)}\partial_{\mu}z_{1;\alpha}^{(L+1)}\left(z_{1;\alpha}^{(L+1) }-y_{1;\alpha}\right)\right)^{2}\right]\] \[= \mathbb{E}\left[\sum_{\mu_{1},\mu_{2}\leq\ell}\eta_{\mu_{1}}\eta_ {\mu_{2}}\partial_{\mu_{1}}z_{1;\alpha}^{(\ell)}\partial_{\mu_{2}}z_{1;\alpha} ^{(\ell)}\partial_{\mu_{1}}z_{1;\alpha}^{(L+1)}\partial_{\mu_{2}}z_{1;\alpha}^ {(L+1)}\mathbb{E}_{y}\left[\left(z_{1;\alpha}^{(L+1)}-y_{1;\alpha}\right)^{2} \right]\right]. \tag{2.6}\] Given the distribution of \(z_{1;\alpha}^{(L+1)}\) and \(y\), we have \[\mathbb{E}_{y}\left[\left(z_{1;\alpha}^{(L+1)}-y\right)^{2}\right]=(z_{1; \alpha}^{(L+1)})^{2}+1 \tag{2.7}\] We plug (2.7) in (2.6) and obtain \[\mathbb{E}[(\Delta z_{i;\alpha}^{(\ell)})^{2}]=A^{(\ell)}+B^{(\ell)}, \tag{2.8}\] where \[A^{(\ell)} =\mathbb{E}\left[\sum_{\mu_{1},\mu_{2}\leq\ell}\eta_{\mu_{1}}\eta _{\mu_{2}}\partial_{\mu_{1}}z_{1;\alpha}^{(\ell)}\partial_{\mu_{2}}z_{1; \alpha}^{(\ell)}\partial_{\mu_{1}}z_{1;\alpha}^{(L+1)}\partial_{\mu_{2}}z_{1; \alpha}^{(L+1)}\left(z_{1;\alpha}^{(L+1)}\right)^{2}\right] \tag{2.9}\] \[B^{(\ell)} =\mathbb{E}\left[\sum_{\mu_{1},\mu_{2}\leq\ell}\eta_{\mu_{1}}\eta _{\mu_{2}}\partial_{\mu_{1}}z_{1;\alpha}^{(\ell)}\partial_{\mu_{2}}z_{1; \alpha}^{(\ell)}\partial_{\mu_{1}}z_{1;\alpha}^{(L+1)}\partial_{\mu_{2}}z_{1; \alpha}^{(L+1)}\right]. \tag{2.10}\] We integrate out the weights in layer \(L+1\) in (2.9) and (2.10) which yields the stated result. **Lemma 2.2**.: _For any depth \(\ell\leq L\), the constant \(A^{(\ell)}\) in Lemma 2.1 satisfies \(A^{(\ell)}=O(n^{-1})\)._ Proof of Lemma 2.2.: The result is obtained essentially the same analysis at we apply to \(B^{(\ell)}\) below combined with the observation that there is an extra \(1/n_{L}\) in front of \(A^{(\ell)}\) compared with \(B^{(\ell)}\). Lemma 2.2 indicates that we may neglect the contribution of \(A^{(\ell)}\) in Lemma 2.1. We now focus on obtaining a recursive description for \(B^{(\ell)}\). **Lemma 2.3**.: _For any depth \(\ell\leq L\), the constant \(B^{(\ell)}\) in Lemma 2.1 satisfies_ \[B^{(\ell)}=\mathbb{E}\left[\frac{1}{n_{L}}\sum_{\mu_{1},\mu_{2}\leq\ell}\eta_{ \mu_{1}}\eta_{\mu_{2}}\frac{1}{n_{\ell}^{2}}\sum_{j_{1},j_{2}=1}^{n_{\ell}} \partial_{\mu_{1}}z_{j_{1};\alpha}^{(\ell)}\partial_{\mu_{2}}z_{j_{1};\alpha} ^{(\ell)}\partial_{\mu_{1}}z_{j_{2};\alpha}^{(\ell)}\partial_{\mu_{2}}z_{j_{2}; \alpha}^{(\ell)}\right]. \tag{2.11}\] Proof of Lemma 2.3.: The idea of this proof is to condition on \(z_{\alpha}^{(\ell)}\) and integrate out weights in layers \(\ell+1,\ldots,L\) to obtain \[\mathbb{E}\left[\frac{1}{n_{L}}\sum_{j=1}^{n_{L}}\partial_{\mu_{1}}z_{j; \alpha}^{(L)}\partial_{\mu_{2}}z_{j;\alpha}^{(L)}\ \bigg{|}\ z_{\alpha}^{(\ell)}\right]=\frac{1}{n_{\ell}}\sum_{j=1}^{n_{\ell}} \partial_{\mu_{1}}z_{j;\alpha}^{(\ell)}\partial_{\mu_{2}}z_{j;\alpha}^{(\ell)}. 
\tag{2.12}\] This will yield the result once we plug (2.12) into (2.2). To see (2.12), we proceed by induction on \(L\) starting with \(\ell=L\). In this case, the result is trivial. Suppose now \(\ell<L\). Then we have \[\mathbb{E}\left[\frac{1}{n_{L}}\sum_{j=1}^{n_{L}}\partial_{\mu_{1} }z_{j;\alpha}^{(L)}\partial_{\mu_{2}}z_{j;\alpha}^{(L)}\ \bigg{|}\ z_{\alpha}^{(\ell)}\right]\] \[\qquad=\mathbb{E}\left[\frac{1}{n_{L}}\sum_{j=1}^{n_{L}}\sum_{k_{ 1},k_{2}=1}^{n_{L-1}}W_{jk_{1}}^{(L)}W_{jk_{2}}^{(L)}\partial_{\mu_{1}}\sigma \left(z_{k_{1};\alpha}^{(L-1)}\right)\partial_{\mu_{2}}\sigma\left(z_{k_{2}; \alpha}^{(L-1)}\right)\ \bigg{|}\ z_{\alpha}^{(\ell)}\right]\] \[\qquad=\mathbb{E}\left[\frac{1}{n_{L}}\sum_{j=1}^{n_{L}}\frac{2} {n_{L-1}}\sum_{k=1}^{n_{L-1}}\partial_{\mu_{1}}\sigma\left(z_{k;\alpha}^{(L-1 )}\right)\partial_{\mu_{2}}\sigma\left(z_{k;\alpha}^{(L-1)}\right)\ \bigg{|}\ z_{\alpha}^{(\ell)}\right]\] \[\qquad=\mathbb{E}\left[\frac{2}{n_{L-1}}\sum_{k=1}^{n_{L-1}} \left(\sigma^{\prime}\left(z_{k;\alpha}^{(L-1)}\right)\right)^{2}\partial_{\mu _{1}}z_{k;\alpha}^{(L-1)}\partial_{\mu_{2}}z_{k;\alpha}^{(L-1)}\ \bigg{|}\ z_{\alpha}^{(\ell)}\right]\] \[\qquad=\frac{1}{n_{L-1}}\sum_{k=1}^{n_{L-1}}\partial_{\mu_{1}}z_ {k;\alpha}^{(L-1)}\partial_{\mu_{2}}z_{k;\alpha}^{(L-1)},\] where in the last equality we use that \(\sigma^{\prime}(z_{k;\alpha}^{(\ell)})\) is distributed according to a Bernoulli \(1/2\) random variable and is independent of \(\partial_{\mu_{1}}z_{k;\alpha}^{(L-1)}\partial_{\mu_{2}}z_{k;\alpha}^{(L-1)}\) (this can be seen by symmetrizing \(W^{(L-1)}\to-W^{(L-1)}\)). Our next step is to derive a recursion for \(B^{(\ell+1)}\) in terms of \(B^{(\ell)}\). This is done in Lemma 2.5 below, which relies on the following result: **Proposition 2.4**.: _Consider a random ReLU network with input dimension \(n_{0}\), \(L\) hidden layers of widths \(n_{1},\ldots,n_{L}\), and output dimension \(n_{L+1}\) as in (1.1). Suppose that_ \[\frac{1}{n_{1}}+\cdots+\frac{1}{n_{L}}\leq c_{1}\] _for some \(c_{1}>0\). For any fixed network input \(x_{\alpha}\in\mathbb{R}^{n_{0}}\) and any \(\ell=1,\ldots,L\) we have_ \[\mathbb{E}\left[\frac{1}{n_{\ell}}\sum_{j=1}^{n_{\ell}}\left(z_{j; \alpha}^{(\alpha)}\right)^{4}\right]=\Theta\left(\frac{1}{n_{0}^{2}}\left| \left|x_{\alpha}\right|\right|^{4}\right), \tag{2.13}\] _where the implicit constants depend on \(c_{1}\) but are otherwise independent are \(n\),\(\ell\)._ Proof.: This result is proved in Theorem 1 [2]. We have the following result. **Lemma 2.5**.: _For any depth \(\ell\leq L\), \(B^{(\ell)}\) satisfies the following recursion:_ \[B^{(\ell)}=\Theta\left(\frac{(\eta_{W}^{(\ell)})^{2}n_{\ell-1}^{2}}{n_{L}n_{ \ell}}\frac{1}{n_{0}^{2}}\left|\left|x_{\alpha}\right|\right|^{4}\right)+\frac {\eta_{W}^{(\ell)}n_{\ell-1}}{n_{\ell}}C^{(\ell-1)}+\frac{1}{n_{\ell}}\widetilde {B}^{(\ell-1)}+\left(1+\frac{1}{n_{\ell}}\right)B^{(\ell-1)}, \tag{2.14}\] _where \(C^{(\ell)},\widetilde{B}^{(\ell)}>0\) are defined as follows:_ \[\widetilde{B}^{(\ell)} :=\frac{1}{n_{\ell+1}}\mathbb{E}\left[\frac{1}{n_{L}}\sum_{\mu_{1}, \mu_{2}\leq\ell}\eta_{\mu_{1}}\eta_{\mu_{2}}\frac{1}{n_{\ell}^{2}}\sum_{j_{1}, j_{2}=1}^{n_{\ell}}\left(\partial_{\mu_{1}}z_{j_{1};\alpha}^{(\ell)} \partial_{\mu_{2}}z_{j_{2};\alpha}^{(\ell)}\right)^{2}\right], \tag{2.15}\] \[C^{(\ell)} :=\mathbb{E}\left[\frac{1}{n_{L}}\sum_{\mu\leq\ell}\eta_{\mu} \frac{1}{n_{\ell}^{2}}\sum_{j_{1},j_{2}=1}^{n_{\ell}}\left(z_{j_{1};\alpha}^{ (\ell)}\partial_{\mu}z_{j_{2};\alpha}^{(\ell)}\right)^{2}\right]. 
\tag{2.16}\] Proof of Lemma 2.5.: We distinguish several cases to expand the recursion of \(B^{(\ell)}\). If \(\mu_{1},\mu_{2}\in\ell\), then the contribution to \(B^{(\ell)}\) is \[\frac{(\eta_{W}^{(\ell)})^{2}n_{\ell-1}^{2}}{n_{L}n_{\ell}}\mathbb{E}\left[ \frac{1}{n_{\ell-1}^{2}}\sum_{j_{1},j_{2}=1}^{n_{\ell}-1}\left(\sigma_{j_{1}} ^{(\ell-1)}\sigma_{j_{2}}^{(\ell-1)}\right)^{2}\right]=\frac{(\eta_{W}^{(\ell) })^{2}n_{\ell-1}^{2}}{n_{L}n_{\ell}}\Theta\left(\frac{1}{n_{0}^{2}}\left| \left|x_{\alpha}\right|\right|^{2}\right) \tag{2.17}\] Further, if \(\mu_{1}\leq\ell-1\) and \(\mu_{2}\in\ell\) (or vice versa), then the contribution to \(B^{(\ell)}\) is \[2\frac{\eta_{W}^{(\ell)}n_{\ell-1}}{n_{\ell}}\mathbb{E}\left[ \frac{1}{n_{L}}\sum_{\mu_{1}\leq\ell-1}\eta_{\mu_{1}}\frac{1}{n_{\ell-1}}\sum _{k=1}^{n_{\ell-1}}\left(\sigma_{k}^{(\ell-1)}\right)^{2}\frac{1}{n_{\ell}} \sum_{j=1}^{n_{\ell}}\left(\partial_{\mu_{1}}z_{j}^{(\ell)}\right)^{2}\right]= \frac{\eta_{W}^{(\ell)}n_{\ell-1}}{n_{\ell}}C^{(\ell-1)}. \tag{2.18}\] Finally, if \(\mu_{1},\mu_{2}\leq\ell-1\), we find the contribution to \(B^{(\ell)}\) is \[\mathbb{E}\left[\frac{1}{n_{L}}\sum_{\mu_{1},\mu_{2}\leq\ell-1} \eta_{\mu_{1}}\eta_{\mu_{2}}\left\{\frac{1}{n_{\ell}}\left(\partial_{\mu_{1}}z _{1}^{(\ell)}\partial_{\mu_{2}}z_{1}^{(\ell)}\right)^{2}+\left(1-\frac{1}{n_{ \ell}}\right)\partial_{\mu_{1}}z_{1}^{(\ell)}\partial_{\mu_{2}}z_{1}^{(\ell)} \partial_{\mu_{1}}z_{2}^{(\ell)}\partial_{\mu_{2}}z_{2}^{(\ell)}\right\}\right]\] \[=\left(1+\frac{1}{n_{\ell}}\right)B^{(\ell-1)}+\frac{1}{n_{\ell}} \widetilde{B}^{(\ell-1)}. \tag{2.19}\] We adding the contributions (2.17), (2.18) and (2.19) in (2.11) gives the stated result. We now compute the recursion that \(\widetilde{B}^{(\ell)}\) satisfies. **Lemma 2.6**.: _For any depth \(\ell\leq L\), \(\widetilde{B}^{(\ell)}\) defined in (2.15) satisfies the following recursion:_ \[\frac{1}{n_{\ell}}\widetilde{B}^{(\ell)}=\Theta\left(\frac{(\eta_{W}^{(\ell)}) ^{2}n_{\ell-1}^{2}}{n_{L}n_{\ell}}\frac{||x_{\alpha}||^{4}}{n_{0}^{2}}\right) +\frac{\eta_{W}^{(\ell)}n_{\ell-1}}{n_{\ell}}C^{(\ell-1)}+\frac{n_{\ell-1}}{n_ {\ell}}\frac{1}{n_{\ell-1}}\widetilde{B}^{(\ell-1)}+\frac{2}{n_{\ell}^{2}}B^{( \ell-1)}. \tag{2.20}\] Proof of Lemma 2.6.: We apply the same proof strategy as in Lemma 2.5 to get the result. Note that (2.14) and (2.20) also depends on \(C^{(\ell)}\). Its recursion is given by the following lemma. **Lemma 2.7**.: _For any depth \(\ell\leq L\), \(C^{(\ell)}\) defined in (2.16) satisfies the following recursion_ \[C^{(\ell)}=\Theta\left(\eta_{W}^{(\ell)}\frac{n_{\ell-1}}{n_{L}} \frac{||x_{\alpha}||^{4}}{n_{0}^{2}}\right)+\frac{1}{n_{\ell}}C^{(\ell-1)}+ \left(1+\frac{1}{n_{\ell}}\right)\widetilde{C}^{(\ell-1)}, \tag{2.21}\] _where \(\widetilde{C}^{(\ell)}>0\) is a sequence defined as_ \[\widetilde{C}^{(\ell)}:=\frac{1}{n_{L}}\mathbb{E}\left[\sum_{\mu \leq\ell}\eta_{\mu}\frac{1}{n_{\ell}^{2}}\sum_{j_{1},j_{2}=1}^{n_{\ell}} \partial_{\mu}z_{j_{1}}^{(\ell)}z_{j_{1}}^{(\ell)}\partial_{\mu}z_{j_{2}}^{( \ell)}z_{j_{2}}^{(\ell)}\right]. \tag{2.22}\] Proof of Lemma 2.7.: We distinguish several cases to expand the recursion of \(C^{(\ell)}\). 
If \(\mu\in\ell\), the contribution to (2.16) is \[\eta_{W}^{(\ell)}\frac{n_{\ell-1}}{n_{L}}\mathbb{E}\left[\frac{1}{n_{\ell-1}^{2}}\sum_{j_{1},j_{2}=1}^{n_{\ell-1}}\left(z_{j_{1}}^{(\ell-1)}z_{j_{2}}^{(\ell-1)}\right)^{2}\right]=\eta_{W}^{(\ell)}\Theta\left(\frac{n_{\ell-1}}{n_{L}}\frac{\left|\left|x_{\alpha}\right|\right|^{4}}{n_{0}^{2}}\right). \tag{2.23}\] Finally, when \(\mu\leq\ell-1\), the contribution to (2.16) is \[\begin{split}&\frac{1}{n_{L}}\mathbb{E}\left[\sum_{\mu\leq\ell-1}\eta_{\mu}\left\{\frac{1}{n_{\ell}}\left(\partial_{\mu}z_{1}^{(\ell)}z_{1}^{(\ell)}\right)^{2}+\left(1-\frac{1}{n_{\ell}}\right)\left(\partial_{\mu}z_{1}^{(\ell)}\right)^{2}\left(z_{2}^{(\ell)}\right)^{2}\right\}\right]\\ =&\ C^{(\ell-1)}+\frac{1}{n_{\ell}}\widetilde{C}^{(\ell-1)}.\end{split} \tag{2.24}\] Combining (2.23) and (2.24) yields the result. We finally find the recursion of \(\widetilde{C}^{(\ell)}\) that appears in (2.21). **Lemma 2.8**.: _For any depth \(\ell\leq L\), \(\widetilde{C}^{(\ell)}\) satisfies the following recursion:_ \[\widetilde{C}^{(\ell)}=\Theta\left(\frac{\eta_{W}^{(\ell)}n_{\ell-1}}{n_{\ell}n_{L}}\frac{\left|\left|x_{\alpha}\right|\right|^{4}}{n_{0}^{2}}\right)+\frac{1}{n_{\ell}}C^{(\ell-1)}+\left(1+\frac{1}{n_{\ell}}\right)\widetilde{C}^{(\ell-1)}. \tag{2.25}\] Proof of Lemma 2.8.: We apply the same proof strategy as in Lemma 2.7 to get the result. **Lemma 2.9**.: _For any depth \(\ell\leq L\), we have:_ \[\widetilde{C}^{(\ell)}=O(n^{-1}), \tag{2.26}\] \[C^{(\ell)}=\Theta\left(\frac{\left|\left|x_{\alpha}\right|\right|^{4}}{2n_{0}^{2}}\sum_{\ell^{\prime}=1}^{\ell}\frac{\eta_{W}^{(\ell^{\prime})}n_{\ell^{\prime}-1}}{n_{L}}\right). \tag{2.27}\] Proof of Lemma 2.9.: The first result is obtained by observing that there is an extra \(1/n_{L}\) in front of \(\widetilde{C}^{(\ell)}\). Regarding the recursion of \(C^{(\ell)}\), we use the fact that \(\widetilde{C}^{(\ell)}\) is small in (2.21) and then sum this equation for \(\ell^{\prime}=1,\dots,\ell\) to obtain the value of \(C^{(\ell)}\). We now specialize to the setting of uniform layer width \(n_{\ell}=n\) and a global learning rate \(\eta_{\mu}=\eta\) to obtain \[C^{(\ell)}=\Theta\left(\eta\ell\right)\quad\Longrightarrow\quad\frac{1}{n}\widetilde{B}^{(\ell)}=\Theta\left(\eta^{2}\ell^{2}\right)\quad\Longrightarrow\quad B^{(\ell)}=\Theta\left(\eta^{2}L^{3}\right)\left(1+O(L^{-1})\right),\] completing the proof of Theorem 1.1. ## 3 Conclusion In this short note we have computed how the network depth influences the learning rate predicted by the \(\mu\)P heuristic. We found that, unlike with respect to width, this learning rate has a non-trivial power-law scaling with respect to depth (see Theorem 1.1). We leave for future work empirical validation of whether this depth dependence indeed leads to learning rate transfer in practice.
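As a first step towards such a validation, the following self-contained sketch (ours, not part of the note above; all function names and parameter values are illustrative) estimates \(\mathbb{E}[(\Delta z^{(L)}_{i;\alpha})^{2}]\) by Monte Carlo for a random ReLU network with the He-type initialization \(\mathrm{Var}[W^{(\ell)}_{ij}]=2/n_{\ell-1}\) assumed in the proofs, a single input, a standard normal target, and one full-batch SGD step with a global learning rate \(\eta\), and compares it against the predicted \(\Theta(\eta^{2}L^{3})\) scaling.

```python
# Illustrative Monte Carlo check (not from the note): scaling of E[(dz^{(L)})^2]
# with depth L after one SGD step, for a random ReLU MLP with Var[W] = 2/fan_in.
import numpy as np

def preactivations(weights, x):
    """Forward pass; returns the pre-activations z^{(1)}, ..., z^{(L+1)}."""
    zs, h = [], x
    for W in weights:
        z = W @ h
        zs.append(z)
        h = np.maximum(z, 0.0)
    return zs

def sgd_step(weights, x, y, eta):
    """One step on 0.5*||z^{(L+1)} - y||^2; gradients via manual backprop."""
    zs = preactivations(weights, x)
    inputs = [x] + [np.maximum(z, 0.0) for z in zs[:-1]]   # input to each layer
    g = zs[-1] - y                                          # d(loss)/dz^{(L+1)}
    new_weights = list(weights)
    for l in reversed(range(len(weights))):
        new_weights[l] = weights[l] - eta * np.outer(g, inputs[l])
        if l > 0:
            g = (weights[l].T @ g) * (zs[l - 1] > 0)        # backprop through ReLU
    return new_weights

def mean_sq_update(L, n=128, eta=0.05, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(trials):
        x, y = rng.normal(size=n), rng.normal(size=1)
        dims = [n] * (L + 1) + [1]                          # n_0 = ... = n_L = n, n_{L+1} = 1
        weights = [rng.normal(scale=np.sqrt(2.0 / dims[i]), size=(dims[i + 1], dims[i]))
                   for i in range(L + 1)]
        z_old = preactivations(weights, x)[L - 1]           # last hidden layer z^{(L)}
        z_new = preactivations(sgd_step(weights, x, y, eta), x)[L - 1]
        out.append(np.mean((z_new - z_old) ** 2))
    return float(np.mean(out))

for L in (2, 4, 8, 16):
    m = mean_sq_update(L)
    print(f"L={L:2d}  E[(dz)^2]={m:.3e}  m/(eta^2 L^3)={m / (0.05**2 * L**3):.3e}")
```

Only the power of \(L\) in the last column is expected to stabilize; the constant depends on the width and on the lower-order terms dropped in the recursions above.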
2308.10348
Competition-exclusion and coexistence in a two-strain SIS epidemic model in patchy environments
This work examines the dynamics of solutions of a two-strain SIS epidemic model in patchy environments. The basic reproduction number $\mathcal{R}_0$ is introduced, and sufficient conditions are provided to guarantee the global stability of the disease-free equilibrium (DFE). In particular, the DFE is globally stable when either: (i) $\mathcal{R}_0\le \frac{1}{k}$, where $k\ge 2$ is the total number of patches, or (ii) $\mathcal{R}_0<1$ and the dispersal rate of the susceptible population is large. Moreover, the questions of competition-exclusion and coexistence of the strains are investigated when the single-strain reproduction numbers are greater than one. In this direction, under some appropriate hypotheses, it is shown that the strain whose basic reproduction number and local reproduction function are the largest always drives the other strain to extinction in the long run. Furthermore, the asymptotic dynamics of the solutions are presented when either both strain's local reproduction functions are spatially homogeneous or the population dispersal rate is uniform. In the latter case, the invasion numbers are introduced and the existence of coexistence endemic equilibrium (EE) is proved when these invasion numbers are greater than one. Numerical simulations are provided to complement the theoretical results.
Jonas T. Doumatè, Tahir B. Issa, Rachidi B. Salako
2023-08-20T19:51:59Z
http://arxiv.org/abs/2308.10348v2
# Competition-exclusion and coexistence in a two-strain SIS epidemic model in patchy environments ###### Abstract This work examines the dynamics of solutions of a two-strain SIS epidemic model in patchy environments. The basic reproduction number \(\mathcal{R}_{0}\) is introduced, and sufficient conditions are provided to guarantee the global stability of the disease-free equilibrium (DFE). In particular, the DFE is globally stable when either: (i) \(\mathcal{R}_{0}\leq\frac{1}{k}\), where \(k\geq 2\) is the total number of patches, or (ii) \(\mathcal{R}_{0}<1\) and the dispersal rate of the susceptible population is large. Moreover, the questions of competition-exclusion and coexistence of the strains are investigated when the single-strain reproduction numbers are greater than one. In this direction, under some appropriate hypotheses, it is shown that the strain whose basic reproduction number and local reproduction function are the largest always drives the other strain to extinction in the long run. Furthermore, the asymptotic dynamics of the solutions are presented when either both strain's local reproduction functions are spatially homogeneous or the population dispersal rate is uniform. In the latter case, the invasion numbers are introduced and the existence of coexistence endemic equilibrium (EE) is proved when these invasion numbers are greater than one. Numerical simulations are provided to complement the theoretical results. **Keywords**: Patch model; Epidemic model; Asymptotic Behavior; Competition-Exclusion; Coexistence. **2020 Mathematics Subject Classification**: 34D05, 34D23, 91D25, 92D30, 37N25 ## 1 Introduction The novel human coronavirus disease 2019 (COVID-19) was first reported in the last trimester of 2019, and quickly spread around the world. The emergence of different strains of the disease generated significant concerns due to the successive waves of infections across the world. As of March 2020, the World Health Organization's International Health Regulations Emergency Committee had declared the COVID-19 outbreak a Public Health Emergency of International Concern. In general, the questions of developing and implementing effective and adequate control strategies to alleviate the effects of infectious diseases, such as COVID-19, on populations remain major concerns for public and health officials. These challenges are related in part to the lack of resources and of a good understanding of the dynamics of the disease. They are also significantly influenced by the unprecedented increase in human migration rates, which has made the world more interconnected. Hence, the study of mathematical models which incorporate population movements and environmental heterogeneity may help public and health officials to make informed decisions and implement safe disease control strategies. In the current work, we study the large time behavior of solutions of a two-strain susceptible-infected-susceptible (SIS) epidemic model in patchy environments and investigate how the parameters of the model affect its dynamics. In particular, our results reveal the importance of both spatial heterogeneity and population movements on the dynamics of infectious diseases.
Consider the Susceptible-Infected-Susceptible (SIS) two-strain epidemic model \[\begin{cases}\frac{dS_{j}}{dt}=d_{S}\sum_{i\in\Omega}(L_{j,i}S_{i}-L_{i,j}S_{j})-( \beta_{1,j}I_{1,j}+\beta_{2,j}I_{2,j})S_{j}+(\gamma_{1,j}I_{1,j}+\gamma_{2,j}I_{2,j}),&j\in\Omega,\ t>0,\\ \frac{dI_{1,j}}{dt}=d_{1}\sum_{i\in\Omega}(L_{j,i}I_{1,i}-L_{i,j}I_{1,j})+\beta _{1,j}I_{1,j}S_{j}-\gamma_{1,j}I_{1,j},&j\in\Omega,\ t>0,\\ \frac{dI_{2,j}}{dt}=d_{2}\sum_{i\in\Omega}(L_{j,i}I_{2,i}-L_{i,j}I_{2,j})+\beta _{2,j}I_{2,j}S_{j}-\gamma_{2,j}I_{2,j},&j\in\Omega,\ t>0,\\ N=\sum_{j\in\Omega}(S_{j}+I_{1,j}+I_{2,j}),\end{cases} \tag{1.1}\] where \(N\) is the total number of individuals, \(k\geq 2\) is the number of patches, and \(\Omega=\{1,2,\cdots,k\}\). For \(j\in\Omega\) and \(l\in\{1,2\}\): \(S_{j}\) is the total size of susceptible population on patch-\(j\); \(I_{l,j}\) is the total number of infected population with strain-\(l\) on patch-\(j\); \(d_{S}\) and \(d_{l}\) are positive numbers and stand respectively for the dispersal rates of susceptible population and infected population with strain-\(l\); \(\beta_{l,j}\) is positive and represents the disease transmission rate resulting from interaction of the susceptible and infected population with strain-\(l\) on patch \(j\); \(\gamma_{l,j}\) is positive and represents the recovery rate from strain-\(l\) on patch-\(j\). Note that the \(L_{j,j}\)'s terms do not appear in (1.1), so we set \(L_{j,j}=0\) for convenience. For \(i,j\in\Omega\), \(L_{i,j}\) is a nonnegative number and represents migration rate from the patch-\(j\) to the patch-\(i\). Throughout this work, we shall suppose that the following standing assumption holds. **(A1)** The matrix \(L=(L_{i,j})_{i,j=1}^{k}\) is nonnegative, symmetric and irreducible. Assumption **(A1)** indicates that the migration rates between any two patches are the same and individuals can move directly or indirectly from any patch to another. When the dispersal rates are neglected in (1.1), we obtain a particular type of a two-strain ODE-SIS epidemic model studied by Bremermann and Thieme in [8]. An important conclusion reached in [8] for (1.1) when \(d_{S}=d_{1}=d_{2}=0\) is that the pathogen strain that does not optimize the _basic reproduction_ number dies out asymptotically. This fact is known as the _competition-exclusion principle_ in the literature. The basic reproduction number measures the expected number of secondary cases caused by a single index in an otherwise susceptible population. Since this interesting work, several studies have been devoted to investigate the competition-exclusion and coexistence of different strains of an infectious disease. In [2], Ackleh and Allen examined the competition-exclusion and coexistence for pathogens in an SIR epidemic model with variable population size. In particular, under appropriate hypotheses on the parameters, they showed that several pathogens for a single host leads to exclusion of all pathogens except the one with the largest basic reproduction number. In [3], Acklesh and Allen extended their study to SIR and SIS epidemic models with multiple pathogen strains which assume total cross immunity, standard incidence, and density-dependent host mortality. Again, under some conditions on the parameters of these models, they established the competition-exclusion between multiple strains of the disease. For some related studies on multi-strain ODE models, we refer the interested readers to [5, 7, 20, 25, 26] and the references cited therein. Assume that \(I_{2}=0\). 
Then system (1.1) reduces to a single-strain model, which was recently studied by Li and Peng [21]. In this setting, they introduced the single-strain basic reproduction number \(\mathcal{R}_{0,1}(N)\) (see formula (2.9) below) and investigated the large-time behavior of solutions. Among other results, they established the global stability of the disease free equilibrium (DFE) if \(\mathcal{R}_{0,1}(N)\leq 1\) and either the local basic reproduction function \(\mathfrak{R}_{1}(N)\) (see formula (2.11) below) is constant or the population dispersal rate is uniform. However, when \(\mathcal{R}_{0,1}(N)>1\), they proved that the disease is endemic and the single-strain model has at least one endemic equilibrium (EE). Furthermore, the work [21] established the uniqueness of the single-strain EE under some additional assumptions, which include: (i) the dispersal rate of the susceptible group is greater or equal to that of the infected group, (ii) transmission rate is patch independent, and (iii) \(\mathfrak{R}_{1}(N)\) is constant. The asymptotic profiles of the single-strain EE as either the dispersal rate of the susceptible or infected groups approximate zero were also studied in [21]. The results of [21] extend some known results ([14, 33, 35, 10]) on the continuous diffusive models to the patch model. Indeed, when dispersal movement of the population is assumed to occur locally and randomly in adjacent directions, the following PDE-SIS model \[\begin{cases}\partial_{t}S=d_{S}\Delta S-(\beta_{1}I_{1}+\beta_{2}I_{2})S+(\gamma _{1}I_{1}+\gamma_{2}I_{2}),&x\in\Omega,\ t>0,\\ \partial_{t}I_{1}=d_{1}\Delta I_{1}+\beta_{1}I_{1}S-\gamma_{1}I_{1}&x\in\Omega,\ t>0,\\ \partial_{t}I_{2}=d_{2}\Delta I_{2}+\beta_{2}I_{2}S-\gamma_{2}I_{2}&x\in\Omega,\ t>0,\\ 0=\partial_{\vec{n}}S=\partial_{\vec{n}}I_{1}=\partial_{\vec{n}}I_{2}&x\in \partial\Omega,\ t>0,\\ N=\int_{\Omega}(S+I_{1}+I_{2}),\end{cases} \tag{1.2}\] can be used to study the dynamics of the disease. In (1.2), \(\Omega\) is an open bounded domain in \(\mathbb{R}^{n}\), \(n\geq 1\), with a smooth boundary \(\partial\Omega\). \(\vec{n}\) denotes the outward normal unit derivative on \(\partial\Omega\). The parameters of (1.2) have the same meanings as those in the multiple patches model (1.1). Hence, the ODE-SIS system (1.1) can be seen as a discrete in space of the continuous in space PDE-SIS model (1.2). The two-strain PDE-SIS model (1.2) was first studied by Ackleh, Deng and Wu [4], then by Salako [29], and recently by Castellano and Salako [9]. In these works, the authors defined the basic reproduction number of (1.2) and studied the asymptotic profiles of coexistence steady states, and the large-time behavior of classical solutions. In particular, the authors of [4] established the competition-exclusion of the strains when the local reproduction functions are spatially homogeneous. In [29], sufficient criteria for existence and non-existence of coexistence EE equlibria of (1.2) are obtained. Moreover, the asymptotic profiles of coexistence EE solutions of (1.2) as the diffusion rates of some of the subgroups of the population converge to zero are established in [29]. The work [9] considered a more general model and also established the competition-exclusion of the strains if at least one strain local reproduction function is spatially homogeneous. Our results in the current manuscript examine the extent to which the findings of [4] and [9] on the PDE-SIS model hold for the two-strain multiple patches model (1.1). 
Furthermore, we establish some new results for the two-strain multiple patches model (1.1), which remain open for the PDE-SIS model (1.2). In particular, for system (1.1), Theorem 2.1 below establishes the uniform persistence of the susceptible population. It also establishes the global stability of the DFE when the basic reproduction number is either: (i) less than or equal to the reciprocal of the total number of patches, or (ii) less than one and the dispersal rate of the susceptible population is sufficiently large. The current work also examines the global dynamics of solutions of the two-strain SIS multiple patches system (1.1) and discusses the extent to which the competition-exclusion principle holds. In particular, Theorems 2.2, 2.3, and 2.4 establish the competition-exclusion of the strains under quite general assumptions on the parameters of the model. Theorem 2.6 introduces the invasion numbers and establishes the coexistence of the strains when these numbers are greater than one. There are several studies on the single-strain PDE-SIS epidemic model of system (1.2) (see [10, 14, 33, 35] and the references therein). Note that the force of infection used in the mathematical models (1.1) and (1.2) is \(\beta_{i}I_{i}\), and is referred to as the mass-action or density-dependent transmission mechanism. Another popular transmission mechanism used in the modeling of infectious diseases is the standard or frequency-dependent infection mechanism, in which case the force of infection is given by \(\frac{\beta_{i}I_{i}}{S+I_{1}+I_{2}}\). For some results on single-strain epidemic models with the standard transmission mechanism, we refer to [1, 6, 13, 16, 17, 18, 22, 27, 28, 31, 36] and the references cited therein. See also [23, 24, 34] for some recent progress on the multi-strain diffusive epidemic model with the standard transmission mechanism. The rest of the paper is organized as follows. In section 2, we first introduce some notations and definitions, and then state our main results. We complete this section with some numerical simulations and a discussion of our main results. Section 3 introduces some preliminary results, essential for the clarity of our presentation. The proofs of our main results are given in section 4. ## 2 Notations, Definitions and Main Results ### Notations and Definitions For convenience, we introduce a few notations and definitions. First, we would like to rewrite system (1.1) in a compact form. To this end, let \(\mathcal{L}=(\mathcal{L}_{i,j})_{i,j=1}^{k}\) denote the square matrix with entries \[\mathcal{L}_{i,j}=\begin{cases}-\sum_{l\in\Omega}L_{l,j}&i=j\in \Omega\\ L_{i,j}&i\neq j\in\Omega.\end{cases} \tag{2.1}\] It follows from assumption **(A1)** that the matrix \(\mathcal{L}\) is symmetric, irreducible and has nonnegative off diagonal entries. For each \(X\in\{S,I_{l},\beta_{l},\gamma_{l}\}\), let \(X\) denote the column vector in \(\mathbb{R}^{k}\), \(X=(X_{1},\cdots,X_{k})^{T}\). Given two column vectors \(X\) and \(Y\) in \(\mathbb{R}^{k}\), define the Hadamard product \(X\circ Y\), \(X\circ Y=(X_{1}Y_{1},\cdots,X_{k}Y_{k})^{T}\), and denote by \(\text{diag}(X)\) the diagonal matrix with diagonal entries \([\text{diag}(X)]_{ii}=X_{i}\), \(i=1,\cdots,k\).
Using the above notations, system (1.1) can be written as \[\begin{cases}\frac{d}{dt}S=d_{S}\mathcal{L}S+\sum_{l=1}^{2}( \gamma_{l}-\beta_{l}\circ S)\circ I_{l}&t>0,\\ \frac{d}{dt}I_{l}=d_{l}\mathcal{L}I_{l}+(\beta_{l}\circ S-\gamma_{l})\circ I_ {l}&t>0,\ l=1,2,\\ N=\sum_{j\in\Omega}(S_{j}+\sum_{l=1}^{2}I_{l,j})>0.\end{cases} \tag{2.2}\] Due to biological interpretations of the vectors \(S\) and \(I_{l}\), we will only be interested in nonnegative solutions of (2.2). Let \(\mathbb{R}_{+}\) denote the set of nonnegative real numbers. Note that the right-hand side of (2.2) is locally Lipschitz on \(\left[\mathbb{R}^{k}\right]^{3}\). Hence, given an initial data \((S(0),I_{1}(0),I_{2}(0))\in\left[\mathbb{R}_{+}^{k}\right]^{3}\), (2.2) has a unique solution \((S(t),I_{1}(t),I_{2}(t))\) defined on a maximal interval of existence \((0,T_{\max})\). Observe that \(\mathcal{L}\) generates a strongly-positive matrix-semigroup \(\{e^{t\mathcal{L}}\}_{t\geq 0}\) on \(\mathbb{R}^{k}\). We have endowed \(\mathbb{R}^{k}\) with the usual order induced by the cone of vectors with nonnegative entries \(\mathbb{R}_{+}^{k}\). Hence, it follows from the comparison principle for cooperative systems that \((S(t),I_{1}(t),I_{2}(t))\in\left[\mathbb{R}_{+}^{k}\right]^{3}\) for every \(t\in[0,T_{\max})\). A direct computation based on (1.1) gives \[\frac{d}{dt}\sum_{j\in\Omega}(S_{j}(t)+\sum_{l=1}^{2}I_{l,j}(t))= 0\quad\forall\ t\in[0,T_{\max}).\] Hence, \[\sum_{j\in\Omega}(S_{j}(t)+\sum_{l=1}^{2}I_{l,j}(t))=N\quad\forall \ t\in[0,T_{\max}), \tag{2.3}\] from which we deduce that \(T_{\max}=\infty\). Note also from (2.3) that solution operator of (2.2) is globally bounded. Thanks to (2.3) and the fact that the positive constant \(N\) is fixed throughout the whole manuscript, the semiflow generated by solutions of (1.1) leaves invariant the set \[\mathcal{E}:=\Big{\{}(S,I_{1},I_{2})\in[\mathbb{R}_{+}^{k}]^{3}\ :\ \sum_{j\in\Omega}(S_{j}+I_{1,j}+I_{2,j})=N\Big{\}}.\] It is easy to see that \(\mathcal{E}\) is a compact subset of \([\mathbb{R}_{+}^{k}]^{3}\), being closed and bounded. Note from (2.2) that if for some \(l=1,2\), \(I_{l,j}(0)=0\) for every \(j\in\Omega\), then \(I_{l,j}(t)=0\) for all \(t>0\) and \(j\in\Omega\). However, since \(\mathcal{L}\) is irreducible with nonnegative off diagonal entries, if \(I_{l,j_{0}}(0)>0\) for some \(l=1,2\) and \(j_{0}\in\Omega\), then \(I_{l,j}(t)>0\) for all \(t>0\) and \(j\in\Omega\). We say that a solution \((S(t),I_{1}(t),I_{2}(t))\) has positive initial data if for each \(l=1,2\), \(I_{l,j}(0)>0\) for some \(j\in\Omega\). A vector \((S,I_{1},I_{2})\in\left[\mathbb{R}_{+}^{k}\right]^{3}\) is an equilibrium solution of (2.2) if it satisfies the system of algebraic equation \[\begin{cases}0=d_{S}\mathcal{L}S+\sum_{l=1}^{2}(\gamma_{l}-\beta_ {l}\circ S)\circ I_{l},\\ 0=d_{l}\mathcal{L}I_{l}+(\beta_{l}\circ S-\gamma_{l})\circ I_{l}&l=1,2,\\ N=\sum_{j\in\Omega}(S_{j}+\sum_{l=1}^{2}I_{l,j}).\end{cases} \tag{2.4}\] Given a \(k\times k\) square matrix \(M\), we denote by \(\lambda_{*}(M)\) its spectral bounds, \[\lambda_{*}(M):=\sup\{\mathfrak{Re}(\lambda)\ :\ \lambda\in\sigma(M)\}, \tag{2.5}\] where \(\mathfrak{Re}(\lambda)\) is the real part of \(\lambda\in\mathbb{C}\), and by \(\rho(M)\) its spectral radius, \[\rho(M):=\max\{|\lambda|\ :\ \lambda\in\sigma(M)\},\] where \(\sigma(M)\) is the spectrum of \(M\). A disease free equilibrium (DFE) is an equilibrium of the form \((S,0,0)\). 
Since the matrix \(\mathcal{L}\) generates a strongly-positive semigroup and \(\sum_{i\in\Omega}\mathcal{L}_{j,i}=0\) for each \(j\in\Omega\), then by the Perron-Frobenius theorem, \(\lambda_{*}(\mathcal{L})=0\). Moreover, \(\lambda_{*}(\mathcal{L})\) is simple. For convenience, let \(\mathbf{0}\) denote the null vector in \(\mathbb{R}^{k}\) and \(\mathbf{1}\) denote the \(k\)-column vector with entries ones, that is \(\mathbf{1}:=(1,\cdots,1)^{T}\). It is easy to see that column vector \(\frac{N}{k}\mathbf{1}\) is an eigenvector associated with \(\lambda_{*}(\mathcal{L})\). As a result, we obtain that \(\mathbf{E}^{0}:=(\frac{N}{k}\mathbf{1},\mathbf{0},\mathbf{0})\) is the unique DFE of (2.2). An equilibrium solution \((S,I_{1},I_{2})\) of (2.2) for which \(I_{l,j}>0\) for every \(j\in\Omega\) for some \(l\in\{1,2\}\) is called an endemic equilibrium (EE) solution of (2.2). An EE solution of the form \((S,I_{1},0)\) (resp. \((S,0,I_{2})\) ) is called a strain-1 (resp. strain-2) EE solution. An EE solution \((S,I_{1},I_{2})\) for which \(I_{l,j}>0\) for each \(j\in\Omega\) and \(l\in\{1,2\}\) is called a coexistence-EE solution. Linearizing (2.2) at \(\mathbf{E}^{0}\), we get \[\begin{cases}\frac{d}{dt}P=d_{S}\mathcal{L}P+\sum_{l=1}^{2}(\gamma_{l}-\frac {N}{k}\beta_{l})\circ Q_{l}&t>0,\\ \frac{d}{dt}Q_{l}=d_{l}\mathcal{L}Q_{l}+(\frac{N}{k}\beta_{l}-\gamma_{l})\circ Q _{l}&t>0,\ l=1,2,\\ 0=\sum_{j\in\Omega}(P_{j}+\sum_{l=1}^{2}Q_{l,j}).\end{cases} \tag{2.6}\] Note that for each \(l=1,2\), the second equation in (2.6) decouples from the first equation. Clearly, for each \(l\in\{1,2\}\), the square matrix \(\mathcal{V}_{l}\) defined by \[\mathcal{V}_{l}:=\mathrm{diag}(\gamma_{l})-d_{l}\mathcal{L} \tag{2.7}\] is invertible. Thanks to (2.6), for each \(l\in\{1,2\}\), setting \[\mathcal{F}_{l}=\mathrm{diag}(\beta_{l}), \tag{2.8}\] and following the next generation matrix theory, strain-\(l\)'s basic reproduction \(\mathcal{R}_{0,l}(N)\) is given by \[\mathcal{R}_{0,l}(N)=\frac{N}{k}\rho(\mathcal{F}_{l}\mathcal{V}_{l}^{-1}). \tag{2.9}\] For each \(l\in\{1,2\}\), it is well known that \(\mathcal{R}_{0,l}(N)-1\) and \(\lambda_{*}(\frac{N}{k}\mathcal{F}_{l}-\mathcal{V}_{l})\) have the same sign (see [1]). The basic reproduction \(\mathcal{R}_{0}(N)\) of the two-strain model (2.2) is \[\mathcal{R}_{0}(N)=\max\Big{\{}\mathcal{R}_{0,l}(N)\ :\ l=1,2\Big{\}}=\frac{N}{k} \max\Big{\{}\rho\big{(}\mathcal{F}_{l}\mathcal{V}_{l}^{-1}\big{)}\ :\ l=1,2\Big{\}}. \tag{2.10}\] For \(l\in\{1,2\}\), it follows from [21] that (1.1) has a single-strain-\(l\) EE solution if \(\mathcal{R}_{0,l}>1\). The work [21] also provided sufficient conditions for the uniqueness and stability of single-strain EE of (1.1). Given, \(j\in\Omega\) and \(l\in\{1,2\}\), the local basic reproduction on patch-\(j\) of strain-\(l\) is \[\mathfrak{R}_{l,j}(N)=\frac{N}{k}\mathfrak{R}_{l,j}\quad\text{where}\quad \mathfrak{R}_{l,j}:=\frac{\beta_{l,j}}{\gamma_{l,j}}, \tag{2.11}\] and the two-strain local reproduction number is \[\mathfrak{R}_{j}(N)=\max_{l=1,2}\mathfrak{R}_{l,j}(N). \tag{2.12}\] Note that \(\beta_{l}=\mathfrak{R}_{l}\circ\gamma_{l}\) for each \(l=1,2\), where \(\mathfrak{R}_{l}=(\mathfrak{R}_{l,1},\cdots,\mathfrak{R}_{l,k})^{T}\). 
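For readers who wish to experiment numerically, the next-generation computation in (2.9) is easy to implement. The following sketch (ours, for illustration only; the function names are not from the paper) assembles \(\mathcal{L}\) from (2.1), forms \(\mathcal{V}_{l}=\mathrm{diag}(\gamma_{l})-d_{l}\mathcal{L}\) and \(\mathcal{F}_{l}=\mathrm{diag}(\beta_{l})\), and returns \(\mathcal{R}_{0,l}(N)=\frac{N}{k}\rho(\mathcal{F}_{l}\mathcal{V}_{l}^{-1})\). With the two-patch data of Simulation 2.3.1(b) below it returns \(\mathcal{R}_{0,1}(N)\approx 1.2674\), the value reported there.

```python
# Sketch (not the authors' code): strain-l basic reproduction number of (2.9).
import numpy as np

def calL(L):
    """The matrix calL in (2.1): off-diagonal entries L_{i,j}, diagonal entries -sum_l L_{l,j}."""
    return L - np.diag(L.sum(axis=0))

def basic_reproduction_number(N, beta, gamma, d, L):
    """R_{0,l}(N) = (N/k) * rho(F_l V_l^{-1}), with V_l = diag(gamma_l) - d_l * calL."""
    k = len(beta)
    V = np.diag(gamma) - d * calL(L)
    F = np.diag(beta)
    return (N / k) * np.max(np.abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

# Two patches with L_{1,2} = L_{2,1} = 1 and the strain-1 data of Simulation 2.3.1(b):
# beta_1 = (2,3), gamma_1 = (1,2), d_1 = 1, N = 1.5.
L = np.array([[0.0, 1.0], [1.0, 0.0]])
print(basic_reproduction_number(1.5, np.array([2.0, 3.0]), np.array([1.0, 2.0]), 1.0, L))  # ~1.2674
```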
We also introduce the following sets \[\Sigma_{1}:=\Big{\{}j\in\Omega:\mathfrak{R}_{1,j}>\mathfrak{R}_{2,j}\Big{\}},\ \Sigma_{2}:=\Big{\{}j\in\Omega:\mathfrak{R}_{2,j}>\mathfrak{R}_{1,j}\Big{\}} \ \text{and}\ \Sigma_{0}:=\Big{\{}j\in\Omega\ :\ \mathfrak{R}_{1,j}=\mathfrak{R}_{2,j}\Big{\}}.\] The set \(\Sigma_{1}\) represents the region of \(\Omega\) where strain-1 is able to spread more quickly than strain-2. Conversely, \(\Sigma_{2}\) is the region of \(\Omega\) where strain-2 is more infectious than strain-1. The set \(\Sigma_{0}\) is the region where strain-1 and strain-2 local reproduction functions are equal. We complete this section by introducing the high/low-risk patches. Fix \(j\in\Omega\) and \(l\neq p=1,2\). Patch-\(j\) is said to be high-risk (resp. low-risk) patch for strain-\(l\) if the local reproduction number \(\mathfrak{R}_{l}(N)\) is bigger (resp. smaller) than one on that patch, that is \(\mathfrak{R}_{l,j}(N)>1\) (resp. \(\mathfrak{R}_{l,j}(N)<1\)). Hence the sets \(H_{l}^{+}\) and \(H_{l}^{-}\) given by \[H_{l}^{+}:=\big{\{}j\in\Omega:\mathfrak{R}_{l,j}(N)>1\big{\}}\quad\text{and} \quad H_{l}^{-}:=\big{\{}j\in\Omega:\mathfrak{R}_{l,j}(N)<1\big{\}}\] are called the strain-\(l\)'s high-risk region and low-risk region, respectively. Note that in the absence of movement and when the other strain is absent, strain-\(l\) is endemic on any patch in its high-risk region while it goes extinct on any patch in its low-risk region. Hence, thanks to the competition exclusion-principle, when \(d_{S}=d_{1}=d_{2}=0\), strain-\(l\) is dominant and drives out strain-\(p\) on any patch in \(\Sigma_{l}\cap H_{l}^{+}\). When \(\Sigma_{1}\cap H_{1}^{+}\neq\emptyset\) and \(\Sigma_{2}\cap H_{2}^{+}\neq\emptyset\), Theorem 2.6 below establishes the existence of a coexistence EE solution of (1.1) for small diffusion rates of the population under some appropriate hypothesis. ### Main Results We state our main results. Our first result reads as follows. **Theorem 2.1**.: 1. _There is a positive number_ \(m_{0}>0\) _such that_ \[\liminf_{t\to\infty}\min_{j\in\Omega}S_{j}(t)\geq m_{0}\] (2.13) _for every solution_ \((S(t),I_{1}(t),I_{2}(t))\) _of (_1.1_) with initial data in_ \(\mathcal{E}\)_._ 2. _The DFE is linearly stable if_ \(\mathcal{R}_{0}(N)<1\) _and unstable if_ \(\mathcal{R}_{0}(N)>1\)_. Furthermore, the following conclusions hold._ 1. _If_ \(\mathcal{R}_{0,l}(N)\leq\frac{1}{k}\) _for some_ \(l=1,2\)_, then strain-_\(l\) _eventually dies out. In particular if_ \(\mathcal{R}_{0}(N)\leq\frac{1}{k}\)_, then the DFE is globally stable._ 2. _If_ \(\mathcal{R}_{0,l}(N)<1\) _for some_ \(l=1,2\)_, then there is_ \(d^{l}>0\) _such that the strain-_\(l\) _eventually dies out for any diffusion of susceptible population_ \(d_{S}>d^{l}\)_. In particular if_ \(\mathcal{R}_{0}(N)<1\)_, then there is_ \(d^{*}>0\) _such that the DFE is globally stable for any diffusion of susceptible population_ \(d_{S}>d^{*}\)_._ 3. 
_If_ \(\mathcal{R}_{0,l}(N)>1\) _for each_ \(l=1,2\)_, then the disease is persistent in the sense that there is a positive number_ \(m_{*}>0\) _such that for every solution_ \((S(t),I_{1}(t),I_{2}(t))\) _with positive initial data in_ \(\mathcal{E}\)_,_ \[\liminf_{t\to\infty}\min_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}(t)\geq m_{*}.\] (2.14) _Furthermore,_ \[\limsup_{t\to\infty}\max_{j\in\Omega}S_{j}(t)\leq s_{\max}\quad\text{and} \quad\liminf_{t\to\infty}\min_{j\in\Omega}S_{j}(t)\geq s_{\min},\] (2.15) _where_ \(s_{\max}:=\max_{l=1,2}\max_{j\in\Omega}\frac{\gamma_{l,j}}{\beta_{l,j}}\) _and_ \(s_{\min}:=\min_{l=1,2}\min_{j\in\Omega}\frac{\gamma_{l,j}}{\beta_{l,j}}\)_._ Theorem 2.1-(i) indicates that the susceptible population persists uniformly; hence it will never be driven to extinction. On the other hand, thanks to Theorem 2.1-(ii-1), when \(\mathcal{R}_{0}(N)\leq\frac{1}{k}\), irrespective of the dispersal rate of the susceptible population, the disease will eventually go extinct. Recall that \(\mathcal{R}_{0}(N)\) is independent of the dispersal rate of the susceptible population, and \(k\) is the total number of patches. In the particular case \(k=1\), the model reduces to the single-patch ODE model, for which it is well known that the DFE is globally stable if and only if \(\mathcal{R}_{0}(N)\leq 1.\) So, our result is sharp in the case of the single-patch model. Note also that, for a large diffusion rate of the susceptible population, Theorem 2.1-(ii-2) shows that the DFE is globally stable when \(\mathcal{R}_{0}(N)<1\). When \(k\geq 2\), that is, when there are at least two patches, the question of the global stability of the DFE remains open when \(\frac{1}{k}<\mathcal{R}_{0}(N)\leq 1\) and the dispersal rate of the susceptible is small. We expect that the answer to this question will depend delicately on the details of the spatial distribution of the recovery and transmission rates and on how the dispersal rates are selected. For example, when dispersal rates are uniform, Theorem 2.4 below shows that the DFE is globally stable whenever \(\mathcal{R}_{0}(N)\leq 1\). Theorem 2.3 below also confirms that the same conclusion holds when the local reproduction functions are spatially homogeneous. In the case of the single-strain PDE-SIS model (1.2), Castellano and Salako [11] recently proved, under some appropriate hypotheses, the existence of at least two EE solutions when \(\mathcal{R}_{0,1}(N)<1\) and \(d_{S}\) is sufficiently small. We suspect that such a result will also hold for the multiple patches SIS model (1.1). When each strain's basic reproduction number is bigger than one, Theorem 2.1-(ii-3) establishes the uniform persistence of the disease, but does not exclude the possibility of the extinction of one strain. Hence, it becomes pertinent to examine the conditions on the parameters of the model which lead to competition-exclusion of the strains. This is carried out in Theorems 2.2, 2.3, and 2.4 below. Our first result on the competition-exclusion of the strains, when one local reproductive number is spatially homogeneous, reads as follows. **Theorem 2.2**.: _Suppose that \(\mathcal{R}_{0,l}(N)>1\), \(l=1,2\), and \(\mathfrak{R}_{1,j}\) is constant in \(j\in\Omega\)._ * _If_ \(\mathcal{R}_{0,1}(N)>\mathcal{R}_{0,2}(N)\)_, then the single strain-_\(1\) _EE is linearly stable.
Furthermore, if_ \(\Sigma_{1}=\Omega\)_, then the single strain-_\(1\) _EE is globally stable with respect to positive initial data._ * _If_ \(\mathcal{R}_{0,1}(N)<\mathcal{R}_{0,2}(N)\)_, then the single strain-_\(1\) _is unstable. Furthermore, if_ \(\Sigma_{2}=\Omega\)_, then for solutions of (_1.1_) with positive initial data, the strain-_\(1\) _infected population eventually goes extinct while strain-_\(2\) _persists uniformly in time._ Note that Theorem 2.2 can also be stated if we interchange the roles of strain-\(1\) and strain-\(2\). Now, suppose that strain-\(1\) local reproduction function is spatially homogeneous, that is, its transmission and recovery rates are proportional to each other. If it also has the largest basic reproduction number, Theorem 2.2-(i) indicates that it cannot be invaded at equilibrium by an initially small size of the infected population with strain-\(2\). If in addition, it strictly locally maximizes the local reproduction functions on all patches, then it drives the other strain to extinction. However, if strain-\(1\) has a smaller basic reproduction number, Theorem 2.2-(ii) shows that it will be invaded at equilibrium by the other strain. Interestingly, these results hold irrespective of the dispersal rates of the population and suggest that the asymptotic dynamics of the solutions are independent of the dispersal rates when the local reproduction functions are spatially homogeneous. Our next result answers this question with an affirmation and provide a complete dynamics of solutions of (1.1) when both local reproduction functions are spatially homogeneous. **Theorem 2.3**.: _Suppose that \(\mathfrak{R}_{l,j}\) is constant in \(j\in\Omega\) for each \(l\in\{1,2\}\)._ * _If_ \(\mathcal{R}_{0}(N)\leq 1\)_, then the DFE is globally stable._ * _If_ \(\mathcal{R}_{0,l}(N)>\max\{1,\mathcal{R}_{0,q}(N)\}\)_, for_ \(l\neq q\in\{1,2\}\)_. Then, the strain-_\(l\) _EE solution is globally stable._ When the local reproduction functions are spatially homogeneous, Theorem 2.3 establishes that the strain with the largest basic reproduction number drives the other strain to extinction, irrespective of the dispersal rates of the population. In this case, the dispersal rates of the populations play no role on the asymptotic dynamics of solutions as it is determined by that of the model with no dispersal rate. To gain some understanding on how the dispersal rates may affect the global dynamics of the solutions of (1.1), in the next two results, we study the large time behaviors of solution in the special case of the following assumption. **(A2)**\(d:=d_{1}=d_{2}=d_{S}\). Assumption **(A2)** means that the population dispersal rate is uniform and independent of the relevant subgroups of the population. In our next result, we again establish the competition-exclusion of the strains when at least one has its basic reproduction number less or equal to one. **Theorem 2.4**.: _Suppose that hypothesis_ **(A2)** _holds. The following conclusions hold._ * _If_ \(\mathcal{R}_{0}(N)\leq 1\)_, then the DFE is globally stable._ * _If_ \(\mathcal{R}_{0,l}(N)>1\geq\mathcal{R}_{0,q}(N)\)_,_ \(l\neq q\in\{1,2\}\)_, then the strain-l EE is globally stable with respect to positive initial data._ When hypothesis **(A2)** holds, that is the population's dispersal rate is uniform and independent of the relevant subgroups, the basic reproduction numbers serve as threshold values for the global dynamics of solutions of (1.1). 
In this case, it follows from Theorem 2.4 that any strain whose basic reproduction number is less or equal to one eventually dies out. Note that, when both strain's reproduction numbers are bigger than one, Theorems 2.2-2.3 provide sufficient conditions for the extinction of at least one strain. So, a natural question is to know whether both strains can coexist, and if so, provide some sufficient conditions on the parameters of the model which guarantee this. To tackle these questions, it seems appropriate to first introduce the strains' _invasion numbers_, which measure the ability for a strain to invade another strain when rare. Fix \(l\neq p=1,2\) and suppose that \(\min\{\mathcal{R}_{0,1}(N),\mathcal{R}_{0,2}(N)\}>1\). It follows from [21, Theorem 4] that (1.1) has at least one strain-\(l\) EE solution. Under some further assumptions on the parameters of the model, [21, Theorem 4] established the uniqueness of the single-strain EE. When these assumptions do not hold, it is not clear whether single-strain EE are unique. Multiplicity of single-strain EE solutions for the PDE-SIS model (1.2) was recently established in [11]. So, for convenience, let \[\mathcal{E}_{l}:=\{(S,I_{1},I_{2})\in\mathcal{E}\ :\ I_{p}=\mathbf{0}\}\] and \(\mathcal{E}_{l}^{*}\) denote the set of single-strain EE solutions of (1.1) in \(\mathcal{E}_{l}\). It is clear that both \(\mathcal{E}_{l}\) and \(\mathcal{E}_{l}^{*}\) are compact sets and invariant under the semiflow of solutions of (1.1). Moreover, we have that \(\mathbf{E}^{0}\notin\mathcal{E}_{l}^{*}\) and \(\mathcal{E}_{1}^{*}\cap\mathcal{E}_{2}^{*}=\emptyset\). Given \(\mathbf{E}_{1}^{*}=(S_{1}^{*},I_{1}^{*},\mathbf{0})\in\mathcal{E}_{1}^{*}\) and \(\mathbf{E}_{2}^{*}=(S_{1}^{*},\mathbf{0},I_{2}^{*})\in\mathcal{E}_{1}^{*}\) define \[\tilde{\mathcal{R}}_{2}(\mathbf{E}_{1}^{*})=\rho\Big{(}\mathrm{diag}(\beta_{2 }\circ S_{1}^{*})\mathcal{V}_{2}^{-1}\Big{)},\qquad\tilde{\mathcal{R}}_{1}( \mathbf{E}_{2}^{*})=\rho\Big{(}\mathrm{diag}(\beta_{1}\circ S_{2}^{*}) \mathcal{V}_{1}^{-1}\Big{)}, \tag{2.16}\] \[\tilde{\mathcal{R}}_{1}(N)=\min\{\tilde{\mathcal{R}}_{1}(\mathbf{E}_{2}^{*}): \ \mathbf{E}_{2}^{*}\in\mathcal{E}_{2}^{*}\}\quad\text{and}\quad\tilde{ \mathcal{R}}_{2}(N)=\min\{\tilde{\mathcal{R}}_{2}(\mathbf{E}_{1}^{*}):\ \mathbf{E}_{1}^{*}\in\mathcal{E}_{1}^{*}\}. \tag{2.17}\] The quantity \(\tilde{\mathcal{R}}_{l}(N)\) is the strain-l's invasion number. Note that when **(A2)** holds, it follows from [21, Theorem 9] that \(\mathcal{E}_{l}^{*}\) consists of a single element for each \(l=1,2\). For any element \(E\in\mathcal{E}\) and a subset \(\mathcal{O}\subset\mathcal{E}\), let \[\mathrm{dist}(E,\mathcal{O}):=\inf\{\|E-P\|_{\infty}:\ P\in\mathcal{O}\}\] denote the distance from the point \(E\) to the set \(\mathcal{O}\). Our result on the coexistence EE of (1.1) reads as follows. **Theorem 2.5**.: _Suppose that \(\min\{\mathcal{R}_{0,1}(N),\mathcal{R}_{0,2}(N)\}>1\). Let \(\tilde{\mathcal{R}}_{1}(N)\) and \(\tilde{\mathcal{R}}_{2}(N)\) be defined as in (2.17)._ * _If_ \(\tilde{\mathcal{R}}_{1}(N)>1\)_, then_ \[\liminf_{t\to\infty}\frac{1}{t}\int_{0}^{t}\mathrm{dist}((S(\tau),I_{1}(\tau),I_{2}(\tau))(t),\mathcal{E}_{2}^{*})d\tau\geq\frac{\beta_{\min}}{\beta_{\max} }\frac{(\tilde{\mathcal{R}}_{1}(N)-1)}{\tilde{\mathcal{R}}_{1}(N)}\] (2.18) _for every solution_ \((S,I_{1},I_{2})(t)\) _of (_1.1_) with initial data satisfying_ \(\|I_{1}(0)\|_{\infty}>0\)_. 
Furthermore, if in addition_ \(\mathcal{E}_{2}^{*}\cup\{\mathbf{E}^{0}\}\) _is the global attractor for classical solutions of (_1.1_) with initial data in_ \(\mathcal{E}_{2}\)_, then there is a positive number_ \(\sigma_{1}^{*}>0\) _such that_ \[\liminf_{t\to\infty}\min_{j\in\Omega}I_{1,j}(t)\geq\sigma_{1}^{*},\] (2.19) _for every solution_ \((S,I_{1},I_{2})(t)\) _of (_1.1_) with initial data satisfying_ \(\|I_{1}(0)\|_{\infty}>0\)_. Similar result holds for the strain-2._ _._ 2. _Suppose that_ \(\min\{\mathcal{R}_{1}(N),\mathcal{R}_{2}(N)\}>1\)_. If_ \(\mathcal{E}_{1}^{*}\cup\{\mathbf{E}^{0}\}\) _and_ \(\mathcal{E}_{2}^{*}\cup\{\mathbf{E}^{0}\}\) _are the global attractor for classical solutions of (_1.1_) with initial data_ \(\mathcal{E}_{1}\) _and_ \(\mathcal{E}_{2}\)_, respectively, then (_1.1_) has at least one coexistence EE solution._ Suppose that \(\mathcal{R}_{0,l}(N)>1\) for each \(l=1,2\) and hypothesis **(A2)** holds. Let \(d>0\) be the uniform dispersal rate as in **(A2)**. It follows from [21, Theorem 3] that (1.1) has a unique single-strain EE solutions \(\mathbf{E}_{1}^{*}:=\left(\frac{N}{k}\mathbf{1}-I_{1}^{*},I_{1}^{*},\mathbf{0}\right)\) and \(\mathbf{E}_{2}^{*}:=\left(\frac{N}{k}\mathbf{1}-I_{2}^{*},\mathbf{0},I_{2}^{*}\right)\), where \(I_{1}^{*}\) is the unique positive solution of the multiple-patch logistic equation \[0=d\mathcal{L}I_{1}^{*}+\left(\beta_{l}\circ\Big{(}\frac{N}{k}\mathbf{1}-I_{1 }^{*}\Big{)}-\gamma_{l}\Big{)}\circ I_{l}^{*}. \tag{2.20}\] Furthermore, it holds that \[\tilde{\mathcal{R}}_{l}(N)=\rho\Big{(}\Big{(}\frac{N}{k}\mathcal{F}_{l}- \operatorname{diag}(\beta_{l}\circ I_{p}^{*})\Big{)}\mathcal{V}_{l}^{-1}\Big{)} \quad p\neq l\in\{1,2\}. \tag{2.21}\] As an application of Theorem 2.5, we can state the following result. **Theorem 2.6**.: _Suppose that hypothesis_ **(A2)** _holds and \(\min\{\mathcal{R}_{0,1}(N),\mathcal{R}_{0,2}(N)\}>1\). Let \(\mathbf{E}_{1}^{*}\) and \(\mathbf{E}_{2}^{*}\) denote the single-strain EE solutions of (1.1). Fix \(l\neq p\in\{1,2\}\)._ 1. _If_ \(\tilde{\mathcal{R}}_{l}(N)>1\)_, the single-strain EE,_ \(\mathbf{E}_{p}^{*}\)_, is unstable and strain-_\(l\) _is uniformly persistent in the sense of (_2.19_)._ 2. _If_ \(\min\{\tilde{\mathcal{R}}_{1}(N),\tilde{\mathcal{R}}_{2}(N)\}>1\)_, then (_1.1_) has at least one coexistence EE solution._ 3. _If_ \(\Sigma_{1}\cap H_{1}^{+}\) _and_ \(\Sigma_{2}\cap H_{2}^{+}\) _are both nonempty sets, then there is_ \(d^{*}>0\) _such that for any_ \(0<d<d^{*}\)_,_ \(\min\{\tilde{\mathcal{R}}_{1}(N),\tilde{\mathcal{R}}_{2}(N)\}>1\)_._ When hypothesis **(A2)** holds, \(\Sigma_{1}\cap H_{1}^{+}\neq\emptyset\) and \(\Sigma_{2}\cap H_{2}^{+}\neq\emptyset\), Theorem 2.6 establishes the existence of a coexistence EE for small dispersal rate of the population. We expect that the same conclusion should hold when hypothesis **(A2)** is dropped. The questions of uniqueness and global stability of coexistence EE will be explored in our future work. Another interesting question is to investigate how the spatial distribution of the population at coexistence EE solutions depend on the dispersal rates. This question will also be investigated in our future work. ### Numerical Simulations and Discussion To further understand the dynamics of solutions of (1.1), we perform some simulations. Some of the simulations illustrate our theoretical results, while others explore some aspects that our theoretical results did not cover. 
For all the simulations, we consider two patches models, that is \(k=2\), \(\Omega=\{1,2\}\) and take \(L_{1,1}=L_{2,2}=0\) and \(L_{1,2}=L_{2,1}=1\). **Simulation 2.3.1**.: _We fix the parameters: \(\beta_{1}=(2,3)^{T}\), \(\beta_{2}=(1,4)^{T}\), \(\gamma_{1}=(1,2)^{T}\), \(\gamma_{2}=(2,3)^{T}\), \(d_{S}=3\), \(d_{1}=1\), and \(d_{2}=2\). Next, we perform three simulations with three choices of the total population \(N\). (a) In Fig 1a, we take \(S(0)=(0.05,0.05)^{T}\), \(I_{1}(0)=(0.05,0.05)^{T}\), and \(I_{2}(0)=(0.05,0.05)^{T}\). Hence, \(N=0.3\), \(\mathcal{R}_{0,1}(N)=0.1627\), \(\mathcal{R}_{0,2}(N)=0.2535\), and \(\mathcal{R}_{0}(N)<1\). The simulations indicate an extinction of the disease. (b) In Fig 1b, we take \(S(0)=(0.25,0.25)^{T}\), \(I_{1}(0)=(0.25,0.25)^{T}\), and \(I_{2}(0)=(0.25,0.25)^{T}\). Hence, \(N=1.5\), \(\mathcal{R}_{0,1}(N)=1.2674\), \(\mathcal{R}_{0,2}(N)=0.81375\) and \(\mathcal{R}_{0}(N)=\mathcal{R}_{0,1}(N)>1>\mathcal{R}_{0,2}(N)\). We also observe an exclusion of strain-2 while strain-1 persists. (c) In Fig1c, we take \(S(0)=(1,2)\), \(I_{1}(0)=(0.5,0.5)\), and \(I_{2}(0)=(0.5,0.5)\). Hence, \(N=4\), \(\mathcal{R}_{0,1}(N)=2.7125\), \(\mathcal{R}_{0,2}(N)=4.2247\) and \(\mathcal{R}_{0}(N)=\mathcal{R}_{0,1}(N)>1\). In this case, we observe from Fig 1c that strain-1 persists while strain-2 eventually dies out. This agrees with persistence of at least one strain of the disease as predicted by Theorem 2.1-(ii-3). However, we notice an exclusion of strain-2 even though its basic reproduction number is bigger than one._ **Simulation 2.3.2**.: _We fix the parameters: \(\beta_{1}=(4,6)^{T}\), \(\beta_{2}=(1,4)^{T}\), \(\gamma_{1}=(2,3)^{T}\), and \(\gamma_{2}=(2,3)^{T}\). So that \(\mathfrak{R}_{1}=(2,2)^{T}\) is constant and \(\|\mathfrak{R}_{2}\|_{\infty}<2\). We also fix the initial data to \(S_{0}=(1,2)\), \(I_{1}=(0.5,0.5)\) and \(I_{2}=(0.5,0.5)\). We then perform three simulations for different choices of the migration rates: (a) \(d_{S}=3\), \(d_{1}=1\) and \(d_{2}=2\) in Figure 1(a); (b) \(d_{S}=4\), \(d_{1}=6\) and \(d_{2}=2\) in Figure 1(b); and \(d_{S}=10\), \(d_{1}=0.5\) and \(d_{2}=20\) in Figure 1(c). We observe that strain-2 eventually dies out in all the three simulations irrespective of the choices of the migration rates. These agree with the conclusion of Theorem 2.2-(i)._ **Simulation 2.3.3**.: _We fix the parameters: \(\beta_{1}=(\frac{2}{3},1)^{T}\), \(\beta_{2}=(1,4)^{T}\), \(\gamma_{1}=(2,3)^{T}\), and \(\gamma_{2}=(2,3)^{T}\). So that \(\mathfrak{R}_{1}=(\frac{1}{3},\frac{1}{3})^{T}\) is constant and \(\mathfrak{R}_{2,\min}>\frac{1}{3}\). We also fix the initial data to \(S_{0}=(1,2)\), \(I_{1}=(1,1)\) and \(I_{2}=(1,1)\). We then perform three simulations for different choices of the migration rates: (a) \(d_{S}=5\), \(d_{1}=1\) and \(d_{2}=2\) in Figure 2(a); (b) \(d_{S}=0.005\), \(d_{1}=0.5\) and \(d_{2}=2\) in Figure 2(b); and \(d_{S}=10\), \(d_{1}=0.005\) and \(d_{2}=2\) in Figure 2(c). We observe that strain-1 eventually dies out in all the three simulations irrespective of the choices of the migration rates. These agree with the conclusion of Theorem 2.2-(ii)._ **Simulation 2.3.4**.: _We fix the parameters: \(\beta_{1}=(2,3)^{T}\), \(\beta_{2}=(1,4)^{T}\), \(\gamma_{1}=(2,3)^{T}\), and \(\gamma_{2}=(2,3)^{T}\). So that \(\mathfrak{R}_{1}=(1,1)^{T}\) is constant and \(\mathfrak{R}_{2,\min}<1<\mathfrak{R}_{2,\max}\). Next, we fix the diffusion rate \(d_{2}=2\) of the infected population with strain-2. 
Finally, we fix the initial data to \(S_{0}=(1,2)\), \(I_{1}=(2,1)\) and \(I_{2}=(4,1)\). We then perform three simulations for different choices of the migration rates \(d_{S}\) and \(d_{1}\): \(d_{S}=5\) and \(d_{1}=1\) in Figure 3(a); \(d_{S}=35\) and \(d_{1}=35\) in Figure 3(b); and \(d_{S}=40\) and \(d_{1}=40\) in Figure 3(c). We notice that both strains coexist when the migration rates are small as illustrated by Figure 3(a), while strain-1 dies out for large diffusion rates as seen from Figures 3(b) and 3(c). These simulations complement Theorem 2.2 and suggest that the dynamics of the disease depends delicately on the choices of the migration rates when one strain's local reproductive function is constant and neither maximizes nor minimizes the local reproductive functions._

Figure 1: Numerical simulations illustrating disease extinction when \(\mathcal{R}_{0}(N)<1\), and competition-exclusion of the strains when \(\mathcal{R}_{0}(N)>1\).

Figure 3: Numerical simulations illustrating extinction of strain-1 when its local reproductive function is constant and locally strictly minimizes the local reproductive functions on all patches.

**Simulation 2.3.5**.: _We fix the parameters: \(\beta_{1}=(2,3)^{T}\), \(\beta_{2}=(1,4)^{T}\), \(\gamma_{1}=(2,3)^{T}\), and \(\gamma_{2}=(2,3)^{T}\). So that \(\mathfrak{R}_{1}=(1,1)^{T}\) is constant and \(\mathfrak{R}_{2,\min}<1<\mathfrak{R}_{2,\max}\). We also fix the initial data to \(S_{0}=(1,2)\), \(I_{1}=(2,1)\) and \(I_{2}=(4,1)\), so that \(N=11\). Hence, \(\Sigma_{1}\cap H_{1}^{+}=\{1\}\) and \(\Sigma_{2}\cap H_{2}^{+}=\{2\}\). We then perform three simulations for different choices of equal migration rates \(d:=d_{S}=d_{1}=d_{2}\): (a) \(d=0.005\) in Figure 4(a); (b) \(d=35\) in Figure 4(b); and (c) \(d=40\) in Figure 4(c). All three simulations illustrate the coexistence of the strains and hence support the conclusions of Theorem 2.6. When the diffusion rates are sufficiently small, Fig 4(a) indicates a spatial segregation of the infected populations. However, for large diffusion rates, Figures 4(b) and 4(c) show that both strains persist uniformly on all patches._

Figure 4: Numerical simulations illustrating both competition-exclusion and coexistence of the two strains when strain-1's local reproductive function is constant and locally strictly maximizes the local reproductive functions on exactly one patch.

Figure 5: Numerical simulations illustrating coexistence of both strains for equal dispersal rates of all the subgroups of the population.

#### 2.3.6 Discussion We examined the dynamics of solutions of a two-strain epidemic model in patchy environments. To this end, we introduced the basic reproduction number \(\mathcal{R}_{0}(N)\), and then first discussed the extinction of the disease under some sufficient conditions on the parameters of the model. In particular, Theorem 2.1 suggests that the disease would eventually be eradicated if either the basic reproduction number is no larger than the reciprocal of the total number of patches, or the migration rate of the susceptible population is sufficiently large and \(\mathcal{R}_{0}(N)<1\). However, at least one strain of the disease would persist if \(\mathcal{R}_{0}(N)>1\). Observing that the latter assertion does not exclude the possibility of extinction of at least one strain, we then investigated sufficient conditions which lead to the competition-exclusion principle. Simulations in Figure 1 provide an illustration of the dynamics of the disease as suggested by Theorem 2.1.
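As a complement to Simulations 2.3.1-2.3.5, the following sketch (ours, for illustration; not the code used to produce the figures) integrates the compact form (2.2) with `scipy.integrate.solve_ivp` on two patches with \(L_{1,2}=L_{2,1}=1\), using the parameters and initial data of Simulation 2.3.1(b); the conserved total population (2.3) offers a simple consistency check on the integration.

```python
# Illustrative integrator for system (2.2) on two patches (data of Simulation 2.3.1(b)).
import numpy as np
from scipy.integrate import solve_ivp

k = 2
L = np.array([[0.0, 1.0], [1.0, 0.0]])
calL = L - np.diag(L.sum(axis=0))                      # matrix calL of (2.1)
beta1, beta2 = np.array([2.0, 3.0]), np.array([1.0, 4.0])
gamma1, gamma2 = np.array([1.0, 2.0]), np.array([2.0, 3.0])
d_S, d_1, d_2 = 3.0, 1.0, 2.0                          # dispersal rates

def rhs(t, u):
    S, I1, I2 = u[:k], u[k:2 * k], u[2 * k:]
    dS = d_S * calL @ S + (gamma1 - beta1 * S) * I1 + (gamma2 - beta2 * S) * I2
    dI1 = d_1 * calL @ I1 + (beta1 * S - gamma1) * I1
    dI2 = d_2 * calL @ I2 + (beta2 * S - gamma2) * I2
    return np.concatenate([dS, dI1, dI2])

u0 = np.array([0.25, 0.25, 0.25, 0.25, 0.25, 0.25])    # (S, I_1, I_2) at t = 0, so N = 1.5
sol = solve_ivp(rhs, (0.0, 200.0), u0, rtol=1e-8, atol=1e-10)
print("total population at t = 200:", sol.y[:, -1].sum())          # should remain equal to N
print("I_1(200) =", sol.y[k:2 * k, -1], "  I_2(200) =", sol.y[2 * k:, -1])
```

For these parameter values one expects the strain-2 components to decay while strain-1 equilibrates, consistent with the behaviour reported in Simulation 2.3.1(b).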
As noted in the introduction, in the celebrated work [8], Bremermann and Thieme showed that, when diffusion rates are neglected and there is only one patch, any strain that does not maximize the reproduction number would eventually die out. In Theorems 2.2 and 2.3, we showed that this conclusion extends to the multiple patches model with diffusion rates if the local reproduction function is patch independent. In fact, the conclusion of Theorem 2.2 is somehow stronger as it only requires that one strain to have its local reproductive function to be constant and to either maximize or minimize the local reproductive function on all patches. These results were perfectly illustrated by our simulations 2, 3, and 4. Interestingly, our simulations in Figure 4 indicate that the conclusions of Theorem 2.2 might not hold if some of the assumptions are dropped. We further investigated some sufficient conditions on the parameters of the model which may lead to the coexistence of the strains. In this direction, we introduced the invasion numbers and then established the existence of coexistence EE when these numbers are bigger than one in Theorems 2.5 and 2.6. Our numerical simulations in Figure 5 support these theoretical results. An important question left open by our work is the uniqueness and global stability of the coexistence EE solution. Also, it would be interesting to study the effect of the diffusion rates on the spatial distributions of the coexistence EE as such information might help to developing and implementing adequate and effective disease control strategies. Another aspect that is not considered by our work is the effect of a total lockdown of one part of the population (say for example the movements of the infected populations are completely restricted) on the dynamics of the disease. We plan to devote some of our future studies on these open problems. ## 3 Preliminaries We introduce some notations and collect a few preliminary results in the current section. For convenience, set \[\sigma_{\max}=\max_{j\in\Omega,l=1,2}\sigma_{l,j}\quad\mbox{and}\quad\sigma_{ \min}=\min_{j\in\Omega,l=1,2}\sigma_{l,j}\quad\forall\ \sigma\in\{\beta,\gamma\},\] \[r_{l,j}=\frac{\gamma_{l,j}}{\beta_{l,j}}\quad l=1,2\ \mbox{and}\ j\in\Omega, \quad r_{l}=(r_{l,1},\cdots,r_{l,k})^{T},\] \[r_{l,\min}:=\min_{j\in\Omega}r_{l,j},\quad\mbox{and}\quad r_{l,\max}:=\max_{j \in\Omega}r_{l,j}\quad\forall\ l=1,2.\] Note that \(\gamma_{l}=\beta_{l}\circ r_{l}\) for every \(l=1,2\). 
In general, given a column vector \(V\in\mathbb{R}^{k}\), set \[\|V\|_{\infty}=\max_{j=1,\cdots,k}|V_{j}|,\quad\|V\|=\sqrt{\sum_{j=1}^{k}V_{j} ^{2}},\quad\widehat{V}_{j}=V_{j}-\frac{\sum_{i=1}V_{i}}{k}\quad\mbox{and}\quad \widehat{V}=(\widehat{V}_{1},\cdots,\widehat{V}_{k})^{T}.\] Given \(V\) and \(\tilde{V}\in\mathbb{R}^{k}\), we say that \[V\leq_{1}\tilde{V}\quad\mbox{if}\quad V_{j}\leq\tilde{V}_{j}\quad\forall\ j=1, \cdots,k,\] \[V<_{1}\tilde{V}\quad\mbox{if}\quad V\leq\tilde{V}\quad\mbox{and}\quad V_{j}< \tilde{V}_{j}\ \mbox{for some}\ j\in\{1,\cdots,k\},\] and \[V\ll_{1}\tilde{V}\quad\mbox{if}\quad V_{j}<\tilde{V}_{j}\quad\forall\ j=1, \cdots,k.\] Next, since \(\lambda_{*}(\mathcal{L})\) is simple, and \(\mathcal{L}\) is real symmetric, there is an orthogonal matrix \(Q\) such that \[Q^{T}\mathcal{L}Q=\mbox{diag}(\tilde{d}_{1},\cdots,\tilde{d}_{k})\quad\mbox{ and}\quad Q^{T}Q=I,\] where \(\tilde{d}_{i}\), \(i=1,\cdots,k\) are the eigenvalues of the symmetric matrix \(\mathcal{L}\) with \(\tilde{d}_{1}=\lambda_{*}(\mathcal{L})=0>\max_{i=2,\cdots,k}\tilde{d}_{i}\). Hence, \[e^{t\mathcal{L}}=Qe^{t\mbox{diag}(\tilde{d}_{1},\cdots,\tilde{d}_{k})}Q^{T} \quad\forall\ t\geq 0. \tag{3.1}\] Moreover, if we set \(Q_{j}\), \(j=1,\cdots,k\), as the column vectors of the square matrix \(Q\), then \(\{Q_{j}\}_{j=1}^{k}\) is an orthonormal basis of \(\mathbb{R}^{k}\). It then follows from (3.1) that \[e^{t\mathcal{L}}U=\sum_{j=1}^{k}e^{t\tilde{d}_{j}}<U,Q_{j}>Q_{j}\quad\forall\ t \geq 0,\ U\in\mathbb{R}^{k}, \tag{3.2}\] where \(<,>\) denotes the inner product on \(\mathbb{R}^{k}\). As a result, it follows from (3.2) and the fact that \(\max_{j=2,\cdots,k}\{\tilde{d}_{j}\}<\tilde{d}_{1}=0\) that \[\left\|e^{t\mathcal{L}}U-<U,Q_{1}>Q_{1}\right\|\leq e^{t\tilde{d}_{*}}\|U\| \quad\forall\ U\in\mathbb{R}^{k} \tag{3.3}\] where \(\tilde{d}_{*}:=\min_{j=2,\cdots,k}\tilde{d}_{j}<0\). Note that \(Q_{1}=\frac{1}{\sqrt{k}}\mathbf{1}\) so that \[<U,Q_{1}>Q_{1}=\frac{\sum_{j=1}^{k}U_{j}}{k}\mathbf{1},\quad\text{and}\quad \widehat{U}=U-\frac{\sum_{j=1}^{k}U_{j}}{k}\mathbf{1}\quad\forall\ U\in\mathbb{ R}^{k}. \tag{3.4}\] Observe also that \[\|\widehat{U}\|_{\infty}\leq 2\|U\|_{\infty},\quad\|\widehat{U}\|\leq 2\|U\| \quad\text{and}\quad\|U\circ V\|\leq\|V\|_{\infty}\|U\|\quad\forall\ U,V\in \mathbb{R}^{k}. \tag{3.5}\] Let \(\{B_{1},\cdots,B_{k}\}\) denote the standard orthonormal basis of \(\mathbb{R}^{k}\). Recall that since \(\mathcal{L}\) is irreducible and cooperative, \(\{e^{t\mathcal{L}}\}_{t\geq 0}\) generates a strongly monotone semiflow on \(\mathbb{R}^{k}\). Then \[\mathbf{0}\ll_{1}e^{t\mathcal{L}}B_{j}\quad\forall\ j=1,\cdots,k,\quad\text{ and}\quad t>0. \tag{3.6}\] As a direct consequence of (3.6), we have the following Harnack's inequality type, which plays an important in the proofs of our main results. **Lemma 3.1**.: _Let \(d>0\) and \(M\in C(\mathbb{R}_{+}:[\mathbb{R}]^{k})\) such that_ \[m_{\infty}:=\sup_{t\geq 0}\|M(t)\|_{\infty}<\infty.\] _Let \(\tilde{\mathcal{L}}\) be a \(k\times k\) irreducible square matrix generating a strongly positive semigroup \(\{e^{t\tilde{\mathcal{L}}}\}_{t\geq 0}\) on \(\mathbb{R}^{k}\). Then there is a positive number \(\tilde{c}_{d,m_{\infty}}\), such that any nonnegative solution \(U(t)\) of_ \[\frac{dU}{dt}=d\tilde{\mathcal{L}}U+M(t)\circ U,\ t>0, \tag{3.7}\] _satisfies_ \[\|U(t)\|_{\infty}\leq\tilde{c}_{d,m_{\infty}}U_{\min}(t)\quad\forall\ t\geq 1. 
\tag{3.8}\] Proof.: Without loss of generality, we may suppose \(\lambda_{*}(\tilde{\mathcal{L}})=0\), otherwise we can replace \(\tilde{\mathcal{L}}\) by \(\tilde{\mathcal{L}}-\lambda_{*}(\tilde{\mathcal{L}})\text{diag}(\mathbf{1})\) and \(M(t)\) by \(M(t)+d\lambda_{*}(\tilde{\mathcal{L}})\text{diag}(\mathbf{1})\). Let \(\tilde{E}\in\mathbb{R}^{k}\) be the positive eigenvalue of \(\tilde{L}\) satisfying \(\min_{j=1,\cdots,k}\tilde{E}_{j}=1\). Let \(U(t)\) be a nonnegative solution of (3.7). Then \[\frac{dU}{dt}\leq d\tilde{\mathcal{L}}U+m_{\infty}U\quad\forall\ t>0.\] This implies that \[U(t+\tau)\leq_{1}e^{tm_{\infty}}\|U(\tau)\|_{\infty}\tilde{E}\quad\forall\ t>0,\ \tau>0.\] Hence \[\|U(t+1)\|_{\infty}\leq e^{m_{\infty}}\|\tilde{E}\|_{\infty}\|U(t)\|_{\infty} \quad\forall\ t>0. \tag{3.9}\] Next, for every \(\tau>0\), there is some \(j_{\tau}=1,\cdots,k\) such that \[\|U(\tau)\|_{\infty}B_{j_{\tau}}\leq_{1}U(\tau),\] where \(\{B_{1},\cdots,B_{k}\}\) is the standard orthonormal basis of \(\mathbb{R}^{k}\). Observing that \[d\tilde{\mathcal{L}}(e^{tm_{\infty}}U(\tau+t))\leq_{1}\frac{d(e^{tm_{\infty}}U(t +\tau))}{dt}\quad\forall t>0,\ \tau>0,\] then \[\|U(\tau)\|_{\infty}e^{td\tilde{\mathcal{L}}}B_{j_{\tau}}\leq_{1}e^{tm_{\infty }}U(t+\tau)\quad\forall\ t>0,\ \tau>0.\] In particular, \[e^{-m_{\infty}}\|U(t)\|_{\infty}e^{d\tilde{\mathcal{L}}}B_{j_{t}}\leq U(t+1) \quad\forall\ t>0. \tag{3.10}\] But, since \(\{e^{t\tilde{\mathcal{L}}}\}_{t>0}\) is strictly positive, setting \(\tilde{B}^{d}_{l}=e^{d\tilde{\mathcal{L}}}B_{l}\), for each \(l=1,\cdots,k\), we have that \[\mathbf{0}\ll_{1}\tilde{B}^{*,d}:=(\min_{j\in\Omega}\tilde{B}^{d}_{j,1}, \cdots,\min_{j\in\Omega}\tilde{B}^{d}_{j,k})^{T}.\] Therefore, in view of (3.10), we have that \[e^{-m_{\infty}}\|U(t)\|_{\infty}\tilde{B}^{*,d}\leq_{1}U(t+1)\quad\forall\ t>0,\] which implies that \[\Big{(}\frac{\min_{j\in\Omega}\tilde{B}^{*,d}_{j}}{e^{m_{\infty}}}\Big{)}\|U( t)\|_{\infty}\leq U_{\min}(t+1)\quad\forall\ t>0. \tag{3.11}\] Finally, by (3.9) and (3.11), we have that \[\|U(t+1)\|_{\infty}\leq\frac{e^{2m}\|\tilde{E}\|_{\infty}}{\min_{j\in\Omega} \tilde{B}^{*,d}}U_{\min}(t+1)\quad\forall\ t>0.\] Therefore, (3.8) holds with \(\tilde{c}_{d,m_{\infty}}=\frac{e^{2m_{\infty}}\|\tilde{E}\|_{\infty}}{\min_{j \in\Omega}B^{*,d}}>0\). **Remark 3.2**.: _We note that in Lemma 3.1, it is not required that the irreducible matrix \(\tilde{\mathcal{L}}\) to be irreducible. Lemma 3.1 generalizes the Hanack's inequality for continuous reaction-diffusion models subject to the homogeneous boundary condtions to the patch model._ Given a solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with initial data in \(\mathcal{E}\), we have that \[\|\beta_{l}S-\gamma_{l}\|_{\infty}\leq N\beta_{\max}+\gamma_{\max}:=m_{\infty }\quad\forall\ t\geq 0,\ l=1,2.\] It then follows from Lemma 3.1 that there is a positive constant \(c_{*}=c_{*}(d_{1},d_{2},m_{\infty})\) such that \[\|I_{l}(t)\|_{\infty}\leq c_{*}\min_{j\in\Omega}I_{l,j}(t)\quad\forall\ t>1. \tag{3.12}\] Inequality (3.12) will be of significant to completing some of the proofs of our main results in the subsequent section. The following basic result on the uniform persistence of the susceptible population holds. **Lemma 3.3**.: _Let \((S(t),I_{1}(t),I_{2}(t))\) be a solution of (1.1) with initial data in \(\mathcal{E}\). Then \(\mathbf{0}<<_{1}S(t)\) for all \(t>0\) and_ \[\liminf_{t\to\infty}\sum_{j\in\Omega}S_{j}(t)\geq\min\Big{\{}N,\frac{\gamma_{ \min}}{\beta_{\max}}\Big{\}}. 
\tag{3.13}\] Proof.: Let \((S(t),I_{1}(t),I_{2}(t))\) be a solution of (1.1) with initial data in \(\mathcal{E}\). We distinguish two cases. **Case 1.**\(I_{l,j}(0)=0\) for every \(l=1,2\) and \(j\in\Omega\). Then \(I_{l,j}(t)=0\) for every \(t>0\), \(l=1,2\) and \(j\in\Omega\). This shows that \(S(t)\) satisfies \[\begin{cases}\frac{dS}{dt}=d_{S}\mathcal{L}S\quad\quad\quad\quad t>0,\\ \sum_{j\in\Omega}S_{j}(0)=N.\end{cases}\] Hence, there is some \(j_{0}\in\Omega\) such that \(S_{j_{0}}(0)>0\). Hence \(\mathbf{0}<<_{1}S_{j_{0}}(0)e^{t\mathcal{L}}B_{j_{0}}\leq_{1}S(t)\) for all \(t>0\). **Case 2.**\(I_{l,j_{0}}(0)>0\) for some \(l\in\{1,2\}\) and \(j_{0}\in\{1,\cdots,k\}\). Then \(\mathbf{0}<<_{1}I_{l}(t)\) for every \(t>0\). As a result, by (2.2), \[d_{S}\mathcal{L}\Big{(}e^{2N\beta_{\max}t}S\Big{)}+e^{2N\beta_{\max}t}\gamma_{ l}\circ I_{l}\leq_{1}\frac{d(e^{2N\beta_{\max}t}S)}{dt}\quad\forall\ t>0.\] This implies that, since \(\{e^{t\mathcal{L}}\}\) is strongly positive, \[\mathbf{0}\ll_{1} \int_{0}^{t}e^{\tau d_{S}\mathcal{L}}\Big{(}e^{2N\beta_{\max} \tau}\gamma_{l}\circ I_{l}(\tau)\Big{)}d\tau\] \[\leq_{1} e^{2N\beta_{\max}t}e^{td_{S}\mathcal{L}}(S(0))+\int_{0}^{t}e^{ \tau d_{S}\mathcal{L}}\Big{(}e^{2N\beta_{\max}\tau}\gamma_{l}\circ I_{l}(\tau )\Big{)}d\tau\] \[\leq_{1} e^{2N\beta_{\max}t}S(t)\quad\forall\ t>0.\] Hence \(\mathbf{0}<<_{1}S(t)\) for every \(t>0\). It remains to show that (3.13) holds. Observe from (2.3) that \[\frac{d\sum_{j\in\Omega}S_{j}}{dt}= \sum_{j\in\Omega}\sum_{l=1}^{2}\gamma_{l,j}I_{l,j}-\sum_{j\in \Omega}S_{j}\sum_{l=1}^{2}\beta_{l,j}I_{l,j}\] \[\geq \gamma_{\min}\sum_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}-\beta_{\max} \sum_{j\in\Omega}S_{j}\sum_{l=1}^{2}I_{l,j}\] \[= \gamma_{\min}(N-\sum_{j\in\Omega}S_{j})-\beta_{\max}\sum_{j\in \Omega}S_{j}\sum_{l=1}^{2}I_{l,j}\] \[\geq \gamma_{\min}(N-\sum_{j\in\Omega}S_{j})-\beta_{\max}\sum_{j\in \Omega}S_{j}(N-\sum_{j\in\Omega}S_{j})\] \[= \gamma_{\min}\Big{(}\frac{\gamma_{\min}}{\beta_{\max}}-\sum_{j\in \Omega}S_{j}\Big{)}\Big{(}N-\sum_{j\in\Omega}S_{j}\Big{)}.\] Therefore, since \(\mathbf{0}<<_{1}S(t)\) for every \(t>0\), we deduce from the last inequality that (3.13). The following results hold. **Lemma 3.4**.: _Fix \(l=1,2\) and suppose that \(\mathcal{R}_{0,l}(N)>1\). There is \(\delta_{l}^{*}>0\) such that_ \[\limsup_{t\to\infty}\Big{\|}S(t)-\frac{N}{k}\mathbf{1}\Big{\|}\geq\delta_{l}^{*} \tag{3.14}\] _for any solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a initial data in \(\mathcal{E}\) satisfying \(\|I_{l}(0)\|_{\infty}>0\)._ Proof.: First, note that \(\lambda_{*}(d_{l}\mathcal{L}+\mathrm{diag}(\frac{N}{k}\beta_{l}-\gamma_{l}))>0\) since \(\mathcal{R}_{0,l}(N)>1\). We claim that (3.14) works with \(\delta_{l}^{*}:=\frac{\lambda_{*}(d_{l}\mathcal{L}+\mathrm{diag}(\frac{N}{k} \beta_{l}-\gamma_{l}))}{4\beta_{\max}}\). Indeed, suppose by contradiction this is false. 
Hence, there is some initial data \((S(0),I_{1}(0),I_{2}(0))\in\mathcal{E}\) with \(\|I_{l}(0)\|_{\infty}>0\) such that \[\Big{\|}S(t)-\frac{N}{k}\mathbf{1}\Big{\|}\leq 2\delta_{l}^{*}\quad\forall\ t \geq 0.\] This in turn together with (2.2) imply that the function \((\underline{I}_{1}(t),\underline{I}_{2}(t))=(e^{\tilde{b}_{1}t}I_{1}(t),e^{ \tilde{\delta}_{1}t}I_{2}(t))\) satisfies \[\begin{cases}\frac{d\underline{I}_{1}}{dt}\geq d_{1}\mathcal{L}\underline{I}_{ 1}+\Big{(}\gamma_{1}-\frac{N}{k}\beta_{1}\Big{)}\circ\underline{I}_{1}&t>0,\\ \frac{d\underline{I}_{2}}{dt}\geq d_{2}\mathcal{L}\underline{I}_{2}+\Big{(} \gamma_{2}-\frac{N}{k}\beta_{2}\Big{)}\circ\underline{I}_{2}&t>0.\end{cases} \tag{3.15}\] where \(\tilde{\delta}_{l}:=2\delta_{l}^{*}\beta_{\max}.\) Let \(E_{l}\) be a positive eigenvector associated with \(\lambda_{*}(d_{l}\mathcal{L}+\operatorname{diag}(\frac{N}{k}\beta_{l}-\gamma_{l}))\). Since \(\mathbf{0}<<_{1}I_{l}(1)\), then there is some \(\eta_{l}>0\) such that \(\eta_{l}E_{l}<<_{1}e^{\overline{\delta}}I_{l}(1)\). It then follows from the monotonicity of the semiflow generated by the matrix semigroup \(\{e^{t(d_{l}\mathcal{L}+\operatorname{diag}(\frac{N}{k}\beta_{l}-\gamma_{l}) )}\}_{t\geq 0}\) and (3.15) that \[\underline{I}_{l}(t)\geq\eta_{l}e^{\epsilon\lambda_{*}(d_{l}\mathcal{L}+ \operatorname{diag}(\frac{N}{k}\beta_{l}-\gamma_{l}))}E_{l}\quad\forall\ t>0,\] which implies \[I_{l}(t)\geq\eta_{l}e^{t(\lambda_{*}(d_{l}\mathcal{L}+\operatorname{diag}( \frac{N}{k}\beta_{l}-\gamma_{l}))-\delta)}E_{l}\quad\forall\ t>0.\] As a result, \(\sum_{j\in\Omega}I_{l,j}(t)\to\infty\). This contradicts with (2.3), so we deduce that the desired result hold. **Lemma 3.5**.: _Fix \(l=1,2\) and suppose that \(\mathcal{R}_{0,l}(N)>1\). Then there is \(\sigma_{l}^{*}>0\) such that_ \[\limsup_{t\to\infty}\sum_{j\in\Omega}\sum_{p=1}^{2}I_{p,j}(t)\geq\sigma_{l}^{*} \tag{3.16}\] _for every solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with an data in \(\mathcal{E}\) satisfying \(\|I_{l}(0)\|_{\infty}>0\)._ Proof.: Let \((S(t),I_{1}(t),I_{2}(t))\) be a solution of (1.1) with an initial data satisfying \(\|I_{l}(0)\|_{\infty}>0\) and set \[\overline{I}^{*}:=\limsup_{t\to\infty}\sum_{j\in\Omega}\sum_{p=1}^{2}I_{p,j}(t).\] Observe that \[\frac{d\widehat{S}_{j}}{dt}= d_{S}\sum_{i\in\Omega}(L_{j,i}S_{i}-L_{i,j}S_{j})+\sum_{q=1}^{2}( \gamma_{q,j}I_{q,j}-\beta_{q,j}I_{q,j}S_{j})\] \[-\frac{1}{k}\sum_{k\in\Omega}\left(d_{S}\sum_{p\in\Omega}(L_{k,p} S_{p}-L_{p,k}S_{k})+\sum_{q=1}^{2}(\gamma_{q,k}I_{q,k}-\beta_{q,k}I_{q,k}S_{k})\right)\] \[= d_{S}\sum_{i\in\Omega}(L_{j,i}S_{i}-L_{i,j}S_{j})+\sum_{q=1}^{2 }(\gamma_{q,j}I_{q,j}-\beta_{q,j}I_{q,j}S_{j})\] \[-\frac{1}{k}\left(d_{S}\sum_{k\in\Omega}\sum_{p\in\Omega}(L_{k,p} S_{p}-L_{p,k}S_{k})+\sum_{k\in\Omega}\sum_{q=1}^{2}(\gamma_{q,k}I_{q,k}-\beta_{q,k} I_{q,k}S_{k})\right)\] \[= d_{S}\sum_{i\in\Omega}(L_{j,i}S_{i}-L_{i,j}S_{j})+\sum_{q=1}^{2 }(\gamma_{q,j}I_{q,j}-\beta_{q,j}I_{q,j}S_{j})-\frac{1}{k}\sum_{k\in\Omega} \sum_{q=1}^{2}(\gamma_{q,k}I_{q,k}-\beta_{q,k}I_{q,k}S_{k})\] \[= d_{S}\sum_{i\in\Omega}(L_{j,i}S_{i}-L_{i,j}S_{j})+\sum_{q=1}^{2 }\big{(}\widehat{\gamma_{q,j}I_{q,j}}-\widehat{\beta_{q,j}I_{q,j}}S_{j}\big{)}\] \[= d_{S}\sum_{i\in\Omega}\Big{(}L_{j,i}\Big{(}\widehat{S}_{i}+ \frac{1}{k}\sum_{k\in\Omega}S_{k}\Big{)}-L_{i,j}\Big{(}\widehat{S}_{j}+\frac{1} {k}\sum_{k\in\Omega}S_{k}\Big{)}\Big{)}+\sum_{q=1}^{2}\big{(}\widehat{\gamma_ {q,j}I_{q,j}}-\widehat{\beta_{q,j}I_{q,j}}S_{j}\Big{)}\] \[= 
d_{S}\sum_{i\in\Omega}\Big{(}L_{j,i}\widehat{S}_{i}-L_{i,j}\widehat {S}_{j}+(L_{j,i}-L_{i,j})\frac{1}{k}\sum_{k\in\Omega}S_{k}\Big{)}+\sum_{q=1}^{ 2}\big{(}\widehat{\gamma_{q,j}I_{q,j}}-\widehat{\beta_{q,j}I_{q,j}}S_{j}\big{)}\] \[= d_{S}\sum_{i\in\Omega}\Big{(}L_{j,i}\widehat{S}_{i}-L_{i,j}\widehat {S}_{j}\Big{)}+\sum_{q=1}^{2}\big{(}\widehat{\gamma_{q,j}I_{q,j}}-\widehat{ \beta_{q,j}I_{q,j}}S_{j}\Big{)},\] since \(\tilde{L}\) is symmetric. Thus, \[\frac{d\widehat{S}}{dt}=d_{S}\mathcal{L}\widehat{S}+\sum_{q=1}^{2}\Big{(} \widehat{\gamma_{q}\circ I_{q}}-\beta_{q}\widehat{\circ S\circ}\,I_{q}\Big{)} \quad t>0. \tag{3.17}\] As a result, by the variation of constant formula, \[\hat{S}(t+t^{\prime})=e^{td_{S}\mathcal{L}}\hat{S}(t^{\prime})+\int_{0}^{t}e^{ (t-\sigma)d_{S}\mathcal{L}}\sum_{q=1}^{2}\Big{(}\widehat{\gamma_{q}\circ I_{q }}-\beta_{q}\widehat{\circ S\circ}\,I_{q}\Big{)}(t^{\prime}+\sigma)d\sigma \quad\forall\ t>0,\ t^{\prime}\geq 0. \tag{3.18}\] Hence, by (3.3) and (3.5), since \(<\widehat{U},Q_{1}>=0\) for every \(U\in\mathbb{R}^{k}\), then for every \(t>0\) and \(t^{\prime}\geq 0\). \[\|\widehat{S}(t+t^{\prime})\|\leq e^{td_{S}\widetilde{d}_{*}}\|\widehat{S}(t^{\prime})\|+\int_{0}^{t}e^{ (t-\sigma)\widetilde{d}_{*}d_{S}}\sum_{q=1}^{2}\Big{\|}\Big{(}\widehat{\gamma _{q}\circ I_{q}}-\beta_{q}\widehat{\circ S\circ}\,I_{q}\Big{)}(t^{\prime}+ \sigma)\Big{\|}d\sigma\] \[\leq 2e^{t\widetilde{d}_{*}d_{S}}\|S(t^{\prime})\|+2\int_{0}^{t}e^{ (t-\sigma)\widetilde{d}_{*}d_{S}}\sum_{q=1}^{2}(\|\gamma_{q}\circ I_{q}\|+\| \beta_{q}\circ S\circ I_{q}\|)(t^{\prime}+\sigma)d\sigma\] \[\leq 2\sqrt{k}Ne^{t\widetilde{d}_{*}d_{S}}+2\int_{0}^{t}e^{(t-\sigma) \widetilde{d}_{*}d_{S}}\sum_{q=1}^{2}(\|\gamma_{q}\|+\|\beta_{q}\|\|S\|_{ \infty})\|I_{q}(t^{\prime}+\sigma)\|_{\infty}d\sigma\] \[\leq 2\sqrt{k}Ne^{t\widetilde{d}_{*}d_{S}}+2\sqrt{k}(\gamma_{\max}+ \beta_{\max}N)\int_{0}^{t}e^{(t-\sigma)\widetilde{d}_{*}d_{S}}\sum_{q=1}^{2} \sum_{j\in\Omega}I_{q,j}(t^{\prime}+\sigma)d\sigma.\] Taking limsup as \(t^{\prime}\to\infty\) in the last inequality, we get \[\limsup_{\tau\to\infty}\|\widehat{S}(\tau)\|\leq 2\sqrt{k}Ne^{t\widetilde{d}_{*}d_{S}}+2\overline{I}^{*}\sqrt{k} (\gamma_{\max}+\beta_{\max}N)\int_{0}^{t}e^{(t-\sigma)\widetilde{d}_{*}d_{S}}d\sigma\] \[\leq 2\sqrt{k}Ne^{t\widetilde{d}_{*}d_{S}}+\frac{2\overline{I}^{*} \sqrt{k}(\gamma_{\max}+\beta_{\max}N)}{|\widetilde{d}_{*}|d_{S}}\quad\forall \ t>0,\] which implies that \[\limsup_{\tau\to\infty}\|\widehat{S}(\tau)\|\leq\frac{2\overline{I}^{*}\sqrt{ k}(\gamma_{\max}+\beta_{\max}N)}{|\widetilde{d}_{*}|d_{S}}:=\eta_{*}\overline{I}^{*}, \tag{3.19}\] where \(\eta_{*}=\frac{2\sqrt{k}(\gamma_{\max}+N\beta_{\max})}{|\widetilde{d}_{*}|d_{ S}}\). Recalling that \(Q_{1}=\frac{N}{\sqrt{k}}\mathbf{1}\), then by (2.3) and (3.4), \[\widehat{S}=S-\frac{\sum_{j\in\Omega}S_{j}}{k}\mathbf{1}=S-\frac{N}{k} \mathbf{1}+\frac{\sum_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}}{k}\mathbf{1}\] which yields \[\Big{\|}S-\frac{N}{k}\mathbf{1}\Big{\|}\leq\|\widehat{S}\|+\frac{\sum_{i\in \Omega}\sum_{l=1}^{2}I_{l,i}}{k}\|\tilde{\mathbf{E}}\|=\|\widehat{S}\|+\frac{ 1}{\sqrt{k}}\sum_{i\in\Omega}\sum_{l=1}^{2}I_{l,i}.\] Hence, by (3.14) and (3.19), we derive that \[\delta_{l}^{*}\leq(\eta_{*}+\frac{1}{\sqrt{k}})\overline{I}^{*},\] which implies that (3.16) holds for \(\sigma_{l}^{*}=\frac{\delta_{l}^{*}}{\eta_{*}+\frac{1}{\sqrt{k}}}\). Proofs of the Main Results ### Proof of Theorem 2.1 We give a proof of Theorem 2.1. 
Proof.: (i) Let \((S(t),I_{1}(t),I_{2}(t))\) be a solution of (1.1) with initial data in \(\mathcal{E}\). Thanks to Lemma 3.3, we know that \(\mathbf{0}\ll_{1}S(t)\) for all \(t>0\) and that \[\liminf_{t\to\infty}\sum_{j\in\Omega}S_{j}(t)\geq\eta_{0}:=\min\left\{N,\frac{ \gamma_{\min}}{\beta_{\max}}\right\}.\] Hence there is \(t_{0}>0\) such that \[\frac{\eta_{0}}{2}\leq\sum_{j\in\Omega}S_{j}(t)\quad\forall\ t\geq t_{0}.\] Hence, for every \(t\geq t_{0}\), there is \(j_{t}\in\Omega\) such that \[\frac{\eta_{0}}{2k}B_{j_{t}}\leq_{1}S(t).\] But, \[d_{S}\mathcal{L}S-2N\beta_{\max}S\leq_{1}\frac{dS}{dt}\quad\forall\ t>0.\] Therefore, in view of (4.1) and the monotonicity of \(\{e^{t\mathcal{L}}\}\), we have that \[\frac{\eta_{0}}{2k}e^{-2N\beta_{\max}}e^{d_{S}\mathcal{L}}B_{j_{t}}\leq_{1}S (t+1)\quad\forall\ t\geq t_{0}.\] Therefore, setting \(B_{j}^{d_{S}}:=e^{d_{S}\mathcal{L}}B_{j}\) for each \(j=1,\cdots,k\) and \(B^{*,d_{S}}:=(\min_{j\in\Omega}B_{j,1}^{d_{S}},\cdots,\min_{j\in\Omega}B_{j,1} ^{d_{S}})^{T}\), then \[\frac{\eta_{0}}{2k}B^{*,d_{S}}\leq_{1}S(t)\quad\forall\ t\geq t_{0}+1,\] which implies that (2.13) holds since \(\mathbf{0}\ll_{1}B^{*,d_{S}}\) by (3.6), and \(B^{*,d_{S}}\) is independent of the initial data. (ii) The fact that the DFE is linearly stable when \(\mathcal{R}_{0}(N)<1\) and unstable when \(\mathcal{R}_{0}(N)>1\) follows from standard results, (see for example [15, Theorem 2]). Next, we proceed to prove assertions (ii-1)-(ii-3). (ii-1) First, suppose that \(\mathcal{R}_{0,l}(N)<\frac{1}{k}\) for some \(l=1,2\). Hence, in view of (2.9), we have that \[N\rho\big{(}\mathcal{F}_{l}\mathcal{V}_{l}^{-1}\big{)}<1,\] which is equivalent to \[\lambda_{l}:=\lambda_{*}\big{(}d_{l}\mathcal{L}+\mathrm{diag}\big{(}N\beta_{l }-\gamma_{l}\big{)}\big{)}<0.\] Now, let \((S(t),I_{1}(t),I_{2}(t))\) be a solution of (1.1) with initial data in \(\mathcal{E}\). From (2.3) and (2.2), we have that \[\frac{dI_{l}}{dt}\leq_{1}d_{l}\mathcal{L}I_{l}+(N\beta_{l}-\gamma_{l})\circ I _{l}\quad\forall\ t\geq 0. \tag{4.1}\] Hence, if \(E_{l}^{*}\) denotes the positive eigenvector associated with \(\lambda_{l}\) with \(\min_{j\in\Omega}E_{I_{l},j}^{*}=1\), it follows from (4.1) and the comparison principle that \[I_{l}(t)\leq_{1}\|I_{l}(0)\|_{\infty}e^{t\lambda_{l}}E_{l}^{*}\to\mathbf{0} \quad\text{as}\quad t\to\infty.\] As result, if \(\mathcal{R}_{0}(N)<1\), then \(\sum_{l=1}^{2}\|I_{l}(t)\|\to 0\) as \(t\to\infty\). Moreover, it follows from (3.19) and the fact that \(\sum_{j\in\Omega}S_{j}=N-\sum_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}\) that \[\lim_{t\to\infty}\Big{\|}S(t)-\frac{N}{k}\mathbf{1}\Big{\|}=\lim_{t\to\infty} \Big{\|}S(t)-\frac{N-\sum_{j\in\Omega}\sum_{l=1}^{l}I_{l,j}}{k}\mathbf{1} \Big{\|}=\lim_{t\to\infty}\|\widehat{S}(t)\|=0,\] which yields the desired result. Next, suppose that \({\cal R}_{0,l}=\frac{1}{k}\) for some \(l=1,2\). For every \(t\geq 0\), let \(c_{l}(t)\) be given by \[c_{l}(t)=\inf\{c>0\ :\ I_{l}(t)\leq_{1}cE_{l}^{*}\}.\] Hence, since (4.1) holds, by the comparison principle, we deduce that \(c_{l}(t)\) is nonincreasing in \(t\geq 0\). Thus \[c_{l}^{\infty}=\lim_{t\to\infty}c_{l}(t)=\inf_{t\geq 0}c_{l}(t)\in[0,c_{l}(0)].\] It is clear from the definition of \(c_{l}(t)\) that \[I_{l}(t)\leq_{1}c_{l}(t)E_{l}^{*}\ \ \ \forall\ t\geq 0. \tag{4.2}\] We claim that \[c_{l}^{\infty}=0. 
\tag{4.3}\] Indeed, since \(\min_{j\in\Omega}E_{l,j}^{*}=1\), it follows from the definition of \(c_{l}(t)\) that for each \(t\geq 0\), there is \(j_{t}\in\Omega\) such that \[I_{l,j_{t}}(t)\geq c_{l}(t)E_{l,j_{t}}^{*}\geq c_{l}(t)\geq c_{l}^{\infty}, \tag{4.4}\] which implies that \[\sum_{j\in\Omega}\sum_{p=1}^{2}I_{p,j}(t)\geq c_{l}^{\infty}\ \ \ \forall\ t\geq 0.\] Therefore, by (2.3), \[\sum_{j\in\Omega}S_{j}(t)=N-\sum_{j\in\Omega}\sum_{p=1}^{2}I_{p,j}(t)\leq N-c_ {l}^{\infty}\ \ \ \forall\ t\geq 0.\] This in turn implies that \(S(t)\leq(N-c_{l}^{\infty}){\bf 1}\) for all \(t\geq 0\). Therefore, \[\frac{dI_{p}}{dt}\leq_{1}d_{p}{\cal L}I_{p}+((N-c_{l}^{\infty})\beta_{p}- \gamma_{l})\circ I_{p}\leq_{1}d_{p}{\cal L}I_{p}-c_{l}^{\infty}\beta_{\min}I_{ p}+(N\beta_{p}-\gamma_{p})\circ I_{p}\ \ \ \forall\ t\geq 0,p=1,2.\] Therefore, \[I_{p}(t)\leq e^{-tc_{l}^{\infty}\beta_{\min}}\|I_{p}(0)\|_{\infty}E_{p}^{*}, \ \ \ p=1,2,\ t\geq 0.\] This together with (4.4) implies that \[c_{l}^{\infty}\leq e^{-tc_{l}^{\infty}\beta_{\min}}\|I_{l}(0)\|_{\infty}\|E_{ l}^{*}\|_{\infty}\ \ \ \forall\ t\geq 0,\] which implies that \(c_{l}^{\infty}=0\). So (4.3) holds. Now, from (4.3) and (4.2) we have that \(\|I_{l}(t)\|\to 0\) as \(t\to\infty\). Therefore, if \({\cal R}_{0}(N)=\frac{1}{k}\), we can now proceed as in the previous case to establish that \(\sum_{l=1}^{2}I_{l}(t)\|\to 0\) and \(\left\|S(t)-\frac{N}{k}{\bf 1}\right\|\to 0\) as \(t\to\infty\). This completes the proof of (ii-1). (ii-2) Suppose that \({\cal R}_{0,l}(N)<1\) for some \(l=1,2\). Then \(\lambda_{*}(d_{l}{\cal L}+{\rm diag}(\frac{N}{k}\beta_{l}-\gamma_{l}))<0\). Therefore, by the continuity of the principal eigenvalue with respect to parameters, there is \(\varepsilon_{l}>0\) such that \[\lambda_{l}^{*}:=\lambda_{*}\Big{(}d_{l}{\cal L}+{\rm diag}\Big{(}\frac{(N+ \varepsilon_{l})}{k}\beta_{l}-\gamma_{l}\Big{)}\Big{)}<0.\] Let \(E_{l}^{*}\) denote the eigenvector of \(\lambda_{l}^{*}\) with \(\min_{j\in\Omega}E_{l,j}^{*}=1\) and set \[d^{l}:=\frac{3Nk\sqrt{k}(\gamma_{\max}+N\beta_{\max})}{|\tilde{d}_{*}| \varepsilon_{l}}. \tag{4.5}\] We now show that the desired result holds for any \(d_{S}>d^{l}\). So fix \(d_{S}>d^{l}\). Let \((S(t),I_{1}(t),I_{2}(t))\) be solution of (1.1) with a positive initial data in \(\mathcal{E}\). Since \(\sum_{j\in\Omega}\sum_{l=1}^{l}I_{l,j}(t)\leq N\) for every \(t\geq 0\), it follows from (3.19) that \[\limsup_{t\to\infty}\left\|S(t)-\frac{\sum_{j\in\Omega}S_{j}(t)}{k}\mathbf{1} \right\|\leq\frac{2N\sqrt{k}(\gamma_{\max}+\beta_{\max}N)}{|\vec{d}_{*}|d_{S} }=\frac{2d^{l}\varepsilon_{l}}{3kd_{S}}<\frac{2}{3k}\varepsilon_{l}.\] Recalling that \(\sum_{j\in\Omega}S_{j}=N-\sum_{j\in\Omega}\sum_{p=1}^{2}I_{p,j}\), then there is \(t_{1}\gg 1\) such that \[S(t)-\frac{\left(N-\sum_{j\in\Omega}\sum_{p=1}^{2}I_{p,j}\right)}{k}\mathbf{1 }\leq_{1}\frac{2\varepsilon_{l}}{3k}\mathbf{1}\quad\forall\ t>t_{1}.\] This implies that \[\frac{dI_{l}}{dt}\leq_{1} d_{l}\mathcal{L}I_{l}+\Big{(}\frac{\left(N+\varepsilon_{l}-\sum_{j \in\Omega}\sum_{p=1}^{2}I_{p,j}\right)}{k}\beta_{l}-\gamma_{l}\Big{)}\circ I_{l}\] \[\leq_{1} d_{l}\mathcal{L}I_{l}+\Big{(}\frac{\left(N+\varepsilon_{l}\right)}{k} \beta_{l}-\gamma_{l}\Big{)}\circ I_{l}\quad\forall\ t>t_{1}.\] Hence, by the comparison principle, \[I_{l}(t)\leq_{1}\|I_{l}(t_{1})\|_{\infty}e^{(t-t_{1})\lambda_{l}^{*}}E_{l}^{* }\to\mathbf{0}\quad\text{as $t\to\infty$}.\] In particular, if \(\mathcal{R}_{0}(N)<1\), taking \(d^{*}=\max\{d^{1},d^{2}\}\), where \(d^{l}\) is given by (4.5). 
We have that \(\sum_{l=1}^{2}\|I_{l}(t)\|\to 0\) as \(t\to\infty\) for every \(d_{S}>d^{*}\). Moreover, we can proceed as in the proof of (ii-1) to prove that \((S(t),I_{1}(t),I_{2}(t))\to(\frac{N}{k}\mathbf{1},\mathbf{0},\mathbf{0})\) as \(t\to\infty\) for every \(d_{S}>d^{*}\). (ii-3) Suppose that \(\mathcal{R}_{0,l}(N)>1\) for each \(l=1,2\). We proceed in four steps. **Step 1.** In the current step, we show that there is a positive constant \(m_{\mathrm{up}}>0\) such that \[\liminf_{t\to\infty}\sum_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}(t)\geq m_{\mathrm{ up}} \tag{4.6}\] for any solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a positive initial in \(\mathcal{E}\). To see this, first recall that the set \(\mathcal{E}\) is compact, and invariant for the semiflow generated by classical solution of (1.1). Now define the mapping \(\Xi:\mathcal{E}\to[0,\infty)\) by \[\Xi(S,I_{1},I_{2})=\sum_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}\quad\forall\ (S,I_{1},I_{2})\in\mathcal{E}.\] Clearly, the mapping \(\Xi\) is continuous. Furthermore, \(\Xi(S(t),I_{1}(t),I_{2}(t))>0\) for every \(t>0\) whenever \((S(0),I_{1}(0),I_{2}(0))\in\mathcal{E}\) and satisfies \(\Xi((S(0),I_{1}(0),I_{2}(0))>0\). Note from (3.16) that \[\limsup_{t\to\infty}\Xi(S(t),I_{1}(t),I_{2}(t))\geq\min\{\sigma_{1}^{*},\sigma _{2}^{*}\}\quad\forall\ (S(0),I_{1}(0),I_{2}(0))\in\mathcal{E},\ \Xi(S(0),I_{1}(0),I_{2}(0))>0.\] Then applying persistence theory [30, Theorem 5.2], we deduce that (4.6) holds. **Step 2.** In the current step, we show that there is a positive \(m_{\mathrm{low}}>0\) constant such that \[\liminf_{t\to\infty}\min_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}(t)\geq m_{\mathrm{ low}} \tag{4.7}\] for any solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a positive initial. To this end, let \((S(t),I_{1}(t),I_{2}(t))\) be a solution of (1.1) with a positive initial data. Note that \[d_{l}\mathcal{L}I_{l}-\gamma_{\max}I_{l}\leq_{1}\frac{dI_{l}(t)}{dt}\quad t>0, \ l=1,2,\] which, thanks to the positivity of \(\{e^{t\mathcal{L}}\}_{t\geq 0}\), implies that \[e^{-\gamma_{\max}t}e^{d_{l}t\mathcal{L}}I_{l}(t^{\prime})\leq_{1}I_{l}(t+t^{ \prime})\quad t>0,\ t^{\prime}>0,\ l=1,2.\] Thus, setting \(\tilde{I}(t)=\sum_{l=1}^{2}I_{l}(t)\), we obtain \[e^{-\gamma_{\max}t}e^{\min\{d_{1},d_{2}\}t}\tilde{I}(t^{\prime})\leq_{1} \tilde{I}(t+t^{\prime})\quad\forall\ t>0,\ t^{\prime}>0. \tag{4.8}\] Next, thanks to (4.6), there is \(t^{\prime}_{0}>0\) such that \[\sum_{j\in\Omega}\tilde{I}_{j}(t^{\prime})\geq\frac{m_{\text{up}}}{2}\quad \forall\ t^{\prime}\geq t^{\prime}_{0}. \tag{4.9}\] By (4.9) and (3.12), \[\frac{m_{\text{up}}}{2}\leq\sum_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}(t)\leq k \sum_{l=1}^{2}\|I_{l}(t)\|_{\infty}\leq kc_{*}\sum_{l=1}^{2}\min_{j\in\Omega}I _{l,j}(t)\quad\forall\ t>t^{\prime}_{0}.\] As a result, \[\liminf_{t\to\infty}\sum_{l=1}^{2}\min_{j\in\Omega}I_{l,j}(t)\geq\frac{m_{ \text{up}}}{2kc_{*}}>0.\] Therefore, (3.2) holds for \(m_{\text{low}}:=\frac{m_{\text{up}}}{2kc_{*}}\). **Step 3.** Here, we show that \[\limsup_{t\to\infty}S_{j}(t)\leq s_{\max} \tag{4.10}\] for any solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a positive initial data. Let \((S(t),I_{1}(t),I_{2}(t))\) be a solution of (1.1) with a positive initial data. Thanks to (2.14), without loss of generality, we may suppose that \[\min_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}(t)\geq\frac{m_{*}}{2}\quad\forall\ t \geq 0. 
\tag{4.11}\] Observe from (2.2) that \(S(t)\) satisfies \[\frac{dS(t)}{dt}\leq_{1}d_{S}\mathcal{L}S+\sum_{l=1}^{2}(S_{\max}-S)\circ\beta _{l}\circ I_{l}\quad t>0,\] where \(S_{\max}=(s_{\max},\cdots,s_{\max})^{T}\). Next set \[\overline{S}(t)=S_{\max}+e^{-\frac{m_{*}\beta_{\min}}{2}t}S(0)\quad\forall\ t \geq 0.\] Noting from (4.11) that \[\mathbf{0}\leq_{1}\Big{(}\beta_{\min}\min_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}- \frac{m_{*}\beta_{\min}}{2}\Big{)}\mathbf{1}\leq\sum_{l=1}^{2}\beta_{l}\circ I _{l}-\frac{m_{*}\beta_{\min}}{2}\mathbf{1},\quad\ t>0,\] then \[\mathbf{0}\leq_{1} e^{-\frac{m_{*}\beta_{\min}}{2}t}\Big{(}\sum_{l=1}^{2}\beta_{l} \circ I_{l}-\frac{m_{*}\beta_{\min}}{2}\mathbf{1}\Big{)}\circ S(0)\] \[= \frac{d\overline{S}(t)}{dt}-d_{S}\mathcal{L}\overline{S}(t)-\sum_ {l=1}^{2}(S_{\max}-\overline{S}(t))\circ\beta_{l}\circ I_{l}\quad\forall\ t>0.\] It then follows from the positivity of \(\{e^{t\mathcal{L}}\}_{t\geq 1}\) and the fact that \(\overline{S}(0)>S(0)\) that \[S(t)\leq_{1}\overline{S}(t)=S_{\max}+e^{-\frac{m_{*}\beta_{\min}}{2}t}S(0)\quad t >0,\] from which we deduce that \(\limsup_{t\to\infty}\max_{j\in\Omega}S(t)\leq\max_{j\in\Omega}S_{\max}=s_{\max}\). **Step 4.** Finally, we show that \[\liminf_{t\to\infty}S_{j}(t)\geq s_{\min} \tag{4.12}\] for any solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a positive initial data. We proceed as in step 3. This time we note that \[d_{S}\mathcal{L}S+\sum_{l=1}^{2}(S_{\min}-S)\circ\beta_{l}\circ I_{l}\leq_{1} \frac{dS(t)}{dt}\quad t>0,\] where \(S_{\min}=(s_{\min},\cdots,s_{\min})^{T}\), and that the function \[\underline{S}(t)=S_{\min}-M_{0}e^{-N\beta_{\max}t}\quad t\geq 0,\] where \(M_{0}\in\mathbb{R}^{k}_{+}\) is chosen such that \(S(0)>S_{\min}-M_{0}\), satisfies \(\underline{S}(0)\leq S(0)\) and \[\frac{d\underline{S}(t)}{dt}-d_{S}\mathcal{L}\underline{S}(t)- \sum_{l=1}^{2}(S_{\min}-\underline{S}(t))\circ\beta_{l}\circ I_{l}= e^{-\frac{\beta_{\min}m_{*}}{2}t}\Big{(}\frac{\beta_{\min}m_{*}}{2}M_{ 0}-\sum_{l=1}^{2}M_{0}\circ\beta_{l}\circ I_{l}\Big{)}\] \[\leq_{1} e^{-\frac{\beta_{\min}m_{*}}{2}t}(\frac{\beta_{\min}m_{*}}{2}- \frac{\beta_{\min}m_{*}}{2})M_{0}\] \[= \mathbf{0}\quad\forall\ t>0.\] Thus, we deduce that \(S(t)\geq\underline{S}(t)\) for all \(t\geq 0\), which implies that \(\liminf_{t\to\infty}\min_{j\in\Omega}S(t)\geq\min_{j\in\Omega}S_{\min}=s_{ \min}\). This completes the proof of (iii). Thanks to the proof of Theorem 2.1-(iii), we can state the following result on the persistence of the disease for a single strain model. **Proposition 4.1**.: _Suppose that \(\mathcal{R}_{0,1}(N)>1\). Then there is a positive number \(m_{1,*}>0\) such that for every solution \((S(t),I_{1}(t),I_{2}(t))\) with a initial data in \(\mathcal{E}\) satisfying \(\|I_{1}(0)\|>0\) and \(\|I_{2}(0)\|_{\infty}=0\),_ \[\liminf_{t\to\infty}\min_{j\in\Omega}I_{1,j}(t)\geq m_{1,*}. \tag{4.13}\] _Furthermore,_ \[\limsup_{t\to\infty}\max_{j\in\Omega}S_{j}(t)\leq r_{1,\max}\quad\text{and} \quad\liminf_{t\to\infty}\min_{j\in\Omega}S_{j}(t)\geq r_{1,\min}. \tag{4.14}\] ### Proofs of Theorems 2.2 and 2.3 **Lemma 4.2**.: _Suppose that \(\mathfrak{R}_{1,j}=\mathfrak{R}_{1,i}\) for each \(i,j\in\Omega\), \(\min_{l=1,2}\mathcal{R}_{0,l}(N)>1\), and \(\mathfrak{R}_{1,\min}>\mathfrak{R}_{2,\max}\). 
Then, there exists \(\eta_{1}>0\) such that_ \[\liminf_{t\to\infty}\min_{j\in\Omega}I_{1,j}\geq\eta_{1} \tag{4.15}\] _for every solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with initial data in \(\mathcal{E}\) satisfying \(\|I_{1}(0)\|_{\infty}>0\)._ Proof.: We proceed in two steps. First, note that \(\mathcal{R}_{0}(N)=\mathcal{R}_{0,1}(N)>1\). Since \(r_{1}:=r_{1,j}\) is constant in \(j\in\Omega\), and \(\mathcal{R}_{0,1}>\|\mathfrak{R}_{2}\|_{\infty}\), then \(r_{1}<\min_{j\in\Omega}r_{2,j}\). So, we can choose \(0<\lambda_{0}<1\) such that \(r_{1}<\lambda_{0}\min_{j\in\Omega}r_{2,j}\). Let \(\tilde{m}_{*}=\min\{m_{*},m_{1,*}\}\) where \(m_{*}\) and \(m_{1,*}\) are given by (2.14) and 4.13, respectively. Set \[\eta_{0}:=\frac{\Big{(}\frac{\beta_{\min}}{\lambda_{0}\beta_{\max}}\Big{)} \frac{(1-\lambda_{0})\tilde{m}_{*}}{2}}{1+\Big{(}\frac{\beta_{\min}}{\lambda_{0 }\beta_{\max}}\Big{)}(1-\lambda_{0})}\quad\text{and}\quad m_{0}=\frac{\tilde{ m}_{*}}{2}-\eta_{0}.\] Then \[m_{0}>0\quad\text{and}\quad\beta_{\min}m_{0}(1-\lambda_{0})=\lambda_{0}\beta_{ \max}\eta_{0}. \tag{4.16}\] **Step 1.** We proceed by contradiction to establish that \[\limsup_{t\to\infty}\sum_{j\in\Omega}I_{1,j}(t)\geq\eta_{0} \tag{4.17}\] for every solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a positive initial data satisfying \(I_{1}\neq 0\). So, suppose that there is a solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a positive initial data satisfying \(I_{1}(0)>(0,\cdots,0)^{T}\) and \[\limsup_{t\to\infty}\sum_{j\in\Omega}I_{1,j}(t)<\eta_{0}. \tag{4.18}\] Hence, after translation in time, we may suppose that \[\sum_{j\in\Omega}I_{1,j}(t)\leq\eta_{0}\quad\forall\ t\geq 0. \tag{4.19}\] Observe that \(S(t)\) satisfies \[d_{S}\mathcal{L}S-\beta_{\max}\eta_{0}S+(r_{2}-S)\circ\beta_{2}\circ I_{2} \leq_{1}d_{S}\mathcal{L}S+(\gamma_{1}-\beta_{1}\circ S)\circ I_{1}+(\gamma_{2 }-\beta_{2}\circ S)\circ I_{2}=\frac{dS(t)}{dt}, \tag{4.20}\] where \(r_{2,j}=\frac{\gamma_{2,j}}{\beta_{2,j}}\) for each \(j\in\Omega\). Now, thanks to (2.14), (4.13) and (4.19), we have that \(\|I_{2}(0)\|_{\infty}>0\) and there is \(t_{0}>0\) such that \[I_{2,j}(t)\geq m_{0}\quad\forall\ t\geq t_{0}.\] Next, define \[\underline{S}(t)=\lambda_{0}R_{2,\min}-e^{-m_{0}\beta_{\min}t}M_{0}\quad \forall\ t\geq t_{0},\] where \(\mathbf{0}<<_{1}M_{0}\in[\mathbb{R}_{+}]^{k}\) is fixed and satisfies \(\lambda_{0}R_{2,\min}-e^{-m_{0}\beta_{\min}t_{0}}M_{2}<S(t_{0})\). 
Thanks to (4.16), we have \[\frac{d_{\underline{S}}^{\underline{S}}}{dt}-d_{S}\mathcal{L} \underline{S}+\beta_{\max}\eta_{0}\underline{S}-(r_{2}-\underline{S})\circ \beta_{2}\circ I_{2}\] \[= m_{0}\beta_{\min}M_{0}e^{-m_{0}\beta_{\min}t}+\beta_{\max}\eta _{0}(\lambda_{0}R_{2,\min}-M_{0}e^{-m_{0}\beta_{\min}t})-(r_{2}-\lambda_{0}R_ {2,\min}+M_{0}e^{-m_{0}\beta_{\min}t})\circ\beta_{2}\circ I_{2}\] \[= e^{-m_{0}\beta_{\min}t}\Big{(}(m_{0}\beta_{\min}-\beta_{\max} \eta_{0})M_{0}-M_{0}\circ\beta_{2}\circ I_{2}\Big{)}+\lambda_{0}\beta_{\max} \eta_{0}R_{2,\min}-(r_{2}-\lambda_{0}R_{2,\min})\circ\beta_{2}\circ I_{2}\] \[\leq e^{-m_{0}\beta_{\min}t}\Big{(}(m_{0}\beta_{\min}-\beta_{\max} \eta_{0})M_{0}-\beta_{\min}m_{0}M_{0}\Big{)}+\beta_{\max}\eta_{0}\lambda_{0}R _{2,\min}-\beta_{\min}m_{0}(r_{2}-\lambda_{0}R_{2,\min})\] \[\leq e^{-m_{0}\beta_{\min}t}\Big{(}(m_{0}\beta_{\min}-\beta_{\max} \eta_{0})M_{0}-\beta_{\min}m_{0}M_{0}\Big{)}+\beta_{\max}\eta_{0}\lambda_{0}R _{2,\min}-\beta_{\min}m_{0}(R_{2,\min}-\lambda_{0}R_{2,\min})\] \[= e^{-m_{0}\beta_{\min}t}\Big{(}(m_{0}\beta_{\min}-\beta_{\max} \eta_{0})M_{0}-\beta_{\min}m_{0}M_{0}\Big{)}+(\beta_{\max}\eta_{0}\lambda_{0}- \beta_{\min}m_{0}(1-\lambda_{0}))R_{2,\min}\] \[= -e^{-m_{0}\beta_{\min}t}\beta_{\max}\eta_{0}M_{0}<_{1}\mathbf{0} \qquad\forall\ t>t_{0}.\] Hence, in view of (4.20), \(S(t)\geq\underline{S}(t)\) for all \(t\geq t_{0}\). Thus, \(I_{1}(t)\) satisfies \[d_{1}\mathcal{L}I_{1}(t)+(\lambda_{0}R_{2,\min}-R_{1,\min}-e^{-m_{0}\beta_{\min }t}M_{0})\circ\beta_{1}\circ I_{1}\leq_{1}\frac{dI_{1}(t)}{dt}\quad\forall\ t \geq t_{0}.\] As a result, since \(\lambda_{0}R_{2,\min}>R_{1,\min}\), \(e^{-m_{0}\beta_{\min}t}\to 0\) as \(t\to\infty\), and \(I_{1}(t_{0})>>0\), then \(\|I_{1}(t)\|_{\infty}\to\infty\) as \(t\to\infty\). This clearly, contradicts the fact that \(\|I_{1}(t)\|_{\infty}\leq N\) for all \(t\geq 0\). Therefore, (4.17) must hold. **Step 2.** We complete the proof of (4.15). Indeed, first recall that the set \(\mathcal{E}\) is compact, and invariant for the semiflow generated by classical solution of (1.1). Now define the mapping \(\Xi:\mathcal{E}\to[0,\infty)\) by \[\Xi(S,I_{1},I_{2})=\sum_{j\in\Omega}I_{1,j}\quad\forall\ (S,I_{1},I_{2})\in \mathcal{E}.\] Clearly, the mapping \(\Xi\) is continuous. Furthermore, \(\Xi(S(t),I_{1}(t),I_{2}(t))>0\) for every \(t>0\) whenever \((S(0),I_{1}(0),I_{2}(0))\in\mathcal{E}\) and satisfies \(\Xi((S(0),I_{1}(0),I_{2}(0))>0\). Then applying persistence theory [30, Theorem 5.2], it follows from (4.17) that there is some \(\eta_{0}^{*}>0\) such that \[\liminf_{t\to\infty}\sum_{j\in\Omega}I_{1,j}(t)\geq\eta_{0}^{*} \tag{4.21}\] for every solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with a positive initial data satisfying \(I_{1}\neq 0\). Finally, we can employ (3.12) to conclude that (4.15) follows from (4.21). **Lemma 4.3**.: _Suppose that \(\min_{l=1,2}\mathcal{R}_{0,l}(N)>1\), \(\mathfrak{R}_{1,j}=\mathfrak{R}_{1,i}\) for each \(i,j\in\Omega\) and \(\mathcal{R}_{0,1}(N)<\mathfrak{R}_{2,\min}(N)\). Then, there exists \(\eta_{1}>0\) such that_ \[\liminf_{t\to\infty}\min_{j\in\Omega}I_{2,j}\geq\eta_{1} \tag{4.22}\] _for every solution \((S(t),I_{1}(t),I_{2}(t))\) of (1.1) with initial data in \(\mathcal{E}\) satisfying \(\|I_{2}(0)\|_{\infty}>0\)._ Proof.: The proof follows a proper modification of that of Lemma 4.8, hence it is omitted. **Lemma 4.4**.: _Suppose that \(\mathcal{R}_{0,1}(N)>1\) and \(\mathfrak{R}_{1,j}\) is constant in \(j\in\Omega\). 
Then the strain-1 EE is linearly stable if \(\mathcal{R}_{0,1}(N)>\mathcal{R}_{0,2}(N)\) and unstable if \(\mathcal{R}_{0,1}(N)<\mathcal{R}_{0,2}(N)\)._ Proof.: Set \(r_{1,\min}=\min_{j\in\Omega}r_{1,j}\). Since \(\mathfrak{R}_{1,j}\) is constant in \(j\in\Omega\), then \(r_{1}=r_{1,\min}\mathbf{1}\) and \(\mathbf{E}_{1}^{*}:=(r_{1},(\frac{N}{k}-r_{1,\min})\mathbf{1},\mathbf{0})\) is the strain-1 EE solution. Linearizing (1.1) at this EE, we obtain the eigenvalue problem \[\begin{cases}\lambda W=d_{S}\mathcal{L}W-(\frac{N}{k}-r_{1,\min})\beta_{1} \circ W+(\gamma_{2}-r_{1,\min}\beta_{2})\tilde{W}\\ \lambda\hat{W}=d_{1}\mathcal{L}\hat{W}+(\frac{N}{k}-r_{1,\min})\beta_{1}\circ W \\ \lambda\tilde{W}=d_{2}\mathcal{L}\tilde{W}+(r_{1,\min}\beta_{2}-\gamma_{2}) \circ\tilde{W}\\ 0=\sum_{j\in\Omega}(W_{j}+\hat{W}_{j}+\tilde{W}_{j})\end{cases} \tag{4.23}\] in \([\mathbb{R}^{k}]^{3}\). Note that the third equation of (4.23) decouples from the first two equations. Let \((\lambda,(W,\hat{W},\tilde{W}))\) be an eigenpair of (4.23). If \(\tilde{W}\neq\mathbf{0}\), then \(\lambda\) is an eigenvalue of the symmetric and irreducible matrix \(d_{2}\mathcal{L}+\mathrm{diag}(r_{1,\min}\beta_{2}-\gamma_{2})\), hence it is a real number. If \(\tilde{W}=\mathbf{0}\), then we have two cases. First, if \(W\neq\mathbf{0}\), then \(\lambda\) is an eigenvalue of the symmetric and irreducible matrix \(d_{S}\mathcal{L}-\mathrm{diag}((\frac{N}{k}-r_{1,\min})\beta_{1})\), hence it is a real number. Next, if \(W=\mathbf{0}\), then \(\hat{W}\neq\mathbf{0}\) and \(\lambda\) is an eigenvalue of the symmetric and irreducible matrix \(d_{1}\tilde{L}\), hence it is a real number. This shows that any eigenvalue of (4.23) is a real number. Now, we proceed in two cases to complete the proof of the lemma. **Case 1.** Here, suppose that \(\mathcal{R}_{0,1}(N)>\mathcal{R}_{0,2}(N)\) and show that \(\mathbf{E}_{1}^{*}\) is linearly stable. Since \(\mathcal{R}_{0,2}(N)=\frac{N}{k}\rho(\mathcal{F}_{2}\mathcal{V}_{2}^{-1})\) (see (2.9)), then \(\mathcal{R}_{0,1}(N)>\frac{N}{k}\rho(\mathcal{F}_{2}\mathcal{V}_{2}^{-1})\), which is equivalent to \(\frac{N}{k\mathcal{R}_{0,1}(N)}\rho(\mathcal{F}_{2}\mathcal{V}_{2}^{-1})<1.\) This in turn implies that \(\lambda_{*}(\frac{N}{k\mathcal{R}_{0,1}(N)}\mathcal{F}_{2}-\mathcal{V}_{2})<0.\) Observing that \(\mathcal{R}_{0,1}(N)=\frac{N}{kr_{1,\min}}\) (since \(\mathfrak{R}_{1,j}\) is constant in \(j\in\Omega\)), then \[\frac{N}{k\mathcal{R}_{0,1}(N)}\mathcal{F}_{2}-\mathcal{V}_{2}=d_{2}\mathcal{ L}+\mathrm{diag}(r_{1,\min}\beta_{2}-\gamma_{2}).\] Therefore \[\lambda_{*}\big{(}d_{2}\mathcal{L}+\mathrm{diag}(r_{1,\min}\beta_{2}-\gamma_{2} )\big{)}=\lambda_{*}(\frac{N}{k\mathcal{R}_{0,1}(N)}\mathcal{F}_{2}-\mathcal{V }_{2})<0. \tag{4.24}\] Now, let \((\lambda,(W,\hat{W},\tilde{W}))\) be an eigenpair of (4.23). If \(\tilde{W}\neq\{\mathbf{0}\}\), then, by (4.24), \(\lambda\leq\lambda_{*}(d_{2}\mathcal{L}+\mathrm{diag}(r_{1,\min}\beta_{2}- \gamma_{2}))<0\). If \(W=\mathbf{0}\) and \(\tilde{W}=0\), then either \(W\neq\mathbf{0}\) or \(W=\mathbf{0}\). If \(W\neq\mathbf{0}\), then \[\lambda\leq\lambda_{*}(d_{S}\mathcal{L}-r_{1,\min}\mathrm{diag}(\mathcal{R}_{0, 1}(N)-1)\beta_{1})\leq r_{1,\min}(\mathcal{R}_{0,1}(N)-1)\beta_{\min}<0.\] Finally, if \(W=\tilde{W}=\mathbf{0}\), then \(\hat{W}\neq\mathbf{0}\) and satisfies \[\lambda\hat{W}=d_{1}\mathcal{L}\hat{W}\quad\text{and}\quad\sum_{j\in\Omega} \hat{W}_{j}=0.\] Then \(\hat{W}\) is not strictly positive and \(\lambda\) is an eigenvalue of \(d_{1}\mathcal{L}\). 
It then follows from the Perron-Frobenius Theorem that \(\lambda<\lambda_{*}(d_{1}\mathcal{L})=0\). In view of the above, we see that any eigenvalue of (4.23) is negative, which implies that \(\mathbf{E}_{1}^{*}\) is linearly stable. **Case 2.** Here, suppose that \(\mathcal{R}_{0,1}(N)<\mathcal{R}_{0,2}(N)\) and show that \(\mathbf{E}_{1}^{*}\) is unstable. By the similar arguments leading to (4.24), we have that \(\lambda:=\lambda_{*}(d_{2}\mathcal{L}+\mathrm{diag}(r_{1,\min}\beta_{2}- \gamma_{2}))>0\). Let \(\tilde{W}\) be a positive eigenvector associated to \(\lambda\). Since \(\lambda>0=\lambda_{*}(d_{S}\mathcal{L})\), then \(\lambda\) is in the resolvent set of \(d_{S}\mathcal{L}\). Thus, there is a unique \(W\in\mathbb{R}^{k}\) solving the first equation of (4.23). Similarly, since \(\lambda>0=\lambda_{*}(d_{1}\mathcal{L})\), there is a unique \(\hat{W}\) solving the second equation of (4.23). Therefore, \((W,\tilde{W},\tilde{W})\) solves the first three equations of (4.23). Moreover, since \(\mathcal{L}\) is symmetric, we get \[\lambda\sum_{j\in\Omega}(W_{j}+\hat{W}_{J}+\tilde{W}_{J})=\sum_{i,j}L_{i,j}(d _{S}(W_{i}-W_{j})+d_{1}(\tilde{W}_{i}-\hat{W}_{j})+d_{2}(\tilde{W}_{i}-\tilde {W}_{j}))=0,\] which yields that \(\sum_{j\in\Omega}(W_{j}+\hat{W}_{J}+\tilde{W}_{J})=0\) since \(\lambda>0\). Therefore, \((\lambda,(W,\hat{W},\tilde{W}))\) solves (4.24) with \(\mathbf{0}\ll_{1}\tilde{W}\). This shows that \(\mathbf{E}_{1}^{*}\) is unstable, since \(\lambda>0\). Proof of Theorem 2.2.: Suppose that \(\min_{l=1,2}\mathcal{R}_{0,1}(N)>1\) and \(\mathfrak{R}_{1,j}\) is constant in \(j\in\Omega\). (i) If \(\mathcal{R}_{0,1}(N)>\mathcal{R}_{0,2}(N)\), it follows from Lemma 4.4 that the strain-1 EE solution, \(\mathbf{E}_{1}^{*}=(r_{1},(\frac{N}{k}-r_{1,\min})\mathbf{1},\mathbf{0})\), is is linearly stable. Next, suppose that \(\mathfrak{R}_{2,\max}<\mathfrak{R}_{1,\min}\). Let \((S(t),I_{1}(t),I_{2}(t))\) be solution of (1.1) with a positive initial data in \(\mathcal{E}\). By Lemma 4.2, since \(\|I_{1}(0)\|_{\infty}>0\), then there is \(t_{1}>0\) such that \[\min_{j\in\Omega}I_{1,j}(t)\geq\frac{\eta_{1}}{2}\quad\forall\ t>t_{1}, \tag{4.25}\] where \(\eta_{1}\) is the positive number given in (4.15). Next, we claim that \[\lim_{t\to\infty}\|S(t)-r_{1,\min}\mathbf{1}\|_{\infty}=0. \tag{4.26}\] Suppose by contradiction that (4.26) is false. Then there is a sequence of positive numbers \(\{t_{n}\}_{n\geq 1}\) converging to infinity such that \[\inf_{n\geq 1}\|S(t_{n})-r_{1,\min}\mathbf{1}\|_{\infty}>0. \tag{4.27}\] Since \(\|S(t)\|_{\infty}+\|I_{1}(t)\|_{\infty}+\|I_{2}(t)\|_{\infty}\leq N\) for all \(t\geq 0\), by the Arzela-Ascoli theorem, there is a subsequence \(\{t_{n_{1}}\}_{n}\) of \(\{t_{n}\}_{n\geq 1}\) and \((S^{\infty},I_{1}^{\infty},I_{2}^{\infty})\in C^{1}(\mathcal{R}:[\mathbb{R}_{ +}^{k}]^{3})\) such that \((S(t+t_{n_{1}}),I_{1}(t+t_{n_{1}}),I_{2}(t+t_{n_{1}}))\to(S^{\infty}(t),I_{1} ^{\infty}(t),I_{2}^{\infty}(t))\) as \(n\to\infty\), for \(t\) on compact subsets of \(\mathbb{R}\). 
Furthermore, \((S^{\infty},I^{\infty})\) satisfies \[\begin{cases}\frac{dS^{\infty}}{dt_{n}^{\infty}}=d_{S}\mathcal{L}S^{\infty}+(r _{1,\min}\mathbf{1}-S^{\infty})\circ\beta_{1}\circ I_{1}^{\infty}+(r_{2}-S^{ \infty})\circ\beta_{2}\circ I_{2}^{\infty}&t\in\mathbb{R},\\ \frac{dI_{1}^{\infty}}{dt}=d_{1}\mathcal{L}I_{1}^{\infty}+(S^{\infty}-r_{1, \min}\mathbf{1})\circ\beta_{1}\circ I_{1}^{\infty}&t\in\mathbb{R},\\ \frac{dI_{2}^{\infty}}{dt}=d_{2}\mathcal{L}I_{2}^{\infty}+(S^{\infty}-r_{2}) \circ\beta_{2}\circ I_{2}^{\infty}&t\in\mathbb{R},\\ N=\sum_{j\in\Omega}(S_{j}^{\infty}+I_{1,j}^{\infty}+I_{2,j}^{\infty})&t\in \mathbb{R}.\end{cases} \tag{4.28}\] Moreover, thanks to (2.15) and (4.25), \[r_{1,\min}\mathbf{1}\leq_{1}S^{\infty}(t)\leq_{1}N\mathbf{1},\quad\mathbf{0} \leq_{1}I_{2}^{\infty}(t)\leq_{1}N\mathbf{1},\quad\text{and}\quad\frac{\eta_{1 }}{2}\mathbf{1}\leq_{1}I_{1}^{\infty}(t)\leq_{1}N\mathbf{1}\quad\forall\ t\in \mathbb{R}. \tag{4.29}\] Now, we claim that \[I_{2}^{\infty}(t)=\mathbf{0},\quad\forall\ t\in\mathbb{R}. \tag{4.30}\] To see this, observe from (4.28) and the fact that \(\mathcal{L}\) is symmetric that \[\frac{d\sum_{j\in\Omega}I_{2,j}^{\infty}}{dt}= \sum_{j\in\Omega}(S_{j}^{\infty}-r_{1,\min})\beta_{2,j}I_{2,j}^{ \infty}+\sum_{j\in\Omega}(r_{1,\min}-r_{2,j})\beta_{2,j}I_{2,j}^{\infty}\] \[\leq \|I_{2}(t)\|_{\infty}\beta_{\max}\sum_{j\in\Omega}(S_{j}^{\infty}-r _{1,\min})+(r_{1,\min}-r_{2,\min})\beta_{\min}\sum_{j\in\Omega}I_{2,j}^{\infty},\] \[\leq \Big{(}\beta_{\max}\sum_{j\in\Omega}(S_{j}^{\infty}-r_{1,\min})+(r _{1,\min}-r_{2,\min})\beta_{\min}\Big{)}\sum_{j\in\Omega}I_{2,j}^{\infty} \quad\forall\ t\in\mathbb{R},\] since \(r_{1,\min}<r_{2,\min}\) and \(S^{\infty}_{\min}\geq r_{1,\min}\) (see (4.29)). Therefore, \[\sum_{j\in\Omega}I^{\infty}_{2,j}(t)\leq e^{\beta_{\max}\int_{\tau}^{t}(\sum_{j\in\Omega}S^{\infty}(\sigma)- r_{1,\min})d\sigma+(t-\tau)(r_{1,\min}-r_{2,\min}))}\sum_{j\in\Omega}I^{\infty}_{2,j}(\tau)\] \[\leq Ne^{\beta_{\max}\int_{\tau}^{t}(\sum_{j\in\Omega}S^{\infty}( \sigma)-r_{1,\min})d\sigma+(t-\tau)(r_{1,\min}-r_{2,\min}))}\quad\forall\ t>\tau. \tag{4.31}\] On the other, since \[\frac{d\sum_{j\in\Omega}I^{\infty}_{1,j}}{dt}=\sum_{j\in\Omega}(S^{\infty}_{j} -r_{1,\min})\beta_{1,j}I^{\infty}_{1,j},\] where we have used the fact that \(\mathcal{L}\) is symmetric, then \[N\geq\sum_{j\in\Omega}I^{\infty}_{1,j}(t+\tau)= \sum_{j\in\Omega}I^{\infty}_{1,j}(t+\tau)+\int_{\tau}^{t}\sum_{j \in\Omega}(S^{\infty}_{j}(s)-r_{1,\min})\beta_{1,j}I^{\infty}_{1,j}(s)ds\] \[\geq \sum_{j\in\Omega}I^{\infty}_{1,j}(t+\tau)+\frac{\eta_{1}\beta_{ \min}}{2}\int_{\tau}^{t}\sum_{j\in\Omega}(S^{\infty}_{j}(s)-r_{1,\min})ds\quad \forall\ t>\tau. \tag{4.32}\] From (4.31) and (4.32), we obtain that \[0\leq\|I^{\infty}_{2}(t)\|_{\infty}\leq\sum_{j\in\Omega}I^{\infty}_{2,j}(t) \leq Ne^{\frac{2N\beta_{\max}}{\eta_{1}\beta_{\min}}}\,e^{-(r_{2,\min}-r_{1, \min})(t-\tau)}\quad t>\tau.\] Letting \(\tau\to-\infty\) in this inequality yields (4.30). 
Therefore, (4.28) becomes \[\begin{cases}\frac{dS^{\infty}}{dt}=d_{S}\mathcal{L}S^{\infty}+(r_{1,\min} \mathbf{1}-S^{\infty})\circ\beta_{1}\circ I^{\infty}_{1}&t\in\mathbb{R},\\ \frac{dI^{\infty}_{1}}{dt}=d_{1}\mathcal{L}I^{\infty}_{1}+(S^{\infty}-r_{1, \min}\mathbf{1})\circ\beta_{1}\circ I^{\infty}_{1}&t\in\mathbb{R},\\ N=\sum_{j\in\Omega}(S^{\infty}_{j}+I^{\infty}_{1,j})&t\in\mathbb{R}.\end{cases} \tag{4.33}\] Setting \(\tilde{S}^{\infty}=S^{\infty}-r_{1,\min}\mathbf{1}\), then by (4.29), \(\mathbf{0}\leq\tilde{S}^{\infty}(t)\), for every \(t\in\mathbb{R}\). Moreover, by (4.33) and the fact that \(\frac{\eta_{1}}{2}\mathbf{1}\leq I^{\infty}_{1}\), \(\tilde{S}^{\infty}\) satisfies \[\frac{d\tilde{S}^{\infty}}{dt}=d_{S}\mathcal{L}\tilde{S}^{\infty}-\frac{\eta _{1}}{2}\beta_{\min}\tilde{S}^{\infty}\quad t\in\mathbb{R},\] which implies \[\mathbf{0}\leq_{1}\tilde{S}^{\infty}(t)\leq_{1}e^{-\frac{\eta_{1}}{2}\beta_{ \min}(t-\tau)}\tilde{S}^{\infty}(\tau)\leq_{1}Ne^{-\frac{\eta_{1}}{2}\beta_{ \min}(t-\tau)}\mathbf{1}\to\mathbf{0}\ \text{as}\ \tau\to-\infty,\] for any \(t\in\mathbb{R}\). Hence \(\tilde{S}^{\infty}=\mathbf{0}\) for every \(t\in\mathbb{R}\), that is \(S^{\infty}(t)=r_{1,\min}\mathbf{1}\) for every \(t\in\mathbb{R}\). As, a result, \(S(t_{n_{1}})\to r_{1,\min}\mathbf{1}\) as \(n\to\infty\), which contradicts with (4.27). Therefore, (4.26) holds. Therefore, for \(0<\varepsilon<\frac{1}{2}(r_{2,\min}-r_{1,\min})\), there is \(t_{\varepsilon}>0\) such that \(S(t)\leq_{1}(r_{1,\min}+\varepsilon)\mathbf{1}\) for every \(t\geq t_{\varepsilon}\). Hence, \[\frac{dI_{2}}{dt}=d_{2}\mathcal{L}I_{2}+(S(t)-r_{2})\circ\beta_{2}\circ I_{2} \leq_{1}d_{2}\mathcal{L}I_{2}-(r_{2,\min}-r_{1,\min}-\varepsilon)\beta_{\min} I_{2}\quad\forall\ t\geq t_{\varepsilon},\] which implies that \(I_{2}(t)\to\mathbf{0}\) as \(t\to\infty\), since \(r_{2,\min}-r_{1,\min}-\varepsilon>0\). 
Finally, by the similar computations yielding (3.17), we have that \[\frac{d\widehat{I}_{1}}{dt}=d_{1}\mathcal{L}\widehat{I}_{1}+(\widehat{\beta_{1} \circ S\circ I_{1}}-\widehat{\gamma_{1}\circ I_{1}})=d_{1}\mathcal{L}\widehat{ I}_{1}+\widehat{Q}\] where \[Q=\beta_{1}\circ(S-r_{1,\min}\mathbf{1})\circ I_{1}+\beta_{1}\circ r_{1}\circ I _{1}-\gamma_{1}\circ I_{1}=\beta_{1}\circ(S-r_{1,\min}\mathbf{1})\circ I_{1}.\] It then follows from the variation of constant formula that \[\widehat{I}_{1}(t)=e^{d_{1}(t-\tau)\mathcal{L}}\widehat{I}_{1}(\tau)+\int_{0}^{t- \tau}e^{d_{1}(t-\tau-\sigma)\mathcal{L}}\widehat{Q}(\tau+\sigma)d\sigma\quad \forall\ t>\tau.\] We can now employ (3.3) and (3.5), to derive that \[\|\widehat{I}_{1}(t)\|\leq e^{d_{1}(t-\tau)\tilde{d}_{*}}\|\widehat{I}_{1}(\tau)\|+\int_{0}e^{ \tilde{d}_{*}d_{1}(t-\tau-\sigma)}\|\widehat{Q}(\tau+\sigma)\|d\sigma\] \[\leq 2e^{d_{1}(t-\tau)\tilde{d}_{*}}\|I_{1}(\tau)\|+2\int_{0}e^{ \tilde{d}_{*}d_{1}(t-\tau-\sigma)}\|Q(\tau+\sigma)\|d\sigma\] \[= 2e^{d_{1}(t-\tau)\tilde{d}_{*}}\|I_{1}(\tau)\|+2\int_{0}e^{ \tilde{d}_{*}d_{1}(t-\tau-\sigma)}\|\beta_{1}\circ(S-r_{1,\min}\mathbf{1}) \circ I_{1}(\tau+\sigma)\|d\sigma\] \[\leq 2e^{d_{1}(t-\tau)\tilde{d}_{*}}\|I_{1}(\tau)\|+2\int_{0}e^{ \tilde{d}_{*}d_{1}(t-\tau-\sigma)}\|\beta_{1}\|_{\infty}\|I_{1}(\tau+\sigma) \|_{\infty}\|(S-r_{1,\min}\mathbf{1})(\tau+\sigma)\|d\sigma\] \[\leq 2Ne^{\tilde{d}_{*}d_{1}(t-\tau)}+2\frac{N\|\beta_{1}\|_{\infty}} {d_{1}|\tilde{d}_{*}|}\sup_{\eta\geq\tau}\|S(\eta)-r_{1,\min}\mathbf{1}\|.\] Hence, letting \(t\to\infty\) and then \(\tau\to\infty\) in the right hand side of this inequality and recalling that (4.26), we obtain that \[\lim_{t\to\infty}\left\|I_{1}(t)-\frac{\sum_{j\in\Omega}I_{1,j}}{k}\mathbf{1} \right\|=\lim_{t\to\infty}\|\widehat{I}_{1}(t)\|=0.\] Therefore, \[\lim_{t\to\infty}\left\|I_{1}(t)-\Big{(}\frac{N}{k}-r_{1,\min}\Big{)}\mathbf{ 1}\right\|=0\] since \(\sum_{j\in\Omega}I_{1,j}=N-\sum_{j\in\Omega}S_{j}-\sum_{j\in\Omega}I_{2,j}\to N -kr_{1,\min}\) as \(t\to\infty\). This completes the proof of (i). (ii) If \(\mathcal{R}_{0,1}(N)<\mathcal{R}_{0,2}(N)\), we know from Lemma 4.4 that \(\mathbf{E}_{1}^{*}\) is unstable. Now, suppose that \(\mathfrak{R}_{1,\max}<\mathfrak{R}_{2,\min}\) (that is \(r_{1,\min}>r_{2,\max}\)) and we show that strain-1 eventually goes extinct. Let \((S(t),I_{1}(t),I_{2}(t))\) be solution of (1.1) with a positive initial in \(\mathcal{E}\). We claim that \[\lim_{t\to\infty}\|I_{1}(t)\|_{\infty}=0. \tag{4.34}\] Suppose by contradiction that there is a sequence of positive numbers \(\{t_{n}\}_{n\geq 1}\) converging to infinity such that \[\eta^{*}:=\inf_{n\geq 1}\|I_{1}(t_{n})\|_{\infty}>0.\] Hence, by (3.12), there is \(\tilde{\eta}_{*}>0\) such that \[\inf_{n\geq 1}\min_{j\in\Omega}I_{1,j}(t_{n})\geq\tilde{\eta}_{*}. \tag{4.35}\] By the Arzela-Ascoli theorem, if possible after passing to a subsequence, there is \((S^{\infty},I_{1}^{\infty},I_{2}^{\infty})\in C^{1}(\mathcal{R}:[\mathbb{R}_{ +}^{k}]^{3})\) such that \((S(t+t_{n}),I_{1}(t+t),I_{2}(t+t_{n}))\to(S^{\infty}(t),I_{1}^{\infty}(t),I_{2 }^{\infty}(t))\) as \(n\to\infty\), for \(t\) on compact subsets of \(\mathbb{R}\). Furthermore, \((S^{\infty},I_{1}^{\infty},I_{2}^{\infty})\) solves (4.28). By Lemma 4.3, there is \(\eta_{1}>0\) such that \[I_{2,j}^{\infty}(t)\geq\eta_{1}\quad\forall\ t\in\mathbb{R}, \tag{4.36}\] and by (2.15), \[S_{j}^{\infty}(t)\leq S_{\max}=r_{1,\min}\quad\forall\ t\in\mathbb{R},\ j\in\Omega. 
\tag{4.37}\] Moreover, by (4.35), we have that \[I_{1,j}^{\infty}(0)\geq\tilde{\eta}_{*}\quad\forall\ j\in\Omega. \tag{4.38}\] From the second equation of (4.28) and the fact that \({\cal L}\) is symmetric, we get \[\frac{d\sum_{j\in\Omega}I_{1,j}^{\infty}}{dt}=\sum_{j\in\Omega}(S_{j}^{\infty}-r_ {1,\min})\beta_{1,j}I_{1,j}^{\infty}\quad\forall\ t\in\mathbb{R}.\] Integrating this equation yields \[\sum_{j\in\Omega}I_{1,j}^{\infty}(t)=\sum_{j\in\Omega}I_{1,j}^{\infty}(t-\tau) +\int_{t-\tau}^{t}\sum_{j\in\Omega}(S_{j}^{\infty}(\sigma)-r_{1,\min})\beta_{1,j}I_{1,j}^{\infty}(\sigma)d\sigma\quad\forall\ t\in\mathbb{R},\ \tau>0. \tag{4.39}\] Thus, in view of (4.37), we obtain \[\int_{-\infty}^{0}\sum_{j\in\Omega}(r_{1,\min}-S_{j}^{\infty}(\sigma))\beta_{ 1,j}I_{1,j}^{\infty}(\sigma)d\sigma\leq N \tag{4.40}\] and \[\sum_{j\in\Omega}I_{1,j}^{\infty}(0)\leq\sum_{j\in\Omega}I_{1,j}^{\infty}(t) \quad\forall\ t\leq 0. \tag{4.41}\] From (4.38) and (4.40), we get \[k\tilde{\eta}_{*}\leq\sum_{j\in\Omega}I_{1,j}^{\infty}(t)\quad\forall\ t\leq 0.\] As a result, it follows from (3.12) that \[k\tilde{\eta}_{*}\leq\sum_{j\in\Omega}I_{1,j}^{\infty}(t)\leq k\|I_{1}^{ \infty}(t)\|_{\infty}\leq kc_{*}I_{1,\min}^{\infty}(t)\leq c_{*}kI_{1,j}^{ \infty}(t)\quad\forall\ t\leq 0.\] We then conclude from (4.40) and the last inequality that \[\int_{-\infty}^{0}\sum_{j\in\Omega}(r_{1,\min}-S_{j}^{\infty}(\sigma))d\sigma \leq\frac{c_{*}}{\tilde{\eta}_{*}\beta_{\min}}\int_{-\infty}^{0}\sum_{j\in \Omega}(r_{1,\min}-S_{j}^{\infty}(\sigma))\beta_{1,j}I_{1,j}^{\infty}(\sigma) d\sigma\leq\frac{N}{c_{*}\tilde{\eta}_{*}\beta_{\min}}. \tag{4.42}\] Next, from (4.28), we have that \[\frac{d\sum_{j\in\Omega}I_{2,j}^{\infty}}{dt}= \sum_{j\in\Omega}(S_{j}^{\infty}-r_{1,\min})\beta_{2,j}I_{2,j}^{ \infty}+\sum_{j\in\Omega}(r_{1,\min}-r_{2,j})\beta_{2,j}I_{2,j}^{\infty}\] \[\geq \beta_{\max}\Big{(}\sum_{j\in\Omega}(S_{j}^{\infty}-r_{1,\min}) \Big{)}\sum_{j\in\Omega}I_{2,j}^{\infty}+(r_{1,\min}-r_{2,\max})\sum_{j\in \Omega}\beta_{2,j}I_{2,j}^{\infty} \tag{4.43}\] where we have used (4.37). An integration of the last inequality gives \[\sum_{j\in\Omega}I_{2,j}^{\infty}(t)\geq e^{(r_{1,\min}-r_{2,\max})\tau-\beta_{\max}\int_{t-\tau}^{t} \sum_{j\in\Omega}(r_{1,\min}-S_{j}^{\infty}(\sigma))d\sigma}\sum_{j\in\Omega }I_{2,j}^{\infty}(t-\tau)\quad\ t\in\mathbb{R},\ \tau>0.\] This along with (4.36) and (4.42) yield \[N\geq\sum_{j\in\Omega}I_{2,j}^{\infty}(0)\geq e^{(r_{1,\min}-r_{2,\max})\tau-\beta_{\max}\int_{-\tau}^{0}\sum_{j\in \Omega}(r_{1,\min}-S_{j}^{\infty}(\sigma))d\sigma}\sum_{j\in\Omega}I_{2,j}^{ \infty}(-\tau)\] \[\geq e^{(r_{1,\min}-r_{2,\max})\tau-\beta_{\max}\int_{-\infty}^{0} \sum_{j\in\Omega}(r_{1,\min}-S_{j}^{\infty}(\sigma))d\sigma}\sum_{j\in\Omega }I_{2,j}^{\infty}(-\tau)\] \[\geq \eta_{1}e^{(r_{1,\min}-r_{2,\max})\tau-\frac{\beta_{\max}N}{c_{*} \tilde{\eta}_{*}\beta_{\min}}}\quad\forall\ \tau>0.\] This is clearly impossible since \(r_{1,\min}>r_{2,\max}\). Therefore, (4.34) must hold. This together with (2.15) implies that \(\liminf_{t\to\infty}\min_{j\in\Omega}I_{2,j}(t)\geq m_{*}\) for some positive number \(m_{*}\) in dependent of initial data. This completes the proof of the theorem. Next, we give a proof of Theorem 2.3. Proof of Theorem 2.3.: We suppose that \(\mathfrak{R}_{l,j}\) is constant in \(j\in\Omega\) for each \(l=1,2\). Hence \(r_{l,\min}=r_{l,\max}\) for each \(l=1,2\). Fix \(l\neq p=1,2\). We distinguish three cases. **Case 1.**\(\mathcal{R}_{0,l}(N)>\mathcal{R}_{0,p}(N)>1\). 
In this case, it follows from Theorem 2.2 that the strain-l EE is globally stable with respect to positive initial data. **Case 2.**\(\mathcal{R}_{0,l}(N)>1\geq\mathcal{R}_{0,p}(N)\). Without loss of generality, we suppose that \(l=1\) and \(p=2\). Let \((S(t),I_{1}(t),I_{2}(t))\) be solution of (1.1) with a positive initial data in \(\mathcal{E}\) and define \[\mathcal{L}(t)=\frac{1}{2}\sum_{j\in\Omega}S_{j}^{2}(t)+r_{1,\min}\sum_{j\in \Omega}I_{1,j}(t)+r_{2,\min}\sum_{j\in\Omega}I_{2,j}(t)\quad\forall\ t\geq 0. \tag{4.44}\] Clearly, \(\mathcal{L}>0\) for every \(t>0\) and uniformly bounded above by \(\frac{N^{2}}{2}+(r_{1,\max}+r_{2,\max})N\). Moreover, \[\frac{d\mathcal{L}}{dt}= \sum_{j\in\Omega}\Big{(}d_{S}\sum_{i\in\Omega}L_{i,j}(S_{i}-S_{j} )+\sum_{l=1}^{2}(r_{l,\min}-S_{j})\beta_{l,j}I_{l,j}\Big{)}S_{j}+\sum_{l=1}^{ 2}\sum_{j\in\Omega}r_{l,\min}(S_{j}-r_{l,\min})\beta_{l,j}I_{l,j}\] \[= d_{S}\sum_{j\in\Omega}\sum_{i\in\Omega}(S_{i}-S_{j})S_{j}+\sum_{ l=1}^{2}\sum_{j\in\Omega}S_{j}(r_{l,\min}-S_{j})\beta_{l,j}I_{l,j}+\sum_{l=1}^{ 2}\sum_{j\in\Omega}r_{l,\min}(S_{j}-r_{l,\min})\beta_{l,j}I_{l,j}\] \[= -\frac{d_{S}}{2}\sum_{i,j\in\Omega}(S_{i}-S_{j})^{2}+\sum_{l=1}^{ 2}\sum_{j\in\Omega}(r_{l,\min}-S_{j})(S_{j}-r_{l,\min})\beta_{l,j}I_{l,j}\] \[= -\frac{d_{S}}{2}\sum_{i,j\in\Omega}(S_{i}-S_{j})^{2}-\sum_{l=1}^{ 2}\sum_{j\in\Omega}(S_{j}-r_{l,\min})^{2}\beta_{l,j}I_{l,j}\] \[\leq 0.\] Therefore, \(\mathcal{L}\) is a Lyapunov function. It then follows from the LaSalle's invariant principle ([19, Theorem 4.3.4]), the maximal invariant set, \(\mathcal{I}\), of \(\tilde{\mathcal{E}}:=\{(S,I_{1},I_{2})\in\mathcal{E}\ :\ \frac{d\mathcal{L}}{dt}=0\}\) is the global attractor for solutions of (1.1). Now, observe that \((S,I_{1},I_{2})\in\tilde{\mathcal{E}}\) if and only if \[S_{j}=S_{1},\quad\text{and}\quad(S_{1}-r_{l,\min})^{2}I_{l,j}=0\quad\forall\ l=1,2,\ j\in\Omega. \tag{4.45}\] As a result since \(\mathcal{I}\subset\tilde{\mathcal{E}}\) and is invariant for the semiflow generated by solutions of (1.1) on \(\mathcal{E}\), we conclude from (4.45) that it is a subset of the spatially homogeneous EE solutions. Thus which is equivalent to \[\mathcal{I}\subset\Big{\{}\big{(}\frac{N}{k}\mathbf{1},\mathbf{0},\mathbf{0} \big{)},\ (r_{1,\min}\mathbf{1},\big{(}\frac{N}{k}-r_{1,\min}\big{)}\mathbf{1},\mathbf{0} \big{)}\Big{\}}:=\{\mathbf{E}^{0},\mathbf{E}_{1}^{*}\}.\] Note here that (1.1) doesn't have a strain-2 constant EE solution since \(\frac{N}{k}\leq r_{2,\min}\). It is clear that \(\{\mathbf{E}^{0},\mathbf{E}_{1}^{*}\}\subset\tilde{\mathcal{E}}\) and is invariant for the semiflow generated by solution of (1.1). Therefore, \(\mathcal{I}=\{\mathbf{E}^{0},\mathbf{E}_{1}^{*}\}\). However, since \(\mathcal{R}_{0}(N)=\mathcal{R}_{0,1}(N)>1\), the DFE solution \(\mathbf{E}^{0}\) is a repeller for solutions of (1.1) with positive initial data (see Lemma 3.4). Therefore, \(\mathbf{E}_{1}^{*}\) is the global attractor for solution of (1.1) **Case 3.**\(1\geq\max\{\mathcal{R}_{0,1}(N),\mathcal{R}_{0,2}(N)\}\). In this, consider again the Lyapunov function introduced in (4.44). This time we have that \(\mathcal{I}=\{\mathbf{E}^{0}\}\). Hence the DFE attracts all solutions of (1.1) with initial data in \(\mathcal{E}\). Observe that (i) of the theorem follows from case 3, while statement (ii) follows from case 1 and case 2. ### Proofs of Theorems 2.4, 2.5 and 2.6 Proof of Theorem 2.4.: Suppose that \(d=d_{1}=d_{2}=d_{S}\). 
Let \((S(t),I_{1}(t),I_{2}(t))\) be solution of (1.1) with a positive initial data in \(\mathcal{E}\). Then \[\begin{cases}\frac{d(S+I_{1}+I_{2})}{dt}=d\mathcal{L}(S+I_{1}+I_{2})&t>0,\\ N=\sum_{j\in\Omega}(S_{j}+I_{1,j}+I_{2,j})&t\geq 0.\end{cases}\] It then follows from (3.3) and (3.4) that \[\Big{\|}S(t)+I_{1}(t)+I_{2}(t)-\frac{N}{k}\mathbf{1}\Big{\|}\leq e^{t\tilde{d }_{*}}\|S(0)+I_{1}(0)+I_{2}(0)\|\leq N\sqrt{k}e^{t\tilde{d}_{*}}\quad\forall\ t \geq 0.\] Therefore, \[\Big{(}\frac{N}{k}-N\sqrt{k}e^{t\tilde{d}_{*}}\Big{)}\mathbf{1}-(I_{1}(t)+I_{ 2}(t)\Big{)}\leq_{1}S(t)\leq_{1}\Big{(}\frac{N}{k}+N\sqrt{k}e^{t\tilde{d}_{*}} \Big{)}\mathbf{1}-(I_{1}(t)+I_{2}(t))\quad\forall\ t\geq 0. \tag{4.46}\] As a result, \[\frac{dI_{l}}{dt}\leq_{1}d_{l}\mathcal{L}I_{l}+\Big{(}\beta_{l}\circ\Big{(} \Big{(}\frac{N}{k}+N\sqrt{k}e^{t\tilde{d}_{*}}\Big{)}\mathbf{1}-I_{1}-I_{2} \Big{)}-\gamma_{l})\circ I_{l}. \tag{4.47}\] Now, suppose that \(\mathcal{R}_{0,l}(N)\leq 1\) for some \(l=1,2\). This implies that \(\lambda_{l}:=\lambda_{*}(d_{l}\mathcal{L}+\mathrm{diag}(\frac{N}{k}\beta_{l}- \gamma_{l}))\leq 0\). Let \(E_{l}\) be the positive eigenvector associated with \(\lambda_{l}\) satisfying \(\|E_{l}\|_{\infty}=1\). Observe that \[\lambda_{l}E_{l}=d_{l}\mathcal{L}E_{l}+(\frac{N}{k}\beta_{l}-\gamma_{l})\circ E _{l}. \tag{4.48}\] From (4.47) and (4.48), we obtain that \[\frac{d}{dt}E_{l}\circ I_{l}\leq d_{l}(E_{l}\circ\mathcal{L}I_{l}-I_{l} \mathcal{L}E_{l})-I_{l}\circ I_{l}\circ E_{l}+N\sqrt{k}e^{t\tilde{d}_{*}}\beta _{l}\circ E_{l}\circ I_{l}\quad\forall\ t>0. \tag{4.49}\] Since \(\mathcal{L}\) is symmetric, it then follows that \[\frac{d}{dt}\sum_{j\in\Omega}E_{l,j}I_{l,j}\leq -\sum_{j\in\Omega}E_{l,j}I_{l,j}^{2}+N\sqrt{k}e^{t\tilde{d}_{*}} \beta_{\max}\sum_{j\in\Omega}E_{l,j}I_{l,j}\] \[\leq -\sum_{j\in\Omega}(E_{l,j}I_{l,j})^{2}+N\beta_{\max}\sqrt{k}e^{t \tilde{d}_{*}}\sum_{j\in\Omega}E_{l,j}I_{l,j}\quad(\text{since }\|E_{l}\|_{\infty}=1)\] \[\leq -\frac{1}{k}\Big{(}\sum_{j\in\Omega}E_{l,j}I_{l,j}\Big{)}^{2}++N \beta_{\max}\sqrt{k}e^{t\tilde{d}_{*}}\sum_{j\in\Omega}E_{l,j}I_{l,j}\quad( \text{by the Cauchy-Schwarz inequality})\] \[= \Big{(}N\beta_{\max}\sqrt{k}e^{t\tilde{d}_{*}}-\frac{1}{k}\sum_{ j\in\Omega}E_{l,j}I_{l,j}\Big{)}\sum_{j\in\Omega}E_{l,j}I_{l,j}. \tag{4.50}\] Therefore, since \(\tilde{d}_{*}<0\), and hence \(e^{t\tilde{d}_{*}}\to 0\) as \(t\to\infty\), we then conclude that \(\sum_{j\in\Omega}E_{l,j}I_{l,j}(t)\to 0\) as \(t\to\infty\). Consequently, \(\|I_{l}(t)\|_{\infty}\to 0\) as \(t\to\infty\) since \(E_{l}\) is strictly positive. (i) Now, if \(\mathcal{R}_{0}(N)\leq 1\), it follows from the above that \(\|I_{l}(t)\|\to 0\) as \(t\to\infty\), for every \(l=1,2\). This together with (3.19), yields that \(\|S(t)-\frac{N}{k}\mathbf{1}\|\to 0\) as \(t\to\infty\). This shows that the DFE is globally stable if \(\mathcal{R}_{0}(N)\leq 1\). So, (i) is proved. (ii) Suppose without lost of generality that \(\mathcal{R}_{0,1}(N)>1\geq\mathcal{R}_{0,2}(N)\). We know from the above that \(\|I_{2}(t)\|\to 0\) as \(t\to\infty\), which implies that \(I_{2}(t)\pm N\sqrt{k}e^{t\tilde{d}_{*}}\to 0\) as \(t\to\infty\). Recalling that \(I_{1}(t)\) satisfies (4.46) and (4.47). 
and \[d_{1}\mathcal{L}I_{1}+\Big{(}\beta_{1}\circ\Big{(}\Big{(}\frac{N} {k}-N\sqrt{k}e^{t\tilde{d}_{*}}\Big{)}\mathbf{1}-I_{1}-I_{2}\Big{)}-\gamma_{1} )\circ I_{1}\] \[\leq_{1}\frac{dI_{l}}{dt}\leq_{1}d_{1}\mathcal{L}I_{1}+\Big{(} \beta_{1}\circ\Big{(}\Big{(}\frac{N}{k}+N\sqrt{k}e^{t\tilde{d}_{*}}\Big{)} \mathbf{1}-I_{1}\Big{)}-\gamma_{1})\circ I_{1}\quad t>0,\] we can employ a perturbation argument to conclude that \(I_{1}(t)\to I_{1}^{*}\) as \(t\to\infty\), where \(I^{*}\) is the unique positive solution of the multiple-patch logistic system \[0=d_{1}\mathcal{L}I_{1}^{*}+\Big{(}\beta_{1}\circ\Big{(}\frac{N}{k}\mathbf{1}-I _{1}^{*}\Big{)}-\gamma_{1}\Big{)}I_{1}^{*}.\] The existence of \(I_{1}^{*}\) follows from standard results on the logistic equations and the fact that \(\lambda_{*}(d_{1}\mathcal{L}+\mathrm{diag}(\frac{N}{k}\beta_{1}-\gamma_{1}))>0\). Note that the last inequality holds since \(\mathcal{R}_{0,1}(N)=\frac{N}{k}\rho(\mathcal{F}_{1}\mathcal{V}^{-1})>1\). We then conclude from (4.46) that \((S(t),I_{1}(t),I_{2}(t))\to\mathbf{E}_{1}^{*}:=(\frac{N}{k}\mathbf{1}-I_{1}^{*},I_{1}^{*},0)\) as \(t\to\infty\). This completes the proof of (ii). Proof of Theorem 2.5.: Suppose that \(k=2\), \(\min\{\mathcal{R}_{0,1}(N),\mathcal{R}_{0,2}(N)\}>1\). Let \(\tilde{\mathcal{R}}_{1}(N)\) and \(\tilde{\mathcal{R}}_{2}(N)\) be defined by (2.17). (i) Suppose that \(\tilde{\mathcal{R}}_{1}(N)>1\). Let \((S(0),I_{1}(0),I_{2}(0))\in\mathcal{E}\) such that \(\|I_{1}(0)\|_{\infty}>0\). Take \(\mathbf{E}^{*}:=(S^{*},\mathbf{0},I_{2}^{*})\in\mathcal{E}_{2}^{*}\). Let \(\mathbf{P}^{*}\) be an eigenvector, with positive entries, associated with \(\tilde{\mathcal{R}}_{1}(\mathbf{E}_{2}^{*})\), that is \(\mathbf{E}_{2}^{*}\) satisfies \[\mathrm{diag}((\beta_{1}\circ S_{2}^{*})\mathcal{V}_{1}^{-1})\mathbf{P}^{*}= \tilde{\mathcal{R}}_{1}(\mathbf{E}_{2}^{*})\mathbf{P}^{*}\] which is equivalent to \[0=d_{1}\mathcal{L}\mathbf{P}^{*}+\Big{(}\frac{1}{\tilde{\mathcal{R}}_{1}( \mathbf{E}_{2}^{*})}\beta_{1}\circ S_{2}^{*}-\gamma_{1}\Big{)}\circ\mathbf{P} ^{*}\] Hence, since \(\mathcal{L}\) is symmetric, \[\frac{d}{dt}\sum_{j\in\Omega}P_{j}^{*}I_{1,j}(t)= \sum_{j\in\Omega}\beta_{1,j}(S_{j}(t)-\frac{1}{\tilde{\mathcal{R }}_{1}(\mathbf{E}_{2}^{*})}S_{2,j}^{*})P_{j}^{*}I_{1,j}(t)\] \[= \sum_{j\in\Omega}\beta_{1,j}(S_{j}(t)-S_{2,j}^{*})P_{j}^{*}I_{1,j }(t)+\frac{(\tilde{\mathcal{R}}_{1}(\mathbf{E}_{2}^{*})-1)}{\tilde{\mathcal{R }}_{1}(\mathbf{E}_{2}^{*})}\sum_{j\in\Omega}\beta_{1,j}S_{2,j}^{*}P_{j}^{*}I_{ 1,j}(t)\] \[\geq \beta_{\max}\Big{(}\frac{\beta_{\min}}{\beta_{\max}}\frac{( \tilde{\mathcal{R}}_{1}(N)-1)}{\tilde{\mathcal{R}}_{1}(N)}-\mathrm{dist}((S(t),I_{1}(t),I_{2}(t)),\mathcal{E}_{2}^{*})\Big{)}\sum_{j\in\Omega}P_{j}^{*}I_{1, j}(t)\quad t>0.\] So, an integration of this yields that \[\sum_{j\in\Omega}P_{j}^{*}I_{1,j}(t)\geq e^{\int_{0}^{t}\beta_{\max}\Big{(} \frac{\beta_{\min}}{\beta_{\max}}\frac{(\tilde{\mathcal{R}}_{1}(N)-1)}{ \tilde{\mathcal{R}}_{1}(N)}-\mathrm{dist}((S(\tau),I_{1}(\tau),I_{2}(\tau)), \mathcal{E}_{2}^{*})\Big{)}d\tau}\sum_{j\in\Omega}P_{j}^{*}I_{1,j}(0)\quad t>0,\] from which it follows that \[\frac{\ln\Big{(}\sum_{j\in\Omega}P_{j}^{*}I_{1,j}(t)\Big{)}}{t\beta_{\max}} \geq\frac{\beta_{\min}(\tilde{\mathcal{R}}_{1}(N)-1)}{\beta_{\max}\tilde{ \mathcal{R}}_{1}(N)}-\frac{1}{t}\int_{0}^{t}\mathrm{dist}((S(\tau),I_{1}(\tau),I_{2}(\tau)),\mathcal{E}_{2}^{*})d\tau+\frac{\ln\Big{(}\sum_{j\in\Omega}P_{j }^{*}I_{1,j}(0)\Big{)}}{\beta_{\max}t}.\] Since, 
\(\frac{\ln(kN\|\mathbf{P}^{*}\|)}{\beta_{\max}t}\geq\frac{\ln\Big{(}\sum_{j\in\Omega}P_{j}^{*}I_{1,j}(t)\Big{)}}{t\beta_{\max}}\) for all \(t>0\), letting \(t\to\infty\) in the last inequality implies that (2.18) holds. Next, suppose in addition that \(\mathcal{E}_{2}^{*}\cup\{\mathbf{E}^{0}\}\) is the global attractor for classical solutions of (1.1) with initial data \((S(0),I_{1}(0),I_{2}(0))\in\mathcal{E}\) satisfying \(\|I_{1}(0)\|=0\). We appeal to persistence theory [30] to prove (2.19). To this end, define \[\xi(S,I_{1},I_{2})=\|I_{1}\|_{\infty},\qquad(S,I_{1},I_{2})\in\mathcal{E}. \tag{4.51}\] Clearly, \(\xi\) is continuous. Moreover, by the uniqueness of solutions of (1.1), we see that for any given initial data \((S(0),I_{1}(0),I_{2}(0))\in\mathcal{E}\), \(\xi((S(t_{0}),I_{1}(t_{0}),I_{2}(t_{0})))=0\) for some \(t_{0}\geq 0\) if and only if \(\xi((S(t),I_{1}(t),I_{2}(t)))=0\) for all \(t\geq 0.\) Since \(\min\{\mathcal{R}_{0,1},\mathcal{R}_{0,2}\}>1\), by Lemma 3.4, we know that there is \(\sigma_{0}>0\) such that \[\limsup_{t\to\infty}\|(S(t),I_{1}(t),I_{2}(t))-\mathbf{E}^{0}\|\geq\sigma_{0}\quad\text{whenever}\quad\|I_{1}(0)\|+\|I_{2}(0)\|>0. \tag{4.52}\] Note also from (2.18) that there is \(\tilde{\sigma}_{0}>0\) such that \[\limsup_{t\to\infty}\mbox{dist}((S(t),I_{1}(t),I_{2}(t)),\mathcal{E}_{2}^{*})\geq\tilde{\sigma}_{0}\quad\mbox{whenever}\quad\xi((S(0),I_{1}(0),I_{2}(0)))>0.\] Therefore, \[\limsup_{t\to\infty}\mbox{dist}((S(t),I_{1}(t),I_{2}(t)),\mathcal{E}_{2}^{*}\cup\{\mathbf{E}^{0}\})\geq\min\{\sigma_{0},\tilde{\sigma}_{0}\}\quad\mbox{whenever}\quad\xi((S(0),I_{1}(0),I_{2}(0)))>0. \tag{4.53}\] Now, we claim that there is \(\sigma_{1}>0\) such that \[\limsup_{t\to\infty}\xi(S(t),I_{1}(t),I_{2}(t))\geq\sigma_{1}\quad\mbox{whenever}\;(S(0),I_{1}(0),I_{2}(0))\in\mathcal{E},\;\xi(S(0),I_{1}(0),I_{2}(0))>0. \tag{4.54}\] We proceed by contradiction. So, suppose that there is a sequence \(\{(S^{n}(0),I_{1}^{n}(0),I_{2}^{n}(0))\}_{n\geq 1}\) of initial data in \(\mathcal{E}\) satisfying \(\xi(S^{n}(0),I_{1}^{n}(0),I_{2}^{n}(0))>0\) and \[\sup_{t\geq 0}\|I_{1}^{n}(t)\|_{\infty}=\sup_{t\geq 0}\xi(S^{n}(t),I_{1}^{n}(t),I_{2}^{n}(t))\leq\frac{1}{n}\quad\forall\ n\geq 1. \tag{4.55}\] By (2.14), there is \(\sigma_{2}>0\) such that, for each \(n\geq 1\), there is \(t_{n}\gg 1\) such that \[\min_{j\in\Omega}\sum_{l=1}^{2}I_{l,j}^{n}(t_{n}+t)\geq\sigma_{2}\quad\forall\ t\geq 0. \tag{4.56}\] It then follows from (4.55) that there is \(n_{0}>1\) such that \[\min_{j\in\Omega}I_{2,j}^{n}(t_{n}+t)\geq\frac{\sigma_{2}}{2},\quad\forall\ t\geq 0,n\geq n_{0}. \tag{4.57}\] However, thanks to (4.53), for each \(n\geq 1\), there is \(\tilde{t}_{n}>t_{n}\), \(\tilde{t}_{n}-t_{n}\to\infty\) as \(n\to\infty\), such that \[\mbox{dist}((S^{n}(\tilde{t}_{n}),I_{1}^{n}(\tilde{t}_{n}),I_{2}^{n}(\tilde{t}_{n})),\mathcal{E}_{2}^{*}\cup\{\mathbf{E}^{0}\})\geq\frac{1}{2}\min\{\tilde{\sigma}_{0},\sigma_{0}\}. \tag{4.58}\] Finally, consider the sequence of solutions \((S^{n}(t+\tilde{t}_{n}),I_{1}^{n}(t+\tilde{t}_{n}),I_{2}^{n}(t+\tilde{t}_{n}))\) of (1.1). By the Arzela-Ascoli theorem, possibly after passing to a subsequence, there is a nonnegative solution \((S^{*}(t),I_{1}^{*}(t),I_{2}^{*}(t))\) of (1.1), defined for all \(t\in\mathbb{R}\), such that \((S^{n}(t+\tilde{t}_{n}),I_{1}^{n}(t+\tilde{t}_{n}),I_{2}^{n}(t+\tilde{t}_{n}))\to(S^{*}(t),I_{1}^{*}(t),I_{2}^{*}(t))\) as \(n\to\infty\), locally uniformly on \(\mathbb{R}\). Clearly, by (4.55), \(I_{1}^{*}(t)=0\) for all \(t\in\mathbb{R}\). 
Moreover, we claim that \[(S^{*}(t),\mathbf{0},I_{2}^{*}(t))\in\mathcal{E}_{2}^{*}\quad\forall\ t\in\mathbb{R}. \tag{4.59}\] To see this, note from (4.57) and the fact that \(\tilde{t}_{n}-t_{n}\to\infty\) as \(n\to\infty\) that \(I_{2}^{*}(t)\geq\frac{\sigma_{2}}{2}\) for all \(t\in\mathbb{R}\). This also implies that \(\|(S^{*}(t),0,I_{2}^{*}(t))-\mathbf{E}^{0}\|\geq\frac{\sigma_{2}}{2}\) for all \(t\in\mathbb{R}\). Therefore, since \(\mathcal{E}_{2}^{*}\cup\{\mathbf{E}^{0}\}\) is the global attractor for solutions of (1.1), when restricted to initial data \(I_{1}(0)=0\), and (4.52) holds, it follows from [30, Theorem 5.7] that (4.59) holds. As a result, we obtain that \(\mbox{dist}((S^{*}(0),0,I_{2}^{*}(0)),\mathcal{E}_{2}^{*}\cup\{\mathbf{E}^{0}\})=0\), which contradicts (4.58). Therefore, claim (4.54) holds. Invoking persistence theory [30, Theorem 5.2], we deduce that there is \(\sigma_{1,*}>0\) such that \[\liminf_{t\to\infty}\xi(S(t),I_{1}(t),I_{2}(t))\geq\sigma_{1,*}\quad\mbox{whenever}\;(S(0),I_{1}(0),I_{2}(0))\in\mathcal{E},\;\xi(S(0),I_{1}(0),I_{2}(0))>0.\] Therefore, arguments similar to those in Step 2 of the proof of Theorem 2.1-(ii-3) show that (2.19) holds. (ii) Suppose that the desired hypotheses hold. Define the mapping \(\tilde{\xi}\;:\;\mathcal{E}\to[0,\infty)\) by \[\tilde{\xi}(S,I_{1},I_{2})=\min_{j\in\Omega}\min_{l=1,2}I_{l,j}.\] Then \(\tilde{\xi}\) is continuous and concave. Note that \(\tilde{\xi}(S(t),I_{1}(t),I_{2}(t))>0\) for all \(t>0\) if \(\tilde{\xi}(S(0),I_{1}(0),I_{2}(0))>0\). Furthermore, by (4.52), there is \(\nu_{*}>0\) such that \[\liminf_{t\to\infty}\tilde{\xi}(S(t),I_{1}(t),I_{2}(t))\geq\nu_{*}\quad\text{whenever}\quad\tilde{\xi}(S(0),I_{1}(0),I_{2}(0))>0.\] Therefore, since the set \(\mathcal{E}\) is convex and compact, and the semiflow induced by the solutions of (1.1) is also compact and continuous, we can invoke [30, Theorem 6.2] to conclude that system (1.1) has at least one coexistence EE solution. We conclude this section with a proof of Theorem 2.6. Proof of Theorem 2.6.: Suppose that **(A2)** holds and \(\min\big{\{}\mathcal{R}_{0,1}(N),\mathcal{R}_{0,2}(N)\big{\}}>1\). Let \(\mathbf{E}_{1}^{*}=(\frac{N}{k}\mathbf{1}-I_{1}^{*},I_{1}^{*},\mathbf{0})\) and \(\mathbf{E}_{2}^{*}=(\frac{N}{k}\mathbf{1}-I_{2}^{*},\mathbf{0},I_{2}^{*})\) be the single-strain EE solutions of (1.1). It follows from [21, Theorem 3] that the single-strain EE solutions are globally stable with respect to positive initial data on each of the sets \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\). Hence, assertions (i) and (ii) follow from Theorem 2.5. (iii) By [12, Theorem 2.7], \(\mathcal{R}_{0,l}(N)\to\mathfrak{R}_{l,\max}(N)\) as \(d\to 0\), for each \(l=1,2\). Hence, since \(H_{l}^{+}\neq\emptyset\) for each \(l=1,2\), there is \(d_{*,1}>0\) such that \(\min\{\mathcal{R}_{0,1}(N),\mathcal{R}_{0,2}(N)\}>1\) for every \(0<d<d_{*,1}\). This ensures that the unique single-strain EE solutions, \(\mathbf{E}_{1}^{*}=(\frac{N}{k}\mathbf{1}-I_{1}^{*},I_{1}^{*},\mathbf{0})\) and \(\mathbf{E}_{2}^{*}=(\frac{N}{k}\mathbf{1}-I_{2}^{*},\mathbf{0},I_{2}^{*})\), exist whenever \(0<d<d_{*,1}\). 
Moreover, by [21, Theorem 9-b], it holds that \[I_{l}^{*}\to\Big{(}\frac{N}{k}\mathbf{1}-r_{l}\Big{)}_{+}\quad\text{as }d\to 0,\quad l=1,2.\] Therefore, by [12, Theorem 2.7], it holds that \[\tilde{\mathcal{R}}_{l}(N)=\rho\Big{(}\Big{(}\frac{N}{k}\mathcal{F}_{l}-\operatorname{diag}(\beta_{l}\circ I_{p}^{*})\Big{)}\mathcal{V}_{l}^{-1}\Big{)}\to\tilde{\mathcal{R}}_{l}^{*}(N):=\max_{j\in\Omega}\mathfrak{R}_{l,j}\min\Big{\{}\frac{N}{k},r_{p,j}\Big{\}}\quad\text{as}\quad d\to 0,\quad p\neq l\in\{1,2\}. \tag{4.60}\] However, for each \(l\neq p\in\{1,2\}\) and \(j\in\Sigma_{l}\cap H_{l}^{+}\neq\emptyset\), it holds that \(\frac{N}{k}\mathfrak{R}_{l,j}>1\) and \(\mathfrak{R}_{l,j}r_{p,j}>1\), that is, \(\mathfrak{R}_{l,j}\min\Big{\{}\frac{N}{k},r_{p,j}\Big{\}}>1\). Therefore, \(\min\Big{\{}\tilde{\mathcal{R}}_{1}^{*}(N),\tilde{\mathcal{R}}_{2}^{*}(N)\Big{\}}>1\). Hence, by (4.60), there is \(0<d_{*}<d_{*,1}\) such that \(\min\{\tilde{\mathcal{R}}_{1}(N),\tilde{\mathcal{R}}_{2}(N)\}>1\) for every \(0<d<d_{*}\).
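The threshold arguments above repeatedly compare the principal eigenvalue \(\lambda_{*}(d_{l}\mathcal{L}+\mathrm{diag}(\frac{N}{k}\beta_{l}-\gamma_{l}))\) with zero and the reproduction number \(\mathcal{R}_{0,l}(N)=\frac{N}{k}\rho(\mathcal{F}_{l}\mathcal{V}_{l}^{-1})\) with one. The short Python sketch below illustrates this equivalence numerically on a toy three-patch network; the connectivity matrix, the rates, and the next-generation construction \(\mathcal{F}_{l}=\mathrm{diag}(\beta_{l})\), \(\mathcal{V}_{l}=-d_{l}\mathcal{L}+\mathrm{diag}(\gamma_{l})\) are illustrative assumptions and need not coincide with the exact definitions used in the paper.

```python
import numpy as np

k, N = 3, 6.0                                  # patches and conserved total mass (toy values)
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])                   # assumed symmetric patch connectivity
Lap = A - np.diag(A.sum(axis=1))               # 'L' operator: symmetric, zero row sums

def principal_eig(M):
    # principal (largest real part) eigenvalue of M
    return float(np.max(np.linalg.eigvals(M).real))

def thresholds(d_l, beta_l, gamma_l):
    lam = principal_eig(d_l * Lap + np.diag(N / k * beta_l - gamma_l))
    V = -d_l * Lap + np.diag(gamma_l)          # assumed next-generation 'V_l'
    R0 = N / k * principal_eig(np.diag(beta_l) @ np.linalg.inv(V))
    return lam, R0

beta = np.array([0.8, 0.1, 0.1])               # illustrative transmission rates
gamma = np.array([0.5, 0.9, 0.9])              # illustrative recovery rates
for d in (0.05, 1.0, 10.0):
    lam, R0 = thresholds(d, beta, gamma)
    print(f"d_l={d:5.2f}: lambda_* = {lam:+.3f}, R_0l(N) = {R0:.3f}, "
          f"same side of threshold: {(lam > 0) == (R0 > 1)}")
```

In this toy example the sign of \(\lambda_{*}\) and the position of \(\mathcal{R}_{0,l}(N)\) relative to one agree for every diffusion rate, which is the property the proofs rely on.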
2306.01931
Simple Data Augmentation Techniques for Chinese Disease Normalization
Disease name normalization is an important task in the medical domain. It classifies disease names written in various formats into standardized names, serving as a fundamental component in smart healthcare systems for various disease-related functions. Nevertheless, the most significant obstacle to existing disease name normalization systems is the severe shortage of training data. Consequently, we present a novel data augmentation approach that includes a series of data augmentation techniques and some supporting modules to help mitigate the problem. Our proposed methods rely on the Structural Invariance property of disease names and the Hierarchy property of the disease classification system. The goal is to equip the models with extensive understanding of the disease names and the hierarchical structure of the disease name classification system. Through extensive experimentation, we illustrate that our proposed approach exhibits significant performance improvements across various baseline models and training objectives, particularly in scenarios with limited training data.
Wenqian Cui, Xiangling Fu, Shaohui Liu, Mingjun Gu, Xien Liu, Ji Wu, Irwin King
2023-06-02T22:12:05Z
http://arxiv.org/abs/2306.01931v3
Exploring semantic information in disease: Simple Data Augmentation Techniques for Chinese Disease Normalization ###### Abstract The disease is a core concept in the medical field, and the task of normalizing disease names is the basis of all disease-related tasks. However, due to the multi-axis and multi-grain nature of disease names, incorrect information is often injected and harms the performance when using general text data augmentation techniques. To address the above problem, we propose a set of data augmentation techniques that work together as an augmented training task for disease normalization. Our data augmentation methods are based on both the clinical disease corpus and standard disease corpus derived from ICD-10 coding. Extensive experiments are conducted to show the effectiveness of our proposed methods. The results demonstrate that our methods can have up to 3% performance gain compared to non-augmented counterparts, and they can work even better on smaller datasets. Semantic Textual Similarity Disease Normalization Biomedical Language Processing ## 1 Introduction The disease is a central concept in medical text processing problems. One of the most important tasks, i.e. disease normalization, uses diseases as both input and output to match the diagnoses terms used in clinical documents to standard names in ICD coding. The disease normalization task mainly faces the following three challenges. First, **different writing styles**. The writing styles of the diseases can be diversified, where different doctors have different writing habits, so a single disease might result in thousands of versions of names. Second, **data scarcity**, where some diseases may not be covered in the training set, which often leads to few-shot or zero-shot scenarios. For example, in the Chinese disease normalization dataset CHIP-CDN, there are 40472 diseases to classify, but only data of 3505 diseases (i.e. less than 10% of all diseases) are provided in the training set. Figure 1 illustrates the data scarcity problem in CHIP-CDN dataset. Third, **semantics density**. The length of disease names is usually short, which makes every character carries huge semantic information. The meanings of the diseases are very different from each other even if they share a lot of common characters, and a single change in characters could result in dramatic change in semantic meaning. For instance, (Common iliac artery dissection)" and (Common carotid artery dissection)" are only different in one character, but the positions of those diseases are very distinct, from the upper half of the body part to the lower half. Among all the challenges we discussed, data scarcity is the biggest one, since other problems usually can be solved by providing larger datasets for models to learn. A common way to address the data scarcity problem is through data augmentation. There are numerous data augmentation methods for general corpora such as synonym replacement or back translation. Wei and Zou (2019) has shown that simple text data augmentation methods can be effective for text classification problems. However, because of the unique structure of disease names (i.e. semantics density), general text data augmentation methods do not work well on them, and sometimes even hurt the overall performance. 
For example, if random deletion (Wei and Zou, 2019) is performed on the disease "Obstructive Sleep Apnoea" and removes a single character of its Chinese name, the resulting string may describe a different disease, or no valid disease at all, so the augmented sample no longer matches its original label. Moreover, when samples produced by general data augmentation methods are added directly to the training set, a same input disease in the dataset may get different labels, which will make the model difficult to train due to label confusion. To overcome this problem, we treat the data augmentation operation as a pre-training task (we call it augmented training) prior to the original task, so that the model can first learn the necessary semantic information within diseases and then leverage that information when fine-tuning on the actual normalization dataset. Additionally, both unnormalized disease names from the task and standard ICD names of the diseases can be used as inputs in the data augmentation process. A unique advantage of using standard ICD names to perform data augmentation as a pre-training task is that the model can get the whole picture of the disease-related information from ICD coding, which includes all classes of diseases, even before the actual training of the downstream task. Therefore, with all that information injected, the model can perform much better on smaller datasets where many class labels are never seen in the training set. To the best of our knowledge, we are the first to explore the semantic components and information within disease names. We believe that research on disease name enhancement has high research value and can benefit various downstream tasks. **To summarize our contributions:** * We propose a set of data augmentation methods for the Chinese disease normalization task. * Experiments validate that general data augmentation methods have the potential to impair the disease normalization task, whereas our method achieves clear performance gains on the task across various baseline models. * We also analyze the reasons why the proposed method is effective. ## 2 Background **ICD coding.** ICD, the acronym of the International Classification of Diseases, is an international unified classification of diseases developed by the World Health Organization, and ICD-10 is the 10th version of ICD coding, which is used in our work. The coding is a combination of letters and numbers, which classifies diseases according to their etiology, pathology, clinical manifestations, and anatomical locations, so that they form a hierarchical coding structure. ICD also adopts a multi-grain fashion where coarse-grained diseases are followed by fine-grained diseases. **Disease normalization task.** In clinical practice, doctors fill in the name of the disease according to clinical diagnosis standards along with their own writing habits, so a single disease can end up with hundreds of name variants. The disease normalization task is to match disease names written in different styles to a single standard name provided by ICD coding. After the disease normalization process, researchers can perform further operations upon the normalized names to realize all kinds of functions used in smart healthcare applications. The task can be formalized into the following operation: X -> Y, where X represents the clinical disease names and Y represents the standard ICD names. **NER.** NER stands for Named Entity Recognition, which is a common task in Natural Language Processing. It aims to identify entities of practical value, together with their locations, from unstructured text. The types of these entities may include persons, organizations, locations, etc. 
In this work, we use an NER tool trained by ourselves to identify elements in disease names in order to perform data augmentation. Additionally, we argue that any NER tool that can identify elements in disease names should be fine, and our work mainly focus on the data augmentation methods. ## 3 Related Work In this section, we first introduce related works of data augmentation, then we introduce medical data-driven research works that are similar to ours. ### Data Augmentation Data augmentation is a technology to synthesize new data based on existing data as a way to expand the amount of dataset. It is often used when the amount of data is not enough, and it can also act as a regularizer to prevent the model from overfitting the training set. Unlike images, where it is relatively easy to augment data as well as keep the semantic information intact, data augmentation in texts is more difficult, due to its unstructured form Ng et al. (2020). Many works focus on augmentations directly on the input: Wei and Zou (2019) propose four simple augmentation methods base on character-level noise injection, which are replacement, insertion, swap, and deletion. Their methods are quite straightaway and effective, but the augmentation results may cause unwanted noise by not following the grammar rules. Back translation, augments data by translating the original text to a second language and then translating it back. This method can keep the semantic meaning well of the original text, but the augmented results are lack of diversity and sometimes restricted by the translation tool. In order to make the augmented data more realistic, Kim et al. (2022) leverages lexicalized probabilistic context-free grammars to capture the intricate compositional structure of natural language and then perform word replacements. This method yields good results, but grammar-based methods for general text are difficult to generalize to specialized areas, such as medicine. There are also methods that leverage pre-trained language models to perform data augmentation. Ng et al. (2020) use MLM objective in BERT Devlin et al. (2018) to mask out some words and then regenerate it. Wu et al. (2019) also uses MLM task as well as changing the segment ids to class labels. Kumar et al. (2020) compares three kinds of data augmentation methods using a conditional pre-trained model, namely auto-encoder, auto-regressive, and seq2seq. A problem with these methods is that the semantic meaning of the original sentence may change after several MLM replacements. Semi-supervised learning can also be a way to perform data augmentation by leveraging the vast amount of unlabeled data. Berthelot et al. (2019) uses MixUp to guess the low-entropy labels of the augmented data and then mixes the labeled and unlabeled data to derive a loss term, and Xie et al. (2020) performs data augmentation on unlabeled data for consistency training. However, we only focus on augmenting the data itself instead of semi-supervised learning objectives in this work. ### Data approaches on medical data While most researches focus on the effect of data augmentation on general text data, there are also works that try to explore the possibility of data augmentation operations on medical text data. In this section, we mainly introduce data augmentation on medical text data and other related research works. There are works that focus on the synonym replacement in medical terms. Falis et al. (2022) and Abdollahi et al. 
(2021) leverage Unified Medical Language System (UMLS) to find medical synonyms to perform replacements after certain medical terms are identified in classification texts. Focusing on the ICD-coding task, Falis et al. (2022) also replaces both the medical terms in raw texts and the classification label to get new training data. While their works mainly focus on replacing the whole medical term, we investigate the possibility of replacing the components of the medical terms by exploring the semantic structures within them. Additionally, Ansari et al. (2021) investigates the performance of EDA, conditional pre-trained language models and back translation to perform data augmentation on social media texts for mental health classification. Wang et al. (2020) proposes Segment Reordering as a data augmentation technique to keep the medical semantic meaning intact. Wang et al. (2020) use pre-trained language models fine-tuned on General Semantic Textual Similarity (STS-G) data to generate pseudo-labels on medical STS data, and then perform iterative training. ## 4 Methods In this section, we introduce the details of our proposed data augmentation methods and the overall pipeline. Since the significance of data augmentation is to inject the model with extra knowledge, the key point is to explore the components and relations in diseases so that the model can have a broad sense of the internal structures of the diseases. Therefore, we leverage the multi-axis and multi-grain nature of the diseases to design all of the data augmentation methods. First of all, the disease names are composed of several elements, which include but are not limited to etiology, pathology, clinical manifestations, anatomical location, chronicity, degree type, etc. For ease of expression, we merge and select from all those elements into three main categories, which are disease center, anatomical location and disease quality. This shows the multi-axis nature of the diseases. * Disease Center: Disease center, which may include etiology and pathology, is the minimal word that describes the nature of a disease. It defines the main category of a disease, such as "disorders" for "Other disorders of the eye with mcc". * Anatomical Location: Anatomical Location is a part of the human body that have actual meaning in anatomy. It indicates which part of the human body is ill. * Disease Quality: The quality of a disease which indicates the subtype of the disease, such as "Drug-induced" for "Drug-induced peripheral neuropathy". With these three axis words, all kinds of disease names can be combined by them. Second, a disease can be described by multiple granularities. An upper disease is a coarse-defined disease and a lower disease is a fine-grained disease. The ICD coding contains lots of upper-lower disease pairs by assigning them different lengths of code. 
For example, in "ICD-10 Beijing Clinical Version 601", the disease name of code "A18.2" is "(") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") ") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") ") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") ") (") (") (") (") (") (") ") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (")) (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") (") ") (") (") (") (") (") (") (") the disease normalization task, replacing the 
corresponding Axis-word in the clinical name with the standard name in the pair at the same time can ensure that the newly-generated pair will still match. To locate all axis-word in the disease, we leverage a Named Entity Recognition (NER) tool trained by ourselves1. The entity type includes but is not limited to disease center, anatomical location, and disease quality. We note that the NER tool is just for the use of locating axis-words, and it can be replaced by any modules that can achieve the same function. Footnote 1: We will open source the code of our experiment along with the NER tool for disease names on Github. We leverage both the ICD-coding and the disease normalization training set to perform axis-word replacement. The detailed descriptions of each category of axis-word replacements are as follows: * AR1: AR1 is illustrated in the top left corner of Figure 2. First, select a pair of diseases (disease A and disease B) that shares one or more axis (part1 in figure) but is different in another axis (part 2 in figure). Then, replace the part 2 in disease A to be the same part2 in disease B. (Note: disease A can be chosen from any sources, but disease B can only be chosen from the standard ICD-coding list as it serves as the label of a disease normalization pair.) * AR1-posotion: Perform AR1 by fixing the disease center and replacing the anatomical location. * AR1-center: Perform AR1 by fixing the anatomical location and replacing the disease center. * AR1-quality: Perform AR1 by fixing both the disease center and the anatomical location and replacing the disease quality. * AR2: AR2 is illustrated in the top right corner of Figure 2. First, select a pair of unnormalized-standard diseases from the disease normalization training set. Let the unnormalized disease be disease A, and the standard disease be disease B. Then, find disease C from ICD-coding list that shares one or more axis (part1) but is different in another axis (part2). Finally, replace part2 in disease A to be the same part2 in disease C, so that the replaced disease A and disease C can form a new disease normalization pair. * AR2-position: Perform AR2 by fixing the disease center and replacing the anatomical location. * AR2-center: Perform AR2 by fixing the anatomical location and replacing the disease center. * AR2-quality: Perform AR2 by fixing both the disease center and the anatomical location and replacing the disease quality. **Multi-Grain Aggregation (MGA):** We assume that labels in the disease normalization task have transitivity properties. In specific, a more specified description of an object can be comprised into a larger group where the descriptions are more coarse. In the ICD coding system, there are also clear granularities of diseases. The maximum length of code that can be shared between hospitals is 6, and the multi-grain structure contains 3-digit, 4-digit, and 6-digit codes. We observe that the semantic meaning between diseases that share the first 3-digit code but are different in the 4th-digit code can be quite different, but the meaning would be a lot similar if the diseases share the first 4-digit code. Therefore, We implement MGA augmentation using the following method. * MGA-code: we leverage the multi-grain nature of the ICD coding by assigning the label of a 6-digit disease to its corresponding 4-digit disease. We call the method "aggregation" because normally a 4-digit disease can be matched to several 6-digit diseases, so the model can learn which diseases are similar. 
MGA-code is illustrated in the left bottom of Figure 2. * MGA-code1: The 6-digit diseases are directly derived from the ICD-coding list. * MGA-code2: The 6-digit diseases are derived from the diseases in CHIP-CDN training set whose labels are a 6-digit ICD disease. Footnote 1: We will open source the code of our experiment along with the NER tool for disease names on Github. * MGA-position: Apart from the ICD coding, anatomical locations also follow a hierarchical structure, where several smaller positions can be grouped together to form a larger position. Thus, we search for diseases in ICD coding that share the same center and one position is the upper position of another one, and we grouped the classification labels of the lower position diseases to their upper position diseases. MGA-position is illustrated in the right bottom of Figure 2. (Note: the upper position diseases must come from the standard ICD-coding list.) * MGA-position1: The lower position diseases are directly derived from the ICD-coding list. * MGA-position2: The lower position diseases are derived from the diseases in CHIP-CDN training set. (Note: In the human body, we call a location the upper position to another position if that location covers a larger area than another. In order to find the upper or lower positions of a position, we construct a position tree document where the anatomical positions in the human body are organized into a tree data structure. We use the constructed position tree to recognize the upper and lower relations above. The same goal can be achieved with other sources containing knowledge bases of human anatomy.) ### Training Process * Taking the augmented data to train the disease normalization task. * Fine-tuning the original disease normalization dataset. ## 5 Experiments ### Dataset We evaluate the effectiveness of our data augmentation methods on a Chinese disease normalization dataset called CHIP-CDN. CHIP-CDN originates in the CHIP-2019 competition and was collected in A Chinese Biomedical Language Understanding Evaluation Benchmark called CBLUE Zhang et al. (2021). The dataset contains 6000 unnormalized-standard disease pairs in the training set, 1000 pairs in the dev set, and 2000 pairs in the test set. ### Experimental Setup We evaluate our methods on three baselines: BILSTM Sak et al. (2014)and BERT-base Devlin et al. (2018), CDN-Baseline(from CBLUE)Zhang et al. (2021). For BILSTM, we use two BILSTM layers followed by a MLP layer to perform classification. For BERT-based models, we use the CLS vector to perform classification. For CDN-Baseline, we use the original model provided by its git repository2, which follows a "recall-match" two step training approach based on pre-trained language models. The choose of the baseline models is to demonstrate the effectiveness of our method under different types of models and training settings. In specific, we verify the effectiveness of DDA to a train-from-scratch model using a BILSTM model, we verify the effectiveness to models with pre-trained knowledge using the BERT-base model, and we verify the effectiveness to complex models using CDN-Baseline model. Footnote 2: [https://github.com/CBLUEbenchmark/CBLUE](https://github.com/CBLUEbenchmark/CBLUE) For the BILSTM model and BERT-base model, we use accuracy to judge the model performance. 
In our evaluation, we treat this disease normalization as a multi-class classification rather than multi-label classification task despite that there are few data samples that a single unnormalized disease is matched to several standard diseases. Hence, if an unnormalized disease is matched to several standard diseases, this data sample is considered correctly predicted as long as one of the standard diseases is correctly predicted. We design the experiments in this way to simplify the model as much as possible to more clearly illustrate the effectiveness of DDA. For CDN-Baseline, we stick to the settings in CBLUE Zhang et al. (2021), which use the F1 as the evaluation metric, use BERT-base as the baseline model, and use the two step training paradigm provided by CBLUE for better comparison. To ensure fairness, we use the exact same parameter settings for the same model. In particular, for CDN-Baseline, we use almost the same parameter settings as CBLUE's git repository, including random seed numbers. Additionally, we use devset for performance comparison, since the label of test set of the CHIP-CDN dataset is not given. For all experiments, we keep the best performing result as the final score. ### Results The results are shown in Table 1. The trainset in the table represents CHIP-CDN training set. From top to bottom, the performance of different models using different data augmentation methods is represented. Among them, BT is the back-translation data augment method3, and DDA is the semantic-based disease name data augmentation method proposed by us. The experimental results demonstrate that although EDA and back-translation increase diversity, they both hurt performances in some settings (especially for EDA). However, DDA improves the performance in every settings. Clearly, DDA avoids the problem of EDA, and its effect is much better than BT. We observe that the performances improve for all models above after applying the DDA methods, showing the effectiveness of our proposed methods. For the BILSTM model, the relative performance improvement reaches 6%. We further observe that there is more performance gain on BILSTM than BERT-based models and CDN-Baseline, probably because the knowledge in pre-trained language models has already covered some of the similar information, but our proposed method can further improve their performance, showing the effectiveness of DDA. ### Ablation Study In this section, we evaluate the effectiveness of every data augmentation methods on BILSTM, BERT-base models and CDN-Baseline. As we propose two types of data augmentation methods, we evaluate them by taking out these methods one by one to see the resulting performances. The results are shown in Table 2. We observe that removing data generated by either types of methods would lead to performance degradation, thus proving the effectiveness of every method that we propose. ### Smaller datasets experiments We also evaluate the performance improvements over smaller datasets that derived from CHIP-CDN since the data scarcity problem is more severe in smaller datasets. We evaluate the training set whose sizes range from 5%, to 100% of the CHIP-CDN training set size. 
For the convenience of training, for augmented training in this setting, we only leverage standard disease names in ICD-coding. No data from the disease normalization training set are used. We draw curves to compare training with and without our proposed methods, as shown in Figure 3. When the size of the training set increases, both curves steadily improve. We also notice that the performance gain is higher when the size of the training set is smaller. \begin{table} \begin{tabular}{c|c|c|c} Model & BILSTM & BERT-base & CDN-Baseline \\ \hline trainset & 0.455 & 0.558 & 0.577 \\ trainset+EDA & 0.451 & 0.519 & 0.561 \\ trainset+BT & 0.466 & 0.556 & 0.578 \\ trainset+DDA & **0.485** & **0.578** & **0.592** \\ \end{tabular} \end{table} Table 1: Comparison of the devset accuracy (%) for the choice of different data augmentation methods on various baseline models. \begin{table} \begin{tabular}{c|c|c|c} Strategy & BILSTM & BERT-base & CDN-Baseline \\ \hline DDA full & 0.485 & 0.578 & 0.592 \\ - AR & 0.467 & 0.568 & 0.588 \\ - MGA & 0.455 & 0.558 & 0.577 \\ \end{tabular} \end{table} Table 2: Ablation study for the CHIP-CDN dataset. We remove our proposed data augmentation methods one at a time and evaluate the results. Figure 3: Performance comparison on smaller datasets for BILSTM and BERT-base. The smaller datasets are derived by randomly sampling the original CHIP-CDN training set, and the devset of CHIP-CDN stays the same. ## 6 Conclusion In this paper, we propose two main types of data augmentation methods for the Chinese disease normalization task, based on two hypotheses: disease names have the property of structural invariance, and the labels in the disease normalization task have the transitivity property. Our data augmentation methods explore the semantic information and the relation information in diseases, and are adopted in an augmented training fashion to avoid introducing misinformation. Experimental results show that our DDA method can better address the three main challenges in the disease normalization task, namely description diversity, data scarcity, and semantics density. Compared to EDA and back-translation methods, our method has obvious advantages on the disease normalization task. Furthermore, we show that our data augmentation methods work even better on smaller datasets.
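To make the axis-word replacement idea concrete, the following is a minimal Python sketch of an AR1-position-style step under simplifying assumptions: the `Disease` record, the toy axis annotations, and the English surface forms are hypothetical stand-ins for the Chinese disease names and for the output of the NER tool described in Section 4, and the snippet is an illustration rather than the implementation released with the paper.

```python
from dataclasses import dataclass

# Minimal sketch of AR1-position: fix the disease center, replace the
# anatomical location. The axis annotations below are hypothetical
# stand-ins for the output of the NER tool used in the paper.

@dataclass(frozen=True)
class Disease:
    name: str        # surface form of the disease name
    center: str      # disease center, e.g. "dissection"
    location: str    # anatomical location, e.g. "common iliac artery"

def ar1_position(source_names, icd_names):
    """Generate new (unnormalized, standard) pairs.

    For a source name A and a standard ICD name B sharing the same disease
    center but differing in anatomical location, replace A's location with
    B's location; B then serves as the label of the synthesized pair.
    """
    pairs = []
    for a in source_names:
        for b in icd_names:
            if a.center == b.center and a.location != b.location:
                pairs.append((a.name.replace(a.location, b.location), b.name))
    return pairs

clinical = [Disease("dissection of common iliac artery (left)",
                    "dissection", "common iliac artery")]
icd = [Disease("common carotid artery dissection",
               "dissection", "common carotid artery")]
print(ar1_position(clinical, icd))
# [('dissection of common carotid artery (left)', 'common carotid artery dissection')]
```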
2306.07398
Continuity and Boundedness of Minimum-Norm CBF-Safe Controllers
The existence of a Control Barrier Function (CBF) for a control-affine system provides a powerful design tool to ensure safety. Any controller that satisfies the CBF condition and ensures that the trajectories of the closed-loop system are well defined makes the zero superlevel set forward invariant. Such a controller is referred to as safe. This paper studies the regularity properties of the minimum-norm safe controller as a stepping stone towards the design of general continuous safe feedback controllers. We characterize the set of points where the minimum-norm safe controller is discontinuous and show that it depends solely on the safe set and not on the particular CBF that describes it. Our analysis of the controller behavior as we approach a point of discontinuity allows us to identify sufficient conditions to ensure it grows unbounded or it remains bounded. Examples illustrate our results, providing insight into the conditions that lead to (un)bounded discontinuous minimum-norm controllers.
Mohammed Alyaseen, Nikolay Atanasov, Jorge Cortes
2023-06-12T19:55:10Z
http://arxiv.org/abs/2306.07398v1
# Continuity and Boundedness of Minimum-Norm CBF-Safe Controllers ###### Abstract The existence of a Control Barrier Function (CBF) for a control-affine system provides a powerful design tool to ensure safety. Any controller that satisfies the CBF condition and ensures that the trajectories of the closed-loop system are well defined makes the zero superlevel set forward invariant. Such a controller is referred to as _safe_. This paper studies the regularity properties of the minimum-norm safe controller as a stepping stone towards the design of general continuous safe feedback controllers. We characterize the set of points where the minimum-norm safe controller is discontinuous and show that it depends solely on the safe set and not on the particular CBF that describes it. Our analysis of the controller behavior as we approach a point of discontinuity allows us to identify sufficient conditions to ensure it grows unbounded or it remains bounded. Examples illustrate our results, providing insight into the conditions that lead to (un)bounded discontinuous minimum-norm controllers. ## I Introduction Safety-critical control for dynamical systems is an active area of research with applications to multiple domains such as transportation, autonomy, power systems, robotics, and manipulation. The notion of Control Barrier Function (CBF) has revealed to be a particularly useful tool as it provides a mathematically precise formulation of the range of design choices available to keep a desired set safe. This has spurred a flurry of activity aimed at synthesizing safe controllers as solutions to optimization-based formulations whose cost functions may encode energy considerations, minimal deviation from prescribed controllers, or other performance goals. A critical aspect in this endeavor is ensuring that safe controllers enjoy appropriate regularity (boundedness, continuity, Lipschitzness, smoothness) properties for ease of implementation and to ensure well-posedness of the resulting closed-loop system. Motivated by these observations, this work studies the continuity properties of the minimum-norm safe controller and analyzes conditions under which the existence of a bounded safe controller is guaranteed. _Literature Review:_ The notion of CBF builds on Nagumo's theorem [1], which establishes the invariance of a set with respect to trajectories of an autonomous system given suitable transversality conditions are satisfied on the boundary of that set. The extension to control systems introduced in [2] enforces a strict Nagumo-like condition to hold on the whole set to be made invariant. This condition was relaxed in [3] to arrive at the concept of CBF used here. The use of CBFs to enforce safety as forward set invariance has since expanded to many domains (we refer to [4, 5] for a comprehensive overview). Particularly useful is the fact that, if a CBF-certified safe controller is Lipschitz, then the closed-loop system is well posed and the superlevel set of the CBF is forward invariant. It is common to synthesize such controllers via optimization formulations which are examples of parametric optimization problems, with the optimization variable being the control signal and the parameter being the state. The resulting controller is well defined but is generally not guaranteed to be continuous, let alone Lipschitz. If the controller is discontinuous, then it might become unbounded even if the safe set is compact, violating hard limits imposed by hardware constraints or energy considerations. 
This has motivated the study in the literature of various sufficient conditions to ensure Lipschitzness or continuity of optimization-based controllers. One set of conditions [3] relies on assuming uniform relative degree 1 of the CBF with respect to the dynamical system. Another condition [6] asks that the properties defining the CBF are satisfied on an open set containing the safe set. Other works [7] derive continuity-ensuring conditions resorting to the classical parametric optimization literature [8], of which the optimization-based controller synthesis problem is a special case. In parametric optimization, the work [9] proves the continuity of the optimizer under continuity properties of the point-to-set map defined by the constraints. Other works derive continuity results under different types of constraint qualification conditions, including linear independence [10] and Mangasarian-Fromovitz [11]. The work [7] builds on this body of work to relax linear independence qualification for the special case of a convex linearly constrained quadratic parametric program. Our exposition here unifies these conditions under a common framework and provides a generalization, ensuring continuity of the min-norm safe controller under weaker conditions. We also analyze the boundedness of the controller when the conditions are not met and discontinuity arises. Finally, because of the connection with bounded control, relevant to the present work are methods for constructing CBFs under limited control authority [12, 13, 14] and the combination of CBFs with Hamilton-Jacobi reachability analysis to consider the impact of control bounds on the computation of safe sets [15]. _Statement of Contributions:_ Given a CBF for a control-affine system, we study the boundedness properties of the associated minimum-norm safe controller. Apart from its intrinsic interest, the focus on this controller is justified by the fact that if it is not bounded, then no safe controller is. We start by explaining the limitations of the state of the art to guarantee the boundedness of safe controllers and illustrating them in two examples. Our first contribution is a rigorous characterization of the points of discontinuity of the minimum-norm safe controller. As a byproduct, this result allows us to generalize the known conditions for ensuring continuity. We show that the points of discontinuity are fully determined by the safe set and are independent of the specific choice of the CBF or the sensitivity to the violation of the CBF condition. These results set the basis for our second contribution, which is the identification of tight conditions to ensure the (un)boundedness of the minimum-norm controller when approaching a point of discontinuity. We revisit the two examples in light of the technical discussion to explain the observed behavior of the minimum-norm controller. Our results are applicable to more general formulations of safety filters beyond the minimum-norm controller and have important implications for the synthesis of safe feedback controllers subject to hard constraints on the control effort. _Notation:_ The closure, interior, and boundary of a set \(\mathcal{X}\) are denoted by \(\bar{\mathcal{X}}\), \(\mathrm{int}(\mathcal{X})\), and \(\partial\mathcal{X}\), respectively. Given \(s:\mathcal{X}\subseteq\mathbb{R}^{n}\to\mathbb{R}\), \(s\in C\) denotes that \(s\) is continuous and \(s\in C^{n}\) denotes that \(s\) has a continuous \(n^{\text{th}}\) derivative. 
The gradient of \(s\in C^{1}\) is denoted by \(\nabla s\) and written as a row vector. A function \(s\) is locally Lipschitz at \(x\) with respect to \(\mathcal{X}\) if there exists a neighborhood \(\mathcal{N}\) and a constant \(L\in\mathbb{R}\) such that \(\|s(x_{1})-s(x_{2})\|\leq L\|x_{2}-x_{1}\|\), for all \(x_{1},x_{2}\in\mathcal{N}\cap\mathcal{X}\). A function \(s\) is locally Lipschitz on \(\mathcal{X}^{\prime}\) if it is locally Lipschitz at \(x\) with respect to \(\mathcal{X}^{\prime}\), for all \(x\in\mathcal{X}^{\prime}\). A function \(\alpha:(-a,b)\to\mathbb{R}\) is an extended class-\(\kappa\) function if it is strictly increasing and \(\alpha(0)=0\). ## II Problem Statement We consider a non-linear control affine system over an open set \(\mathcal{X}\subseteq\mathbb{R}^{n}\) \[\dot{x}=f(x)+G(x)u, \tag{1}\] where \(x\in\mathcal{X}\) and \(u\in\mathbb{R}^{m}\). Here, \(f:\mathcal{X}\to\mathbb{R}^{n}\) and the column components \(g_{i}:\mathcal{X}\to\mathbb{R}^{n}\), \(i\in\{1,\ldots,m\}\) of \(G\) are locally Lipschitz on \(\mathcal{X}\). Safety of the system can be certified through the following notion. **Definition II.1** (Control Barrier Function [4]).: _Let \(h:\mathcal{X}\to\mathbb{R}\) be \(C^{1}\) and define its superlevel set \(\mathcal{C}\triangleq\{x\in\mathbb{R}^{n}\mid h(x)\geq 0\}\subseteq\mathcal{X}\). The function \(h\) is a CBF if \(\nabla h(x)\neq 0\) for all \(x\in\partial\mathcal{C}\) and there exists a set \(\mathcal{D}\subseteq\mathcal{X}\) such that \(\mathcal{C}\subseteq\mathcal{D}\) and for all \(x\in\mathcal{D}\), there exists \(u\in\mathbb{R}^{m}\),_ \[\nabla h(x)f(x)+\alpha(h(x))+\nabla h(x)G(x)u\geq 0. \tag{2}\] _where \(\alpha\) is an extended class-\(\kappa\) function._ If \(h\) admits an open set \(\mathcal{D}\) satisfying the above definition, then we refer to it as a _strong CBF_, otherwise we call it a _weak CBF_. For each \(x\in\mathcal{D}\), we denote by \(K_{\text{cbf}}(x)\) the set of input values \(u\) satisfying (2) which, by Definition II.1, is nonempty. The central result [4, Theorem 2] of CBF-based safety is that, if there exists a Lipschitz feedback controller \(\bar{u}:\mathbb{R}^{n}\to\mathbb{R}^{m}\) satisfying \(\bar{u}(x)\in K_{\text{cbf}}(x)\) in \(\mathcal{D}\), then the set \(\mathcal{C}\) is forward invariant with respect to the trajectories of the closed-loop system (1) under \(u=\bar{u}(x)\). One particular choice of controller that satisfies the CBF condition (2) by construction is the so-called min-norm safe feedback controller \(u^{*}(x)\triangleq\arg\min_{u\in K_{\text{cbf}}(x)}\|u\|^{2}\). In general, this controller is not necessarily Lipschitz. In fact, it might not even be bounded. This motivates our problem statement. **Problem 1**.: Let \(h\) be a CBF with a compact superlevel set \(\mathcal{C}\). Determine the states in \(\mathcal{C}\) where the min-norm safe feedback controller \(u^{*}\) is discontinuous and find conditions under which it is bounded/unbounded as the discontinuous states are approached. \(\bullet\) Our focus on establishing boundedness when continuity of the min-norm controller fails is motivated by three reasons. First, proving that the min-norm controller is unbounded shows that no safe bounded controller exists. This would also mean that there does not exit a continuous safe feedback controller. Second, if the min-norm is discontinuous but bounded, then there is room for finding a safe continuous controller. 
Finally, our investigation provides grounds for exploring whether the use of discontinuous controllers to ensure control-invariance for safety is applicable to a larger class of scenarios. We end this section by noting that our results are directly applicable to safety filters based on quadratic programming (QP). In fact, any controller \(u\) that minimizes a cost function \(\|u-u_{\mathrm{nom}}(x)\|^{2}\) subject to (2), where \(u_{\mathrm{nom}}\) is a predefined nominal controller, can be interpreted as a min-norm controller after the change of variables \(u^{\prime}=u-u_{\mathrm{nom}}\). ## III Continuity of the Min-Norm Safe Controller: Limitations of the State of the Art This section reviews known conditions in the literature that ensure the min-norm controller \(u^{*}\) is continuous and thus bounded in a compact set \(\mathcal{C}\), and illustrates its limitations in a couple of simple examples. Considering the CBF condition (2), notice that if \(\nabla h(x)f(x)+\alpha(h(x))\geq 0\), then \(u=0\) validates (2). For such points, the min-norm controller \(u^{*}(x)=0\). On the other hand, when \(\nabla h(x)f(x)+\alpha(h(x))<0\), a non-zero control is needed to ensure (2). We thus split \(\mathcal{D}\) into the two sets \[\mathcal{D}_{+} \triangleq\{x\in\mathcal{D}\ \mid\ \nabla h(x)f(x)+\alpha(h(x))\geq 0\}, \tag{3a}\] \[\mathcal{D}_{-} \triangleq\{x\in\mathcal{D}\ \mid\ \nabla h(x)f(x)+\alpha(h(x))<0\}. \tag{3b}\] Notice that \(u^{*}\) is defined as the optimizer of a quadratic program with one linear constraint. Such programs have a unique solution, cf. [16, 8.1.1], with the closed-form formula \[u^{*}(x)\!=\!\begin{cases}0,&x\in\mathcal{D}_{+}\\ -\frac{\nabla h(x)f(x)+\alpha(h(x))}{\|\nabla h(x)G(x)\|^{2}}(\nabla h(x)G(x))^{ T},&x\in\mathcal{D}_{-}.\end{cases} \tag{4}\] This expression is well defined on \(\mathcal{D}\) since (2) implies that, if \(\bar{x}\in\mathcal{D}_{-}\), then \(\|\nabla h(\bar{x})G(\bar{x})\|\neq 0\). **Lemma III.1** (Strong CBF Implies Continuous Min-Norm Controller [6, Thm. 5]).: _Let \(h\) be a strong CBF with a compact superlevel set \(\mathcal{C}\). Then \(u^{*}\) is continuous on \(\mathcal{C}\)._ According to [3, Thm. 8], \(u^{*}\) is locally Lipschitz if the CBF \(h\) has relative degree 1, that is, for all \(x\in\mathcal{D}\), \(\|\nabla h(x)G(x)\|\neq 0\). The next result is a generalization of this fact. **Lemma III.2** (Generalization of Relative Degree 1 CBF Implies Continuous Min-Norm Controller).: _Let \(h\) be a CBF with compact superlevel set \(\mathcal{C}\). If for all \(x\in\partial\mathcal{C}\), \(\|\nabla h(x)G(x)\|=0\) implies \(\nabla h(x)f(x)>0\), then \(u^{*}\) is locally Lipschitz on \(\mathcal{C}\)._ We postpone the proof of Lemma III.2 as it is a corollary of Lemma IV.1 below. **Remark III.3** (Assumption of uniform relative degree is limiting).: The assumption of uniform relative degree of the CBF, cf. [3, Thm. 8], has also been exploited for higher-order relative degree CBFs, cf. [17]. However, this assumption fails for the following two general cases: 1. Let \(h\) be a continuously differentiable CBF with compact superlevel set \(\mathcal{C}\). For such \(h\), there always exists \(y\in\mathrm{int}(\mathcal{C})\) where \(\|\nabla h(y)G(y)\|=0\). To see that, note that by continuity of \(h\) and compactness of its superlevel set, \(h\) has a maximum value at some state \(y\in\mathcal{C}\)[18, Thm. 4.16]. 
Recalling that \(h(x)=0\) at \(\partial\mathcal{C}\) and \(h(x)>0\) in \(\mathrm{int}(\mathcal{C})\), we deduce that \(y\in\mathrm{int}(\mathcal{C})\). By differentiability and first-order optimality [16, 4.2.3], \(\nabla h(y)=0\) and, hence, \(\|\nabla h(y)G(y)\|=0\). 2. Consider the \(n\)-dimensional linear system \((A,B)\), where \(B\) does not have full row rank. Let \(h\) be a continuously differentiable CBF with compact convex superlevel set \(\mathcal{C}\). Then, there always exists \(y\in\partial\mathcal{C}\) where \(\|\nabla h(y)G(y)\|=\|\nabla h(y)B\|=0\). To see this, note that since \(B\) does not have full row rank, there is a unit vector \(v\in\mathbb{R}^{n}\) such that \(\|v^{T}B\|=0\). By the surjectivity of the Gauss map 1 on the compact smooth surface \(\partial\mathcal{C}\)[19, Thm. A], there is a point \(y\in\partial\mathcal{C}\) at which the unit normal vector to \(\partial\mathcal{C}\) is \(v\). By [20, Thm. 3.15], \(\nabla h(y)\) is normal to \(\partial\mathcal{C}\) at \(y\) and thus parallel to \(v\). Hence, \(\|\nabla h(y)B\|=0\). \(\bullet\) Footnote 1: The Gauss map assigns points on the manifold \(\partial\mathcal{C}\) to the unit sphere embedded in \(\mathbb{R}^{n}\) such that the image of any point in \(\partial\mathcal{C}\) is the unit vector normal to \(\partial\mathcal{C}\) at that point. From the continuity of the min-norm controller on \(\mathcal{C}\) ensured by either Lemma III.1 or Lemma III.2, it follows from standard results in analysis, cf. [18, Thm. 5.15], that \(u^{*}\) is bounded if \(\mathcal{C}\) is compact. As we will show later, the conditions of Lemmas III.1 and III.2 are not totally independent: rather, if the condition of Lemma III.1 is not met, i.e., \(h\) is weak, then the condition of Lemma III.2 is not met either. CBFs that do not meet the conditions of these results are easy to encounter and arise in practice in contexts as simple as the problem of confining a double integrator to a circle centered at the origin. We next present two examples that do not satisfy the assumptions and generate discontinuous min-norm controllers: one being bounded and the other one unbounded. **Example III.4** (Weak CBF with Bounded Min-Norm Controller).: Consider the double-integrator dynamics on \(\mathbb{R}^{2}\) defined by \(f(x)=(x_{2},0)\) and \(G(x)=(0,1)\). The function \(h(x)=1-x_{1}^{2}-x_{2}^{2}\) is a CBF with any extended class-\(\kappa\) function \(\alpha\). Notice further that \(h\) is a weak CBF. To see this, let \(\bar{x}=(1+\epsilon,0)\) with any arbitrarily small \(\epsilon>0\). Since \(\bar{x}\notin\mathcal{C}\), we have \(\nabla h(\bar{x})f(\bar{x})+\alpha(h(\bar{x}))+\nabla h(\bar{x})G(\bar{x})u=\alpha(h(\bar{x}))<0\), and therefore condition (2) cannot be satisfied at \(\bar{x}\). Therefore, \(h\) does not admit an open set \(\mathcal{D}\) satisfying Definition II.1. In addition, the condition of Lemma III.2 is not satisfied at the boundary point \((1,0)\). Consider now the norm of the min-norm safe controller (4) defined on \(\mathcal{D}=\mathcal{C}\), \[|u_{1}^{*}(x)|=\begin{cases}0,&x\in\mathcal{D}_{+},\\ \frac{2x_{1}x_{2}-\alpha(h(x))}{2x_{2}},&x\in\mathcal{D}_{-}.\end{cases}\] Note that \(u_{1}^{*}\) is continuous on \(\mathcal{C}\setminus\{(\pm 1,0)\}\). However, choosing \(\alpha(r)=r\), we have that \(\limsup_{x\to(1,0),x\in\mathcal{D}_{-}}|u_{1}^{*}(x)|=1\) and \(\lim_{x\to(1,0),x\in\mathcal{D}_{+}}|u_{1}^{*}(x)|=0\). Thus, although discontinuous at \((1,0)\), \(u_{1}^{*}\) is bounded at this point, cf. top plot in Figure 1. 
\(\bullet\) Example III.4 shows that the min-norm safe controller might be bounded even if the CBF does not satisfy the continuity conditions in the literature. The next example shows this fact is not generic. **Example III.5** (Weak CBF with Unbounded Min-Norm Controller).: Consider the dynamics \(f(x)=(x_{2},0)\) and \(G(x)=(0,x_{2}^{2})\). With the same reasoning as in Example III.4, \(h(x)=1-x_{1}^{2}-x_{2}^{2}\) is a weak CBF that does not satisfy the requirement of Lemma III.2. The norm of the min-norm safe controller is: \[|u_{2}^{*}(x)|=\begin{cases}0,&x\in\mathcal{D}_{+},\\ \frac{2x_{1}x_{2}-\alpha(h(x))}{2x_{2}^{3}},&x\in\mathcal{D}_{-}.\end{cases}\] Observe that \(u_{2}^{*}\) is continuous on \(\mathcal{C}\setminus\{(\pm 1,0)\}\). However, with the choice \(\alpha(r)=r\), \(\limsup_{x\to(1,0),x\in\mathcal{D}_{-}}|u_{2}^{*}(x)|=\infty\) and \(\lim_{x\to(1,0),x\in\mathcal{D}_{+}}|u_{2}^{*}(x)|=0\). Thus, \(u_{2}^{*}\) is neither continuous nor bounded on \(\mathcal{C}\), cf. bottom plot in Figure 1.\(\bullet\) ## IV Points of Discontinuity of The Min-Norm Safe Controller Here we characterize the points of (dis)continuity of the min-norm controller \(u^{*}\) in \(\mathcal{C}\). This is motivated by the fact that if \(u^{*}\) goes unbounded when approaching a point in \(\mathcal{C}\), then it is discontinuous at it. Therefore, the results of this section are a stepping stone towards the identification of conditions for (un)boundedness of \(u^{*}\). **Lemma IV.1** (Points of discontinuity of \(u^{*}\) in \(\mathcal{C}\)).: _Let \(h\) be a CBF for a system (1) with a Lipschitz gradient and an associated Lipschitz class-\(\kappa\) function \(\alpha\), and let \(u^{*}\) be the min-norm controller given by (4). Define \(\mathcal{Z}_{h,\alpha}\triangleq\{x\in\mathcal{C}\mid\nabla h(x)f(x)+\alpha(h (x))=0=\|\nabla h(x)G(x)\|\}\). Then, \(u^{*}\) is locally Lipschitz on \(\mathcal{C}\setminus\mathcal{Z}_{h,\alpha}\)._ Proof.: The proof is an extension of the proof of [3, Thm. 8]. Note that Since \(h\) is a CBF, (2) is satisfied for \(\mathcal{D}=\mathcal{C}\) and therefore \(\|\nabla h(x)G(x)\|\neq 0\), for all \(x\in\mathcal{D}_{-}\). Thus, on \(\mathcal{D}_{-}\), \(u^{*}\) is a quotient with a non-zero Lipschitz denominator and a Lipschitz numerator. Hence, both expressions in the piecewise definition of \(u^{*}\) in (4) are locally Lipschitz on their respective domains \(\mathcal{D}_{+}\) and \(\mathcal{D}_{-}\). It remains to prove that \(u^{*}\) is locally Lipschitz with respect to \(\mathcal{C}\) at all the points in the boundary between \(\mathcal{D}_{+}\) and \(\mathcal{D}_{-}\) that are not in \(\mathcal{Z}_{h,\alpha}\). For a point \(x\) in the boundary between \(\mathcal{D}_{+}\) and \(\mathcal{D}_{-}\), \(\nabla h(x)f(x)+\alpha(h(x))=0\). If at such a point \(\|\nabla h(x)G(x)\|\neq 0\) (i.e., \(x\notin\mathcal{Z}_{h,\alpha}\)), then there is a neighborhood \(\mathcal{N}\) of \(x\) such that \(\|\nabla h(y)G(y)\|\neq 0\) for all \(y\in\mathcal{N}\). Thus \(u^{*}(x)=\omega(\frac{\nabla h(x)f(x)+\alpha(h(x))}{\|\nabla h(x)G(x)\|})( \nabla h(x)G(x))^{T}\) for \(x\in\mathcal{N}\), where \[\omega(r)=\begin{cases}0,&r\geq 0,\\ -r,&r<0,\end{cases}\] which is locally Lipschitz on \(\mathbb{R}\). That \(u^{*}\) is locally Lipschitz at \(x\) follows from the facts that the composition and product of locally Lipschitz functions is locally Lipschitz, and the quotient of locally Lipschitz functions is locally Lipschitz provided that the denominator is not zero. 
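Returning for a moment to Example III.5, the same computation illustrates both the example and Lemma IV.1: the point \((1,0)\) belongs to \(\mathcal{Z}_{h,\alpha}\), and along the same boundary path the norm of the min-norm controller grows without bound. This is a sketch under the same assumptions (our own helper names, \(\alpha(r)=r\)) as the previous snippet.

```python
import numpy as np

# Example III.5: identical to Example III.4 except G(x) = (0, x2^2).
# Along the boundary path below, |u_2^*| = (2 x1 x2 - h) / (2 x2^3) ~ x1 / x2^2 diverges.
h      = lambda x: 1.0 - x[0] ** 2 - x[1] ** 2
grad_h = lambda x: np.array([-2.0 * x[0], -2.0 * x[1]])
f      = lambda x: np.array([x[1], 0.0])
G      = lambda x: np.array([[0.0], [x[1] ** 2]])
alpha  = lambda r: r

def u_norm(x):
    a = grad_h(x) @ f(x) + alpha(h(x))
    b = grad_h(x) @ G(x)
    return 0.0 if a >= 0 else abs(a) / np.linalg.norm(b)   # ||u*|| = -a / ||nabla h G||

for t in (1e-1, 1e-2, 1e-3):
    x = np.array([np.sqrt(1.0 - t ** 2), t])   # on the boundary of C, inside D_-
    print(t, u_norm(x))                        # grows roughly like 1 / t^2
```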
Lemma IV.1 can be seen as an extension of previous results, cf. [3, Thm. 8], establishing local Lipschitzness of \(u^{*}\) by assuming uniform relative degree 1 of \(h\). If this is the case, then \(\mathcal{Z}_{h,\alpha}\) is empty and thus \(u^{*}\) is locally Lipschitz on \(\mathcal{C}\). Given the dependency of \(\mathcal{Z}_{h,\alpha}\) on \(h\) and \(\alpha\), one might consider the possibility that a suitable choice of these functions might eliminate the potential points of discontinuity. The following results rule this out. **Lemma IV.2** (Discontinuity Points Are Independent of \(\alpha\)).: _Let \(h\) be a CBF. Then there exists an extended class-\(\kappa\) function \(\alpha\) that validates the CBF condition (2) and such that \(\mathcal{Z}_{h,\alpha}\subseteq\partial\mathcal{C}\). Moreover, let \(\alpha_{1}\) and \(\alpha_{2}\) be two extended class-\(\kappa\) functions that validate the CBF definition for \(h\). Then \(\mathcal{Z}_{h,\alpha_{1}}\cap\partial\mathcal{C}=\mathcal{Z}_{h,\alpha_{2}} \cap\partial\mathcal{C}\)._ Proof.: We prove that if \(\alpha\) validates Definition II.1 for \(h\), then any class-\(\kappa\) function \(\bar{\alpha}\) that satisfies \(\bar{\alpha}(r)>\alpha(r)\) for all \(r>0\) validates Definition II.1 for \(h\) and gives \(\mathcal{Z}_{h,\bar{\alpha}}\cap\operatorname{int}(\mathcal{C})=\emptyset\). That \(\bar{\alpha}\) validates the CBF condition (2) is immediate. Now let \(\bar{x}\in\operatorname{int}(\mathcal{C})\) be such that \(\nabla h(\bar{x})f(\bar{x})+\bar{\alpha}(h(\bar{x}))=0\). We show that \(\|\nabla h(\bar{x})G(\bar{x})\|\neq 0\) and thus \(\bar{x}\notin\mathcal{Z}_{\bar{\alpha},h}\). Since \(\bar{\alpha}(r)>\alpha(r)\) for \(r>0\), \(\nabla h(\bar{x})f(\bar{x})+\alpha(h(\bar{x}))<0\) because \(h(\bar{x})>0\) as \(\bar{x}\in\operatorname{int}(\mathcal{C})\). But \(\alpha\) validates condition (2) and thus \(\|\nabla h(\bar{x})G(\bar{x})\|\neq 0\). The proof of the last claim in the statement is immediate from the fact that \(\alpha_{1}(h(x))=\alpha_{2}(h(x))=0\) on \(\partial\mathcal{C}\). If we thus define \[\mathcal{Z}_{h}\triangleq\{x\in\partial\mathcal{C}\ |\ \nabla h(x)f(x)=\|\nabla h (x)G(x)\|=0\}, \tag{5}\] then Lemmas IV.1 and IV.2 justify stating that \(u^{*}\) is continuous on \(\mathcal{C}\setminus\mathcal{Z}_{h}\). This shows that \(u^{*}\) is continuous on \(\operatorname{int}(\mathcal{C})\) and that the possible points of discontinuity are independent of the choice of \(\alpha\). **Lemma IV.3** (Discontinuity Points Are Independent of \(h\)).: _Let \(h_{1}\), \(h_{2}\in C^{1}\) be CBFs with the same superlevel set \(\mathcal{C}\). Then, \(\mathcal{Z}_{h_{1}}=\mathcal{Z}_{h_{2}}\)._ Proof.: By Definition II.1, \(\nabla h_{i}(x)\neq 0\), \(i\in\{1,2\}\) on \(\partial\mathcal{C}\). By [21, Thm. 5.1], both \(h_{1}=0\) and \(h_{2}=0\) define the same differentiable manifold \(\partial\mathcal{C}\) of dimension \(n-1\) embedded in \(\mathbb{R}^{n}\). By [20, Thm. 3.15], the tangent space \(T_{x}\) of this manifold at a point \(x\) is given by \(T_{x}=\text{kernel}(\nabla h_{1}(x))=\text{kernel}(\nabla h_{2}(x))\). Thus \(\nabla h_{1}(x)\) and \(\nabla h_{2}(x)\) are parallel, and the result follows using the definition of \(\mathcal{Z}_{h}\). Lemma IV.3 shows that \(\mathcal{Z}_{h}\) is associated to the set \(\mathcal{C}\) and is independent of the CBF that has this set as its superlevel set. We thus write \(\mathcal{Z}\) to denote \(\mathcal{Z}_{h}\) without loss of generality. 
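As a quick illustration of definition (5), and of the fact that \(\mathcal{Z}\) is attached to the set \(\mathcal{C}\) rather than to a particular CBF or \(\alpha\), one can scan the boundary of the unit disk numerically for points where both \(\nabla h(x)f(x)\) and \(\|\nabla h(x)G(x)\|\) vanish; for the two running examples the scan recovers \(\mathcal{Z}=\{(\pm 1,0)\}\). The brute-force grid and tolerance below are our own choices, not part of the paper.

```python
import numpy as np

# Brute-force scan of the boundary of C (the unit circle) for points of
# Z = {x in bd(C) : grad h(x) f(x) = 0 and ||grad h(x) G(x)|| = 0}, Eq. (5).
grad_h = lambda x: np.array([-2.0 * x[0], -2.0 * x[1]])
f      = lambda x: np.array([x[1], 0.0])
G_ex4  = lambda x: np.array([[0.0], [1.0]])           # Example III.4
G_ex5  = lambda x: np.array([[0.0], [x[1] ** 2]])     # Example III.5

for name, G in (("III.4", G_ex4), ("III.5", G_ex5)):
    Z = set()
    for th in np.linspace(0.0, 2.0 * np.pi, 20001):
        x = np.array([np.cos(th), np.sin(th)])
        if abs(grad_h(x) @ f(x)) < 1e-9 and np.linalg.norm(grad_h(x) @ G(x)) < 1e-9:
            Z.add((round(float(x[0]), 3), round(float(x[1]), 3)))
    print(name, Z)    # both examples give {(1.0, 0.0), (-1.0, 0.0)}
```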
Lemma III.2 can now be readily proved: in fact, the hypotheses there imply that \(\mathcal{Z}\) is empty, and therefore, by Lemma IV.1, \(u^{*}\) is continuous on \(\mathcal{C}\). Having established that non-emptiness of the set \(\mathcal{Z}\) is what allows for potential discontinuity, one might hope that boundedness of \(u^{*}\) can be established for a weak CBF \(h\) by ensuring that \(\mathcal{Z}\) is empty. The next result shows that the latter is never the case. **Lemma IV.4** (Weak CBF Implies Possible Discontinuity).: _If \(h\) is a weak CBF, then \(\mathcal{Z}\) is nonempty._ Proof.: Define the sequence of sets \(\mathcal{D}_{n}\triangleq\{x\in\mathbb{R}^{n}\ |\ \mathrm{d}(x,\mathcal{C})<1/n\}\), where \(\mathrm{d}(x,\mathcal{C})\) is the distance function from \(x\) to set \(\mathcal{C}\), which is continuous, cf. [22, Thm. 3.1]. Note that \(\mathcal{C}\subset\mathcal{D}_{n}\) and \(\mathcal{D}_{n}\) is open for all \(n\in\mathbb{N}\). Since \(h\) is a weak CBF, for each \(n\in\mathbb{N}\), there exists \(x_{n}\in\mathcal{D}_{n}\setminus\mathcal{C}\) such that for all \(u\in\mathbb{R}^{m}\) and all class-\(\kappa\) functions \(\alpha\), \(\nabla h(x_{n})f(x_{n})+\alpha(h(x_{n}))+\nabla h(x_{n})G(x_{n})u<0\). This implies that necessarily \(\|\nabla h(x_{n})G(x_{n})\|=0\) and \(\nabla h(x_{n})f(x_{n})+\alpha(h(x_{n}))<0\). Consider the sequence \(\{x_{n}\}\). Since \(\mathcal{C}\) is compact, the closure of \(\mathcal{D}_{1}\), namely \(\bar{\mathcal{D}}_{1}\), is compact. Since \(\{x_{n}\}\subseteq\bar{\mathcal{D}}_{1}\), there exists, cf. [18, Thm. 3.6], a convergent subsequence of \(\{x_{n}\}\), denoted \(\{y_{n}\}\), whose limit is \(\bar{y}\). By the definition of \(\{y_{n}\}\), we have \(\mathrm{d}(y_{n},\mathcal{C})\to 0\), and by continuity, \(\mathrm{d}(\bar{y},\mathcal{C})=0\), and so \(\bar{y}\in\mathcal{C}\). Since \(h(y_{n})<0\) for all \(n\), it follows that \(h(\bar{y})\leq 0\), and therefore it must be that \(h(\bar{y})=0\), i.e., \(\bar{y}\in\partial\mathcal{C}\). Continuity and the fact that \(\|\nabla h(y_{n})G(y_{n})\|=0\) for all \(n\) implies \(\|\nabla h(\bar{y})G(\bar{y})\|=0\). Similarly, continuity and the fact that \(\nabla h(y_{n})f(y_{n})+\alpha(h(y_{n}))<0\) implies that \(\nabla h(\bar{y})f(\bar{y})+\alpha(h(\bar{y}))=\nabla h(\bar{y})f(\bar{y})\leq 0\). Since \(h\) is a CBF and \(\bar{y}\in\mathcal{C}\), we have \(\nabla h(\bar{y})f(\bar{y})+\alpha(h(\bar{y}))=\nabla h(\bar{y})f(\bar{y})\geq 0\). Therefore \(\nabla h(\bar{y})f(\bar{y})=0\) and thus, \(\bar{y}\in\mathcal{Z}\), implying \(\mathcal{Z}\neq\emptyset\). Fig. 1: Illustration of boundedness of the min-norm safe controller. Top (resp. bottom) plot corresponds to Example III.4 (resp., Example III.5). In each case, the unit circle is the superlevel set of the weak CBF \(h\), black arrows show the vector field \(f(x)\), red arrows show \(G(x)u^{*}(x)\), and the color map shows the magnitude of the input \(u^{*}\). Lemma IV.4 provides an important connection between the conditions for continuity presented in Section III. In fact, if the CBF is not strong, but weak (i.e., the condition of Lemma III.1 is not met), then Lemma IV.4 implies that the condition of Lemma III.2 is not satisfied either. ## V (Un)Boundedness Conditions For The Min-Norm Safe Controller This section identifies conditions to determine when the min-norm controller is bounded. 
For a compact safe set \(\mathcal{C}\), the controller can go unbounded only if approaching a state at which it is discontinuous (see e.g., Example III.5 for an illustration). From the exposition in Section IV, we know that the points of discontinuity of the min-norm controller are contained in \(\mathcal{Z}\), cf. (5). The following result provides computable sufficient conditions for (un)boundedness when approaching a point in \(\mathcal{Z}\). **Theorem V.1** ((Un)Boundedness Conditions of Min-Norm Controller).: _Let \(h\in C^{2}\) be a CBF with compact superlevel set \(\mathcal{C}\) and an associated \(\alpha\) that is differentiable at \(0\). Assume \(f\) and \(G\) are differentiable at \(\bar{x}\in\mathcal{Z}\) and let \(H_{h}(\bar{x})\), \(J_{f}(\bar{x})\), and \(J_{g_{i}}(\bar{x})\) denote the Hessian of \(h\) and the Jacobians of \(f\) and \(g_{i}\), respectively. Consider the linear equation_ \[Av=\begin{bmatrix}c_{1}\\ c_{2}\\ \mathbf{0}\end{bmatrix}, \tag{6}\] _with \(v\in\mathbb{R}^{n}\), \(c_{1},c_{2}\in\mathbb{R}\). Here, \(\mathbf{0}\) is the zero vector in \(\mathbb{R}^{m}\), \(A\triangleq\big{[}\nabla h(\bar{x})^{T}\quad\beta_{f}(\bar{x})\quad\beta_{G}( \bar{x})\big{]}^{T}\) and_ \[\beta_{f}(x) \triangleq H_{h}(x)f(x)+(J_{f}^{T}(x)+\alpha^{\prime}(h(x))I_{n}) \nabla h(x)^{T}\in\mathbb{R}^{n},\] \[\beta_{g_{i}}(x) \triangleq H_{h}(x)g_{i}(x)+J_{g_{i}}^{T}(x)\nabla h(x)^{T}\in \mathbb{R}^{n},\] \[\beta_{G}(x) \triangleq\big{[}\beta_{g_{1}}(x)\quad\ldots\quad\beta_{g_{m}}(x) \big{]}\in\mathbb{R}^{n\times m}.\] _Then, the following statements hold:_ 1. _if (_6_) has a solution_ \(v\) _with_ \(c_{1}\geq 0\) _and_ \(c_{2}<0\)_, then_ \(u^{*}\) _is not bounded as_ \(x\to\bar{x}\) _in_ \(\mathcal{C}\) _from the direction of_ \(v\)_, i.e.,_ \(u^{*}(\bar{x}+vt)\) _goes unbounded as_ \(t\to 0^{+}\)_._ 2. _if (_6_) does not have any non-trivial solution with_ \(c_{1}\geq 0\) _and_ \(c_{2}\leq 0\)_, then_ \(u^{*}\) _is bounded as it approaches_ \(\bar{x}\) _from all possible directions in_ \(\mathcal{C}\)_._ Proof.: The proof proceeds by examining the limit \(\limsup_{t\to 0}\|u^{*}(\bar{x}+vt)\|\) for \(v\in\mathbb{R}^{n}\). In doing so, we face the challenge that \(u^{*}\) is given by a piecewise expression that is generally discontinuous at \(\bar{x}\). In addition, when computing the limit, one finds an indeterminate form of the type \(0/0\). This leads us to the use of a particular form of L'Hopital's rule [18] that can handle the discontinuous piecewise expression and the presence of the \(\limsup\). For brevity, we use \(\bar{x}_{t}\triangleq\bar{x}+vt\), \(h_{G}(t)\triangleq\nabla h(\bar{x}_{t})G(\bar{x}_{t})\), \(N(t)\triangleq\nabla h(\bar{x}_{t})f(\bar{x}_{t})+\alpha(h(\bar{x}_{t}))\), and \(D(t)\triangleq\|h_{G}(t)\|\). According to (3b), \(\|u^{*}(\bar{x}_{t})\|=\frac{-N(t)}{D(t)}\) for \(\bar{x}_{t}\in\mathcal{D}_{-}\). (i) Let \(v\) be a solution of (6) with \(c_{1}\geq 0\) and \(c_{2}<0\). Because of the first row of (6), we have that \(\frac{d}{dt}h(\bar{x}_{t})=\nabla h(\bar{x})v=c_{1}\geq 0\). If \(\nabla h(\bar{x})v>0\), then by continuity, \(\nabla h(\bar{x}_{t})>0\) for small enough \(t\). Thus by [18, Thm. 5.11], \(h(\bar{x}_{t})>0\), i.e., \(\bar{x}_{t}\in\mathcal{C}\), for small enough \(t\). If \(\nabla h(\bar{x})v=0\), then \(v\) is tangential to \(\mathcal{C}\). Hence \(\bar{x}_{t}\) approaches \(\bar{x}\) from within \(\mathcal{C}\) or tangentially to it, meaning that \(v\) is a valid direction of approach to consider. 
The second row of (6) ensures that \(\frac{d}{dt}N(t)|_{t=0^{+}}=v^{T}\beta_{f}(\bar{x})=c_{2}<0\), which again by [18, Thm. 5.11] proves that \(N(t)<0\), i.e., \(\bar{x}_{t}\in\mathcal{D}_{-}\) by (3b), for sufficiently small \(t\). Hence, \(\lim_{t\to 0^{+}}\|u^{*}(\bar{x}_{t})\|=\lim_{t\to 0^{+}}\frac{-N(t)}{D(t)}\). Direct evaluation of this expression at \(t=0\) (where \(\bar{x}_{t}=\bar{x}\)) yields an indeterminate form of the type \(0/0\). We therefore resort to L'Hopital's rule [18, Thm. 5.13], which requires the existence of the limit of the derivative of the numerator \(-N(t)\) and denominator \(D(t)\). For the numerator, we have already established \(\lim_{t\to 0^{+}}\!\frac{d}{dt}N(t)=c_{2}\). As for the denominator, it is the norm of the differentiable function \(h_{G}(t)\), and its derivative exists at \(t\) where \(h_{G}(t)\neq 0\). But since \(\bar{x}_{t}\in\mathcal{D}_{-}\) for small enough \(t\), the CBF condition (2) ensures that \(h_{G}(t)\neq 0\) for sufficiently small \(t\). Thus, the derivative of the denominator exists for sufficiently small \(t>0\). A proof of the existence of the limit of this derivative \(\lim_{t\to 0^{+}}\frac{d}{dt}(D(t))=\lim_{t\to 0^{+}}v^{T}\beta_{G}(\bar{x}_{t})\frac{h_{G}(t)}{\|h_{G}(t)\|}\) follows. By Holder's inequality, \[\big{|}\frac{v^{T}\beta_{G}(\bar{x}_{t})h_{G}(t)}{\|h_{G}(t)\|}\big{|}\!\leq\!\|v^{T}\beta_{G}(\bar{x}_{t})\|\frac{\|h_{G}(t)\|}{\|h_{G}(t)\|}\!=\!\|v^{T}\beta_{G}(\bar{x}_{t})\|.\] Hence, using the last \(m\) rows of (6), the assumption of continuous differentiability, and the sandwich theorem for limits [23, Thm. 3.3.3], \(\lim_{t\to 0^{+}}\frac{d}{dt}D(t)=0\). Since the derivative of the numerator \(-N(t)\) converges to \(-c_{2}>0\) while the derivative of the denominator converges to \(0\), the quotient \(\frac{-N(t)}{D(t)}\), and hence \(\|u^{*}(\bar{x}_{t})\|\), grows unbounded as \(t\to 0^{+}\), which proves (i). (ii) We argue by contraposition: suppose that \(u^{*}\) is unbounded as it approaches \(\bar{x}\) from some direction \(v\) in \(\mathcal{C}\); we show that \(v\) is then a non-trivial solution of (6) with \(c_{1}\geq 0\) and \(c_{2}\leq 0\). In this case there exists a sequence \(\{\bar{x}_{\bar{t}_{i}}\}\subset\mathcal{D}_{-}\), with \(\bar{t}_{i}\to 0^{+}\), such that \(D^{\prime}(\bar{t}_{i})=v^{T}\beta_{G}(\bar{x}_{\bar{t}_{i}})\frac{h_{G}(\bar{t}_{i})}{\|h_{G}(\bar{t}_{i})\|}\to 0\). It remains to show that this implies \(v^{T}\beta_{G}(\bar{x})=\mathbf{0}^{T}\). We reason by contradiction and assume \(v^{T}\beta_{G}(\bar{x})\neq\mathbf{0}^{T}\). Without loss of generality, we can assume that the limit of \(\frac{h_{G}(\bar{t}_{i})}{\|h_{G}(\bar{t}_{i})\|}\), denoted \(\zeta\in\mathbb{R}^{n}\), exists (this can be done because \(\{\frac{h_{G}(\bar{t}_{i})}{\|h_{G}(\bar{t}_{i})\|}\}\) is a sequence from the set of unit vectors in \(\mathbb{R}^{n}\), which is compact, so there exists a convergent subsequence [18, Thm. 3.6]). This and the continuity of \(\beta_{G}\) imply that \(D^{\prime}(\bar{t}_{i})\to v^{T}\beta_{G}(\bar{x})\zeta=0\). Without loss of generality, assume \(\frac{\|h_{G}(\bar{t}_{i})\|}{\|h_{G}(\bar{t}_{i+1})\|}\to\infty\) (that this does not undermine generality is shown by Lemma 1.1(i)). Now, Lemma 1.1(ii) applied element-wise gives \[\frac{h_{G}(\bar{t}_{i})-h_{G}(\bar{t}_{i+1})}{\|h_{G}(\bar{t}_{i})\|-\|h_{G}(\bar{t}_{i+1})\|}\to\zeta. \tag{7}\] The sequence in (7) can be written as \[\frac{h_{G}(\bar{t}_{i+1})-h_{G}(\bar{t}_{i})}{\bar{t}_{i+1}-\bar{t}_{i}}\frac{\bar{t}_{i+1}-\bar{t}_{i}}{\|h_{G}(\bar{t}_{i+1})\|-\|h_{G}(\bar{t}_{i})\|}. \tag{8}\] Using the continuous differentiability of \(\nabla h\) and \(G\) at \(\bar{x}\), the first term of (8) satisfies \[\frac{h_{G}(\bar{t}_{i+1})-h_{G}(\bar{t}_{i})}{\bar{t}_{i+1}-\bar{t}_{i}}\to\left.\frac{d}{dt}(h_{G}(t))\right|_{t=0}=(v^{T}\beta_{G}(\bar{x}))^{T}\neq\mathbf{0},\] by hypothesis of contradiction. Consequently, the second term in (8) converges to a non-zero scalar, which we denote by \(a\). Therefore, \(\zeta=a(v^{T}\beta_{G}(\bar{x}))^{T}\). 
This implies that \(D^{\prime}(\bar{t}_{i})\to v^{T}\beta_{G}(\bar{x})\zeta=a\|v^{T}\beta_{G}(\bar{x})\|^{2}\neq 0\), which is a contradiction. Theorem V.1 provides sufficient conditions for boundedness of the min-norm controller at a point of possible discontinuity. Note that the second row of the matrix \(A\) in (6) is the gradient of \(\nabla h(x)f(x)+\alpha(h(x))\). Similarly, the \((2+i)^{\text{th}}\) row is the gradient of \(\nabla h(x)g_{i}(x)\). Each of the two equations \(\nabla h(x)f(x)+\alpha(h(x))=0\) and \(\nabla h(x)g_{i}(x)=0\) defines a differentiable \((n-1)\)-dimensional surface embedded in \(\mathbb{R}^{n}\). Thus, the existence of a solution \(v\) for (6) with \(c_{1}>0\) and \(c_{2}<0\) amounts to the existence of a vector that 1. points to the region in \(\mathcal{C}\) that requires non-zero control for safety, and 2. is tangent to the surfaces defined by \(\nabla h(x)G(x)=0\). This provides a geometric intuition for the conditions identified in Theorem V.1. Figure 2(a)-(b) illustrates them for a generic two-dimensional single-input system. We note that condition (ii) (with \(c_{2}\leq 0\)) in Theorem V.1 is _almost_ a negation of condition (i) (with \(c_{2}<0\)). This shows that (i) is almost a sufficient and necessary condition for unboundedness of \(u^{*}\). The gap between both conditions stems from the fact that L'Hopital's rule is indeterminate when the derivatives of both the numerator and the denominator approach \(0\). A geometric interpretation of this situation is depicted in Figure 2(c). **Corollary V.2** (Condition for Boundedness of Min-Norm Controller on \(\mathcal{C}\)).: _If condition (ii) in Theorem V.1 holds for all \(\bar{x}\in\mathcal{Z}\), then \(u^{*}\) is bounded on \(\mathcal{C}\)._ We revisit now Examples III.4 and III.5 in light of the above results. Notice that in both cases \(\mathcal{Z}=\{(1,0),(-1,0)\}\). Taking \(\alpha\) with \(\alpha^{\prime}(0)=1\), (6) at \(\bar{x}=(1,0)\) becomes \[-2\begin{bmatrix}1&0\\ 1&1\\ 0&d\end{bmatrix}v=\begin{bmatrix}c_{1}\\ c_{2}\\ 0\end{bmatrix},\] where \(d=1\) for Example III.4 and \(d=0\) for Example III.5. It is clear that the only possible solution of this system of equations with \(c_{1}\geq 0\) and \(c_{2}\leq 0\) when \(d=1\) is the trivial solution \(v=\mathbf{0}\). Thus, by Theorem V.1(ii), \(u_{1}^{*}\) from Example III.4 is bounded as its argument approaches \(\bar{x}\), as expected from our analysis of Example III.4. However, \(v=(0,1)\) solves the system with \(d=0\), \(c_{1}=0\geq 0\), and \(c_{2}=-2<0\). By Theorem V.1(i), \(u_{2}^{*}\) from Example III.5 goes unbounded as it approaches \(\bar{x}\) from the direction of \(v\), which is tangential to \(\mathcal{C}\). This is also consistent with our analysis of Example III.5. **Remark V.3** (When Unbounded Min-Norm Is Inevitable).: The system of linear equations in (6) has a coefficient matrix \(A\) with \(m+2\) rows and \(n\) columns. A non-trivial solution \(v\) to (6) exists if the first two rows of \(A\) are linearly independent and the remaining rows are linearly independent of the first two. This shows that, if the system data is such that the matrix \(A\) satisfies these independence properties, then an unbounded min-norm controller is inevitable. \(\bullet\) ## VI Conclusions We have studied the continuity and boundedness properties of the min-norm safe feedback controller for general control-affine systems within the framework of control barrier functions (CBF). 
After re-interpreting the known results in the literature in light of the notion of strong and weak CBFs, we have characterized the set of possible points of discontinuity of the minimum-norm safe controller and shown that it only depends on the safe set (and not on the specific CBF or the sensitivity to the violation of the CBF condition). Based on this characterization, we have generalized the known conditions to guarantee the continuity of the min-norm safe controller and identified sufficient conditions for its (un)boundedness. Our results have important implications for the synthesis of safe feedback controllers subject to hard constraints on control effort. Future work will explore questions about the existence of continuous safe controllers when the min-norm controller is discontinuous but bounded, the modification of CBFs that admit safe controllers when no control bounds are present to incorporate such limits, and the design of discontinuous (but bounded) safe controllers. ## Acknowledgments Mohammed Alyaseen would like to thank Pol Mestres for pointing out Remark III.3(i). This work was partially supported by NSF Award RI IIS-2007141.
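As a closing numerical illustration of Theorem V.1 (a sketch of ours, not the authors' code), the snippet below assembles the matrix \(A\) of (6) at \(\bar{x}=(1,0)\) for Examples III.4 and III.5, using \(\alpha^{\prime}(0)=1\), and evaluates the tangential candidate direction \(v=(0,1)\) discussed in Section V.

```python
import numpy as np

# Assemble A = [grad h(xbar); beta_f(xbar); beta_G(xbar)^T] of Eq. (6) at xbar = (1, 0)
# for the two running examples, and check the direction v = (0, 1).
xbar = np.array([1.0, 0.0])
H_h  = -2.0 * np.eye(2)                       # Hessian of h(x) = 1 - x1^2 - x2^2
grad = np.array([-2.0, 0.0])                  # grad h(xbar)
f    = np.array([xbar[1], 0.0])               # f(xbar)
J_f  = np.array([[0.0, 1.0], [0.0, 0.0]])     # Jacobian of f(x) = (x2, 0)
beta_f = H_h @ f + (J_f.T + 1.0 * np.eye(2)) @ grad   # alpha'(0) = 1

cases = {
    "III.4": (np.array([0.0, 1.0]), np.zeros((2, 2))),            # g(xbar), J_g(xbar)
    "III.5": (np.array([0.0, xbar[1] ** 2]),
              np.array([[0.0, 0.0], [0.0, 2.0 * xbar[1]]])),
}
v = np.array([0.0, 1.0])                      # tangential approach direction
for name, (g, J_g) in cases.items():
    beta_g = H_h @ g + J_g.T @ grad
    A = np.vstack([grad, beta_f, beta_g])     # equals -2 [[1, 0], [1, 1], [0, d]]
    c1, c2, c3 = A @ v
    print(name, "\n", A, "\n c =", (c1, c2, c3))
    # III.4 (d = 1): c3 = -2 != 0, and only the trivial v solves (6) with c1 >= 0,
    # c2 <= 0, so Theorem V.1(ii) predicts boundedness.
    # III.5 (d = 0): (c1, c2, c3) = (0, -2, 0), so Theorem V.1(i) predicts blow-up.
```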
2307.08382
Predicting Battery Lifetime Under Varying Usage Conditions from Early Aging Data
Accurate battery lifetime prediction is important for preventative maintenance, warranties, and improved cell design and manufacturing. However, manufacturing variability and usage-dependent degradation make life prediction challenging. Here, we investigate new features derived from capacity-voltage data in early life to predict the lifetime of cells cycled under widely varying charge rates, discharge rates, and depths of discharge. Features were extracted from regularly scheduled reference performance tests (i.e., low rate full cycles) during cycling. The early-life features capture a cell's state of health and the rate of change of component-level degradation modes, some of which correlate strongly with cell lifetime. Using a newly generated dataset from 225 nickel-manganese-cobalt/graphite Li-ion cells aged under a wide range of conditions, we demonstrate a lifetime prediction of in-distribution cells with 15.1% mean absolute percentage error using no more than the first 15% of data, for most cells. Further testing using a hierarchical Bayesian regression model shows improved performance on extrapolation, achieving 21.8% mean absolute percentage error for out-of-distribution cells. Our approach highlights the importance of using domain knowledge of lithium-ion battery degradation modes to inform feature engineering. Further, we provide the community with a new publicly available battery aging dataset with cells cycled beyond 80% of their rated capacity.
Tingkai Li, Zihao Zhou, Adam Thelen, David Howey, Chao Hu
2023-07-17T10:42:21Z
http://arxiv.org/abs/2307.08382v2
# Predicting Battery Lifetime Under Varying Usage Conditions from Early Aging Data ###### Abstract Accurate battery lifetime prediction is important for preventative maintenance, warranties, and improved cell design and manufacturing. However, manufacturing variability and usage-dependent degradation make life prediction challenging. Here, we investigate new features derived from capacity-voltage data in early life to predict the lifetime of cells cycled under widely varying charge rates, discharge rates, and depths of discharge. Features were extracted from regularly scheduled reference performance tests (i.e., low rate full cycles) during cycling. The early-life features capture a cell's state of health and the rate of change of component-level degradation modes, some of which correlate strongly with cell lifetime. Using a newly generated dataset from 225 nickel-manganese-cobalt/graphite Li-ion cells aged under a wide range of conditions, we demonstrate a lifetime prediction of in-distribution cells with 15.1% mean absolute percentage error using no more than the first 15% of data, for most cells. Further testing using a hierarchical Bayesian regression model shows improved performance on extrapolation, achieving 21.8% mean absolute percentage error for out-of-distribution cells. Our approach highlights the importance of using domain knowledge of lithium-ion battery degradation modes to inform feature engineering. Further, we provide the community with a new publicly available battery aging dataset with cells cycled beyond 80% of their rated capacity. lithium-ion battery lifetime hierarchical machine learning prediction open data ## Context and Scale Extending the lifetime of lithium-ion batteries is essential for improving their economic and environmental impact. However, measuring battery lifetime can greatly delay product design because cells can sometimes take years to reach their end of life in accelerated laboratory aging tests. Researchers and engineers need quick and easily obtainable cell lifetime diagnostic signals to rapidly validate products and cell designs. Here, we demonstrate a new method for predicting the lifetime of cells operating under widely varying conditions using measurements from early life. These measurements, taken during the first three weeks of testing, quantify a cell's rate of degradation and correlate strongly with lifetime. Our method can be used to predict the lifetime of batteries under a wide range of operating conditions, and could potentially be extended to different chemistries. Although the method requires full-life training data, there are many possible applications for the trained model, such as screening of new cells, or estimates of relative performance between different cell types. ## 1 Introduction Understanding the long-term degradation of lithium-ion batteries is crucial for their optimal manufacturing, design, and control [1, 2]. However, repeatedly assessing cell performance via aging experiments is a time- and cost-intensive task [3]. Manufacters and researchers need quick and accurate methods to screen long-term performance and quantify the impact of new designs and control changes without having to cycle cells to the end of life (EOL) each time a new question arises. Models using data from early life could significantly shorten the time needed to make accurate predictions of long-term degradation [4], and this could lead to rapid screening of new battery performance and optimization of charging protocols [5, 6, 7]. 
The idea that lifetime can be predicted using measurements from the early stages of battery aging experiments has its roots in research from over a decade ago by J. Dahn and researchers at Dalhousie University, who were investigating the impact of new electrolyte additives and electrode designs on battery performance. In late 2009, they published a paper describing how high precision measurements of coulombic efficiency during the first few cycles could be used to predict cell lifetime and rank it qualitatively against other cells [8]. Coulombic efficiency is an important performance metric, and it is calculated as the discharge-to-charge capacity ratio, where an ideal value of unity indicates perfect cyclic efficiency. Measuring cell coulombic efficiency with an error of \(<0.01\%\) can indicate cell-to-cell differences caused by different rates of undesirable side reactions that lead to capacity fade. Using purpose-built high precision equipment, the Dalhousie team published a paper in 2011 that compared long-term cycling data (\(>750\) cycles) with predicted lifetimes extrapolated from short-term (\(<500\) hours) high-precision coulombic efficiency measurements [9]. Since this work, many new studies have been published on 'early life prediction'. In 2013, the Dalhousie University group published another paper demonstrating the lifetime ranking of 160 Lion cells with various electrolyte additives, using high precision coulombic efficiency measurements from the first 50 cycles of data [10]. The coulombic efficiency measurements strongly correlated with the cells' lifetimes. However, many researchers and industry professionals do not have access to high precision machines for testing. Furthermore, it would be even more useful to predict lifetimes using early-life measurements made during faster cycling experiments and under a broader range of operating conditions, enabling the technology to be deployed in more research areas and even for cells operating in the field. Research by Baumhofer et al. [11] and Harris et al. [12] investigated alternative approaches not requiring the use of a high precision cycler. Baumhofer et al. developed a lifetime prediction model on 48 cells cycled under identical conditions [11]. Hundreds of early-life features extracted from impedance spectra, pulse characterization tests at different states of charge, and standard capacity tests were reduced to a set of 24 features and used for prediction. The model using 24 features was accurate within 16 cycles, however, further analysis showed that model accuracy was highly dependent on the number of features used, with more features generally being better, suggesting the model may possibly be overfitting the small dataset (\(N=48\)). Harris et al. examined the failure statistics of 24 cells cycled under identical conditions and established a weak correlation between the cells' capacity at cycle 80 and the capacity at cycle 500 [12]. These works suggest simpler and more easily obtainable early-life features might be found to correlate with eventual lifetime. Severson et al. [5] in 2019 demonstrated an early life prediction model using features extracted from the discharge capacity vs. voltage \((Q(V))\) curves during regular cycling. The feature extraction method was unique, quantifying the cells' degradation rates by tracking the early-life variation of their \(Q(V)\) curves between cycles 10 and 100, referred to as \(\Delta Q_{100-10}(V)\). The approach was also used in follow-up work by Attia et al. 
[6] to accelerate an experimental campaign to optimize the constant current portion of a fast charging protocol. The researchers in these papers generated a large battery aging dataset from 169 lithium-iron-phosphate/graphite (LFP) cells cycled under various fast charging protocols. This was made publicly available, and many other researchers have investigated methods of further improving predictive performance and feature extraction techniques using this data [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. Notably, Paulson et al. [22] demonstrated accurate early life prediction on six different metal oxide cathode chemistries. Fermin-Cueto et al. [20] investigated predicting the knee point (when capacity begins to decrease rapidly) in a cell's capacity degradation curve using early-life features. Similarly, Li et al. [21] demonstrated a prediction model capable of projecting the entire capacity degradation trajectory from early-life features. Despite this growing body of research, many fundamental questions about battery life modeling remain unanswered. One fundamental issue is that, in order to train machine learning models to predict lifetime from early-life cycles, data from the _entire_ lifetime is required. Therefore these approaches are best suited to applications such as screening cells after manufacturing, or relative comparisons, rather than quantitatively absolute predictions. A second issue is a lack of publicly available battery-lifetime data that covers a wide range of conditions. The dataset published in [5, 6] was specifically generated to study high-rate fast charging protocols for LFP cells, leaving the discharge rate and depth of discharge fixed. Even though the dataset is relatively large compared to existing publicly available datasets (\(N=169\) cells), the limited range of operating conditions, in this case, induced a single dominant degradation mode (loss of active material at the anode or negative electrode, "\(\mathrm{LAM_{NE}}\)"), causing all of the capacity degradation trajectories to have very similar shapes, and perhaps making lifetime prediction easier [23]. While the relationships between cell operating conditions and the corresponding degradation modes are well understood [1, 3, 24, 25], it remains unclear how the \(\Delta Q(V)\) feature transfers to cells of different chemistries and to situations where multiple interacting degradation modes are present. This is especially the case for cells that experience milder degradation resulting in less obvious changes in the \(Q(V)\) curve. Furthermore, all cells in the dataset from [5, 6] were cycled under a fixed depth of discharge, making it easy to extract features from any cycle along the cell's degradation trajectory. However, in practice, cells are rarely subjected to full depth-of-discharge cycles, so there is a need to explore alternative methods of collecting early-life feature data and validating results using periodic reference performance tests or other means. In this work, we investigate new early-life features derived from capacity-voltage data that can be used to predict the lifetimes of cells cycled under a wide range of charge rates, discharge rates, and depths of discharge. To study this, we generated a new battery aging dataset from 225 nickel-manganese-cobalt/graphite cells, cycled in groups of four per condition, under a much wider range of operating conditions than existing publicly available datasets [26]. 
The cells in our dataset exhibit larger variations in their capacity degradation trajectories than previous open datasets, driven by the interactions and accumulations of various component-level degradation mechanisms [1, 23]. To predict the lifetimes of cells experiencing different degradation pathways accurately, we introduce new early-life features extracted from the differential voltage (\(dV/dQ\) vs. \(Q\)) and incremental capacity (\(dQ/dV\) vs. \(V\)) data gathered during regular weekly reference performance tests (RPTs). The RPTs, two complete cycles at full depth of discharge, enable consistent feature extraction and lifetime prediction for cells that normally cycle at fractional depths of discharge, some as low as 4.0%. Using as little as the first 5% of the aging data, we achieve a prediction error of 22% \(\mathrm{MAPE}\) on the lifetime. Including up to 15% of the entire cell lifetime data, we achieve an average prediction error of 2.8 weeks \(\mathrm{RMSE}\) and 15.1% \(\mathrm{MAPE}\) on in-distribution test sets when testing the new features in traditional machine learning models built with regularized linear regression. Given that our dataset has a hierarchical structure (i.e., the 'group' level and the 'cell' level) in nature, we also explore the possibility of applying hierarchical Bayesian linear modeling to predict lifetime, which achieves better extrapolation performance on out-of-distribution samples, viz. 7.3 weeks \(\mathrm{RMSE}\) and 21.8% \(\mathrm{MAPE}\) lifetime prediction error. The major contributions of this work are * the introduction of a set of new early-life features derived from differential voltage and incremental capacity data, * new approaches for tackling the challenge of feature extraction caused by the wide variation of DoDs, * demonstration of the improvement in accuracy possible using hierarchical Bayesian linear models compared to traditional regression models for lifetime prediction when cells are cycled with varying conditions, and * a large and unique battery aging dataset consisting of 225 NMC cells cycled under a wide range of operating conditions, enabling researchers without access to battery testing equipment to study lifetime modeling. ## 2 Dataset Generation Publicly available datasets such as those from NASA [27, 28], CALCE [29, 30], and Sandia National Lab [31] contain cells of different chemistries cycled under a range of charge rates, discharge rates, and temperatures. These datasets are frequently used in research studies since they comprehensively report capacity, internal resistance (NASA and CALCE), voltage, current, and temperature. However, the relatively small size of these datasets (roughly 30 cells per group) makes investigating machine learning-based approaches to early life prediction challenging. On the other hand, datasets such as those from the Toyota Research Institute [5, 6] and Argonne National Lab [22] contain many more cells (> 150 cells). However, they focus on a limited range of operating conditions--fast charging and symmetric C/2 cycling, respectively--making it difficult to build machine learning models that generalize across cycling conditions. In light of this, we designed our battery aging dataset to study more cells under a broader range of operating conditions than current publicly available datasets [26]. Our dataset comprises 225 cells cycled in groups of four to capture some of the intrinsic cell-to-cell aging variability [32]. 
A unique feature of our dataset is the many capacity degradation trajectories that reflect different accumulated degradation modes induced by the various operating conditions. These trajectories, shown in Fig. 1, exhibit different one-, two-, and three-stage degradation trends driven by the interaction and accumulation of hidden, threshold, and snowballing degradation modes [23]. These varying trends produce cell lifetimes from 1.5 to 60.9 weeks. The following sections describe the experimental details and testing procedures used to generate the dataset. ### Cell and Tester Specifications The Li-ion cells used in this study were commercial 502030 size Li-polymer cells with nickel-manganese-cobalt (NMC) as the positive electrode and graphite as the negative electrode, manufactured by Honghaosheng Electronics in Shenzhen, China. The rated capacity is 250 mAh (giving 1C as 250 mA), and the operating voltage ranges from 3.0 to 4.2 V. All cells were tested on two 64-channel Neware BTS4000 battery testers, in thermal chambers set at 30 \({}^{\circ}\)C. ### Battery Aging Test Design The aging experiments were designed around three main stress factors that impact battery lifetime: charge rate (\(\mathrm{C_{chg}}\)), discharge rate (\(\mathrm{C_{dis}}\)), and depth of discharge (\(\mathrm{DoD}\)). To track the full discharge capacity of cells with partial depths of discharge cycling, we periodically ran RPTs that measured cell capacity and gathered complete \(Q(V)\) data for feature engineering. Each RPT consisted of two cycles performed at slow rates (C/2 and C/5) to capture cell voltage response while minimizing the impact of the cell kinetics. Before beginning the aging tests, an initial RPT was conducted to determine the beginning-of-life health. Aging tests consisted of 1 week of cycling followed by an RPT, and they were repeated until cell capacity decreased below 200 mAh (80% of the rated capacity). Fig. 2 outlines the test conditions and test sequence used to generate the dataset. As previously mentioned, four cells were cycled at each test condition. We refer to a specific cell using its group number and cell identifier, e.g., G7C3, where the numbers following each letter indicate the group and cell, respectively. Initially, we aimed to study two stress factors: \(\mathrm{DoD}\) and \(\mathrm{C_{chg}}\). Conditions were selected using a grid search, with the discharge rate fixed at 0.5C for all cells. Later, we expanded the dataset to study the third stress factor, \(\mathrm{C_{dis}}\). Additional conditions were then selected using random sampling. The charge/discharge rates and depths of discharge were sampled Figure 1: Overview of battery aging test conditions and capacity data. **a**, 3D scatter plot showing train-test split and cycling conditions used – each point represents conditions for a group of four cells, and marker color indicates a data subset used to generate prediction results in Sec. 4. **b**, Discharge capacity fade curves for all 225 NMC/graphite cells plotted past 80% their rated capacity (250 mAh); color of each curve is scaled by cell lifetime. **c**, Histogram of the cell lifetimes at end-of-life (EOL) using 80% of rated capacity as threshold. Figure 2: Summary of the cycling and RPT conditions. After conducting an initial RPT, the aging test sequence consisted of a week of cycling, followed by an RPT; this was repeated until cell capacity fell below 200 mAh (80% of rated capacity). evenly from the ranges 0.5C to 3C and 25% to 100%, respectively. 
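The two-stage test design can be summarized programmatically. The sketch below enumerates hypothetical cycling conditions in the same spirit: a grid over charge C-rate and depth of discharge at a fixed 0.5C discharge, plus additional conditions sampled uniformly from the stated ranges. The specific grid levels and the number of extra conditions are placeholders and not the values used in the study; only the ranges (0.5C to 3C, 25% to 100%) come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid portion of the design: charge C-rate x depth of discharge, fixed 0.5C discharge.
grid = [(c_chg, 0.5, dod)
        for c_chg in (0.5, 1.0, 2.0, 3.0)
        for dod in (0.25, 0.50, 0.75, 1.00)]

# Expanded portion: charge rate, discharge rate, and DoD sampled evenly from the ranges.
extra = [(round(rng.uniform(0.5, 3.0), 2),
          round(rng.uniform(0.5, 3.0), 2),
          round(rng.uniform(0.25, 1.00), 2))
         for _ in range(20)]

conditions = grid + extra      # one (C_chg, C_dis, DoD) tuple per group of four cells
print(len(conditions), conditions[:3])
```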
The effect of varying \(\mathrm{C_{chg}}\) and \(\mathrm{DoD}\) is visualized in Fig. 3. The cycling conditions for all cell groups can be found in Supplementary Information Table S1. However, the depth of discharge design values do not exactly match the measured depths of discharge from the cycling experiments. When we programmed the cycling protocols, we determined the cutoff voltages using a reference discharge capacity vs. voltage curve from a cell cycled at C/2. Unfortunately, the voltage hysteresis that the cells experience under C/2 discharge causes the cells to reach the cutoff voltage sooner than expected, thus causing the difference between the measured and designed depth of discharge. For the remainder of this paper, we present and discuss the depth of discharge using the actual measured values since they more accurately represent the test conditions the cells experienced. ### Overview of Li-ion Battery Aging Under Group-Varying Conditions To showcase the many unique capacity-fade trajectories present in the dataset, we plot capacity-fade curves from groups of cells whose cycling conditions make up a complete grid spanning a range of charge C-rates and depths of discharge. This subset of 9 groups of cells, shown in Fig. 3, was cycled with different charging rates and depths of discharge but a constant discharge C-rate of 0.5C in all cases. Figure 3: Example capacity fade trajectories for groups cycled under different charging C-rates and DoDs. The values inside parentheses indicate charging C-rate, discharging C-rate, and mean DoD, respectively. We observe that groups with high charging rates and moderate-to-low depth of discharge (e.g., G8, G18, G19) experienced three-stage capacity fade. Their capacity initially decreases quickly, then stabilizes into a slower linear fade, and then accelerates again towards the end of life. More frequently, we observe a two-stage capacity fade trend from cells in some groups (e.g., G1, G3, G6, G16). However, in a few cases, we also observe a one-stage capacity trend for cells cycled at full depth of discharge and high charging C-rates (e.g., G9, G11). Our dataset's diverse capacity degradation trajectories make early-life feature engineering challenging because cells experiencing rapid capacity fade during the first few weeks of aging can sometimes end up having moderately long lifetimes. For example, G18 cells in Fig. 3 show rapid capacity fade during the first few weeks of cycling but eventually had lifetimes greater than 20 weeks. On the other hand, G16 cells show much slower capacity fade during the first few weeks but have lifetimes of less than 20 weeks. Additionally, we observe considerable in-group lifetime variation. Groups G1, G6, and G18 in Fig. 3 show a large variation in lifetime for cells operating under the same test conditions. Cell aging variability can be caused by testing equipment inaccuracies, manufacturing variations, and even internal defects, and is highly undesirable when designing battery-powered products. We conducted a statistical analysis to elucidate the relationship between the three cell-aging stress factors and lifetime variability. We calculated the in-group standard deviation of cell lifetime as a function of each aging stress factor, as well as the mean group lifetime (Fig. 4). This reveals that only the depth of discharge (Fig. 4c) has a statistically significant relationship with the observed cell-to-cell lifetime variability. 
However, we also observe that cells with longer lifetimes have higher lifetime variability (Fig. 4d). These two results make it difficult to determine the true source of lifetime variability--it might result either from shallow depths of discharge or increased cell lifetime. Figure 4: Depth of discharge has a strong impact on lifetime variability. Here, the standard deviation of group lifetimes is plotted vs. **a,** charging C-rate, **b,** discharging C-rate, **c,** depth of discharge, **d,** the mean group lifetime. Smaller p-values indicate greater statistical significance of the fitted value of the slope term in the regression fit. ## 3 Methodology Prediction of lifetime from early data is more challenging when there are multiple varying stress factors, because this leads to diverging capacity trajectories. Our approach, outlined in Fig. 5, differs from the prior art [9, 10, 8, 5] in several ways. First, to apply early prediction to cells cycled under different depths of discharge, we extract features from periodic RPTs instead of regular cycling data. This means that the discharge voltage curves obtained from periodic RPTs are complete and consistent for every cell, making feature extraction more consistent. Second, we develop new features based on partial voltage windows of \(Q(V)\) curves and their derivatives (differential voltage and increment capacity data). Using a new feature extraction method(see details in Sec. 3.1), we find features that better correlate with cell lifetime for our dataset than existing features reported in the literature [5, 15, 19]. Additionally, we explore using cycling protocol information (\(\mathrm{C_{chg}/C_{dis}/DoD}\)) as features to predict lifetime, establishing a link between the two. All extracted features are reduced to a highly predictive subset using a feature selection method (see Sec. 3.2). Then, the selected features are used as input to a machine learning model to predict cell lifetime. In what follows, we outline our approach to feature engineering for early life prediction and discuss the challenges of applying existing feature engineering methodologies proven on LFP/Gr to our NMC/Gr cells that are cycled under a wider range of operating conditions. Last, we introduce hierarchical Bayesian models for early life prediction. ### Degradation-Informed Feature Engineering Initially, we extracted features previously reported to correlate strongly with cell lifetime [5, 15, 19]. We adopt the notation \(\Delta Q_{\mathrm{w3-w0}}(V)\) to describe the features, where the subscripts \(\mathrm{w3}\) and \(\mathrm{w0}\) correspond to data obtained from the RPTs from weeks three and zero, respectively. Preliminary testing of these well-established early-life features reveals that they do not fully explain the variance in our dataset. This is illustrated in Fig. 6a, where we extract the \(\mathrm{var}(\Delta Q(V))\) feature reported by Severson et al. [5] using discharge data from RPTs \(\mathrm{var}(\Delta Q_{\mathrm{w3-w0}}(V))\) and plot it against lifetime, revealing a large unexplained variance in the predicted lifetimes. To understand why this occurs, Figure 5: High-level overview of our approach. Unlike existing approaches for early prediction, we extract features from periodic reference performance tests instead of regular cycling data. In this example, we extract a feature from a partial voltage window of incremental capacity that is highly correlated with lifetime. 
From this and other features, we build a machine learning model to predict the lifetimes of new unseen cells. consider two cells (G6C4 and G20C1) that have similar feature values but vastly different lifetimes. In this case, even though the \(\Delta Q(V)\) curves have the same variance, they do not have the same shape and location (Fig. 6b). Fig. 6c shows these two cells' differential voltage curves (\(dV/dQ(Q)\)) from week three and week zero, and it can be seen that the group six cell (G6C4) experienced a significant capacity loss during this time, evident by the leftward shift in the right-most asymptote. This capacity loss was not observed for the group 20 cell (G20C1). Other noticeable changes exist in the \(dV/dQ(Q)\) curves that differ between the cells, indicating additional but more subtle degradation modes are present. However, these differences in the evolution of the \(Q(V)\) curve during early life are not captured by the feature \(\mathrm{var}(\Delta Q_{\mathrm{w3-w0}}(V))\), causing the unexplained variance in the dataset. While we only showed an example in Fig. 6 for this particular feature, \(\mathrm{var}(\Delta Q_{\mathrm{w3-w0}}(V))\), the unexplained variance in the data persists using most other early-life features we tested. Typically, it is not a requirement that all model input features exhibit a strong correlation with cell lifetime, but finding a few features that do correlate well is generally advantageous because it can improve model fit and accuracy. In light of this, we explored extracting features from differential voltage and incremental capacity curves using partial voltage ranges in order to capture the diverse degradation trends observed in our dataset more accurately. #### 3.1.1 Incremental Capacity Features Extracting features from incremental capacity curves is a natural extension to using the \(Q(V)\) discharge curve since it is defined over the same fixed voltage range for every cell. After fitting a spline and downsampling each cell's \(Q(V)\) curve to 1000 points, we calculated incremental capacity \((dQ/dV(V))\) as a finite difference approximation (difference quotient) of the first derivative of \(Q(V)\) based on measurements of the \(Q\) and \(V\) time series [5]. It is well documented that incremental capacity analysis is an effective method for cell degradation diagnostics [1, 33, 34]. Measuring changes to the incremental capacity curve over lifetime enables diagnosis of different degradation modes, specifically loss of lithium inventory, and loss of active material in each electrode. Hence, we calculate core summary statistics of \(\Delta dQ/dV(V)\) over a partial voltage range so as to focus the feature extraction on specific areas that may correspond to specific degradation modes. This approach is inspired by work in [13], where the authors showed a strong correlation between the time a cell spends in a specific voltage range and its capacity loss, although here the incremental Figure 6: Well-known early-life features do not explain the variance in our dataset. **a**, Cell lifetime for 225 NMC cells plotted as a function of \(\mathrm{var}(\Delta Q_{\mathrm{w3-w0}}(V))\); Pearson correlation coefficient -0.686. The two cells highlighted have similar values of \(\mathrm{var}(\Delta Q_{\mathrm{w3-w0}}(V))\) but very different lifetimes. **b**, Difference between discharge capacity curves as a function of voltage between week three and zero for the two cells highlighted in **a**. 
**c**, Differential voltage (\(dV/dQ\)) curves for the two cells, weeks zero and three. Leftward movement of the right-most asymptote of cell G6C4 indicates capacity loss. capacity curve is a result of degradation rather than a cause. Instead of manually specifying the voltage range to calculate the summary statistics, we exhaustively searched the entire 3.0 to 4.2 V range in increments of 0.01 V, with a minimum window size of 0.02 V searching for the maximum Pearson correlation coefficient. Fig. 7 summarizes the voltage range search results using the mean summary statistic. We find the voltage range that produces the highest linear correlation with cell lifetime is a mid-range where the upper and lower voltage limits are centered around prominent peaks in the incremental capacity curves at 3.60 V and 3.90 V. Fig. 7b shows that the change in incremental capacity in this range is inversely proportional to lifetime. This new feature shows a much stronger correlation with cell lifetime and better explains the variance in our dataset compared with the traditional feature \(\mathrm{var}(\Delta Q_{\mathrm{w3-w0}}(V))\). This new feature likely captures the rate of active material loss during early life. This idea is supported by degradation diagnostics literature which shows that changes in the intensity of the incremental capacity (mAh/V) curve at constant voltage correspond to a loss of active material [1, 33, 35, 36]. The new feature captures the change in incremental capacity intensity, calculated as the mean change in mAh/V over the middle voltage range, \(\mathrm{mean}\left(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.60\mathrm{V}-3.90\mathrm{V }}(V)\right)=\mathrm{mean}\left(dQ/dV_{\mathrm{w3}}^{3.60\mathrm{V}-3.90\mathrm{ V}}(V)-dQ/dV_{\mathrm{w0}}^{3.60\mathrm{V}-3.90\mathrm{V}}(V)\right)\), see Fig. 7b. To clarify the relationship between the peaks in the differential voltage curve and cell health, we constructed half-cells from electrode materials obtained from disassembling a fresh cell. We cycled the half-cells at a slow rate (C/20) and reconstructed a full-cell pseudo-open circuit voltage curve. The results presented in Fig. 8a. However, the negative electrode data is poor because half-cell assembly was challenging. During assembly, we had to remove a water-soluble coating covering the negative electrode material by scratching it off, as using solvents would have damaged it. This process is inexact, and it produced poor electrode material, which then yielded poor results during cycling. Thus, we were unable to attribute peaks on incremental capacity curves to the specific side of electrodes. Future work on exploring degradation mechanisms in this dataset and the relationship between the early life features and the dominant degradation modes will help us determine which electrode this feature corresponds to. Lifetime modeling work on NMC/Gr cells by Smith et al. [37] showed that the capacity fade rate due to cycling tracked nearly linearly with the square-root-of-cycling throughput, calculated as Figure 7: Features based on specific voltage ranges have improved lifetime prediction power. **a**, Heatmap showing correlation of \(\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{V_{1}-V_{2}}(V))\) with cell lifetime as a function of the lower and upper voltage limits \(V_{1}\) and \(V_{2}\). **b**, Incremental capacity curves from weeks three and zero for three representative cells; the change in these between the voltage limits over the first three weeks is shaded. 
**c**, Cell lifetime plotted as a function of optimized feature \(\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.60V-3.90\mathrm{V}}(V))\), Pearson correlation coefficient \(-0.848\). \((\mathrm{C_{chg}DoD})^{0.5}\), where \(\mathrm{C_{chg}}\) is charging C-rate and DoD is depth of discharge for the experiments. This metric is described as tracking the concentration gradient of lithium ions in the cathode active material and is a proxy for diffusion-induced stress [37, 38, 39]. In Sec. 3.1.3, we further investigate this feature as a model input for early-life prediction and as a condition-level grouping variable for our hierarchical Bayesian modeling approach (Sec. 3.4). The remaining unexplained variance in the new feature-lifetime correlation is likely due to the unavoidable influence of a decreasing lithium inventory on the shape of the \(dQ/dV(V)\) curves. Decreases in lithium inventory can cause shifts in the voltages where peaks occur [34]. This causes a small misalignment between the curves at weeks three and zero that varies cell-to-cell and introduces variation in the incremental capacity feature extraction. Destructively analyzing specific cells from the dataset would help to determine more concretely what the new feature \(\mathrm{mean}\left(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.60\mathrm{V}-3.90\mathrm{V }}(V)\right)\) is capturing, but this was outside the current scope. The remaining features extracted from incremental discharge capacity curves are based on the previously identified voltage range of 3.60 \(-\) 3.90 V. We use the upper and lower voltage limits imposed during cycling to create two more ranges, 3.00 \(-\) 3.60 V and 3.60 \(-\) 4.20 V. We then extract two features from each voltage range using the mean and variance summary statistics. In total, we extracted six features from \(\Delta dQ/dV(V)\), two from each of the three voltage ranges using the mean and variance summary statistics. #### 3.1.2 Differential Voltage Features Like incremental capacity, differential voltage \((dV/dQ(Q))\) analysis can effectively diagnose different component-level degradation modes in Li-ion cells [40, 41, 42]. However, differential voltage analysis has yet to be widely used as part of automated feature extraction methods because curve manipulation and automatic peak detection are challenging. Unlike incremental capacity, the differential voltage is defined as a function of cell capacity, which can change cycle-to-cycle. The changing capacity makes curve manipulation and feature extraction via vector operations more difficult, as any two curves will not be the same length. Furthermore, the peaks and valleys of cells experiencing fast degradation often merge, confusing maxima and minima detection algorithms. Despite these challenges, we investigated extracting four capacity-based features from differential voltage curves. The four features, \(Q^{\mathrm{DVA},1}\) to \(Q^{\mathrm{DVA},4}\) in Fig. 8c, are designed to capture the evolution Figure 8: An overview of the peak identification and tracking method for differential voltage feature engineering. **a**, Experimentally obtained positive and negative half-cell voltage as a function of the state of charge, their difference, and a full-cell curve for comparison. **b**, the incremental capacity as a function of the state of charge for each curve in **a**. The observable peaks in the half-cell curves indicate which electrode they originate from. 
**c**, Differential voltage as a function of cell capacity for cell one from group one (G1C1), illustrating the change in cell capacity and peak location during aging. of the differential voltage curve during early life and are derived from the locations of peaks. The features capture the rate of change of different capacities and the relative shifts in the differential voltage curves, calculated as \(\Delta Q^{\rm DVA,1}_{\rm w3-w0}=Q^{\rm DVA,1}_{\rm w3}-Q^{\rm DVA,1}_{\rm w0}\). The four differential voltage features are designed to quantify capacity losses attributed to each electrode and capture shifts in the relative electrode balancing [43]. Keil et al. [43] suggest certain capacities can be estimated to determine the change in electrode balancing and the loss of active materials at the positive and negative electrodes (\(\rm LAM_{PE}\) and \(\rm LAM_{NE}\), respectively). A change in the cathode capacity is captured through \(Q^{\rm DVA,2}\), since all the features of interest in this range are cathode specific. Similarly, the anode capacity is captured through \(Q^{\rm DVA,3}\). The different balances of the two electrodes are captured through \(Q^{\rm DVA,1}\) and \(Q^{\rm DVA,4}\), tracking the anode and cathode peaks, respectively. Each of the four features is included in the feature selection process.

#### 3.1.3 Constant Voltage Charging Times and Other Features

In addition to features extracted from capacity-voltage curves and their derivatives, we derive a set of features from direct cell measurements of time and capacity. A benefit of these features is that they can be obtained with lower sampling frequency, lower measurement precision, and less data processing than the aforementioned curve-difference features, making them suitable for implementation on battery health monitoring devices. The first feature extracted is the time spent in the constant-voltage (CV) charging step during each RPT, denoted \(\rm CV\;Time_{wi}\). We also calculate the difference between two weeks' constant-voltage charging times, denoted \(\Delta\rm CV\;Time_{w3-w0}\). A panel plot illustrating the extraction of these features and their correlation with cell lifetimes is included in the Supplementary Information. The constant-voltage step occurs as the final stage of charging. Data collected from CV charging steps have successfully been used to estimate the state of health of Li-ion batteries in recent literature [44, 45, 46]. The extracted CV features reflect the interaction between capacity loss (which decreases the overall charging time) and increasing resistance to intercalation due to the degradation of the active electrode material. Additionally, we extract features from the discharge capacity in RPTs, such as the cells' initial capacity \(Q_{\rm w0}\) and the capacity fade between weeks three and zero \(\Delta Q_{\rm w3-w0}\), capturing the initial state of the cell and its relative change during early life, respectively. The last type of feature we consider for early-life prediction is \(\rm Stress_{chg}=C_{chg}^{\phantom{\rm chg}0.5}\rm DoD^{0.5}\). This feature captures the square-root-of-cycling charge throughput and is a proxy for diffusion-induced stress in the electrode active materials [37, 38, 39]. In addition to the charge-based feature, we also calculate a discharge feature, \(\rm Stress_{dchg}=C_{dchg}^{\phantom{\rm dchg}0.5}\rm DoD^{0.5}\).
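To make the construction of these stress features concrete, a minimal sketch in Python is given below; the table of cycling conditions and its column names are illustrative assumptions rather than the actual data pipeline. The averaged and multiplicative combinations introduced in the next paragraph follow directly from the two columns computed here.

```python
import pandas as pd

# Hypothetical cycling conditions; column names and values are assumptions for illustration.
conditions = pd.DataFrame({
    "C_chg":  [0.5, 1.0, 3.0],    # charging C-rate
    "C_dchg": [0.5, 2.0, 1.0],    # discharging C-rate
    "DoD":    [1.00, 0.80, 0.25], # fractional depth of discharge measured in the first week
})

# Square-root-of-throughput proxies for diffusion-induced stress.
conditions["Stress_chg"]  = conditions["C_chg"] ** 0.5 * conditions["DoD"] ** 0.5
conditions["Stress_dchg"] = conditions["C_dchg"] ** 0.5 * conditions["DoD"] ** 0.5
print(conditions)
```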
Further, to capture the effects of different charge and discharge rates in a single feature, we calculate an average stress feature as \(\rm Stress_{avg}=(\rm Stress_{chg}+\rm Stress_{dchg})/2\) and a multiplicative stress feature as \(\rm Stress_{mult}=\rm Stress_{chg}\cdot\rm Stress_{dchg}\). For all features, we use the measured DoD from the first week of cycling in the calculation. A unique characteristic of these features is that they require no cell-specific measurements, assuming the calculation of DoD is accurate and accounts for voltage hysteresis. For this reason, these features are excellent candidates as condition-level grouping variables in our hierarchical Bayesian modeling approach to early prediction (see Sec. 3.4).

### Feature Selection

We have so far focused on features that quantify the rate of degradation and correlate strongly with lifetime. However, simply using all the extracted features as inputs to a machine learning model may yield poor results for two reasons. First, some features are strongly correlated with each other, a situation known as multicollinearity. A model trained with collinear features can be sensitive to minor changes in the feature values and may extrapolate poorly [47]. Second, while our dataset is large compared to existing publicly available datasets (225 cells), it is still relatively small from a machine learning perspective. Small datasets require special care to avoid over-fitting and improve generalization performance on unseen test data. This is especially the case when the number of data points is not significantly larger than the number of features (i.e., when \(N_{\rm data}\gg N_{\rm features}\) does not hold). Therefore, it is crucial to select a subset of highly predictive features before model training [48, 49]. To reduce the number of input features, we perform step-wise forward selection using a linear model and repeated cross-validation with \(\rm RMSE_{EOL}\) as the evaluation metric. Starting with a null model, one feature is added to the model at each step until the number of selected features reaches a preset threshold (\(N=10\)). During each step, all features are tested in the model, and the feature that reduces the mean of the cross-validation \(\rm RMSE_{EOL}\) the most is selected and added to the model for the next step. Simultaneously, we evaluate the selected model at each step using the standard deviation of the cross-validation \(\rm RMSE_{EOL}\). We then select the feature set that balances a low mean and a small standard deviation of the cross-validated \(\rm RMSE_{EOL}\). In practice, we tend toward selecting fewer features so that the resulting model is less complex and extrapolates better.

### Machine Learning for Early Prediction

To predict cell lifetime, we formulate a regression problem with the extracted early-life features \(\mathbf{X}=\left[\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{m}\right]\) as inputs and the measured cell lifetimes \(\mathbf{y}=\left[y_{1},y_{2},...,y_{n}\right]^{T}\) in logarithmic scale as outputs, where \(m\) is the number of early-life features and \(n\) is the number of cells. Each element of \(\mathbf{X}\) is a column vector containing the specific features selected through the technique introduced in Sec. 3.2.
We assume that the lifetime is a linear function of the early-life features, giving \[\hat{y}=f(\mathbf{X})=\boldsymbol{\beta}_{0}+\mathbf{X}\boldsymbol{\beta}_{1}, \tag{1}\] where \(\boldsymbol{\beta}_{0}\) is an \(n\times 1\) column vector of intercepts and \(\boldsymbol{\beta}_{1}\) is a vector of coefficients, one for each feature, \(\boldsymbol{\beta}_{1}=\left[\beta_{1},\beta_{2},...,\beta_{m}\right]^{T}\). To find the coefficients of this equation, we formulate an optimization problem with elastic net regularization, which is a combination of \(\rm L_{1}\) and \(\rm L_{2}\) penalization. The objective function is \[\hat{\boldsymbol{\beta}}=\operatorname*{argmin}_{\boldsymbol{\beta}_{0},\boldsymbol{\beta}_{1}}\left(\|\mathbf{y}-\boldsymbol{\beta}_{0}-\mathbf{X}\boldsymbol{\beta}_{1}\|_{2}^{2}+\lambda\left(\frac{1-\alpha}{2}\|\boldsymbol{\beta}_{1}\|_{2}^{2}+\alpha\|\boldsymbol{\beta}_{1}\|_{1}\right)\right), \tag{2}\] where \(\alpha\) and \(\lambda\) are hyperparameters that control the balance between the \(\rm L_{1}\) and \(\rm L_{2}\) penalties and the magnitude of regularization, respectively. To select optimal values of \(\alpha\) and \(\lambda\), we perform repeated cross-validation using randomized dataset splits.

### Hierarchical Bayesian Models for Early Prediction

As a comparison and contrast to the method in the previous section, we also consider hierarchical Bayesian models (HBMs) for lifetime prediction. These have a layered structure that can model changes in the feature-target relationship throughout the dataset. HBMs have been applied to model naturally structured data in research fields ranging from ecology to sociology, psychology, and computer vision [50, 51].

#### 3.4.1 Clustering for Hierarchical Modeling

For our problem of early life prediction, features can be viewed as coming from two levels: the 'cycling condition' level and the 'individual cell' level. Condition-level features relate to user-defined test protocols rather than measured data. For our dataset, the charge/discharge C-rates and depth of discharge (\(\rm C_{chg}\), \(\rm C_{dchg}\), \(\rm DoD\)), and any mathematical combination of these, are all condition-level features. In contrast, features that require specific cell measurements during cycling are considered cell-level features. Features such as \(\rm mean\left(\Delta dQ/dV_{\rm w3-w0}^{3.60\rm V-3.90\rm V}(V)\right)\) and \(\rm var\left(\Delta Q_{\rm w3-w0}(V)\right)\) are examples of cell-level features that are unique to each cell. From Fig. 3 we observe, as one might expect, that differences in capacity degradation trajectories and lifetimes result from different externally imposed cycling conditions. Further, certain degradation modes may only appear beyond a threshold within the conditions; for example, lithium plating may only occur in cells with charge rates exceeding a critical threshold [23]. To validate the hypothesis that condition-level features have a strong impact on the relationship between cell-level features and lifetime, we calculate the condition-level feature \(\mathrm{Stress}_{\mathrm{avg}}=(\mathrm{C_{chg}}^{0.5}\mathrm{DoD}^{0.5}+\mathrm{C_{dchg}}^{0.5}\mathrm{DoD}^{0.5})/2\) described in Sec. 3.1.3. This represents the average diffusion-induced stress that a cell experiences [37]. Fig. 9 shows a scatter plot of the cell-level feature \(\mathrm{mean}\left(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.60\mathrm{V}-3.90\mathrm{V}}(V)\right)\) vs. lifetime, colored by \(\mathrm{Stress}_{\mathrm{avg}}\).
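For concreteness, the step-wise forward selection of Sec. 3.2 and the elastic-net fit of Eqs. (1)-(2) map directly onto standard tooling. The sketch below uses scikit-learn; the feature table, fold counts, and hyperparameter grid are placeholders rather than the exact settings used in this work, and scikit-learn's `l1_ratio` and `alpha` play the roles of \(\alpha\) and \(\lambda\) in Eq. (2).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, ElasticNetCV
from sklearn.model_selection import RepeatedKFold, cross_val_score

def forward_select(X, y, max_features=10, cv=None):
    """Greedy step-wise forward selection minimizing cross-validated RMSE (Sec. 3.2)."""
    if cv is None:
        cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=0)
    remaining, selected, history = list(X.columns), [], []
    for _ in range(max_features):
        scores = {}
        for feat in remaining:
            cols = selected + [feat]
            rmse = -cross_val_score(LinearRegression(), X[cols], y, cv=cv,
                                    scoring="neg_root_mean_squared_error")
            scores[feat] = (rmse.mean(), rmse.std())
        best = min(scores, key=lambda f: scores[f][0])
        selected.append(best)
        remaining.remove(best)
        history.append((list(selected), *scores[best]))
    return history  # pick the step balancing a low mean and a small std of RMSE

# Elastic net of Eq. (2), with hyperparameters chosen by repeated cross-validation.
enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], n_alphas=100,
                    cv=RepeatedKFold(n_splits=5, n_repeats=5, random_state=0))
# enet.fit(X[chosen_features], np.log(y))   # lifetimes are fitted on a log scale
```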
As \(\mathrm{Stress}_{\mathrm{avg}}\) decreases, the slope of the cell-level feature-lifetime relationship becomes steeper. However, the changing trend is not as clear when analyzing the data on a log-log plot. Normally, the reason for using the log transform on both the feature and the target is to increase the Pearson linear correlation coefficient, as a higher linear correlation will generally improve model prediction performance. This pursuit of a one-for-all linear relationship between the feature and the target hides the data's differences and hierarchical structure caused by the various cycling conditions. To take advantage of an HBM's ability to model the change in feature-target relationship across different levels, we investigate clustering cell data based on cycling conditions, quantified by average stress (\(\mathrm{Stress}_{\mathrm{avg}}\)). In general, we expect cells with similar average stress levels to share the same feature-lifetime relationship, enabling the HBM to better fit the dataset. We adopt a constrained K-means clustering algorithm [52], which is an improved version of the traditional K-means algorithm that imposes minimum and maximum cluster size limits. The clustering score \(\mathrm{SSE}=\sum_{i=0}^{N}\left(x_{i}-c_{i}\right)^{2}\), which describes the sum of squared distances between sample points and their assigned centroid, is used to evaluate the influence of the number of clusters on the clustering results. Fig. 10a shows the \(\mathrm{SSE}\) as a function of the number of clusters. According to the empirical elbow rule [53], we select \(K=4\) clusters. From Figs. 10b and 10c, we observe two sources of variability that affect lifetimes. The first is the cross-cluster lifetime variability, which arises from differences in usage, and is measured as a difference in \(\mathrm{Stress}_{\mathrm{avg}}\). The other source of lifetime variability arises from in-cluster differences due to manufacturing variability and cycling tester variability. #### 3.4.2 Bayesian Hierarchical Linear Model Similar to the HBM used in former work [54], our model structure has two levels and is shown in Fig. 11. The first level considers the cycling condition parameters. As mentioned previously, cells are first divided into four clusters (indexed from 0) based on their average stress \(\mathrm{Stress}_{\mathrm{avg}}\), calculated using the cycling condition parameters. At this level, we aim to find the mapping (parameterized by \(\mathbf{\gamma},\sigma\)) between condition-level features (\(\mathbf{g_{j}}\)) and the cell-level regression parameters (\(\mathbf{\theta_{j}},\sigma_{j}\)). \[\begin{split}\theta_{j}&=\mathbf{\gamma}^{\top}\mathbf{g_{j}} \\ \sigma_{j}&\sim\mathrm{HalfCauchy}(\sigma)\end{split} \tag{3}\] After the coefficients (\(\mathbf{\theta_{j}},\sigma_{j}\)) are decided for each cluster, the individual cell-level regression is built as the second level of the HBM. The cell-level regression uses individual health features (\(\mathbf{x_{ji}}\)) and coefficients (\(\mathbf{\theta_{j}},\sigma_{j}\)) to give lifetime predictions (\(\mathbf{y_{ji}}\)) for individual cells. \[y_{ji}\sim N(\mathbf{\theta_{j}^{\top}x_{ji}},\sigma_{j}^{2}) \tag{4}\] The overall training objective is to infer posterior distributions for both the condition-level model and the individual cell-level models, \(P\left(\mathbf{\theta_{j}}\mid Y_{j}\right)\) and \(P\left(\mathbf{\gamma}\mid\{Y\}\right)\) respectively, where \(Y_{j}\) represents Figure 11: Overview of HBM structure. 
Model parameters can be classified as either individual-level (\(\mathbf{\theta_{j}},\mathbf{\sigma_{j}}\)) or conditional-level (\(\mathbf{\gamma},\mathbf{\sigma}\)); \(j\) represents cycling condition group index, \(i\) represents individual cell index, \(y_{ji}\) represents lifetime of \(i\)th cell in \(j\)th cycling group. The two-level structure allows the individual cell-level feature-label (\(x_{ji}-y_{ji}\)) relationship to vary with cycling condition based on cycling condition level features (\(\mathbf{g_{j}}\)). Figure 10: Overview of clustering results. **a**, Influence of number of clusters on clustering score \(\mathrm{SSE}\). **b**, Histogram of stress factor \(\mathrm{Stress_{avg}}\) colored by cluster. **c** Corresponding lifetime distribution for each cluster. lifetimes from only the \(j\)th group but \(\{Y\}\) represents data from all lifetimes. More details about the training procedure and hyper-priors are included in 5. ### Model Evaluation Metrics We use two standard error metrics to evaluate the lifetime prediction accuracy of our approaches, namely, mean absolute percentage error (\(\mathrm{MAPE_{EOL}}\)) and root mean squared error (\(\mathrm{RMSE_{EOL}}\)), both calculated using the measured and predicted values of cell lifetime on a linear scale. The metrics are \[\mathrm{MAPE_{EOL}}=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{\mathbf{y}_{i}-\hat{ \mathbf{y}}_{i}}{\mathbf{y}_{i}}\right|\times 100\% \tag{5}\] \[\mathrm{RMSE_{EOL}}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\mathbf{y}_{i}-\hat{ \mathbf{y}}_{i})^{2}} \tag{6}\] where \(\mathbf{y}\) are the measured cell lifetimes, \(\hat{\mathbf{y}}\) are the predicted cell lifetimes, and \(n\) is the number of cells. ## 4 Early Prediction Results and Discussion ### Dataset Partitioning and Feature Selection Dataset partitioning was done at the group rather than the cell level, for three reasons. First, practical battery aging tests for product validation typically cycle multiple cells under the same conditions to capture the aging variability due to manufacturing. Second, it is desirable to build an early prediction model to predict the lifetimes of cells cycled under previously untested conditions. Finally, although building an early prediction model with cells tested under rapidly accelerated aging conditions is useful in minimizing the time and costs of collecting aging data, one cannot preemptively know the lifetime (before tests), so grouping must be done using an alternative indicator of cell lifetime. Since the depth of discharge is the dominant cycling stress factor impacting the battery lifetimes in our aging dataset (Fig. 11(a)), this was used to determine the dataset partitioning. We first separate our dataset into a high-DoD region and a low-DoD region, with a boundary at 40% depth of discharge (Fig. 12). In the high-DoD region, we further divide the data into a training set and an in-distribution high-DoD test set. The high-DoD test set is used to evaluate the model's prediction accuracy for cells with conditions similar to the ones the model was trained on. Last, we assign all data in the low-DoD region (\(<40\%\)) to a second test set used to test the model's ability to extrapolate to unseen test conditions. The dataset split is also visualized in Fig. 1(a), where each axis is one of the three cycle aging stress factors (\(\mathrm{C_{chg}/C_{dis}/DoD}\)), and the marker color indicates the data subset that the group belongs to. 
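For reference, the two evaluation metrics of Sec. 3.5, Eqs. (5)-(6), translate directly into a few lines of numpy (with lifetimes on a linear scale); this is a plain transcription rather than part of the original tooling.

```python
import numpy as np

def mape_eol(y_true, y_pred):
    """Mean absolute percentage error, Eq. (5), in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def rmse_eol(y_true, y_pred):
    """Root mean squared error, Eq. (6), in the same units as the lifetimes (weeks)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```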
The training set contains cells with lifetimes ranging from 3.7 to 36.6 weeks, and the high-DoD test set has cells with lifetimes between 5.2 and 31.6 weeks. On the other hand, the low-DoD test set is more diverse, with lifetimes ranging from 9.7 to 60.9 weeks. Histograms of cell lifetimes for each data subset are visualized in Fig. 11(b). After extracting the features outlined in Sec.3, we perform feature selection on the training dataset following the method described in Sec. 3.2. All extracted features are outlined in the Appendix. To avoid poor performance on the test datasets due to over-fitting, we perform a study of five repeated five-fold cross-validation using up to 10 features. Repeated cross-validation is intended to minimize the statistical randomness caused by a single five-fold cross-validation partition. The trends of the mean and standard deviation of cross-validation \(\mathrm{RMSE_{EOL}}\) of this trial are reported in Fig. 11(c), and the selected feature in each step is listed in Table 1. The model with two features, namely \(\log\left(\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.9V}(V)\right)\) and \(\log\left(\left|\Delta\mathrm{CV}\;\mathrm{Time}_{\mathrm{w3-w0}}\right|)\right|\)), has the lowest run-to-run variance and relatively low mean error \(\mathrm{RMSE}_{\mathrm{EOL}}\). Adding a third feature to the set, \(\mathrm{DoD}\), produces a model with lower mean \(\mathrm{RMSE}_{\mathrm{EOL}}\) but increases the run-to-run variation. For a more comprehensive evaluation, we compare the results of models trained using both two and three features. ### Feature and Model Comparison To compare different models, we initially establish a pair of baseline models. The first baseline model is a dummy model that does not use any input features or have any trainable parameters, and instead predicts the mean cell lifetime of the training set for all cells. This is a good way to determine if a more complex model is truly learning new information from the input data, or instead only appears to be learning because of similar train/test dataset distributions that lead to similar error metrics. When tested on the two test datasets, the dummy model achieves \(\mathrm{MAP}_{\mathrm{EOL}}\) of 31.52% \begin{table} \begin{tabular}{l l l} \hline \hline **Step Number** & **Selected Feature** & **Description** \\ \hline 1 & \(\log(\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.9V}(V))\) & Best incremental capacity feature from Sec. 3.1.1 Fig. 6(c) \\ 2 & \(\log(\left|\Delta\mathrm{CV}\;\mathrm{Time}_{\mathrm{w3-w0}}\right|)\) & Change in CV hold time (see Sec. 3.1.3) \\ 3 & \(\mathrm{DoD}\) & Depth of discharge \\ 4 & \(\Delta Q_{\mathrm{w3-w0}}^{1}\) & Change in DVA-based capacity \(Q^{\mathrm{DVA},1}\) (see Sec. 3.1.2) \\ 5 & \(\mathrm{C}_{\mathrm{chg}}\)\({}^{0.5}\mathrm{DoD}\)\({}^{0.5}\) & Charge-induced stress (see Sec. 3.1.3) \\ 6 & \(\mathrm{C}_{\mathrm{chg}}\) & Charging C-rate \\ 7 & \(\log(\mathrm{var}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.6V}(V))\) & Variance of low-voltage incremental capacity feature (see Sec. 3.1.1) \\ 8 & \(\Delta Q_{\mathrm{w3-w0}}^{3}\) & Change in DVA-based capacity \(Q^{\mathrm{DVA},3}\) (see Sec. 3.1.2) \\ 9 & \(\log(\left|\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.6V}(V)\right|)\) & Mean of low-voltage incremental capacity feature (see Sec. 3.1.1) \\ 10 & \(\log(\left|\mathrm{mean}(\Delta Q_{\mathrm{w3-w0}}(V)\right|)\) & Mean of \(\Delta Q(V)\) vector (see Sec. 
3.1) \\ \hline \hline \end{tabular} \end{table} Table 1: Step-wise Forward Search Results Figure 12: **a. Scatter plot of mean group lifetime vs. DoD; marker color indicates train/test subset. b. Histogram showing each subset’s distribution of cell lifetimes. c. Mean and standard deviation of \(\mathrm{RMSE}_{\mathrm{log(EOL)}}\) for five-fold repeated cross-validation on the ten candidate models.** and 47.54% on the high-DoD and low-DoD test sets, respectively. The error metrics for all models tested are shown in Table 2. The second baseline model is built using only the cycling condition parameters as input features. This model predicts lifetimes without using cell-specific aging measurements. This model achieves a \(\mathrm{MAPE_{EOL}}\) of 19.01% and 23.72% on the high DoD and low DoD test sets, respectively. The substantial decrease in prediction error over the dummy model shows that the usage parameters convey a significant amount of information that can be used to predict lifetime accurately. This result is expected, as a great deal of battery lifetime modeling work [37, 55, 56] has already explored the strong connection between usage and degradation. However, only using condition-level cycling features does not account for intrinsic cell-to-cell variability. Hence, the next set of models we tested included cell-level features extracted from the early aging data. The first cell-level features model is the "discharge model" described in [5] and Section 3.1. This model, and all other models built on cell-level inputs, use features extracted from the RPTs of weeks zero and three, which is just under 18% of the average lifetime. The main feature included is \(\mathrm{var}(\Delta Q_{\mathrm{w3-w0}}(V))\), however, we found that this did not completely describe the variance in our dataset. When tested on the high and low DoD test datasets, the discharge model achieved 28.03% and 24.80% \(\mathrm{MAPE_{EOL}}\), respectively. The performance on the two test datasets is slightly worse than the cycling condition model, yet still better than the dummy model, indicating that the features used in the discharge model do carry useful information, but are not optimal for our dataset (see Table 2). The remaining models we compare are the degradation-informed and hierarchical Bayesian models. We refer to our elastic net models as _degradation-informed_ in Table 2 because of the newly developed degradation-based features used as model inputs. Both the degradation-informed and HBM models use the same sets of input features, and for thoroughness, we compare models built using two and three features each. Compared to the cycling condition baseline, the two-feature elastic net model shows decreased \(\mathrm{MAPE_{EOL}}\) on the high-DoD test of 16.0% and a slight increase in error on the low-DoD test set to 24.4%. However, the \(\mathrm{RMSE_{EOL}}\) of the low-DoD test set drops considerably from 9.8 to 7.8 weeks. For the HBMs, we observe small increases in the training and the high-DoD test errors while a noticeable improvement in the low-DoD test errors over the degradation-informed models using the same set of features. 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Model** & \(N\)**Features** & \multicolumn{3}{c}{**MAPE [\%]**} & \multicolumn{3}{c}{**RMSE [weeks]**} \\ & & Training & High DoD & Low DoD & Training & High DoD & Low DoD \\ \hline Dummy Model & 0 & 35.0 & 31.5 & 47.5 & 6.5 & 4.8 & 18.5 \\ Cycling Conditions & 3 & 24.8 & 19.0 & 23.7 & 4.0 & 3.3 & 9.8 \\ Discharge Model [5] & 5\({}^{*}\) & 23.9 & 28.0 & 24.8 & 4.6 & 4.7 & 11.5 \\ Degradation-informed & 2 & 17.3 & 16.0 & 24.4 & 3.2 & 3.0 & 7.8 \\ Degradation-informed & 3 & 16.5 & 15.1 & 33.0 & 3.1 & 2.8 & 9.7 \\ HBM & 2\({}^{\dagger}\) & 18.6 & 16.9 & 21.8 & 3.3 & 3.1 & 7.3 \\ HBM & 3\({}^{\dagger}\) & 17.4 & 15.8 & 24.1 & 3.1 & 2.9 & 7.5 \\ \hline \hline \end{tabular} * The discharge model [5] contains six features, with one of them being the difference between the maximum capacity and capacity at cycle two, \(\Delta Q_{\mathrm{max-2}}\). However, this feature cannot be calculated for our dataset due to the partial depth of discharge cycling and the continuously decreasing capacity-fade curves for all cells and has thus been omitted. * The number of features listed refers to the number of cell-level input features. For both HBMs, a single cycling condition-level feature is used for grouping cells, and, as indicated in the table, either two or three cell-level features are used for regression. \end{table} Table 2: Prediction errors for selected models tested using the high- and low-DoD test datasets. For both the degradation-informed and hierarchical models, we observe that including the third feature decreases model prediction error on the training and high-DoD test datasets but increases error for the low-DoD test dataset. When the third feature is added, both models over-fit the training dataset and exhibit poor extrapolation capability to the low-DoD test dataset where the cells have longer lifetimes. Regardless, the HBM trained with three features still performs better when predicting the low-DoD test set compared with its elastic net counterpart. Generally, by comparing the evaluation metrics of the two models (degradation-informed model and HBM), we find that the HBM has better generalizability to the low-DoD test set, but at the cost of slightly higher training and high-DoD test errors. The large improvement in performance observed for models using cell-level (as opposed to only using cycling condition features) features prompts us to further investigate why the feature \(\log(\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.9V}(V))\) explains cell-to-cell variability better than other features. Firstly, it is more accurate to use measured health metrics from individual cells in operation to predict their lifetime. This reveals the intrinsic cell-to-cell variability that could cause different aging behaviors under identical cycling conditions. Secondly, this optimized feature, which likely captures how much loss of active material happens during early life, has a balanced representation of the variability within the group and among the entire dataset. In summary, we find that the best feature \(\log(\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.9V}(V))\) explains the cell-to-cell variability well for a majority of cells. The remaining variance in the feature-lifetime correlation may be contributed jointly by measurement inaccuracy and unexplained manufacturing variability. 
Hence, our analysis of the results suggests that a predictive early-life feature should capture the variability introduced by the difference in cycling conditions and information about intrinsic cell-to-cell variation that causes different performances under identical loads. Also, our feature engineering methodology (Sec. 3.1.1) can be extended to find good features for other cell chemistries. #### 4.2.1 Time Dependence of Input Features Motivated by the need to predict cell lifetime as early as possible, we performed a study varying the time frame from which the features are extracted to understand the impact on model accuracy. Unfortunately, a testing error during week four caused irreversible data loss for a large batch of cells, so week four data is omitted from this study. Using the degradation-informed elastic net model with two features, we vary the RPTs from which the features are extracted and record the test errors for the high and low-DoD test datasets. The results are shown in Fig. 13b. First, we analyze the accuracy trend with the starting week fixed to week zero (i.e., \(w_{i}=0\)). Under this setting, the prediction errors on the high-DoD dataset consistently decrease as the time between RPTs increases. However, the prediction errors for the low-DoD test set are found to slightly increase with increasing time-frame around weeks five and six \(w_{j}=5,6\). This is likely because many cells experience rapid degradation after week five/six, which alters the feature-lifetime relationship for cells with short lives. This causes the model to change its fit, decreasing its prediction accuracy on long-lifetime cells. Second, we analyze the accuracy trend by looking at the time between any two RPTs. Along the diagonal, the delta between any two RPTs is one week. Under these conditions, we observe a substantial increase in model prediction error on both the high- and low-DoD test sets compared with counterparts toward the upper left corner (i.e., models with features extracted with intervals longer than one week). This suggests a minimum time interval of \((w_{j}-w_{i})\geq 2\) is required to accurately estimate the rate of degradation inside the cell from early-life features. Finally, we observe that the model prediction error on the low-DoD test set continuously increases with increasing starting week \(w_{i}\). This could be an effect of optimizing the incremental capacity feature (Sec. 3.1.1) using data from weeks three and zero. The optimal voltage range for this feature may change with the RPTs used and was not accounted for in this study. Figure 14: Overview of HBM results. **a**, True vs. predicted lifetimes using the optimal two features extracted from weeks three and zero \((\mathrm{w}_{3}-\mathrm{w}_{0})\), with embedded histogram showing prediction residuals. **b**, Predictions for each cluster with 2 standard deviations as the corresponding error bar for each sample. The embedded histograms show a summary of error bars Figure 13: Overview of prediction results for _degradation-informed_ elastic net model with two input features. **a**, True and predicted lifetimes using features extracted from weeks three and zero \((\mathrm{w}_{3}-\mathrm{w}_{0})\) with embedded histogram showing prediction residuals. **b**, The \(\mathrm{RMSE}_{\mathrm{EOL}}\) and \(\mathrm{MAPE}_{\mathrm{EOL}}\) error metrics as a function of week numbers from which the early-life features, denoted \((w_{j}-w_{i})\). Week four data is omitted from the study due to a testing error. 
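Before turning to the HBM results, the two-level model of Eqs. (3)-(4) can be written compactly in a probabilistic-programming framework. The sketch below uses PyMC as one possible implementation; the priors, hyper-prior scales, and variable names are placeholders, so it conveys the structure of the model rather than the exact specification used in this work.

```python
import numpy as np
import pymc as pm

# Assumed inputs (placeholders):
#   g       : (n_clusters, n_cond_features)  condition-level features per cluster
#   X       : (n_cells, n_cell_features)     cell-level early-life features
#   y       : (n_cells,)                     log lifetimes
#   cluster : (n_cells,)                     cluster index of each cell

def build_hbm(g, X, y, cluster):
    n_clusters, n_cond = g.shape
    n_cells, n_feat = X.shape
    X_aug = np.column_stack([np.ones(n_cells), X])          # prepend an intercept column
    with pm.Model() as model:
        # Condition level, Eq. (3): cluster features g_j set the cell-level coefficients theta_j.
        gamma = pm.Normal("gamma", mu=0.0, sigma=1.0, shape=(n_cond, n_feat + 1))
        theta = pm.Deterministic("theta", pm.math.dot(g, gamma))   # (n_clusters, n_feat + 1)
        sigma_scale = pm.HalfNormal("sigma_scale", sigma=1.0)      # placeholder hyper-prior
        sigma = pm.HalfCauchy("sigma", beta=sigma_scale, shape=n_clusters)
        # Cell level, Eq. (4): linear regression with cluster-specific intercept and slopes.
        mu = pm.math.sum(theta[cluster] * X_aug, axis=1)
        pm.Normal("y_obs", mu=mu, sigma=sigma[cluster], observed=y)
    return model

# with build_hbm(g, X, y, cluster):
#     idata = pm.sample()   # posterior over gamma, theta_j, sigma_j
```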
#### 4.2.2 Analysis of HBM Results The probabilistic nature of HBMs enables us to extract a deeper understanding by considering both the mean and the uncertainty of lifetime predictions. Assuming individual cluster fitting parameters and noise variance, \(\mathbf{\theta_{j}}\) and \(\sigma_{j}\) respectively, are independent, the posterior predictive distribution can be written as \[p\left(y_{j}^{*}\mid Y_{j}\right)=\iint p\left(\sigma_{j}\mid Y_{j}\right)p \left(\mathbf{\theta_{j}}\mid Y_{j}\right)p\left(y_{j}^{*}\mid\mathbf{\theta_{j}}, \sigma_{j}\right)d\mathbf{\theta_{j}}d\sigma_{j}. \tag{7}\] For a point-wise prediction, one can estimate the mean value of \(p\left(y_{j}^{*}\mid Y_{j}\right)\). Table 2 lists the performance of the HBM built using two different feature sets. The first uses two cell-level features, \(\log(\left|\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.9V}(V)\right|)\) and \(\log(\Delta\mathrm{CV}\;\mathrm{Time_{w3-w0}})\), and achieves 3.08 weeks \(\mathrm{RMSE}\) and 16.88% \(\mathrm{MAPE}\) for the high-DoD test set, which is almost the same as the performance of the degradation-informed model using the same feature set. While, for the low-DoD test set, the HBM achieves 7.3 weeks \(\mathrm{RMSE}\) and 21.83% \(\mathrm{MAPE}\), which outperforms the degradation-informed model by 7% and 10% for \(\mathrm{RMSE}\) and \(\mathrm{MAPE}\), respectively. Similar to the degradation-informed model, we observe that the HBM model overfits the training dataset when the third feature (\(\mathrm{DoD}\)) is added. This is evident by the increased performance on the training and high-DoD test set but worse performance on the low-DoD test set. Specifically, under the high-DoD test set, \(\mathrm{RMSE}\) improved from 3.08 to 2.85 weeks, and \(\mathrm{MAPE}\) improved from 16.88% to 15.80%. However, for the low-DoD test set, \(\mathrm{RMSE}\) increased from 7.30 to 7.49 weeks, and \(\mathrm{MAPE}\) increased from 21.83% to 24.10%. Notably, the HBM shows more resistance to overfitting than the degradation-informed model, whose performance decreased substantially more than the HBM when the third feature was included in the feature set. Fig. 13(b) shows the uncertainty (2 standard deviations) of \(p\left(y_{j}^{*}\mid Y_{j}\right)\) for posterior lifetime predictions of each cluster. The uncertainty levels for clusters 0 and 1 are around \(\pm\)4.5 weeks (at 2 s.d.), whereas for clusters 2 and 3, the uncertainty levels are around \(\pm\)9.5 and 10.5 weeks, respectively, which reflects the model's uncertainty when predicting cells from unseen cycling conditions. According to Table 3, there are only 12 cells from cluster 3 in the training set, while there are 23 cells from cluster 3 in the Low-DoD test set. Due to the lack of data, the uncertainty for all regression parameters (\(\mathbf{\theta_{3}}\), \(\sigma_{3}\)) for cluster 3 is much larger than that of clusters 0 and 1. On the other hand, as the prediction uncertainty becomes large for long-life cells, uncertainty itself can be used as an indicator to denote whether one should include more early-life data for feature calculation. For example, when running HBM in a forward mode (using the trained model to give predictions), for test samples in Cluster 3, large prediction uncertainty is observed (>10 weeks). One may consider including the 4th or 5th week of training data to retrain the model so that the prediction uncertainty on Cluster 3 test samples can be reduced. 
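As a small illustration of how Eq. (7) is used in practice, given posterior draws of \(\mathbf{\theta_{j}}\) and \(\sigma_{j}\) for a cell's cluster, the predictive mean and a two-standard-deviation band can be estimated by simple Monte Carlo; the array names below are assumptions.

```python
import numpy as np

def posterior_predict(theta_samples, sigma_samples, x_new, rng=None):
    """Monte Carlo estimate of the posterior predictive, Eq. (7), for one new cell.

    theta_samples : (n_draws, n_feat + 1) draws of intercept and slopes for the cell's cluster
    sigma_samples : (n_draws,)            draws of the cluster noise scale
    x_new         : (n_feat,)             early-life features of the new cell
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x_aug = np.concatenate([[1.0], x_new])
    mu = theta_samples @ x_aug                  # regression mean for each posterior draw
    draws = rng.normal(mu, sigma_samples)       # add observation noise
    return draws.mean(), 2.0 * draws.std()      # point prediction and +/- 2 s.d. band
```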
Note that the three weeks of training data used take up only \(7\%\) of the average lifetime of Cluster 3 samples, so adding one or two more weeks of training data would still cover only the very early stage of these long-life cells. Further analysis of uncertainty is shown in Fig. 15. The HBM successfully captures the changing slope describing the relationship between \(\log(\left|\mathrm{mean}(\Delta dQ/dV_{\mathrm{w3-w0}}^{3.6V-3.9V}(V)\right|)\) and true lifetime in Fig. 15(a). By exploiting the assumption that the cell-level regression coefficients are determined by cluster-level cycling stress features, the HBM gives a reasonable fit for the Cluster 3 samples (46 cells) based on a very limited training set (12 cells). Considering the posterior predictive distribution \(p\left(y_{j}^{*}\mid Y_{j}\right)\), the uncertainty on predictions is influenced both by the uncertainty of the regression intercepts and slopes \(\mathbf{\theta_{j}}\) and by the uncertainty due to measurement noise \(\sigma_{j}\). Fig. 15(b) shows these two kinds of uncertainty across all clusters. The posterior probability distributions for \(\mathbf{\theta_{j}}\) and \(\sigma_{j}\) are much wider for cluster 3 than for any other cluster. This uncertainty on both lifetime predictions and model parameters can be more beneficial for real-world applications than a point-wise prediction alone. For example, instead of knowing the exact EoL lifetime, customers care more about a warranty for the worst-case lifetime, which can be provided by using the standard deviation of the prediction distributions.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Cluster ID** & \(\mathrm{Stress}_{\mathrm{avg}}\) & \multicolumn{3}{c}{\(N\) **Samples**} \\ & & Training & High-DoD test & Low-DoD test \\ \hline 0 & 2.2 & 30 & 18 & 0 \\ 1 & 1.9 & 41 & 24 & 4 \\ 2 & 1.5 & 33 & 18 & 22 \\ 3 & 1.0 & 12 & 0 & 23 \\ \hline Total & 1.7 & 116 & 60 & 49 \\ \hline \hline \end{tabular} \end{table} Table 3: Summary of train-test split for each cluster

## 5 Conclusion

In this study, we have developed two data-driven models to tackle the problem of battery early lifetime prediction on a large and unique aging dataset, which consists of 225 NMC cells cycled under a wide range of charge and discharge C-rates (0.5-3 C) and DoDs (4-100%). Our feature engineering process identifies a new predictive feature, \(\mathrm{mean}(\Delta dQ/dV^{3.60V-3.90V}_{\mathrm{w3-w0}}(V))\), derived from incremental capacity curves and closely related to the degradation induced by loss of active materials. Also, our analysis shows that the widely used \(\Delta Q(V)\) features in the existing early prediction literature may not explain cell-to-cell lifetime variability within our dataset. In terms of results, two distinct machine learning models are trained to predict the lifetime. Our degradation-informed model, trained using elastic net regression, yields 3.0 and 7.8 weeks \(\mathrm{RMSE}\) and 15.1% and 33.0% \(\mathrm{MAPE}\) on the high- and low-DoD test sets, respectively. The HBM produces 3.1 and 7.3 weeks \(\mathrm{RMSE}\) and 16.9% and 21.8% \(\mathrm{MAPE}\) for the high- and low-DoD test sets, respectively. Besides showing improved point-wise predictions on the low-DoD test set, the HBM also gives uncertainty information for its predictions, which can be used in applications such as cell lifetime warranties.
And we found that the uncertainty grows across groups with the decrease of cycling stress factor \(\mathrm{Stress_{avg}}\), which indicates the lack of observability for cell-to-cell differences from early-life features, and thus more cycling time range may need to be included for cells under mild cycling conditions. A limitation of this work is that the models are demonstrated on battery aging data collected in a well-controlled laboratory setting under constant cycling conditions over the life of the cells. However, depending on the applications, battery data from real-world applications may be more variable and noisy, posing a challenge to feature extraction and lifetime prediction. To investigate this further, we will expand the dataset by aging cells using simulated electric grid duty cycles (e.g., simulating peak shaving and frequency regulation cycles). ## Data Availability The battery aging dataset collected and used for this work is available for download at: [https://doi.org/10.25380/iastate.22582234](https://doi.org/10.25380/iastate.22582234). Please refer to the dataset as the ISU & ILCC NMC/Gr battery aging dataset. We want to thank Jinqiang Liu, Chad Tischer, Reuben Schooley, and all Iowa Lakes community college students who assisted in the generation of this new battery aging dataset. Without the help of those mentioned here, this dataset would not be possible. ## Code Availability The code for the data preprocessing, feature extraction, and early prediction modeling is available at: [https://doi.org/10.25380/iastate.22582234](https://doi.org/10.25380/iastate.22582234). ## Author Contributions Conceptualization, T.L., A.T., Z.Z., C.H., D.H.; Data Collection, Data Management, Raw Data Processing, T.L.; Investigation, Methodology, Visualization, Software, Formal Analysis, Writing - Original Draft, T.L., A.T., Z.Z.; Writing - Review and Editing, T.L., A.T., Z.Z., C.H., D.H. ## Acknowledgements We acknowledge the hard work of Jinqiang Liu from Iowa State University and Chad Tischer and Reuben D. Schooley from Iowa Lakes Community College for executing and maintaining the battery aging tests. We also want to acknowledge Murtaza Zohair for assembling the half-cells used in this study. The work at Iowa State University and the University of Connecticut was partly supported by Iowa Economic Development Authority under the Iowa Energy Center Grant No. 20-IEC-018 and partly by the US National Science Foundation under Grant No. ECCS-2015710. The China Scholarship Council and the Department of Engineering Science supported the work at the University of Oxford. Any opinions, findings, or conclusions in this paper are those of the authors and do not necessarily reflect the sponsors' views.
2301.02247
Quantum Metric Unveils Defect Freezing in Non-Hermitian Systems
Non-Hermiticity in quantum Hamiltonians leads to nonunitary time evolution and possibly complex energy eigenvalues, which can lead to a rich phenomenology with no Hermitian counterpart. In this work, we study the dynamics of an exactly solvable non-Hermitian system, hosting both $\mathcal{PT}$-symmetric and $\mathcal{PT}$-broken modes subject to a linear quench. Employing a fully consistent framework, in which the Hilbert space is endowed with a nontrivial dynamical metric, we analyze the dynamics of the generated defects. In contrast to Hermitian systems, our study reveals that $\mathcal{PT}$-broken time evolution leads to defect freezing and hence the violation of adiabaticity. This physics necessitates the so-called metric framework, as it is missed by the oft-used approach of normalizing quantities by the time-dependent norm of the state. Our results are relevant for a wide class of experimental systems.
Karin Sim, Nicolò Defenu, Paolo Molignini, R. Chitra
2023-01-05T19:00:00Z
http://arxiv.org/abs/2301.02247v3
# Quantum metric unveils defect freezing in non-Hermitian systems

###### Abstract

Nonhermiticity in quantum Hamiltonians leads to non-unitary time evolution and possibly complex energy eigenvalues, which can lead to a rich phenomenology with no Hermitian counterpart. In this work, we study the dynamics of an exactly solvable non-Hermitian system, hosting both \(\mathcal{PT}\)-symmetric and \(\mathcal{PT}\)-broken modes subject to a linear quench. Employing a fully consistent framework, in which the Hilbert space is endowed with a nontrivial dynamical metric, we analyze the dynamics of the generated defects. In contrast to Hermitian systems, our study reveals that \(\mathcal{PT}\)-broken time evolution leads to defect freezing and hence the violation of quantum adiabaticity. Additionally, no Kibble-Zurek scaling regime in the quasi-adiabatic limit exists in our model. This physics necessitates the quantum metric framework, as it is missed by the oft-used approach of normalizing quantities by the time-dependent norm of the state. Our results are relevant for a wide class of experimental systems. _Introduction._ Non-Hermitian Hamiltonians [1; 2; 3] provide a framework to explore a complex array of out-of-equilibrium phenomena. Far from being a purely mathematical pursuit, non-Hermitian descriptions have been employed widely in both classical and quantum systems. Arguably, the most well-known examples include the study of non-Hermitian spin chains in the context of the Kardar-Parisi-Zhang equation [4] and the localization of particles in an imaginary vector potential, used to explain the depinning of vortex lines in a superconductor [5]. More recently, the field has seen a dramatic revival courtesy of effective descriptions of Lindbladian dynamics in dissipative systems [6], continuously monitored systems [7; 8], amplification in optomechanical systems [9], quantum sensors [10], and more. Nonhermiticity has unveiled a plethora of interesting phenomena, such as quantum phase transitions without gap closure [11; 12], anomalous behaviors of quantum emitters [13], tachyonic physics [14; 15] and unconventional topology [16; 17; 18] to name a few. Interest in non-Hermitian systems is further enhanced by the concomitant experimental realizations in diverse platforms: optical systems [16; 19], semiconductor microcavities [20] and acoustic systems [21] in the presence of drive and dissipation. Non-Hermitian Hamiltonians which preserve \(\mathcal{PT}\)-symmetry (i.e., the combined operation of parity and time reversal) [22; 23; 24] constitute a special class of systems possessing a real spectrum, prompting their interpretation as a natural extension to conventional quantum mechanics [25]. When \(\mathcal{PT}\)-symmetry is spontaneously broken, exceptional points (EPs) arise where the eigenvalues become complex-valued, a topic of much theoretical [26; 27; 28] and experimental [29] interest. Conventionally, the Hamiltonian is given by a Hermitian operator which plays the dual role of both the energy operator and the generator of time translations [30]. However, nonhermiticity leads to non-unitary time evolution and possibly complex energy eigenvalues, both of which imply that the Hamiltonian loses this dual role [25; 31]. Consequently, nonhermiticity ushers in new challenges to fundamental concepts in conventional quantum mechanics, necessitating a more general framework. Multiple approaches are used to tackle the aforementioned issues and compute observables.
Foremost is biorthogonal quantum mechanics [32; 33; 34], which has been widely studied in the context of \(\mathcal{PT}\)-symmetric Hamiltonians, though its application is limited to a subset of non-Hermitian Hamiltonians. More often, time-dependent probabilities [35] and observables [19; 36; 37] are explicitly normalized by the non-conserved norm of the states in an ad hoc manner. As we shall see in this work, this method can fail to capture salient aspects of the physics. A more robust method to study non-Hermitian systems is based on considering the Hilbert space as non-stationary and endowed with a non-trivial time-dependent metric [38; 39; 40; 31]. It can be regarded as a generalization of biorthogonal quantum mechanics [32] encompassing spontaneous \(\mathcal{PT}\)-broken scenarios as well. This metric framework presents a consistent formulation of non-Hermitian quantum mechanics. It has been adopted to recover fundamental theorems of quantum information [41], as well as being especially relevant for the evolution of entanglement [42; 43]. Quantum quenches and driving have emerged as tools of choice to explore the non-trivial dynamics of quantum systems [44; 45; 46; 47]. The richness of the emergent phenomenology in Hermitian systems naturally behoves the study of quantum quenches in non-Hermitian systems [48; 49; 50; 51; 52; 34]. A famous example of non-trivial dynamics concerns topological defects generated when a coupling is quenched across a quantum critical point [53]. The Kibble-Zurek scaling predicts that the defect den sity scales as a power law with quench time, where the exponents are determined by the static critical exponents [54; 55]. Using the wavefunction normalization approach, recent work predicted a modified Kibble-Zurek scaling when a system is quenched across EPs [19; 37], thereby recovering adiabaticity. On the other hand, breakdown of adiabaticity was seen experimentally in dissipative superconducting qubits governed by effective non-Hermitian Hamiltonians [56]. In this Letter, using an exactly solvable non-Hermitian model, we rigorously investigate the fundamental question of whether quantum adiabaticity survives. We show that the metric plays a crucial role in the violation of quantum adiabaticity when EPs are traversed adiabatically. A mere normalization of physical quantities by the norm of the time-evolved state completely fails to capture this fundamental aspect. _Metric framework._ We begin by introducing the metric framework. The inner product in a Hilbert space is defined via its metric \(\rho(t)\) as \(\langle\cdot,\cdot\rangle_{\rho(t)}\). For a system described by a Hermitian Hamiltonian, the metric is static and is the identity operator. In the case of a time-dependent non-Hermitian Hamiltonian \(H(t)\), the metric of the Hilbert space develops a non-trivial time evolution, even in the \(\mathcal{PT}\)-symmetric regimes [31; 57]. The dynamics of the Hilbert space \(\mathscr{H}_{\rho(t)}\) is encoded in the time evolution of the metric \(\rho(t)\), given by [39; 31] \[i\dot{\rho}(t)=H^{\dagger}(t)\rho(t)-\rho(t)H(t), \tag{1}\] where the overdot denotes time derivative. Provided that a solution to Eq. (1) can be found [31], we can map the system to a Hermitian Hamiltonian \(h(t)=\eta(t)H(t)\eta^{-1}(t)+i\dot{\eta}(t)\eta^{-1}(t)\), where we have introduced the square-root decomposition of the positive-definite metric, \(\rho(t)=\eta^{\dagger}(t)\eta(t)\). 
The Hamiltonian \(h(t)\) acts in a different Hilbert space \(\mathscr{H}\)[31; 57], where the nonhermiticity is encoded in the dynamics of \(\eta(t)\). The time evolution of the states \(|\psi(t)\rangle\) in \(\mathscr{H}_{\rho(t)}\) and \(|\Psi(t)\rangle\) in \(\mathscr{H}\) is governed by the time-dependent Schrodinger equation (TDSE) \[\begin{split} i\frac{\mathrm{d}}{\mathrm{d}t}|\psi(t)\rangle& =H(t)|\psi(t)\rangle\\ i\frac{\mathrm{d}}{\mathrm{d}t}|\Psi(t)\rangle&=h( t)|\Psi(t)\rangle\end{split} \tag{2}\] where the unitarity of the evolution is conserved in both representations, since \(\langle\psi(t)|\rho(t)|\psi(t)\rangle=\langle\Psi(t)|\Psi(t)\rangle=1\) at all times \(t\)[31]. The states are related by \(|\Psi(t)\rangle=\eta(t)|\psi(t)\rangle\). Under this formalism, the expectation value of an operator \(\hat{o}:\mathscr{H}\rightarrow\mathscr{H}\) is given by \[\langle O(t)\rangle_{\mathrm{metric}}=\langle\Psi(t)|\hat{o}|\Psi(t)\rangle= \langle\psi(t)|\rho(t)\hat{O}(t)|\psi(t)\rangle \tag{3}\] where \(\hat{O}(t):\mathscr{H}_{\rho(t)}\rightarrow\mathscr{H}_{\rho(t)}\) is defined as \(\hat{O}(t)=\eta^{-1}(t)\hat{o}\eta(t)\). In contrast, the expectation of \(\hat{o}\) calculated from a simple normalization by the time-dependent norm is given by \[\langle O(t)\rangle_{\mathrm{norm}}=\frac{\langle\psi(t)|\hat{o}|\psi(t) \rangle}{\langle\psi(t)|\psi(t)\rangle}. \tag{4}\] as was done, for example, in Ref. [37]. _Exactly solvable model._ To highlight the nontrivial role played by the metric, we consider an exactly solvable model of effective two level systems parameterised by momentum \(k\). This is given by the Hamiltonian [34] \[H_{k}(t)=k\sigma_{x}+i\gamma\sigma_{y}+Ft\sigma_{z} \tag{5}\] where \(\sigma_{i}\) denotes the Pauli matrices and \(F,k,\gamma\in\mathbb{R}\). Eq. (5) is a generalization of the Hamiltonian presented in Ref. [58] and realized experimentally in Ref. [59], by adding a real drive term \(Ft\) and applying a basis rotation. In our case, the non-Hermitian term \(\gamma\) corresponds to the imaginary tachyon mass [58] and the parameter \(k\) is the momentum. The dimensionless term \(\frac{\gamma^{2}}{F}\) sets the scale for the extent of nonhermiticity in our model. Note that we recover a purely Hermitian Hamiltonian by setting \(\gamma=0\). \(\mathcal{PT}\)-symmetry is realised in our model by the operators \(\mathcal{P}=\sigma_{y}\) and \(\mathcal{T}=-i\sigma_{y}\mathcal{K}\) where \(\mathcal{K}\) is complex conjugation, such that \([H_{k},\mathcal{PT}]=0\). At the EP, spontaneous breaking of this symmetry occurs and the states are no longer eigenstates of the \(\mathcal{PT}\) operator. The instantaneous eigenvalues of Eq. (5) are given by \(E_{\pm,k}(t)=\pm\sqrt{F^{2}t^{2}+k^{2}-\gamma^{2}}\), as shown in Fig. 1. By tuning the momentum \(k\) and the imaginary mass \(\gamma\), our Hamiltonian permits us to study the evolution of two different types of modes: those that undergo fully \(\mathcal{PT}\)-symmetric evolution, \(|k|\geq|\gamma|\) and those that pass through EPs during their evolution, \(|k|<|\gamma|\). The dynamics of our model is exactly solvable through Eqs. (1) and (2), making our model ideal for illustrating an accurate description of non-Hermitian physics. In analogy to the Hermitian Landau-Zener problem [60], we time-evolve the system between Hermitian initial and end points, which are given by the asymptotic limits \(t\rightarrow\pm\infty\). 
The uniqueness of the metric \(\rho_{k}(t)\) is ensured by the Hermitian initial condition, \(\rho_{k}(t\rightarrow-\infty)=\mathbbm{1}\) valid for all \(k\). Using the exact solution for \(\rho_{k}(t)\), we can map our problem to a Hermitian Hamiltonian \(h_{k}(t)\), where the dynamical richness of \(\rho_{k}(t)\) is directly encoded in the dynamics of \(h_{k}(t)\)[61]. In contrast to the original Hamiltonian \(H_{k}(t)\), we find that \(h_{k}(t)\) does not describe a linear quench, where the extent of its departure from a linear quench regime is dictated by the parameters \(\frac{\gamma^{2}}{F}\) and \(\delta=\frac{k^{2}-\gamma^{2}}{2F}\). This modified dynamics due to the metric directly influences the evolution of the state \(|\Psi(t)\rangle_{k}\), defined in Eq. (2), for a certain parameter regime. For \(k\gg\gamma\), i.e. very weak nonhermiticity, the departure from a linear quench is rather insignificant and \(|\Psi(t)\rangle_{k}\) and \(|\psi(t)\rangle_{k,\mathrm{norm}}\equiv\frac{|\psi(t)\rangle_{k}}{\|\psi(t) \rangle_{k}\|}\) are in good agreement with each other, as shown in Fig. 2(a). However, this equivalence breaks down when \(k\sim\gamma\) (even when \(\mathcal{PT}\)-symmetry is not broken) and in the \(\mathcal{PT}\)-broken regime \(|k|<\gamma\), as shown in Figs. 2 (b)-(d). Curiously, for the critical value \(k=\gamma\), the evolution of the state \(|\Psi(t)\rangle_{k}\) is entirely due to the metric. Consequently, the state \(|\psi(t)\rangle_{k,\text{norm}}\) stays at the north pole of the Bloch sphere and does not evolve, as shown in Fig. 2 (c). Another striking difference concerns the \(k\leftrightarrow-k\) symmetry: \(|\Psi(t)\rangle_{k}\)=\(|\Psi(t)\rangle_{-k}\;\;\forall k\) but this symmetry is in general not respected by \(|\psi(t)\rangle_{k,\text{norm}}\). This asymmetry in the norm method, which stems from the fact that \(H^{\dagger}(t)\neq H(t)\), is clearly seen for \(k=\pm\gamma\). For \(k=\gamma\), the time evolution of \(|\psi(t)\rangle_{k,\text{norm}}\) only involves the upper level such that \(|\psi(t)\rangle_{k=\gamma}\propto(1,0)^{T}\). This does not hold for \(|\psi(t)\rangle_{k=-\gamma}\), which involves a transition between the levels. This asymmetry is not present in \(|\Psi(t)\rangle_{k=\pm\gamma}\) as the metric dynamics restores the correct symmetry by taking into account the states evolved using both \(H(t)\) and \(H^{\dagger}(t)\) in the construction of the metric [61]. To summarize, Fig. 2 shows that the metric substantially impacts the time evolution, even for \(\mathcal{PT}\)-symmetric evolution close to the EP. _Spin expectation._ The very different state trajectories predicted by the two methods lead to different spin expectation values \(\langle\sigma_{z}(t)\rangle_{k,\text{metric}}\) and \(\langle\sigma_{z}(t)\rangle_{k,\text{norm}}\), calculated from Eqs. (3) and (4) by setting \(\hat{o}=\sigma_{z}\)[61]. We find that \(\langle\sigma_{z}(t)\rangle_{k,\text{norm}}\) is not symmetric under the individual replacement of \(k\rightarrow-k\) or \(\gamma\rightarrow-\gamma\), but is only invariant under the combined replacement of these two variables. On the other hand, \(\langle\sigma_{z}(t)\rangle_{k,\text{metric}}\) is invariant under either of these replacements, reflecting the symmetry of the instantaneous spectrum \(E_{\pm,k}(t)\). 
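These dynamics can be reproduced numerically without the exact solution. The sketch below integrates Eqs. (1) and (2) for a single mode of the Hamiltonian in Eq. (5), starting from the Hermitian initial condition \(\rho_{k}(-T)=\mathbb{1}\) and the north-pole state, and compares \(\langle\sigma_{z}(t)\rangle\) evaluated with the metric, Eq. (3), and with the bare normalization, Eq. (4); the parameter values are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import sqrtm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t, k, gamma, F):
    return k * sx + 1j * gamma * sy + F * t * sz          # Eq. (5)

def rhs(t, u, k, gamma, F):
    psi, rho = u[:2], u[2:].reshape(2, 2)
    Ht = H(t, k, gamma, F)
    dpsi = -1j * Ht @ psi                                  # Eq. (2): i d|psi>/dt = H |psi>
    drho = -1j * (Ht.conj().T @ rho - rho @ Ht)            # Eq. (1): metric evolution
    return np.concatenate([dpsi, drho.ravel()])

def sigma_z_traces(k, gamma=1.0, F=0.4, T=30.0, n=400):
    psi0 = np.array([1, 0], dtype=complex)                 # north pole of the Bloch sphere at t = -T
    u0 = np.concatenate([psi0, np.eye(2, dtype=complex).ravel()])
    ts = np.linspace(-T, T, n)
    sol = solve_ivp(rhs, (-T, T), u0, t_eval=ts, args=(k, gamma, F), rtol=1e-8, atol=1e-10)
    sz_metric, sz_norm = [], []
    for col in sol.y.T:
        psi, rho = col[:2], col[2:].reshape(2, 2)
        eta = sqrtm(rho)                                   # rho = eta^dagger eta with eta Hermitian
        Psi = eta @ psi                                    # state in the Hermitian frame
        # Eq. (3), renormalized for numerical safety (analytically <Psi|Psi> = 1):
        sz_metric.append((Psi.conj() @ sz @ Psi).real / (Psi.conj() @ Psi).real)
        # Eq. (4), normalization by the bare norm:
        sz_norm.append((psi.conj() @ sz @ psi).real / (psi.conj() @ psi).real)
    return ts, np.array(sz_metric), np.array(sz_norm)

# Example: a PT-broken mode (|k| < gamma) vs. a PT-symmetric one (|k| > gamma), gamma^2/F = 2.5.
# ts, m, nrm = sigma_z_traces(k=0.2); ts2, m2, nrm2 = sigma_z_traces(k=2.0)
```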
The exact results for the spin expectation values in the asymptotic limit, \(\langle\sigma_{z}(\infty)\rangle\equiv\langle\sigma_{z}(t\rightarrow\infty)\rangle\) obtained using both formalisms, are given by [61] \[\begin{split}&\langle\sigma_{z}(\infty)\rangle_{k,\text{metric}}= \frac{(2k^{2}-\gamma^{2})e^{-2\pi\delta}-k^{2}}{k^{2}-\gamma^{2}e^{-2\pi \delta}}\\ &\langle\sigma_{z}(\infty)\rangle_{k,\text{norm}}=\frac{2ke^{-2 \pi\delta}-k+\gamma}{2\gamma e^{-2\pi\delta}+k-\gamma}\end{split} \tag{6}\] where the different regimes of nonhermiticity are dictated by the magnitude of \(\frac{\gamma^{2}}{F}\). In the limit \(\gamma\to 0\), we recover the time evolution under a Hermitian Hamiltonian. In this case, both \(\langle\sigma_{z}(\infty)\rangle_{k,\text{metric}}\) and \(\langle\sigma_{z}(\infty)\rangle_{k,\text{norm}}\) converge to the standard Landau-Zener result \(2e^{-2\pi\delta_{0}}-1\) where \(\delta_{0}=\frac{k^{2}}{2F}\)[60]. Thus, our study reveals the necessity to explicitly consider the non-trivial dynamics induced by the metric in order to obtain a correct description of non-Hermitian Hamiltonian dynamics in all parameter regimes. The metric is essential in ensuring that the spin expectation Figure 1: The instantaneous spectrum of the non-Hermitian Hamiltonian given by Eq (5) as a function of time, where \(\gamma=1\) and \(\frac{\gamma^{2}}{F}=2.5\). The static system has exceptional points at \(k=\pm\gamma\). For \(k=0.2\gamma\), the solid and dashed lines indicate the real and imaginary parts, respectively. Our model allows us to track both \(\mathcal{PT}\)-broken and \(\mathcal{PT}\)-symmetric evolution. fulfills certain symmetry requirements arising from the instantaneous spectrum. It is worth noting that, for a limited subset of initial conditions and Hamiltonian parameters, the two approaches may still produce similar results, see Fig. 2 (a). _Adiabatic limit._ We now turn to the adiabatic limit \(F\to 0\). For \(\gamma=0\), the adiabatic limit is the regime where we recover universal dynamics and Kibble-Zurek scaling. This scaling is verifiable in experiments, and, due to universality, is unaffected by any modification of the dynamical protocol nor of the microscopic details of the model. For the non-Hermitian case where \(\gamma\neq 0\), we first remark that the adiabatic limit corresponds to the regime of strong nonhermiticity \(\frac{\gamma^{2}}{F}\to\infty\) in our model. The presence or absence of the aforementioned correct symmetry in physical observables, as obtained from the metric vs. the normalization methods, leads to a direct physical consequence in this limit. In analogy to the Hermitian Landau-Zener and Kibble-Zurek problem, the defects are defined as the excitations which move away from the south pole of the Bloch sphere. Note that the south pole of the Bloch sphere corresponds to the ground state of the Hermitian end point. The density of defects is then given by [37] \[\begin{split}\Sigma_{z}&=\Sigma_{z}^{\mathcal{PT}s }+\Sigma_{z}^{\mathcal{PT}b}\\ \Sigma_{z}^{\mathcal{PT}s/b}&=\int_{k\in\mathcal{PT}s }\frac{dk}{2\pi}\lim_{F\to 0}\langle\sigma_{z}(\infty)\rangle_{k}\end{split} \tag{7}\] where \(\mathcal{PT}s\) and \(\mathcal{PT}b\) indicate the contributions from the modes undergoing \(\mathcal{PT}\)-symmetric and \(\mathcal{PT}\)-broken evolution, \(|k|\geq\gamma\) and \(|k|<\gamma\), respectively. The asymptotic expression \(\langle\sigma_{z}(\infty)\rangle_{k}\) is given by Eq. (6). 
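For completeness, Eq. (6) can be evaluated numerically in a stable form by rewriting it in terms of \(e^{+2\pi\delta}\), which avoids overflow for \(\mathcal{PT}\)-broken modes (where \(\delta<0\)) deep in the adiabatic regime; the helper below is an illustration only, with arbitrary parameter values.

```python
import numpy as np

def sz_infinity(k, gamma, F):
    """Asymptotic <sigma_z> from Eq. (6), metric and norm versions.

    Both expressions are rewritten in terms of exp(+2*pi*delta), so PT-broken modes
    (delta < 0) underflow gracefully instead of overflowing when gamma**2/F >> 1.
    """
    k = np.asarray(k, dtype=float)
    delta = (k**2 - gamma**2) / (2.0 * F)
    e = np.exp(np.minimum(2.0 * np.pi * delta, 700.0))    # clip the exponent for PT-symmetric modes
    sz_metric = ((2 * k**2 - gamma**2) - k**2 * e) / (k**2 * e - gamma**2)
    sz_norm = (2 * k - (k - gamma) * e) / (2 * gamma + (k - gamma) * e)
    return sz_metric, sz_norm

# Example for gamma = 1 deep in the adiabatic regime (gamma**2/F = 400);
# the grid avoids k = +/- gamma exactly, where Eq. (6) is indeterminate at the EP.
k = np.linspace(-3, 3, 600)
sz_m, sz_n = sz_infinity(k, gamma=1.0, F=1.0 / 400)
```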
For the \(\mathcal{PT}\)-broken modes, the metric and the norm methods predict starkly different asymptotic behaviors in the adiabatic limit. We obtain \(\langle\sigma_{z}(\infty)\rangle_{k,\text{metric}}\to 1-\frac{2k^{2}}{\gamma^{2}}\), consistent with the \(k\leftrightarrow-k\) symmetry. On the other hand, using the norm method, we obtain \(\langle\sigma_{z}(\infty)\rangle_{k,\text{norm}}\to\frac{k}{\gamma}\), which is anti-symmetric with respect to \(k\). This is shown in Fig. 3. The contribution of the \(\mathcal{PT}\)-broken modes to the defect density is thus \[\begin{split}\left(\Sigma_{z}^{\mathcal{PT}b}\right)_{\text{ metric}}&=\frac{\gamma}{3\pi}\\ \left(\Sigma_{z}^{\mathcal{PT}b}\right)_{\text{norm}}& =0.\end{split} \tag{8}\] The non-zero defect contribution from the \(\mathcal{PT}\)-broken modes shows that defects are generated when a system is driven across an exceptional point, no matter how slow the drive is, thus violating quantum adiabaticity. This is in stark contrast to the Hermitian case where the defect density tends to zero as \(F\to 0\)[60], and is consistent with the findings of recent experimental work [56]. This is because non-Hermitian systems are inherently out of equilibrium. However, this defect freezing effect is not captured if we do not take the dynamics of the metric into account. This is a direct consequence of the odd parity of \(\langle\sigma_{z}(\infty)\rangle_{k,\text{norm}}\) with respect to \(k\). We saw in Fig. 2(b) that, away from the adiabatic limit, the time-evolved state \(|\Psi(t)\rangle_{k}\) shows non-trivial behavior even for \(\mathcal{PT}\)-symmetric modes. However, a clear distinction in the behaviors between \(\mathcal{PT}\)-symmetric and \(\mathcal{PT}\)-broken modes is recovered in the adiabatic limit. This is shown in Fig. 3. For the \(\mathcal{PT}\)-symmetric modes, the metric and the norm methods predict the same asymptotic behaviors: \(\langle\sigma_{z}(\infty)\rangle_{k}\to-1\) and thus \(\Sigma_{z}^{\mathcal{PT}s}=\frac{\gamma}{\pi}-1\). In this limit, the \(\mathcal{PT}\)-symmetric modes are pinned to the south pole of the Bloch sphere, where the term \(\frac{\gamma}{\pi}\) in \(\Sigma_{z}^{\mathcal{PT}s}\) shows a reduction in the fraction of spins pointing to the south pole compared to the Hermitian case. We emphasize that these are not the defects. In addition to the violation of quantum adiabaticity, there is no Kibble-Zurek scaling regime in this system, in contrast to the prediction in Ref. [37]. Indeed, conventional many-body systems are expected to display a power-law scaling of the defects generated after a slow ramp across a critical point. For a generic spin system, this would mean \(\sigma_{z}=-1+\mathcal{O}(F^{\theta})\) leading to a defect density \(\sim F^{\theta}\), where \(\theta\) depends on the critical exponents at equilibrium. For an infinite ensemble of Hermitian two-level systems, one has \(\theta=\frac{1}{2}\)[63; 62; 53]. The case of a non-Hermitian drive has been studied in Ref. [37] using the normalization approach, yielding a modified Kibble-Zurek scaling with \(\theta=\frac{2}{3}\). In contrast, for the static non-Hermitian term under study here, the Kibble-Zurek scaling is wiped out and the density of defects freezes to a rate-independent value \(\sim\gamma\), which survives even in the adiabatic limit \(F\to 0\). In fact, the asymptotic limit given by Eq. (8) is valid for \(F\ll 1\), such that there is no \(F\)-dependence in the defect density for several orders of magnitudes of small \(F\). 
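The numbers in Eq. (8) can be reproduced directly from the adiabatic-limit expressions quoted above for the \(\mathcal{PT}\)-broken window; the following sketch (ours, for illustration only) carries out the momentum integral of Eq. (7) numerically.

```python
# Hedged sketch (ours): defect-density contributions of Eq. (7) in the
# adiabatic limit, using the closed forms quoted above for |k| < gamma.
import numpy as np

gamma = 1.0
k = np.linspace(-gamma, gamma, 200_001)             # PT-broken window

sz_metric_adiabatic = 1.0 - 2.0 * k**2 / gamma**2   # metric method, F -> 0
sz_norm_adiabatic = k / gamma                       # normalization method, F -> 0

sigma_metric = np.trapz(sz_metric_adiabatic, k) / (2 * np.pi)
sigma_norm = np.trapz(sz_norm_adiabatic, k) / (2 * np.pi)

print(sigma_metric, gamma / (3 * np.pi))   # both ~0.1061, i.e. gamma/(3*pi), Eq. (8)
print(sigma_norm)                          # ~0.0: the odd integrand averages out
```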
It is worth noting that, while the rate-independent result in Eq. (8) is rather remarkable Figure 3: The asymptotic value of the spin expectation values, given by Eq. (6), in the adiabatic limit \(F\to 0\) (here \(\frac{\gamma^{2}}{F}=400\) and \(\gamma=1\)). The shaded areas show the defect contribution from the \(\mathcal{PT}\)-broken modes. Although the behavior of the \(\mathcal{PT}\)-symmetric modes is accurately captured by both methods, we see that the effect of defect freezing is only captured when the metric is taken into account. This is a direct consequence of the odd parity of \(\langle\sigma_{z}(\infty)\rangle_{k,\text{norm}}\) with respect to \(k\). for an ensemble of two-level systems, a similar violation of Kibble-Zurek scaling has already been observed when crossing infinitely degenerate critical points [64, 65, 66]. _Conclusion_. Our work shows that quantum adiabaticity is violated and Kibble-Zurek scaling is lost in the presence of nonhermiticity. Defects are created purely by the \(\mathcal{PT}\)-broken modes, which survive even in the adiabatic quench limit. This is consistent with the spectral coalescence at the EPs leading to ambiguity across a quench. The normalization approach completely misses this fundamental feature, as it fails to reflect the correct symmetry of the observables. Our results can be experimentally verified in a variety of photonic and phononic platforms where non-Hermitian drives can be directly implemented. For example, the evolution of the metric can be directly engineered using single-photon interferometry [19] and parametric amplification [67]. Many open questions regarding the dynamics of non-Hermitian systems remain, in particular, the post-quench spread of correlation and the putative violation of Lieb-Robinson bounds [11, 36, 49]. _Acknowledgments_. This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC2181/1-390900948 (the Heidelberg STRUCTUREES Excellence Cluster) and a Simons Investigator Award. The authors would like to thank G. M. Graf for numerous fruitful discussions and E. Bergholtz for comments on our manuscript. ## References * Ashida _et al._ [2020]Y. Ashida, Z. Gong, and M. Ueda, Non-hermitian physics, Advances in Physics **69**, 249 (2020). * Bergholtz _et al._ [2021]E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-hermitian systems, Rev. Mod. Phys. **93**, 015005 (2021). * Okuma and Sato [2023]N. Okuma and M. Sato, Non-hermitian topological phenomena: A review, Annual Review of Condensed Matter Physics **14**, null (2023), [https://doi.org/10.1146/annurev-conmatphys-040521-033133](https://doi.org/10.1146/annurev-conmatphys-040521-033133). * Fogedby _et al._ [1995]H. C. Fogedby, A. B. Eriksson, and L. V. Mikheev, Continuum limit, galilean invariance, and solitons in the quantum equivalent of the noisy burgers equation, Phys. Rev. Lett. **75**, 1883 (1995). * Hatano and Nelson [1996]N. Hatano and D. R. Nelson, Localization transitions in non-hermitian quantum mechanics, Phys. Rev. Lett. **77**, 570 (1996). * Shibata and Katsura [2019]N. Shibata and H. Katsura, Dissipative spin chain as a non-hermitian kitaev ladder, Phys. Rev. B **99**, 174303 (2019). * Muller _et al._ [2022]T. Muller, S. Diehl, and M. Buchhold, Measurement-induced dark state phase transitions in long-ranged fermion systems, Phys. Rev. Lett. **128**, 010605 (2022). * Buchhold _et al._ [2021]M. Buchhold, Y. Minoguchi, A. Altland, and S. 
Diehl, Effective theory for the measurement-induced phase transition of dirac fermions, Phys. Rev. X **11**, 041004 (2021). * Wanjura _et al._ [2021]C. C. Wanjura, M. Brunelli, and A. Nunnenkamp, Correspondence between non-hermitian topology and directional amplification in the presence of disorder, Phys. Rev. Lett. **127**, 213601 (2021). * Budich and Bergholtz [2020]J. C. Budich and E. J. Bergholtz, Non-hermitian topological sensors, Phys. Rev. Lett. **125**, 180403 (2020). * Matsumoto _et al._ [2020]N. Matsumoto, K. Kawabata, Y. Ashida, S. Furukawa, and M. Ueda, Continuous phase transition without gap closing in non-hermitian quantum many-body systems, Phys. Rev. Lett. **125**, 260601 (2020). * Yang _et al._ [2022]F. Yang, H. Wang, M.-L. Yang, C.-X. Guo, X.-R. Wang, G.-Y. Sun, and S.-P. Kou, Hidden continuous quantum phase transition without gap closing in non-hermitian transverse ising model, New Journal of Physics **24**, 043046 (2022). * Gong _et al._ [2022]Z. Gong, M. Bello, D. Malz, and F. K. Kunst, Anomalous behaviors of quantum emitters in non-hermitian baths, Phys. Rev. Lett. **129**, 223601 (2022). * Liegeois _et al._ [2022]B. Liegeois, C. Ramasubramanian, and N. Defenu, Tunable tachyon mass in the pt-broken massive thirring model (2022). * Lamata _et al._ [2007]L. Lamata, J. Leon, T. Schatz, and E. Solano, Dirac equation and quantum relativistic effects in a single trapped ion, Phys. Rev. Lett. **98**, 253005 (2007). * Zeuner _et al._ [2015]J. M. Zeuner, M. C. Rechtsman, Y. Plotnik, Y. Lumer, S. Nolte, M. S. Rudner, M. Segev, and A. Szameit, Observation of a topological transition in the bulk of a non-hermitian system, Phys. Rev. Lett. **115**, 040402 (2015). * Gong _et al._ [2018]Z. Gong, Y. Ashida, K. Kawabata, K. Takasan, S. Hiegashikawa, and M. Ueda, Topological phases of non-hermitian systems, Phys. Rev. X **8**, 031079 (2018). * Kunst _et al._ [2018]F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Biorthogonal bulk-boundary correspondence in non-hermitian systems, Phys. Rev. Lett. **121**, 026808 (2018). * Xiao _et al._ [2021]L. Xiao, D. Qu, K. Wang, H.-W. Li, J.-Y. Dai, B. Dora, M. Heyl, R. Moessner, W. Yi, and P. Xue, Non-hermitian kibble-zurek mechanism with tunable complexity in single-photon interferometry, PRX Quantum **2**, 020313 (2021). * Gao _et al._ [2015]T. Gao, E. Estrecho, K. Y. Bliokh, T. C. H. Liew, M. D. Fraser, S. Brodbeck, M. Kamp, C. Schneider, S. Hofling, Y. Yamamoto, F. Nori, Y. S. Kivshar, A. G. Truscott, R. G. Dall, and E. A. Ostrovskaya, Observation of non-hermitian degeneracies in a chaotic exciton-polariton billiard, Nature **526**, 554 (2015). * Zhang _et al._ [2021]X. Zhang, Y. Tian, J.-H. Jiang, M.-H. Lu, and Y.-F. Chen, Observation of higher-order non-hermitian skin effect, Nature Communications **12**, 5377 (2021). * Bender [2007]C. M. Bender, Making sense of non-hermitian hamiltonians, Reports on Progress in Physics **70**, 947 (2007). * Bender and Boettcher [1998]C. M. Bender and S. Boettcher, Real spectra in non-hermitian hamiltonians having \(\mathcal{PT}\) symmetry, Phys. Rev. Lett. **80**, 5243 (1998). * Bender [2015]C. M. Bender, PT-symmetric quantum theory, Journal of Physics: Conference Series **631**, 012002 (2015). * Gong and Wang [2013]J. Gong and Q.-h. Wang, Time-dependent \(\mathcal{PT}\)-symmetric quantum mechanics, Journal of Physics A: Mathematical and Theoretical **46**, 485302 (2013). * Sayyad and Kunst [2022]S. Sayyad and F. K. 
Kunst, Realizing exceptional points of any order in the presence of symmetry, Phys. Rev. Res. **4**, 023130 (2022). * Crippa _et al._ [2018]L. Crippa, G. Sangiovanni, and J. C. Budich, Spontaneous formation of exceptional points at the onset of magnetism (2022). * Heiss [2012]W. D. Heiss, The physics of exceptional points, Journal of Physics A: Mathematical and Theoretical **45**, 444016 (2012). * Ding _et al._ [2021]L. Ding, K. Shi, Q. Zhang, D. Shen, X. Zhang, and W. Zhang, Experimental determination of \(\mathcal{PT}\)-symmetric exceptional points in a single trapped ion, Phys. Rev. Lett. **126**, 083604 (2021). * Shankar [1980]R. Shankar, _Principles of quantum mechanics_ (Plenum, New York, NY, 1980). * Mostafazadeh [2020]A. Mostafazadeh, Time-dependent pseudo-hermitian hamiltonians and a hidden geometric aspect of quantum mechanics, Entropy **22**, 10.3390/e22040471 (2020). * Brody [2013]D. C. Brody, Biorthogonal quantum mechanics, Journal of Physics A: Mathematical and Theoretical **47**, 035305 (2013). * Curtright and Mezincescu [2007]T. Curtright and L. Mezincescu, Biorthogonal quantum systems, Journal of Mathematical Physics **48**, 092106 (2007). * Shen _et al._ [2019]X. Shen, F. Wang, Z. Li, and Z. Wu, Landau-zener-stuckelberg interferometry in \(\mathcal{PT}\)-symmetric non-hermitian models, Phys. Rev. A **100**, 062514 (2019). * Longstaff and Graefe [2019]B. Longstaff and E.-M. Graefe, Nonadiabatic transitions through exceptional points in the band structure of a \(pt\)-symmetric lattice, Phys. Rev. A **100**, 052119 (2019). * Turkeshi and Schiro [2022]X. Turkeshi and M. Schiro, Entanglement and correlation spreading in non-hermitian spin chains (2022). * Dora _et al._ [2019]B. Dora, M. Heyl, and R. Moessner, The kibble-zurek mechanism at exceptional points, Nature Communications **10**, 2254 (2019). * Geyer _et al._ [2008]H. B. Geyer, W. D. Heiss, and F. G. Scholtz, The physical interpretation of non-hermitian hamiltonians and other observables, Canadian Journal of Physics **86**, 1195 (2008). * Fring and Frith [2020]A. Fring and T. Frith, Time-dependent metric for the two-dimensional, non-hermitian coupled oscillator, Modern Physics Letters A **35**, 2050041 (2020). * Zhang _et al._ [2019]D.-J. Zhang, Q.-h. Wang, and J. Gong, Time-dependent \(\mathcal{PT}\)-symmetric quantum mechanics in generic non-hermitian systems, Phys. Rev. A **100**, 062121 (2019). * Ju _et al._ [2019]C.-Y. Ju, A. Miranowicz, G.-Y. Chen, and F. Nori, Non-hermitian hamiltonians and no-go theorems in quantum information, Phys. Rev. A **100**, 062118 (2019). * Frith [2020]T. Frith, Exotic entanglement for non-hermitian jaynes-cummings hamiltonians, Journal of Physics A: Mathematical and Theoretical **53**, 485303 (2020). * Fring and Frith [2019]A. Fring and T. Frith, Eternal life of entropy in non-hermitian quantum systems, Phys. Rev. A **100**, 010102 (2019). * Mitra [2018]A. Mitra, Quantum quench dynamics, Annual Review of Condensed Matter Physics **9**, 245 (2018). * Oka and Kitamura [2019]T. Oka and S. Kitamura, Floquet engineering of quantum materials, Annual Review of Condensed Matter Physics **10**, 387 (2019), [https://doi.org/10.1146/annurev-conmatphys-031218-013423](https://doi.org/10.1146/annurev-conmatphys-031218-013423). * Heyl [2018]M. Heyl, Dynamical quantum phase transitions: a review, Reports on Progress in Physics **81**, 054001 (2018). * Sim _et al._ [2022]K. Sim, R. Chitra, and P. Molignini, Quench dynamics and scaling laws in topological nodal loop semimetals, Phys. Rev. 
B **106**, 224302 (2022). * Lehmann _et al._ [2021]C. Lehmann, M. Schuler, and J. C. Budich, Dynamically induced exceptional phases in quenched interacting semimetals, Phys. Rev. Lett. **127**, 106601 (2021). * Dora and Moca [2020]B. Dora and C. P. Moca, Quantum quench in \(\mathcal{PT}\)-symmetric luttinger liquid, Phys. Rev. Lett. **124**, 136802 (2020). * Bacsi and Dora [2021]A. Bacsi and B. Dora, Dynamics of entanglement after exceptional quantum quench, Phys. Rev. B **103**, 085137 (2021). * Dora _et al._ [2022]B. Dora, D. Sticlet, and C. P. Moca, Correlations at pt-symmetric quantum critical point, Phys. Rev. Lett. **128**, 146804 (2022). * Tang _et al._ [2022]J.-C. Tang, S.-P. Kou, and G. Sun, Dynamical scaling of loschmidt echo in non-hermitian systems, Europhysics Letters **137**, 40001 (2022). * Dziarmaga [2005]J. Dziarmaga, Dynamics of a quantum phase transition: Exact solution of the quantum ising model, Phys. Rev. Lett. **95**, 245701 (2005). * Kibble [1976]T. W. B. Kibble, Topology of cosmic domains and strings, Journal of Physics A: Mathematical and General **9**, 1387 (1976). * Damski and Zurek [2006]B. Damski and W. H. Zurek, Adiabatic-impulse approximation for avoided level crossings: From phase-transition dynamics to landau-zener evolutions and back again, Phys. Rev. A **73**, 063405 (2006). * Doppler _et al._ [2016]J. Doppler, A. A. Mailybaev, J. Bohm, U. Kuhl, A. Girschik, F. Libisch, T. J. Milburn, P. Rabl, N. Moiseyev, and S. Rotter, Dynamically encircling an exceptional point for asymmetric mode switching, Nature **537**, 76 (2016). * Frith [2020]T. Frith, Time-dependence in non-hermitian quantum systems (2020). * Lee _et al._ [2015]T. Lee, U. Alvarez-Rodriguez, X. Cheng, L. Lamata, and E. Solano, Tachyon physics with trapped ions, Phys. Rev. A **92**, 032129 (2015). * Gerritsma _et al._ [2010]R. Gerritsma, G. Kirchmair, F. Zahringer, E. Solano, R. Blatt, and C. F. Roos, Quantum simulation of the dirac equation, Nature **463**, 68 (2010). * Damski [2005]B. Damski, The simplest quantum model supporting the kibble-zurek mechanism of topological defect production: Landau-zener transitions from a new perspective, Phys. Rev. Lett. **95**, 035701 (2005). * [61]See Supplemental Material at K. Sim, Supplemental material, URL_will_be_inserted_by_publisher (2022) for the derivation of this equation. * Damski [2005]B. Damski, The simplest quantum model supporting the kibble-zurek mechanism of topological defect production: Landau-zener transitions from a new perspective, Phys. Rev. Lett. **95**, 035701 (2005). * Zurek _et al._ [2005]W. H. Zurek, U. Dorner, and P. Zoller, Dynamics of a quantum phase transition, Phys. Rev. Lett. **95**, 105701 (2005). * Defenu _et al._ [2018]N. Defenu, T. Enss, M. Kastner, and G. Morigi, Dynamical critical scaling of long-range interacting quantum magnets, Phys. Rev. Lett. **121**, 240403 (2018). * Bachmann _et al._ [2017]S. Bachmann, M. Fraas, and G. M. Graf, Dynamical crossing of an infinitely degenerate critical point, Annales Henri Poincare **18**, 1755 (2017). * Defenu [2021]N. Defenu, Quantum adiabatic cycles and their breakdown, Communications Physics **4**, 150 (2021). * Wang and Clerk [2019]Y.-X. Wang and A. A. Clerk, Non-hermitian dynamics without dissipation in quantum systems, Phys. Rev. A **99**, 063834 (2019). 
# Supplemental Material: Quantum metric unveils defect freezing in non-Hermitian systems

Karin Sim, Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland; Nicolo Defenu, Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland; Paolo Molignini, Cavendish Laboratory, University of Cambridge, 19 J J Thomson Avenue, Cambridge CB3 0HE, United Kingdom, and Department of Physics, Stockholm University, AlbaNova University Center, 106 91 Stockholm, Sweden; R. Chitra, Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland

November 3, 2021

## I Solution to the time-dependent Schrodinger equation

The time evolution of each \(k\)-mode \(|\psi(t)\rangle_{k}\) in the Hilbert space \(\mathscr{H}_{\rho(t)}\) is governed by the time-dependent Schrodinger equation (TDSE) \[i\frac{\mathrm{d}}{\mathrm{d}t}|\psi(t)\rangle_{k}=H_{k}(t)|\psi(t)\rangle_{k},\] (S.1) where the Hamiltonian \(H_{k}(t)=k\sigma_{x}+i\gamma\sigma_{y}+Ft\sigma_{z}\)[1] is as given in Eqn. (5) of the main text. We take the initial state to be the ground state of the initial Hamiltonian, \(|\psi(t\rightarrow-\infty)\rangle_{k}=(e^{i\varphi_{k}},0)^{T}\), where \(\varphi_{k}\) is an irrelevant global phase. Defining \[\begin{split} f_{k}(t)&=D_{-i\delta}\left(-e^{\frac{i\pi}{4}}\sqrt{2F}t\right)\\ g_{k}(t)&=D_{-i\delta-1}\left(-e^{\frac{i\pi}{4}}\sqrt{2F}t\right)\end{split}\] (S.2) where \(D_{\nu}(z)\) is the parabolic cylinder function [2] and \(\delta=\frac{k^{2}-\gamma^{2}}{2F}\) is dimensionless, we find the time-evolved state to be \[|\psi(t)\rangle_{k}=e^{-\frac{\pi\delta}{4}}\begin{pmatrix}e^{-\frac{i\pi}{4}}f_{k}(t)\\ -\frac{(k-\gamma)}{\sqrt{2F}}g_{k}(t)\end{pmatrix}.\] (S.3) In particular, we note that neither the state nor its bare norm reflects the \(k\leftrightarrow-k\) symmetry: \(|\psi(t)\rangle_{k}\neq|\psi(t)\rangle_{-k}\) and \(\langle\psi(t)|\psi(t)\rangle_{k}\neq\langle\psi(t)|\psi(t)\rangle_{-k}\).

## II Time evolution of the metric \(\rho(t)\)

The dynamics of the Hilbert space \(\mathscr{H}_{\rho(t)}\) is encoded in the time evolution of the metric \(\rho(t)\), given by [3; 4; 5; 6; 7] \[i\dot{\rho}(t)=H^{\dagger}(t)\rho(t)-\rho(t)H(t),\] (S.4) where the overdot denotes the time derivative. To solve Eqn. (S.4) for a general non-Hermitian Hamiltonian \(H(t)\) of a two-level system, we find two linearly independent solutions to the TDSE \[i\frac{\mathrm{d}}{\mathrm{d}t}|\phi_{i}(t)\rangle=H^{\dagger}(t)|\phi_{i}(t)\rangle,\quad i=1,2\] (S.5) which describes the dynamics under the Hermitian conjugate, \(H^{\dagger}(t)\). The metric \(\rho(t)\) is then given by \[\rho(t)=\sum_{i=1}^{2}|\phi_{i}(t)\rangle\langle\phi_{i}(t)|\] (S.6) which satisfies Eqn. (S.4) by construction. For our model, the initial value of the metric is given by \(\rho_{k}(t\rightarrow-\infty)=\mathbb{1}\) for all \(k\) since we have a Hermitian starting point. We thus solve Eqn. (S.5) with the initial conditions \(|\phi_{1}(t\rightarrow-\infty)\rangle_{k}=(1,0)^{T}\) and \(|\phi_{2}(t\rightarrow-\infty)\rangle_{k}=(0,1)^{T}\) up to irrelevant global phases.
This gives \[\begin{split}|\phi_{1}(t)\rangle_{k}&=e^{-\frac{\pi\delta}{4}}\left(\begin{array}{c}e^{-\frac{i\pi}{4}}f_{k}(t)\\ -\frac{(k+\gamma)}{\sqrt{2F}}g_{k}(t)\end{array}\right),\\ |\phi_{2}(t)\rangle_{k}&=e^{-\frac{\pi\delta}{4}}\left(\begin{array}{c}\frac{k-\gamma}{\sqrt{2F}}g_{k}^{*}(t)\\ e^{\frac{i\pi}{4}}f_{k}^{*}(t)\end{array}\right).\end{split}\] (S.7) Since \(\rho_{k}(t)\) is Hermitian by construction, we can express it in terms of the Pauli matrices \[\rho_{k}(t)=\rho_{0,k}(t)\mathbb{1}+\sum_{j=x,y,z}\rho_{j,k}(t)\sigma_{j}\] (S.8) where its components are given by \[\begin{split}\rho_{0,k}(t)&=e^{-\frac{\pi\delta}{2}}\left(|f_{k}(t)|^{2}+\left(\frac{k^{2}+\gamma^{2}}{2F}\right)|g_{k}(t)|^{2}\right)\\ \rho_{x,k}(t)&=-\frac{2\gamma}{\sqrt{2F}}e^{-\frac{\pi\delta}{2}}\text{Re}\left(e^{\frac{i\pi}{4}}f_{k}^{*}(t)g_{k}(t)\right)\\ \rho_{y,k}(t)&=-\frac{2\gamma}{\sqrt{2F}}e^{-\frac{\pi\delta}{2}}\text{Im}\left(e^{\frac{i\pi}{4}}f_{k}^{*}(t)g_{k}(t)\right)\\ \rho_{z,k}(t)&=-\frac{k\gamma}{F}e^{-\frac{\pi\delta}{2}}|g_{k}(t)|^{2}\end{split}\] (S.9) where Re and Im denote the real and imaginary parts of the functions. Using the identity \[e^{-\frac{\pi\delta}{2}}\left(|f_{k}(t)|^{2}+\delta|g_{k}(t)|^{2}\right)=1,\] (S.10) we see that unitary evolution is recovered in the Hilbert space \(\mathscr{H}_{\rho(t)}\), since \(\langle\psi(t)|\rho_{k}(t)|\psi(t)\rangle_{k}=1\) at all times. We also recover \(\rho_{k}(t)=\mathbb{1}\) in the Hermitian case \(\gamma=0\).

## IV Mapping to Hermitian \(h(t)\)

We can also map the system to a stationary Hilbert space \(\mathscr{H}\) described by the Hermitian Hamiltonian [7] \[h_{k}(t)=\eta_{k}(t)H_{k}(t)\eta_{k}^{-1}(t)+i\dot{\eta}_{k}(t)\eta_{k}^{-1}(t),\] (S.11) where we have introduced the square-root decomposition of the metric, \(\rho_{k}(t)=\eta_{k}^{\dagger}(t)\eta_{k}(t)\). The time-evolved state in \(\mathscr{H}\) is given by \[i\frac{\text{d}}{\text{d}t}|\Psi(t)\rangle_{k}=h_{k}(t)|\Psi(t)\rangle_{k}\] (S.12) which is related to \(|\psi(t)\rangle_{k}\) by \(|\Psi(t)\rangle_{k}=\eta_{k}(t)|\psi(t)\rangle_{k}\). In the Hermitian case \(\gamma=0\), the time-evolved states satisfy \(|\Psi(t)\rangle_{k}=|\psi(t)\rangle_{k}\) up to a global phase. Although \(\eta_{k}(t)\) need not be unique, this requirement imposes some constraints on its choice. In our model, it is satisfied if we choose a Hermitian \(\eta_{k}(t)=\eta_{k}^{\dagger}(t)\), such that [8] \[\eta_{k}(t)=\frac{\theta_{k}(t)}{2}\mathbb{1}+\sum_{j=x,y,z}\frac{\rho_{j,k}(t)}{\theta_{k}(t)}\sigma_{j}\] (S.13) where \[\theta_{k}(t)=\sqrt{\rho_{0,k}(t)+\sqrt{\rho_{0,k}^{2}(t)-1}}+\sqrt{\rho_{0,k}(t)-\sqrt{\rho_{0,k}^{2}(t)-1}}\] (S.14) and \(\rho_{j,k}(t),\ j=0,x,y,z\) are given in Eqn. (S.9). With this choice of \(\eta_{k}(t)\), we recover \(\eta_{k}(t)=\mathbb{1}\) for all \(k\) in the Hermitian case \(\gamma=0\). Using Eqns. (S.11) and (S.13), we obtain \[h_{k}(t)=k\left(1+\frac{\gamma^{2}}{F}\Delta h_{x}(t)\right)\sigma_{x}+\sqrt{F}\left(\sqrt{F}t+\frac{\gamma^{2}}{F}\Delta h_{z}(t)\right)\sigma_{z}\] (S.15) where we recover \(h_{k}(t)_{|\gamma=0}=H_{k}(t)_{|\gamma=0}=k\sigma_{x}+Ft\sigma_{z}\) in the Hermitian case \(\gamma=0\). The non-Hermitian contributions to \(h_{k}(t)\) are proportional to the dimensionless parameter \(\frac{\gamma^{2}}{F}\), which is a measure of the extent of non-Hermiticity.
The dimensionless non-Hermitian correction terms are given by \[\begin{split}\Delta h_{x}(t)&=-\frac{1}{2}\left(\frac{|f_{k}(t)|^{2}}{|g_{k}(t)|^{2}}+\frac{k^{2}}{2F}\right)^{-1}\\ \Delta h_{z}(t)&=\frac{1}{\sqrt{2}}\left(\frac{\operatorname{Re}\left(e^{\frac{i\pi}{4}}f_{k}^{*}(t)g_{k}(t)\right)}{|f_{k}(t)|^{2}+\frac{k^{2}}{2F}|g_{k}(t)|^{2}}\right)\end{split}\] (S.16) which can be completely parameterized by \(\delta\) and \(\frac{\gamma^{2}}{F}\) by writing \(\frac{k^{2}}{2F}=\delta+\frac{\gamma^{2}}{2F}\). From Eqns. (S.15) and (S.16), we see that \(h_{k}(t)\) picks up a complicated time dependence in the presence of non-Hermiticity. The extent of departure from the original linear quench is controlled by the parameters \(\delta\) and \(\frac{\gamma^{2}}{F}\).

## Spin expectation

Setting \(\hat{o}=\sigma_{z}\) and \(\hat{O}(t)=\eta_{k}^{-1}(t)\sigma_{z}\eta_{k}(t)\equiv\tilde{\sigma}_{z}(t)\) in Eq. (3) of the main text, the spin expectation value under the metric formalism is given by \[\begin{split}\langle\sigma_{z}(t)\rangle_{k,\text{metric}}&=\langle\Psi(t)|\sigma_{z}|\Psi(t)\rangle_{k}=\langle\psi(t)|\rho(t)\tilde{\sigma}_{z}(t)|\psi(t)\rangle_{k}\\ &=\langle\psi(t)|\eta_{k}^{\dagger}(t)\sigma_{z}\eta_{k}(t)|\psi(t)\rangle_{k}.\end{split}\] (S.17) Substituting Eqns. (S.3) and (S.13) into Eqn. (S.17), we obtain \[\langle\sigma_{z}(t)\rangle_{k,\text{metric}}=\frac{2+\left(\frac{2k^{2}-\gamma^{2}}{k\gamma}\right)\rho_{z,k}(t)}{1+\rho_{0,k}(t)}.\] (S.18) Using the asymptotic expressions \[\begin{split}\lim_{t\to\infty}|f_{k}(t)|^{2}&=e^{-\frac{3\pi\delta}{2}}\\ \lim_{t\to\infty}|g_{k}(t)|^{2}&=\frac{e^{\frac{\pi\delta}{2}}}{\delta}\left(1-e^{-2\pi\delta}\right)\end{split}\] (S.19) and Eqn. (S.9), we obtain Eqn. (6) in the main text. The same procedure can be carried out for \(\langle\sigma_{z}(t)\rangle_{k,\text{norm}}\) using Eq. (4) of the main text and Eq. (S.3); the asymptotic expression, Eq. (6) in the main text, is then obtained by using Eq. (S.19). In particular, in the adiabatic limit \(F\to 0\) with a finite \(\gamma\), the parameter \(\delta\to\pm\infty\), with the sign depending on the sign of \(k^{2}-\gamma^{2}\). This restores the clear distinction between the behaviors of the \(\mathcal{PT}\)-broken and \(\mathcal{PT}\)-symmetric modes in the adiabatic limit.
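As a cross-check of Eqs. (S.4)-(S.6) and of the pseudo-unitarity statement below Eq. (S.10), the short numerical sketch below (our own illustration, not the authors' code; the values of \(k\), \(\gamma\) and \(F\) are arbitrary and a large negative starting time stands in for \(t\rightarrow-\infty\)) integrates the TDSE for \(|\psi(t)\rangle_{k}\) and for the two auxiliary states evolved with \(H^{\dagger}(t)\), assembles \(\rho_{k}(t)\), and verifies that \(\langle\psi(t)|\rho_{k}(t)|\psi(t)\rangle_{k}\) stays equal to 1 while the bare norm drifts.

```python
# Hedged numerical sketch (ours, not the authors' code): build rho_k(t) from the
# two solutions of the conjugate TDSE, Eqs. (S.5)-(S.6), and verify the
# pseudo-unitarity <psi(t)|rho_k(t)|psi(t)> = 1 discussed below Eq. (S.10).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

k, gamma, F = 0.5, 1.0, 0.4                  # a PT-broken mode (|k| < gamma)
H = lambda t: k * sx + 1j * gamma * sy + F * t * sz

def rk4(rhs, y0, ts):
    """Fixed-step RK4 for a complex vector ODE dy/dt = rhs(t, y)."""
    y, out = np.array(y0, dtype=complex), [np.array(y0, dtype=complex)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = rhs(t0, y)
        k2 = rhs(t0 + h / 2, y + h / 2 * k1)
        k3 = rhs(t0 + h / 2, y + h / 2 * k2)
        k4 = rhs(t1, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y)
    return np.array(out)

ts = np.linspace(-30.0, 30.0, 60_001)        # t = -30 approximates t -> -infinity
psi = rk4(lambda t, y: -1j * (H(t) @ y), [1, 0], ts)             # Eq. (S.1)
phi1 = rk4(lambda t, y: -1j * (H(t).conj().T @ y), [1, 0], ts)   # Eq. (S.5)
phi2 = rk4(lambda t, y: -1j * (H(t).conj().T @ y), [0, 1], ts)

# rho_k(t) = |phi1><phi1| + |phi2><phi2|, Eq. (S.6)
rho = (np.einsum('ti,tj->tij', phi1, phi1.conj())
       + np.einsum('ti,tj->tij', phi2, phi2.conj()))

pseudo_norm = np.einsum('ti,tij,tj->t', psi.conj(), rho, psi).real
bare_norm = np.einsum('ti,ti->t', psi.conj(), psi).real

assert np.allclose(pseudo_norm, 1.0, atol=1e-3)   # conserved by construction
print("bare norm range:", bare_norm.min(), bare_norm.max())  # grows in the PT-broken window
```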
2302.03409
Demonstration of a plasmonic nonlinear pseudo-diode
We demonstrate a nonlinear plasmonic metasurface that exhibits strongly asymmetric second-harmonic generation (SHG): nonlinear scattering is efficient upon excitation in one direction and is substantially suppressed when the excitation direction is reversed, thus enabling a diode-like functionality. A significant (approximately 10 dB) extinction ratio of SHG upon opposite excitations is measured experimentally, and these findings are substantiated with full-wave simulations. The combination of two commonly used metals - aluminium and silver - produces a material composition asymmetry that results in a bianisotropic response of the system, as confirmed by performing a homogenization analysis and extracting an effective susceptibility tensor. Finally, we discuss the implications of our results from the more fundamental perspectives of reciprocity and time-reversal asymmetry.
Sergejs Boroviks, Andrei Kiselev, Karim Achouri, Olivier J. F. Martin
2023-02-07T11:34:56Z
http://arxiv.org/abs/2302.03409v1
# Demonstration of a plasmonic nonlinear pseudo-diode ###### Abstract We demonstrate a nonlinear plasmonic metasurface that exhibits strongly asymmetric second-harmonic generation: nonlinear scattering is efficient upon excitation in one direction and it is substantially suppressed when the excitation direction is reversed, thus enabling a diode-like functionality. A significant (approximately \(10\,\mathrm{dB}\)) extinction ratio of SHG upon opposite excitations is measured experimentally and those findings are substantiated with full-wave simulations. The combination of two commonly used metals - aluminium and silver - produces a material composition asymmetry that results into a bianisotropic response of the system, as confirmed by performing homogenization analysis and extracting an effective susceptibility tensor. Finally, we discuss the implications of our results from the more fundamental perspectives of reciprocity and time-reversal asymmetry. High-performance nanoscale devices that allow transmission of light only in one direction - optical isolators - remain a long-coveted research objective for optical engineers. This problem is nontrivial due to the fundamental property of electromagnetic waves: in linear time-invariant (LTI) media and in the absence of an external time-odd bias, such as a magnetic field, they propagate reciprocally, i.e. the same way in the forward and backward directions. This property is linked with the time-reversal symmetry of the macroscopic Maxwell's equations and can be shown via the Lorentz reciprocity theorem, which specifically applies to LTI media [1; 2; 3]. However, despite recent comprehensive publications on this topic [1; 2; 4; 5; 6], there remains a tangible confusion in the community about the difference between true nonreciprocivity and deceptively similar time-reversal asymmetric response. For example, time-invariant and bias-less lossy systems may exhibit contrast upon excitation from opposite directions, but they do not qualify as optical isolators since they possess a symmetric scattering matrix and thus obey Lorentz reciprocity [7]. Furthermore, in the case of devices based on nonlinear effects, the distinction between true and pseudoisolators is even more intricate. In particular, devices based on Kerr-type nonlinearities [8] are intrinsically limited by dynamic reciprocity: they can only perform as pseudo-isolators, since they do not exhibit unidirectional transmission upon simultaneous excitation from opposite directions [9; 10]. One aim of this work is to explore possibilities to overcome this limitation and demonstrate how it can be turned into an advantage with an appropriate application. In that context, photonic metasurfaces - artificial planar materials constituted of subwavelength elements - have been identified as a promising platform for the realization of miniature optical isolators or asymmetric devices [11]. To this end, let us highlight recent progress in the development of two classes of metasurfaces - nonlinear and bianisotropic metasurfaces. These two classes are particularly relevant to the scope of our work, since combining their features enables realization of unconventional functionalities, such as aforementioned nonlinearly induced nonreciprocity [12; 13; 14; 15; 16], directional harmonic generation [17; 18; 19] and nonlinear beam shaping [20; 21]. Nonlinear metasurfaces [22; 23; 24] have the potential to replace bulky optical crystals and thus minimize nonlinear optical devices. 
Among other applications, plasmonic metasurfaces have proven to be interesting for second-harmonic generation (SHG) [25; 26; 27], which is a second-order nonlinear optical process in which an excitation wave with frequency \(\omega\) is converted into a wave with double frequency \(2\omega\)[28]. However, the second-order nonlinear response of plasmonic metals is weak due to their centrosymmetric crystal structure, which is only broken at the surface, giving rise to a non-vanishing surface normal component of the second-order susceptibility tensor \(\chi^{(2)}_{\perp\perp\perp}\). Yet, the overall SHG efficiency remains small due to the reduced interaction volume: essentially, the nonlinear process occurs within the few atomic layers at the metal surface, since the bulk metal is opaque for visible and infrared light and its bulk second-order response is vanishing. Nevertheless, this limitation can be partially overcome by virtue of the field enhancement associated with surface plasmon resonances at metal surfaces. Thus, various SHG enhancement schemes were proposed for plasmonic metasurfaces, based on multipolar resonances [29; 30; 31; 32; 33; 34; 35], plasmonic lattice resonances [36; 37] and even light-induced centrosymmetry breaking [38]. On the other hand, bianisotropic metasurfaces allow engineering the polarization response to realize highly efficient refraction devices through the combination of electric and magnetic effects [39; 40]. The bianisotropic response, which emerges in structures with broken spatial symmetries [41], implies that the material acquires a magnetic polarization upon excitation with an electric field and, vice versa, an electric polarization is produced by a magnetic field. Such a magnetoelectric coupling gives rise to spatial dispersion (i.e. a wavevector-dependent response) that enables an excitation angle-dependent operation [42]. For example, in lossy systems, it may lead to asymmetric reflection and absorption, which will be discussed further in relation to our work. In this work, we demonstrate theoretically and experimentally a plasmonic metasurface that exhibits asymmetric SHG. The operation of the device is conceptually depicted in Fig. 1: in contrast to a conventional nonlinear crystal, second-harmonic (SH) light is efficiently generated only for one excitation direction, which essentially enables a nonlinear optical pseudo-diode functionality (to be distinguished from optical isolators and pseudo-isolators). Such an asymmetric response requires a structural asymmetry of the system, and previously proposed theoretical designs with similar functionalities have relied on a geometric asymmetry, which might be difficult to realize experimentally [43; 44; 45; 46; 47]. Here, we take a different route and implement a structural asymmetry through the utilization of two common plasmonic materials - silver (Ag) and aluminium (Al) - in a metasurface, and show that substantial direction-dependent SHG (up to approx. 16.9 dB in theory and approx. 10 dB in experiment) can be achieved. A major advantage of this two-dimensional design is that such a material asymmetry is relatively easy to implement using standard nanofabrication techniques, e.g. single-exposure electron-beam lithography (EBL) [48]. Furthermore, the combination of plasmonic metals is known to enhance nonlinear processes [49; 50]. To the best of our knowledge, this is the first experimental demonstration of a _plasmonic_ metasurface for asymmetric SHG, although we note that in a recent experimental demonstration Kruk et al.
utilized a combination of dielectric nonlinear materials for third-harmonic generation [51]. Additionally, we perform homogenization analysis of the metasurface to extract effective susceptibilities and reveal bianisotropic property of our metasurface. Finally, we discuss the fundamental implications of our results in the context of nonreciprocity. The building block of the metasurface - the meta-atom - is schematically depicted in Fig. 2a. It is comprised of two T-shaped nanostructures made of Al and Ag that are stacked one on top of the other and separated by a thin silicone dioxide (SiO\({}_{2}\)) spacer. These nanostructures are embedded in SiO\({}_{2}\) and arranged in a square lattice with the period of \(\Lambda=\)250 nm. Such a periodicity is sufficiently small to avoid diffraction in both linear and nonlinear regimes, as the metasurface is designed for the excitation with the vacuum wavelength of \(\lambda_{0}=\)800 nm (the effective wavelength in SiO\({}_{2}\) is \(\sim\)537 nm) and SHG at \(\lambda_{\text{SH}}=\)400 nm (\(\sim\)268 nm in SiO\({}_{2}\)). As shown in Fig. 2b, we consider two different excitation conditions that are indicated with red thick arrows: forward (in the direction along the \(+z\)-axis) and backward (along the \(-z\)-axis) propagating plane waves that are \(x\)-polarized. In the linear regime, each of the two waves gives rise to transmitted (red solid arrows) and reflected (red dashed arrows) waves, which are labeled as forward-excited reflection (FR) and transmission (FT), or backward-excited reflection (BR) and transmission (BT). Additionally, both excitations produce signals at the SH frequency (shown with blue arrows). For the SH signals, we use the same naming convention as the waves produced by linear scattering, Fig. 2b. For the reflected and transmitted waves at the excitation frequency, we measure the co-polarized \(x\)-component of the electric field, whereas for the SHG waves, the cross-polarized \(y\)-component is measured, as it is found to be dominant (see Fig. S3 in the Supporting Information). T-shaped meta-atoms provide almost independent control of the spectral positions for the resonances both at the excitation and SH frequencies by varying the lateral dimensions \(L_{x}\) and \(L_{y}\)[52]. As can be seen from Fig. S1 in the Supporting Information, for a fixed wavelength, the transmission in the linear regime is tuned by varying \(L_{x}\). In the nonlinear regime, the transmission and reflection are controlled by both \(L_{x}\) and \(L_{y}\). Importantly, for forward excitation, the maximum in SHG transmission coincides with the minimum in linear transmission (compare panels a and b in Fig. S1 in the Supporting Information). The other geometric parameters \(L_{\text{s}}\), \(D\), \(t_{\text{Ag}}\) and \(t_{\text{Al}}\) do not have a strong influence on the resonance wavelength of the fundamental mode, however they affect the scattering cross-section of the meta-atoms via the retardation effects [53], which, in turn determines the overall transmission and SHG intensity (see Fig. S2 in the Supporting Information). The sidewalls of the meta-atom are tilted by 10\({}^{\circ}\) and the edges and corners are rounded with a 5 nm radius to mimic the experimentally fabricated structures, as discussed below. We select \(L_{x}=\)135 nm, \(L_{y}=\)195 nm, \(L_{\text{s}}=\)25 nm and \(D=t_{\text{Ag}}=t_{\text{Al}}=\)50 nm, since these parameters maximize SHG upon forward excitation at the design wavelength. 
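As a small numerical aside (our own sketch, not part of the paper), the quoted period can be checked against the condition for suppressing diffraction at normal incidence: no diffracted orders appear as long as \(\Lambda\) is smaller than the wavelength in the SiO\({}_{2}\) background at both the pump and the SH frequency, with the refractive index following from the permittivity \(\varepsilon_{\mathrm{SiO_{2}}}=2.22\) used for the simulations described below.

```python
# Our own back-of-the-envelope check (not from the paper): the 250 nm period is
# below the wavelength in the SiO2 background at both the pump and the SH,
# so no diffraction orders open up at normal incidence.
import numpy as np

eps_sio2 = 2.22                  # background permittivity used for the simulations
n = np.sqrt(eps_sio2)            # ~1.49
period = 250e-9                  # lattice period Lambda

for lam0 in (800e-9, 400e-9):    # pump and second-harmonic vacuum wavelengths
    lam_medium = lam0 / n
    print(f"lambda0 = {lam0 * 1e9:.0f} nm -> {lam_medium * 1e9:.0f} nm in SiO2, "
          f"subwavelength lattice: {period < lam_medium}")
# -> ~537 nm and ~268 nm, both larger than the 250 nm period.
```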
Such meta-atom dimensions result in minimal transmission in the linear regime and sufficiently high extinction ratio of SHG upon forward and backward excitation (see the parametric sweeps in Fig. S1 in the Supporting Information). Furthermore, in the \(L_{x}\) and \(L_{y}\) parameter space, the forward-excitation SHG peak is broad, which implies that the metasurface efficiency is weakly sensitive to deviations Figure 1: Comparison of conventional and asymmetric SHG: (a) symmetric SHG from a nonlinear (NL) crystal; (b) asymmetric SHG from a nonlinear bianisotropic metasurface. from the nominal dimensions, thus easing nanofabrication tolerances. The simulations are performed in two steps using a custom-developed numerical electromagnetic solver based on the surface integral equation [54, 55]. First, the linear fields are computed with a plane-wave excitation and periodic boundary conditions. For the SHG simulations, the nonlinear surface polarization \(P_{\perp}^{(2\omega)}=\chi_{\perp\perp}^{(2)}E_{\perp}^{\omega}E_{\perp}^{\omega}\) is used as a source, where the normal components of the surface fields \(E_{\perp}^{\omega}\) are obtained from the linear simulations. The simulated reflectance and transmittance in the linear and SHG regimes are shown in Fig. 2c and d. In the simulations, we use interpolated values \(\varepsilon_{\mathrm{Al}}\) and \(\varepsilon_{\mathrm{Ag}}\) of the experimental permittivity data from McPeak et al. [56], and for the permittivity of the background medium we use \(\varepsilon_{\mathrm{SiO_{2}}}=2.22\). Among the noble metals, Ag is known to have the lowest losses at optical frequencies, whereas Al has recently attracted attention as a low cost alternative plasmonic material [57, 58, 59, 60]. Apart from its low cost, Al is known to have the highest second-order nonlinear susceptibility among the plasmonic materials, in particular its surface normal component \(\chi_{\perp\perp\perp}^{(2)}\)[61], it also exhibits an interband transition-related absorption peak at \(800\,\mathrm{nm}\) (see Fig. S4 in the Supporting Information). As shown in Fig. 2c, in the linear regime, the transmission \(T\) for both forward and backward excitations is exactly the same, as imposed by reciprocity. However, the reflection \(R\) and absorption \(A\), which are related to transmission as \(A+R=1-T\), depend on the excitation direction, as they are not restricted by reciprocity and depend on the spatial asymmetry of the system. The asymmetric reflection and absorption of the system can be analyzed by considering an isolated meta-atom. As can be seen in Fig. S5c and d in the Supporting Information, forward and backward excitations give rise to two distinct electric field distributions. In particular, the electric field concentration in the Al part of the structure is strongly dependent on the excitation direction. Although the response is primarily dipolar for both excitations (see Fig. S6a and b in the Supporting Information), this results in asymmetric linear scattering and absorption cross-sections, which is a characteristic of _bianisotropic_ systems [16]. In fact, it is presence of the losses that enables asymmetric scattering when the structure is illuminated from opposite directions, whereas the extinction cross-section remains exactly the same, as imposed by reciprocity [62]. In turn, the SHG response that is plotted in Fig. 
2d, has an even stronger dependence on the excitation direction: both nonlinear FT and RT are more than two orders of magnitude stronger than the BT an BR at \(400\,\mathrm{nm}\). A multipolar analysis of an isolated meta-atom (see Fig. S6c and d in the Supporting Information), shows that the electric dipolar and quadrupolar modes are excited more efficiently at \(400\,\mathrm{nm}\) upon forward excitation. This is due to the aforementioned different electric-field distributions at the surface of the T-shaped particles, that become the sources for the SHG. To further elucidate the significance of bianisotropy in such an asymmetric response, we extracted the effective susceptibilities from the simulated electromagnetic fields following the previously documented procedure of metasurface homogenization analysis [63, 64, 65]. Briefly, the expressions for nonlinear susceptibilities are derived from the generalized sheet transition conditions and are calculated using the simulated reflected and transmitted fields upon different excitation conditions at \(\omega\) and \(2\omega\) frequencies. In Fig. 2e and f we plot the extracted effective susceptibility tensor elements that are relevant to the considered excitation conditions. For both linear and nonlinear susceptibilities, the magneto-electric coupling (corresponding to the terms with mixed "e" and "m" subscripts in Fig. 2e and f) is non-negligible. The asymmetric response becomes apparent by noting that the induced linear and nonlinear polarizations are given by \[\mathbf{P}^{\omega}=\overline{\chi}_{\mathrm{ee}}^{\omega}\cdot \mathbf{E}^{\omega}+\overline{\overline{\chi}}_{\mathrm{em}}^{\omega}\cdot \mathbf{H}^{\omega}, \tag{1a}\] \[\mathbf{P}^{2\omega}=\overline{\overline{\chi}}_{\mathrm{ee}}^{2 \omega}\cdot\mathbf{E}^{2\omega}+\overline{\overline{\chi}}_{\mathrm{em}}^{2 \omega}\cdot\mathbf{H}^{2\omega}+\overline{\overline{\chi}}_{\mathrm{ee}}^{ \omega}:\mathbf{E}^{\omega}\mathbf{E}^{\omega}+\overline{\overline{\chi}}_{ \mathrm{em}}^{\omega}:\mathbf{E}^{\omega}\mathbf{H}^{\omega}+\overline{ \overline{\chi}}_{\mathrm{em}}^{\omega}:\mathbf{H}^{\omega}\mathbf{H}^{\omega}. \tag{1b}\] In the linear regime, the non-negligible magneto-electric coupling term \(\chi_{\mathrm{me}}\) results in an asymmetric absorption and reflection. As for the nonlinear effective susceptibility tensors, the dominant components are \(\chi_{\mathrm{mem}}^{\mathrm{\textit{W}\textit{E}}}\) and \(\chi_{\mathrm{em}}^{\mathrm{\textit{W}\textit{E}}}\), which relate magnetic/electric excitations with electric/magnetic responses along orthogonal directions and result in strongly asymmetric SHG. To verify experimentally this asymmetric nonlinear response, we fabricated and characterized a metasurface device. Instead of the widespread lift-off process, we employ the ion beam etching (IBE) technique which enables the fabrication of stratified nanostructures, in particular metal-dielectric composites, with sharper features [48, 66]. The schematic flowchart of the fabrication process is shown in Fig. 3a. We use a \(150\,\mathrm{\SIUnitSymbolMicro m}\)-thick D 263 glass wafer (Schott) which is coated with \(50\,\mathrm{\SIUnitSymbolMicro m}\)-thick Al and \(25\,\mathrm{nm}\) SiO\({}_{2}\) films using RF sputtering (Pfeiffer SPIDER 600). Next, we deposit a \(50\,\mathrm{nm}\) thick Ag layer using an e-beam assisted evaporator (Alliance-Concept EVA 760). 
The T-shaped pattern arrays are exposed in the hydrogen silsesquioxane (HSQ, XR-1541-006 from DuPont), which is a negative tone e-beam resist, using electron beam lithography (Raith EBPG5000+). The formation of the exposed patterns in the thin films is performed using a low-power argon IBE (Veeco Nexus IBE350, operated at a 300 V acceleration voltage). An important point for this last step is the pulsed IBE operation: 10 s of etching followed by 30 s of cooling to avoid damaging the sample by substrate overheating. The typical overall IBE process time is 160 s, and the etching depth is controlled in-situ using a mass-spectrometer, which allows real-time monitoring of the etched material composition: the etching process is stopped as soon as the Al flux drops to a minimum. The fabrication results are shown in the scanning electron microscope (SEM) images in Fig. 3b-d. The morphology of the fabricated structure can be inspected in Fig. 3c: intrinsically, the IBE process results in tilted sidewalls (approx. 10\({}^{\circ}\)) and rounded corners and edges. Although such features are typically undesired, they are not expected to degrade the performance of the metasurface, as these were taken into account in the simulations. In turn, the layered material composition can be well identified in the image acquired with the back-scattered electron (BSE) detector in Fig. 3d. In the last fabrication step, we cover the metallic nanostructures with a thick SiO\({}_{2}\) layer (approx. 300 nm) which serves two purposes: it acts as a protective layer preventing degradation of the Al and Ag nanostructures, and simplifies the physical conditions by having identical permittivities above and below the metasurface. The experimental setup and the results for the optical characterization of the fabricated sample are shown in Fig. 4. As an excitation light source, we use a mode-locked Ti:Saph laser that outputs approx. 120 fs pulses with a central wavelength of 800 nm. The excitation light is weakly focused onto the metasurface with a low magnification objective (NA = 0.1), which results in a focal spot with a 10 um FWHM mimicking the plane wave excitation used in the simulations. The spectrum of the nonlinearly generated light is shown in Fig. 3a. Apart from the characteristic SHG peak at 400 nm, it has a tail at longer wavelengths, which is attributed to nonlinear photo-luminescence (NPL). As an interesting side-effect, we note that the NPL signal is substantially larger for BT than for FT. This fact can be explained by the peculiarity of the two-photon absorption mechanism in metals that induces the NPL. As opposed to the coherent nature of two-photon absorption in molecules or dielectrics, in metals it can be regarded as a cascaded process. Specifically, two photons are absorbed sequentially rather than simultaneously [67; 68; 69]. Absorption of the first photon gives rise to an intraband transition in the conduction band and creates a vacancy below the Fermi level. Thus, the second photon results in an interband transition that fills the vacancy in the conduction band and creates one in the valence band. Both of these photon absorption steps are linear, but result in an effective nonlinearity. Thus, higher linear absorption upon backward excitation (see Fig. S5 and discussion above), results in a higher probability of two-photon absorption and subsequent NPL, which is consistent with our observations. 
Such asymmetric behaviour is sometimes referred to as _"nonreciprocal SHG"_, both in the metasurfaces [43] and solid state physics [70; 71] communities. We share the view that such a nomenclature is improper in the case of SHG, since the concept of nonreciprocity is not well-defined for nonlinear optics [72; 64; 73]. For any \(N\)-port system, Lorentz reciprocity implies the symmetry of the scattering matrix \(\overline{\overline{\mathbf{S}}}^{T}=\overline{\overline{\mathbf{S}}}\), where \({}^{T}\) denotes the transpose operator. In the case of a two-port system like that considered in this work in the linear regime, the scattering matrix is given by \[\overline{\overline{\mathbf{S}}}=\begin{bmatrix}S_{11}&S_{12}\\ S_{21}&S_{22}\end{bmatrix}, \tag{2}\] and reciprocity requires that the transmission coefficients \(S_{12}\) and \(S_{21}\) are equal. However, it does not impose any limitations on the reflection coefficients \(S_{11}\) and \(S_{22}\). This is true for our system in the linear regime, since the transmissions for forward and backward excitations are equal, while the reflections are asymmetric.

Figure 2: Design and simulated performance of the nonlinear bianisotropic metasurface. (a) Schematics of the system and the considered forward-excitation transmission (FT) and reflection (FR), as well as backward-excitation transmission (BT) and reflection (BR); thick solid red arrows indicate the excitation waves; thin solid (dashed) arrows indicate the transmitted (reflected) waves at the excitation frequency in red and at the SH frequency in blue. (b) Schematic drawing of the metasurface unit cell in isometric, top- and side-views with indicated geometric and material parameters. Simulated metasurface reflectance and transmittance (c) in the linear regime and (d) at the SH frequency. Relevant components of the extracted (e) linear and (f) nonlinear effective susceptibility tensors.

However, in the nonlinear regime, our metasurface cannot be regarded as a two-port system anymore, since the SH emission represents a distinct electromagnetic mode. Therefore, this system must at least be considered as a 4-port system (assuming that higher-order harmonic generation is negligible), represented with the following scattering matrix: \[\overline{\overline{\mathbf{S}}}=\begin{bmatrix}\overline{\overline{\mathbf{S}}}^{\omega\rightarrow\omega}&\overline{\overline{\mathbf{S}}}^{2\omega\rightarrow\omega}\\ \overline{\overline{\mathbf{S}}}^{\omega\rightarrow 2\omega}&\overline{\overline{\mathbf{S}}}^{2\omega\rightarrow 2\omega}\end{bmatrix}=\begin{bmatrix}S_{11}^{\omega\rightarrow\omega}&S_{12}^{\omega\rightarrow\omega}&S_{11}^{2\omega\rightarrow\omega}&S_{12}^{2\omega\rightarrow\omega}\\ S_{21}^{\omega\rightarrow\omega}&S_{22}^{\omega\rightarrow\omega}&S_{21}^{2\omega\rightarrow\omega}&S_{22}^{2\omega\rightarrow\omega}\\ S_{11}^{\omega\rightarrow 2\omega}&S_{12}^{\omega\rightarrow 2\omega}&S_{11}^{2\omega\rightarrow 2\omega}&S_{12}^{2\omega\rightarrow 2\omega}\\ S_{21}^{\omega\rightarrow 2\omega}&S_{22}^{\omega\rightarrow 2\omega}&S_{21}^{2\omega\rightarrow 2\omega}&S_{22}^{2\omega\rightarrow 2\omega}\end{bmatrix}, \tag{3}\] which describes both linear transmission/reflection at frequencies \(\omega\) and \(2\omega\), as well as the nonlinear processes \(\omega\)-\(2\omega\) and \(2\omega\)-\(\omega\).
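To make the two-port statement above concrete, a minimal sketch (illustrative, made-up coefficients, not measured values) shows that a lossy two-port with equal transmission but unequal reflection coefficients passes the Lorentz reciprocity test of Eq. (2), i.e. its scattering matrix is symmetric, even though its absorption differs for forward and backward illumination.

```python
# Minimal illustration (made-up coefficients, not measured data): a symmetric
# scattering matrix S = S^T is compatible with asymmetric reflection/absorption.
import numpy as np

t = 0.30 + 0.10j          # S21 = S12 (required by reciprocity)
r_f = 0.20 + 0.05j        # S11, reflection for forward excitation
r_b = 0.55 - 0.15j        # S22, reflection for backward excitation

S = np.array([[r_f, t],
              [t, r_b]])

assert np.allclose(S, S.T)                    # reciprocal (Lorentz) ...
A_fwd = 1 - abs(t)**2 - abs(r_f)**2           # ... yet absorption depends
A_bwd = 1 - abs(t)**2 - abs(r_b)**2           # on the excitation direction
print(f"A_forward = {A_fwd:.2f}, A_backward = {A_bwd:.2f}")
```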
In our experiment, we do not directly probe \(S_{21}^{\omega\rightarrow\omega 2\omega}\overset{?}{=}S_{12}^{\omega \rightarrow\omega}\), where \(S_{12}^{2\omega\rightarrow\omega}\) parameters corresponds to the excitation at SH frequency and generation of a wave at frequency \(\omega\). In fact, this process is known as known as parametric down conversion and it has an extremely low efficiency in comparison with SHG [28]. Probing this equality, as well as equality of 8 other parameters that are flipped by the \(\overline{\mathbf{S}}^{T}\) operation, namely \(S_{21}^{\omega\rightarrow\omega}=\overset{?}{=}S_{12}^{\omega\rightarrow\omega}\), \(S_{11}^{\omega\rightarrow\omega\rightarrow\omega}\overset{?}{=}S_{11}^{2 \omega\rightarrow\omega}\), \(S_{12}^{\omega\rightarrow\omega}\overset{?}{=}S_{21}^{2\omega\rightarrow\omega}\), \(S_{22}^{\omega\rightarrow\omega}\overset{?}{=}S_{21}^{2\omega\rightarrow\omega}\) stand for a true reciprocity test in a four-port system. Instead, within our experiment we show that \(S_{21}^{\omega\rightarrow\omega}\neq S_{12}^{\omega\rightarrow\omega \rightarrow\omega}\), which corresponds to an asymmetric nonlinear scattering process that is reciprocal. Yet, a rigorous probing of reciprocity in a nonlinear system would require sophisticated experiments that involve simultaneous excitation with the two waves at frequencies \(\omega\) and \(2\omega\) and precise control over their amplitude and phase [72]. Nevertheless, we assert that our device essentially functions as a nonlinear optical pseudo-diode, allowing the transmission of SH signal only in one direction, which is a desired functionality for various signal processing applications [74]. Figure 4: Optical characterization of the metasurface: (a) measurement setup. (b) Excitation spectrum. (c) Nonlinear spectra. Figure 3: Fabrication of the bimetallic metasurface. (a) Flowchart of the fabrication: 1. Initial substrate; 2. Al, SiO\({}_{2}\), Ag and HSQ thin films deposition; 3. E-beam exposure; 4. IBE; 5. Covering with a thick SiO\({}_{2}\) film. SEM images of the fabricated structure acquired using different detectors and tilt angles: (b) top view SE (scale bar: 1 μm. (c) 45’-tilted view SE (scale bar: 200 nm. (d) 45’-tilted view BSE (scale bar: 200 nm. In summary, we have demonstrated that strongly asymmetric SHG can be achieved in a plasmonic metasurface that is comprised of two common plasmonic metals - aluminium and silver. The structural asymmetry created by the material contrast results in a strong dependence on the excitation direction, with an extinction ratio of approx. 16.9 dB in theory and approx. 10 dB in the experiment. We anticipate that our findings can pave the way for further developments in the field of nonlinear bianisotropic and nonreciprocal devices, as well as inspire novel plasmonic devices with unrivaled functionalities. ## Acknowledgement The authors thank Christian Santschi and Zdenek Benes for their valuable advises on nanofabrication. Funding from the Swiss National Science Foundation (grant PZ00P2_193221) is gratefully acknowledged.
2306.06086
Developing Speech Processing Pipelines for Police Accountability
Police body-worn cameras have the potential to improve accountability and transparency in policing. Yet in practice, they result in millions of hours of footage that is never reviewed. We investigate the potential of large pre-trained speech models for facilitating reviews, focusing on ASR and officer speech detection in footage from traffic stops. Our proposed pipeline includes training data alignment and filtering, fine-tuning with resource constraints, and combining officer speech detection with ASR for a fully automated approach. We find that (1) fine-tuning strongly improves ASR performance on officer speech (WER=12-13%), (2) ASR on officer speech is much more accurate than on community member speech (WER=43.55-49.07%), (3) domain-specific tasks like officer speech detection and diarization remain challenging. Our work offers practical applications for reviewing body camera footage and general guidance for adapting pre-trained speech models to noisy multi-speaker domains.
Anjalie Field, Prateek Verma, Nay San, Jennifer L. Eberhardt, Dan Jurafsky
2023-06-09T17:48:58Z
http://arxiv.org/abs/2306.06086v1
# Developing Speech Processing Pipelines for Police Accountability ###### Abstract Police body-worn cameras have the potential to improve accountability and transparency in policing. Yet in practice, they result in millions of hours of footage that is never reviewed. We investigate the potential of large pre-trained speech models for facilitating reviews, focusing on ASR and officer speech detection in footage from traffic stops. Our proposed pipeline includes training data alignment and filtering, fine-tuning with resource constraints, and combining officer speech detection with ASR for a fully automated approach. We find that (1) fine-tuning strongly improves ASR performance on officer speech (WER=12-13%), (2) ASR on officer speech is much more accurate than on community member speech (WER=43.55-49.07%), (3) domain-specific tasks like officer speech detection and diarization remain challenging. Our work offers practical applications for reviewing body camera footage and general guidance for adapting pre-trained speech models to noisy multi-speaker domains. Anjalie Field, Prateek Verma, Nay San, Jennifer L. Eberhardt, Dan Jurafsky Stanford University, USA {anjalief, prateekv, nay.san, jleberhardt, jurafsky}@stanford.edu **Index Terms**: speech recognition, accountability, policing, social applications, noisy domains ## 1 Introduction Over the last decade, police departments across the United States have rapidly adopted body-worn cameras (BWCs) [1]. This rapid adoption has been spurred on by widespread protests demanding improved accountability and transparency following high-profile deaths of civilians involving officers' use of force [2, 3]. In some ways, BWCs have resulted in improvements: the footage is valuable evidence in instances such as litigation of excessive force cases [4, 5], and analysis of hand-transcribed footage can identify racial disparities in policing and failures to practice procedural justice [6, 7, 8]. However, in the absence of a lawsuit or high-profile incident, most footage is never reviewed. Further, reliance on manual transcriptions limits the scalability of existing automated analyses [6, 9, 8]. At the same time, large pre-trained speech models have achieved remarkable performance over standardized datasets [10, 11, 12, 13, 14]. Models like Whisper and WA2Vvc2 also have demonstrated potential in social good applications, e.g., in monitoring audio(visual) materials related to long-term elderly care [15] or child exploitation [16]. However, in applications involving multi-speaker conversations in noisy environments, models require application-specific adaptation and evaluation [17, 18, 19]. Little work has investigated the speech processing of police BWC footage specifically. Here, we develop and evaluate automatic speech recognition (ASR) and police officer speech detection (diarization) for police BWC footage. Automatic transcription of officer speech would allow extending existing text analyses of racial bias in hand-transcriptions to new data without requiring expensive transcription efforts [6, 8]. It would also allow departments to determine adherence to a procedure by using text classifiers [7] or keyword searches. Although most reviews are likely to be internal, some departments publicly release BWC footage or are mandated to provide access upon request [20, 1]. Thus, speech-processing technology could support independent audits. 
Our primary data is footage from 1,040 vehicle stops conducted by one department in one month, where utterances spoken by officers and community members were previously hand-transcribed. We use the data to construct training and test data sets for ASR and officer speech detection. We evaluate ASR models, with and without in-domain fine-tuning, over the entire test set, dividing by role (officer or community member), race, and gender, and we examine the performance of officer speech detection in combination with ASR. Our findings provide insight into the best practices and limitations of developing technology in this domain. For example, our training data processing pipeline is robust enough that fine-tuning improves ASR performance by 3-11 points. We also show evidence that Whisper models learn to mimic transcribers' representations of transcription confidence by marking difficult segments as unintelligible. Differences by gender and race are not significant; however, ASR over officer speech (WER=12-13% for officers unseen in training) is much more accurate than over community member speech (WER=43.55-49.07%), which suggests that models have a high potential for addressing accountability with less risk of compromising community member privacy [20]. Finally, we identify diarization, specifically officer speech detection, as a continued challenge.

## 2 Data

Video recordings of the 1,040 vehicle stops and hand-transcriptions were provided to us under a data use agreement for the management of such high-risk data and under IRB supervision. The data is generally noisy. Prior transcripts were intended for language analysis, rather than the development of speech processing tools, so not all speech was transcribed and diarized.1 Stops contain background noise like wind and traffic. They contain multiple speakers, and secondary officers, as well as drivers and passengers, can be situated far from the recording device. Dispatch speech from officers' radios can often be heard, sometimes directly overlapping with utterances from the primary interaction. There is high variance in the clarity of speech and quality of footage across stops. Footnote 1: The transcribers were instructed to transcribe only speech by officers and community members, not police dispatch; they inconsistently included officer speech to dispatch (vs. to the community member).

**Test and Validation Sets.** To create reliable test and validation sets, we hand-align existing transcribed utterances to timestamps and correct observed transcription errors. To facilitate analysis by race, we chose the test data to consist of 50%/50% stops of white and black drivers. We also choose each test file to be a stop by a distinct officer and withhold any other stops made by the same officers (whether as primary or secondary officers) from the training and validation sets. Thus, we also selected officers who made a small number of stops to minimize unusable data. Hand-aligning data is extremely time consuming, so we restrict test set stops to contain \(<60\) utterances. We similarly ensure there is no overlap in primary officers between the validation and training set, withholding data as needed, though we less strictly enforce the separation of secondary officers, who speak less frequently. We conduct evaluations over these aligned utterances, discarding un-transcribed speech.

**Training Set Alignment.** We build a training set by applying automated alignment tools and filtering poor-quality transcriptions.
We determine the start and end time for each transcribed utterance using the best of 5 alignment methods:

* Unaligned: 1sec granularity timestamps hand-written by transcribers, with heuristics to correct for obvious typos and extending the start and end by 0.25sec
* MFA: Montreal Forced Aligner [21] with unaligned timestamps as starting points
* MFA chunked: Many utterances are too short for the aligner to process correctly. Thus, using the unaligned timestamps, we chunk consecutive utterances up to a total of 20sec. We run MFA to obtain word-level timestamps and then divide chunks back into separate utterances, with start and end times determined by the word-level timestamps
* W2V2: Robust Wav2Vec2 [13] for forced alignment [22]
* W2V2 chunked: Same as MFA chunked, but using Robust Wav2Vec2 for forced alignment instead of MFA.

For each utterance, we use off-the-shelf Whisper Large [14] and Robust Wav2Vec2 (W2V2) [13] to transcribe the audio segment identified by each alignment method and compare the output with the hand-written transcript. We choose as the final alignment the one for which \(min(WER_{Whisper},WER_{W2V2})\) is lowest. Table 1 reports training WER for each alignment method and the percent of the final training data aligned using each method.

**Training Set Filtering.** Even after alignment, the training data is noisy, containing, for example, transcription errors, overlapping speech, and unfixed alignment errors. We again use \(min_{WER}=min(WER_{Whisper},WER_{W2V2})\) over the best alignment to filter out training instances that are likely incorrect. We experiment with four filtering criteria, indicating filtered training data size in brackets (a sketch of this filtering appears after the model overview below):

* Remove instances \(<0.5\)sec and \(>10\)sec [54,600]
* #1, and remove instances where \(min_{WER}>50\%\) [40,361]
* We define \(WER[nosubs.]\) as WER where we do not count substitutions as errors. This metric is designed to retain instances where there may be errors in the Whisper/Wav2Vec2 outputs (e.g., WER is high) but likely not alignment errors (e.g., WER is driven by substitutions rather than insertions or deletions). We then filter according to #1, and keep only instances where (\(min_{WER[nosubs.]}<10\%\) AND \(min_{WER}<50\%\)) [26,121]
* #1, and remove instances where \(min_{WER}>10\%\) [19,759]

We compare each criterion by using the filtered training data to fine-tune Robust Wav2Vec2 and examining performance over the validation set. Criteria #3 (WER=45.23) and #4 (WER=44.92) perform similarly and both outperform #1 (WER=49.34) and #2 (WER=48.75). We use #3 when training subsequent models, favoring the criterion that keeps more training data. Table 2 reports the final sizes for each data split.

## 3 ASR

We compare the performance of ASR models off-the-shelf and fine-tuned on the training data set constructed in Section 2. We use two of the current best-performing and most popular architectures: Wav2Vec2 [10] and Whisper [14]. For Wav2Vec2, we use the Robust model [13], which was pre-trained using a self-supervised objective on Libri-Light, CommonVoice, Switchboard, and Fisher, and fine-tuned for ASR on Switchboard. For Whisper, which was trained on 680,000 hours of multilingual and multitask data, we compare _small_, _medium_, and _large_ [14]. Thus, both models are intended to perform well in a variety of domains and over noisy data. We describe the model training parameters in detail, including the use of decoder-only training for Whisper large due to compute constraints.
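The alignment selection and training-set filtering described in Section 2 reduce to a few lines of logic. The following is a minimal, illustrative sketch (not our released pipeline); it assumes the jiwer library for WER computation and that the substitution-free WER has already been computed for each instance:

```python
# Illustrative sketch of min-WER alignment scoring and filtering criterion #3.
# Hypothetical helpers, not the paper's exact code; assumes jiwer is installed.
import jiwer

def min_wer(reference: str, whisper_hyp: str, w2v2_hyp: str) -> float:
    """min(WER_Whisper, WER_W2V2) for one aligned utterance."""
    return min(jiwer.wer(reference, whisper_hyp), jiwer.wer(reference, w2v2_hyp))

def keep_instance(duration_sec: float, wer_min: float, wer_min_nosubs: float) -> bool:
    """Criterion #3: duration bounds, low substitution-free WER, capped overall WER."""
    return (0.5 <= duration_sec <= 10.0
            and wer_min_nosubs < 0.10
            and wer_min < 0.50)
```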
### Experimental Setup

To fine-tune Wav2Vec2, we use model default parameters with learning rate=1e-5, weight decay=0.005, warmup steps=500, batch size=32. We report performance with and without a 4-gram language model trained over the training data transcripts, implemented with KenLM and integrated with beam size=1500, lm weight=1.31 and word score=1.31.2 Footnote 2: lm weight and word score were tuned following the Bayesian optimization procedure in [10]. We do no other hyperparameter tuning.

For Whisper models without fine-tuning, we hard-code the task as transcription and the language as English. For fine-tuning, we use model default parameters with learning rate=1e-5, and warmup steps=500. Our experiments are conducted in a resource-constrained environment. Data protocols mandate that the footage be stored on a secure restricted-access server, which does not have sufficient GPU memory to fine-tune Whisper large, even with reduced batch size and precision. Thus, we experiment with freezing the encoder and just training the decoder as well as the inverse. We use a batch size of 32 for Whisper small and 16 for medium and large. Finally, as Whisper is prone to outputting repeated words and phrases, we remove any words from the model output if they occur \(>10\) times. As transcription norms vary between corpora and the body-camera gold transcripts contain bracketed terms like _[unintelligible]_ and _[laughter]_, we remove all terms in brackets and use the Whisper text normalizer on both the reference and model output before computing WER for all models (including Wav2Vec2 models). For all models, we choose the checkpoint with the lowest validation WER after 5 epochs and train using 1-2 A40 GPUs. Wav2Vec2 and Whisper small models trained in \(<5\)hrs; Whisper medium and large models trained in \(<16\)hrs.

\begin{table} \begin{tabular}{c c c c} \hline & Robust & Whisper & Prop. of \\ Alignment & W2V2 WER & Large WER & Final data \\ \hline \hline Unaligned & 65.51 & 56.78 & 13.86 \\ MFA & 68.42 & 54.84 & 12.01 \\ MFA Chunked & 61.11 & 42.04 & 32.32 \\ W2V2 & 60.25 & 43.27 & 12.72 \\ W2V2 Chunked & 68.0 & 52.27 & 29.10 \\ \end{tabular} \end{table} Table 1: WER over the full training set (78K utterances) under each alignment method and what percentage of training data were ultimately aligned with each method.

\begin{table} \begin{tabular}{c c c c} \hline & \# Stops & \# Utterances & Speech Time \\ \hline \hline Train & 795 & 78,082 & 61.85hr \\ Train (filtered) & 787 & 26,121 & 17.61hr \\ Validation & 8 & 373 & 21.24min \\ Test & 20 & 634 & 32.41min \\ \end{tabular} \end{table} Table 2: Final data set sizes. Across the full data set, there are an average of 91.73 utterances and 3.2 speakers per stop.

### Results

#### 3.2.1 Overall ASR

Table 3 reports validation results (reserving the test set for final configurations) of freezing either the encoder or decoder when fine-tuning Whisper large and small. For Whisper small, decoder-only tuning performs almost comparably to tuning the entire model (28.12 vs. 26.07), whereas tuning only the encoder performs less well (34.30). For Whisper large, freezing the encoder or decoder provides advantages over no fine-tuning, though decoder-only tuning converged faster (2 vs. 5 epochs). Subsequently, we use decoder-only training for the fine-tuned Whisper large model. Table 4 reports the overall WER and CER for each model. Whisper large with fine-tuning performs the best overall. Fine-tuning improves performance by 3-11pts across models.
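The decoder-only configuration described above (freezing the Whisper encoder to fit within GPU memory limits) can be approximated in a few lines with HuggingFace transformers. The following is an illustrative sketch under those assumptions, not our exact training script:

```python
# Sketch of decoder-only fine-tuning: freeze the Whisper encoder so only
# decoder parameters receive gradients. Checkpoint name and optimizer settings
# mirror the setup above but are illustrative.
import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")

for p in model.model.encoder.parameters():
    p.requires_grad = False  # encoder stays fixed; decoder (and LM head) train

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
```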
As Whisper is a new model with yet-limited work on understanding model performance and fine-tuning effects, we highlight a few examples from the data in Table 5. In the original transcripts, transcribers mark segments they are unable to decipher as _[unintelligible]_. While we removed all bracketed text when computing WER rate for fair comparison of off-the-shelf and fine-tuned models, examining Whisper outputs reveals that the fine-tuned model sometimes outputs _[unintelligible]_. In some instances, the predicted _[unintelligible]_ exactly aligns with hand-transcription. However, we also find examples where Whisper hallucinates transcriptions for difficult content, whereas Wav2Vec2 more often does not produce output. After fine-tuning, Whisper hallucinations are particularly difficult to identify without referring back to the audio, as they often appear to be plausible statements in an interaction. #### 3.2.2 Performance by officer/driver, gender, and race We examine model performance over sub-populations of the test data, specifically distinguishing between officers and community members, black and white people, and men and women. As there is high variance in model performance depending on the quality of footage from each stop, we use a mixed effects linear regression model. Each data point in the regression is a single utterance. The dependent variable is model WER for the utterance. Role (officer or community member), race, gender are fixed effects, and the specific stop is a random effect. Table 6 reports the learned regression coefficients and WER by sub-population for the best performing Wav2Vec2 and Whisper models, off-the-shelf and fine-tuned. ASR performance for officers is significantly better than performance for community members by a wide margin. Even the best-performing models perform poorly at transcribing community member speech. Community members are situated further from the camera and typically speak very few short utterances. Even hand-transcribers often mark their speech as unintelligible, and training a high-performing model on this type of data may be infeasible. This result suggests that ASR could be an extremely useful tool for police accountability with small potential privacy-reducing impact on community members. In contrast to prior work, we do not find significant differences by race or by gender [23]. Subdividing the test data leads to small data set sizes, which could be skewed by a single outlying stop. This potential effect is greater when looking at race and gender than looking at role, since a low-quality video would decrease ASR performance for both the officer and the community member, whereas in examining race and gender, we are comparing across footage of different stops. Table 6 does show WER is lower for white than black officers for most models. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & W2V2 & W2V2 & Whisp. & Whisp. \\ & & tuned+LM & & tuned \\ \hline \hline Role [Officer] & -.440* & -.435* & -.791* & -.383* \\ Race [Black] & -.028 & -.026 & -.350 & -.034 \\ Gender [F] &.106 &.088 &.215 &.091 \\ \hline CM Black [120] & 83.67 & 66.53 & 66.53 & 43.55 \\ CM White [130] & 88.45 & 74.02 & 75.05 & 49.07 \\ Off. Black [175] & 42.14 & 27.26 & 19.43 & 13.11 \\ Off. White [166] & 32.80 & 21.95 & 22.70 & 12.50 \\ \hline \hline \end{tabular} \end{table} Table 6: ASR by role/race/gender for Robust W2V2 and Whisper Large (not including 3 Hispanic officers). Top: ASR Mixed Effects Regression. 
A negative (starred if significant) coefficient indicates lower WER (better performance). Bottom: WER for each subgroup. Brackets indicate number of test utterances.

\begin{table} \begin{tabular}{c c c|c c c} \hline \hline Wav2Vec2 & WER & CER & Whisper & WER & CER \\ \hline \hline [None] & 45.01 & 31.57 & Small & 32.13 & 22.83 \\ +LM & 38.91 & 31.27 & Small+Tune & 22.09 & 16.30 \\ +Tune & 42.20 & 26.05 & Med. & 26.21 & 18.36 \\ +Tune+LM & 32.29 & 25.97 & Med.+Tune & 23.47 & 17.78 \\ & & & Large & 29.60 & 22.35 \\ & & & Large+Tune & **18.33** & **13.61** \\ \end{tabular} \end{table} Table 4: ASR Results over police test set.

\begin{table} \begin{tabular}{c|c|c} Reference & Whisper & Whisper (tuned) \\ \hline \hline Yeah. I know, I'm trying to– & I'll turn it. & Yeah. [unintelligible]. \\ \hline Yeah. [unintelligible] expired like la– December. & The fire started in December. & Yeah. [unintelligible] expired like December. \\ \hline [unintelligible]. & It's going to be a bad traffic. & It's going to be a bad traffic. \\ \end{tabular} \end{table} Table 5: Test outputs of fine-tuned Whisper large.

## 4 Officer Speech Detection

In Section 3, we use hand-aligned evaluation data, but in practice, we do not know segmentation or speaker identities in new footage. As our goal is police accountability, we develop two models to identify segments of speech by primary officers (e.g., officers wearing the camera) and evaluate them using the best-performing ASR model over the detected speech.

### Methodology

**Training Data Processing** We adapt the training set introduced in Section 2. We remove any instances that do not contain active speech using an off-the-shelf acoustic scene understanding Mobile-Net [24] architecture trained on AudioSet [25] (AudioSet category \(0<0.3\)). We divide remaining samples into 250ms chunks with a 100ms hop and represent each 250ms segment as a mel-spectrogram with 64 mel-filters, computed with a hop of 10ms and a window of 25ms (a short sketch of this feature extraction follows below). We create a balanced training corpus by randomly sampling 150K chunks each of officer/non-officer speech. Since officers are closer to body-camera microphones (near-field) than community members (far-field), we use volume-based data augmentation. As the raw training data contains non-officer speech that was not transcribed (e.g., dispatch speech), we also _augment_ the training set. We divide training files into 250ms chunks with a 100ms hop, keep chunks with a speech score (from the MobileNet model) \(\geq 0.5\), and merge consecutive chunks that occur within 1sec of each other. We add all new segments (ones that were not transcribed) to the training data as instances of not-officer-speech and then filter and sample the data as described above. We use these data to train models to classify 250ms chunks as officer or not-officer speech (with cross-entropy loss).

**In-domain classifier** We train a custom model from scratch, which contains 7 convolutional layers with 128 3x3 filters in every layer and ReLU activation followed by max-pooling of 2. The output of the last layer is passed to a linear head of 1024 neurons, followed by softmax activation, and the posterior probability is taken as the officer score for that instance.

**Universal d-vectors** We extract d-vectors as features from an off-the-shelf model trained over the VoxCeleb dataset for speaker recognition [26] and train an officer speech classifier, with the same linear-head architecture as the in-domain model.
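As a concrete illustration of the chunking and feature extraction just described, the following is a minimal sketch assuming 16 kHz mono audio and the torchaudio library; it is not our exact feature code and the helper name is hypothetical:

```python
# Sketch: slice a waveform into 250 ms chunks with a 100 ms hop and compute
# 64-bin mel-spectrogram features (25 ms window, 10 ms hop) for the
# officer-speech classifier. Assumes 16 kHz mono input; illustrative only.
import torch
import torchaudio

SR = 16_000
CHUNK = int(0.250 * SR)   # 250 ms chunk
HOP = int(0.100 * SR)     # 100 ms hop between chunks

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SR, n_fft=400, win_length=400, hop_length=160, n_mels=64
)  # 25 ms analysis window, 10 ms frame hop

def chunk_features(wave: torch.Tensor):
    """Yield (start_sec, mel_feature) pairs for each 250 ms chunk of a waveform."""
    for start in range(0, wave.shape[-1] - CHUNK + 1, HOP):
        chunk = wave[..., start:start + CHUNK]
        yield start / SR, mel(chunk)
```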
**Inference** We predict voice activity detection (using the same Mobile-Net model) and officer scores for 250ms chunks with 100ms hops. We consider a chunk to be officer speech if its voice activity score is \(>t_{\text{VAD}}\) and its officer score is \(>t_{\text{officer}}\), and we merge positive chunks if they occur within \(t_{\text{smooth}}\) sec of each other.3 For evaluation, we concatenate the ASR model output for all identified segments and compute WER against similarly concatenated hand-aligned officer segments. Footnote 3: \(\{t_{\text{VAD}},t_{\text{smooth}},t_{\text{officer}}\}\) are hyperparameters chosen via 20-iteration Bayesian optimization over the validation set, with range [0,1] for \(t_{\text{VAD}}\)/\(t_{\text{officer}}\) and [0.25,2] for \(t_{\text{smooth}}\). They are {0.93,1.76,0.16} for d-vector, {0.4,0.67,1.2} for in-domain, and {0.52,0.51,1.1} for in-domain [aug.]

### Results

Table 7 reports results for the best performing ASR model over the automatically detected officer speech segments. There is a substantial performance decrease between the hand-aligned segments and the detected segments. The d-vector model performs particularly poorly, likely due to the large domain difference between VoxCeleb and police traffic stops. Augment-
2308.01245
VisualPDE: rapid interactive simulations of partial differential equations
Computing has revolutionised the study of complex nonlinear systems, both by allowing us to solve previously intractable models and through the ability to visualise solutions in different ways. Using ubiquitous computing infrastructure, we provide a means to go one step further in using computers to understand complex models through instantaneous and interactive exploration. This ubiquitous infrastructure has enormous potential in education, outreach and research. Here, we present VisualPDE, an online, interactive solver for a broad class of 1D and 2D partial differential equation (PDE) systems. Abstract dynamical systems concepts such as symmetry-breaking instabilities, subcritical bifurcations and the role of initial data in multistable nonlinear models become much more intuitive when you can play with these models yourself, and immediately answer questions about how the system responds to changes in parameters, initial conditions, boundary conditions or even spatiotemporal forcing. Importantly, VisualPDE is freely available, open source and highly customisable. We give several examples in teaching, research and knowledge exchange, providing high-level discussions of how it may be employed in different settings. This includes designing web-based course materials structured around interactive simulations, or easily crafting specific simulations that can be shared with students or collaborators via a simple URL. We envisage VisualPDE becoming an invaluable resource for teaching and research in mathematical biology and beyond. We also hope that it inspires other efforts to make mathematics more interactive and accessible.
Benjamin J. Walker, Adam K. Townsend, Alexander K. Chudasama, Andrew L. Krause
2023-08-02T16:00:35Z
http://arxiv.org/abs/2308.01245v3
# VisualPDE: rapid interactive simulations of partial differential equations ###### Abstract Computing has revolutionised the study of complex nonlinear systems, both by allowing us to solve previously intractable models and through the ability to visualise solutions in different ways. Using ubiquitous computing infrastructure, we provide a means to go one step further in using computers to understand complex models through instantaneous and interactive exploration. This ubiquitous infrastructure has enormous potential in education, outreach and research. Here, we present VisualPDE, an online, interactive solver for a broad class of 1D and 2D partial differential equation (PDE) systems. Abstract dynamical systems concepts such as symmetry-breaking instabilities, subcritical bifurcations and the role of initial data in multistable nonlinear models become much more intuitive when you can play with these models yourself, and immediately answer questions about how the system responds to changes in parameters, initial conditions, boundary conditions or even spatiotemporal forcing. Importantly, VisualPDE is freely available, open source and highly customisable. We give several examples in teaching, research and knowledge exchange, providing high-level discussions of how it may be employed in different settings. This includes designing web-based course materials structured around interactive simulations, or easily crafting specific simulations that can be shared with students or collaborators via a simple URL. We envisage VisualPDE becoming an invaluable resource for teaching and research in mathematical biology and beyond. We also hope that it inspires other efforts to make mathematics more interactive and accessible. Keywords:Interactive mathematics, web-based visualisation, time-dependent partial differential equations, spatial modelling ## 1 Introduction This paper introduces VisualPDE, a web-based tool for interactive simulations of time-dependent partial differential equations (PDEs) in one and two spatial dimensions. We would highly encourage a reader to browse to [https://visualpde.com/](https://visualpde.com/) to directly experience this tool, including exhaustive documentation and direct access to all code used. For each of the examples and figures presented in this paper, we have included links to the VisualPDE website in the form of clickable footnotes so that the reader can interact with the content, live in their browser. Rather than provide detailed documentation of VisualPDE's features and examples (which can both be better experienced on the website), this paper provides wider context to the design and anticipated use cases of this tool. Our discussion here includes aspects of the overall design philosophy, technical achievements, and how we imagine it being used for teaching, research, knowledge exchange, and outreach activities involving partial differential equations. We first give broad historical, pedagogical, and technical context to this project in Section 2. In Section 3 we give a high-level overview of the technical frameworks used, with a particular eye towards demonstrating why key design choices were made, and in what ways others can extend this project or develop similar tools using the technologies underlying VisualPDE. Our user interface design is a key aspect of this work, described in Section 4 along with ways to share and reuse VisualPDE. 
In Section 5 we give examples of using this tool within our own teaching, research, and outreach activities, suggesting ways it can be incorporated by others in future activities. We encourage readers to peruse these sections independently, and especially to spend time playing with the website, to get an overall feel for the project. Lastly, in Section 6, we summarise VisualPDE and our vision for how it may enhance how we communicate and interact with PDEs from diverse areas of mathematics. The website is intended to be living and constantly updated with new features and examples. As a result, figures and snapshots in this manuscript may not always reflect the precise content of the site, but should nevertheless remain illustrative of VisualPDE, its scope, and the authors' aims and ideas. Similarly, pages linked to via the URLs provided in this manuscript may evolve slightly over time, but we do not envisage this impeding the reader. ## 2 Historical & pedagogical context Widespread access to computation has had many profound impacts on science, and society more broadly. The development of numerical scientific modelling, and the implementation of models on increasingly sophisticated hardware, has led to enormous advances across science and engineering (Gustafsson, 2018). Even relatively advanced and technical mathematical models, such as partial differential equations, have become widely used in numerous fields in part due to the development of numerical methods (Thomee, 2001) and to the development of high-performance and personal computing machines. Computing has vastly increased the sophistication of modelling, enlarging the kinds of systems that we can understand through simulation that are not amenable to pen-and-paper calculations. In the past few decades, the field of 'ubiquitous computing' has emerged to describe the accessibility of computation in many aspects of life (Meshram et al., 2016), with widespread use of smartphones and tablet computers being the most obvious examples. In contrast to high-performance scientific computing, the emphasis here is not on technical capabilities, but on accessibility and everyday usability. This is readily apparent in the changing landscape of mobile and web computing, where we see an increasing emphasis on user experience (Benyon, 2019), in addition to the availability of raw compute power in mobile devices. Some areas of scientific computing have transitioned to embracing this 'ubiquitous' nature, with increasingly widespread use of online tools by students and educators for computation and visualisation. Importantly, it has also led to the development of more dynamic and interactive learning environments. For example, there is a growing literature on the use of web-based mathematical visualisation software applied to a variety of mathematical concepts. Such interactive and accessible tools side students in being able to explore ideas on their own, including designing their own exercises and solutions, which has been shown to have substantial pedagogical benefits (Korucu and Cakir, 2018). Examples of such tools include GeoGebra (Sangwin, 2007; Arbain and Shukor, 2015), Desmos (Ebert, 2014; King, 2017), and WolframAlpha (Dimiceli et al., 2010; Necesal and Pospusil, 2012). 
Related to this are more computationally-focused web tools such as CodeRunner (Lobb and Harlow, 2016) for rapid unit testing of student-written programs, as well as the growing use of Jupyter Notebooks (Cardoso et al., 2019) for exploring a variety of different areas of scientific computing in interactive ways. Jupyter Notebooks have even been developed to explore areas of computational fluid dynamics (Castilla and Pena, 2023) and reaction-transport processes (Golman, 2019), showcasing the versatility of these web-based tools even for complex scientific computing tasks such as solving partial differential equations. We direct the interested reader to the work of Engelbrecht et al. (2020) for a broad overview on the role of internet technologies in transforming educational environments. A major technical roadmark that enabled more immersive web-based tools was the transition from early web multimedia (e.g. Flash and Java applets) to more browser-native HTML5 'canvas' elements (Fulton and Fulton, 2013) and related technologies. An example of such a technology-enabled website is 'Complexity Explorables' (Brockmann, 2023), a collection of interactive simulations covering a wide range of topics in cellular automata, complex network theory, and beyond. Another advance in this area is the release of a range of graphics-oriented libraries that enable high level, platform agnostic development of interactive web-based applications that make use of graphics processing units (GPUs) to do large-scale calculations in real time (Angel and Shreiner, 2014), typically targeting 2D and 3D graphics. Libraries based on the popular WebGL framework include Three.js (Dirksen, 2013), Babylon.js (Catuhe et al., 2014) and Abubu.js (Kaboudian et al., 2019), which each allow for high-level interaction with GPUs within the context of a webpage and have been used in a variety of physics education scenarios (McCauley, 2017; Zatarian-Cabada et al., 2023). Abubu.js in particular has been used to develop a range of mathematical visualisations, with a particular focus on models of cardiac electrophysiology (Kaboudian et al., 2021, 2019). One can find a huge range of examples of this technology online by simply searching for 'WebGL physics' and 'WebGL fluid simulation'. Such tools have enabled crucial changes in the landscape of higher education instruction, particularly in the context of the life sciences. We note in particular the recent effort in teaching dynamical systems modelling to biology students by Garfinkel et al. (2022) through the development of a new course 'Calculus for Life Sciences', based on dynamical systems. This course makes extensive use of Python/Jupyter Notebooks, allowing for an accessible and interactive approach to student-driven instruction. As described by Garfinkel et al. (2022), there has been a growing need to modernise many aspects of undergraduate teaching, particularly in light of the rapid pace of technological and scientific advancement. This is especially true in the life sciences, where the disconnect with the more 'classical mechanics' training of the mathematical, physical and engineering sciences has grown with the development of these fields (Woodin et al., 2010). Teaching across and encouraging interaction between the biological sciences and mathematical modelling also poses unique challenges in terms of subject matter and cultural differences within each field (Reed, 2004). 
Jupyter Notebooks and other high-level programming interfaces are an important aid to overcoming these barriers, though we feel that even more can be done to connect modelling work in the mathematical and physical sciences more directly with everyday experiences in the life sciences. Interactive and accessible simulations can play an important role in bridging this divide by providing immediate access to application-relevant simulations, without the need to become experts in underlying foundational aspects (e.g. modelling, numerical analysis and especially programming). In particular, we believe that there would be significant merit to a platform that abstracts away the intricacies of numerical methods and their implementation, with a user thereby free to play directly with concepts and models that are typically only accessible through simulation. Of course, depending on the goal of the learner, such a platform could also be used as a way to motivate learning about the more foundational topics rather than completely circumventing them. From this perspective, accessible and interactive simulations can provide a crucial tool for people to gain an understanding of more advanced topics without the need to build an expert knowledge of this from scratch. These ideas have already been pioneered in teaching coding skills by using 'unplugged programming' methods that do not employ computers or traditional code (Sun et al., 2021; Munasinghe et al., 2023). VisualPDE aims to be such a platform, representing a significant advancement on the state-of-the-art of visualising PDE systems, especially those of relevance to modelling in mathematical biology and related fields. Existing solutions described above (e.g. Jupyter Notebooks or Mathematica) either still look and feel like programming, or they have a more point-and-click interface but can only handle a limited class of problems. In contrast, VisualPDE uses the extensible interactive design language of websites like Desmos, and applies this to PDE visualisation. It does this with no requirement for familiarity with numerical analysis or programming and, in addition, allows for an unprecedented range of freedom in terms of PDE systems, boundary and initial conditions, and other complex modelling and visualisation features. In addition, it is open source and by default coupled with a tutorial library that serves as a guide for any user looking for further instruction. In short, VisualPDE's accessible interface bridges the gap between the online software many students are (increasingly) familiar with, and more powerful methods of visualising solutions that typically have a higher barrier for entry. In the next two sections, we will describe aspects of the implementation and design of VisualPDE, informed by the context described above. We will return to questions of using the software in education, research, and knowledge exchange in Section 5, demonstrating the impact that VisualPDE can have in these arenas. ## 3 Implementing VisualPDE VisualPDE is designed to be a flexible, plug-and-play PDE solver that runs in a web browser on a user's device. In this section, we give an overarching description of the equations that VisualPDE can solve, the numerical methods that underlie this and the aspects of the implementation that enable this to happen rapidly and interactively on widely available computing devices, including mobile phones. 
### The PDEs

When first designing VisualPDE, we were motivated by reaction-diffusion systems of the form \[\frac{\partial u}{\partial t}=\mathbf{\nabla}\cdot(D_{u}\mathbf{\nabla}u)+f_{u}, \tag{1a}\] \[\frac{\partial v}{\partial t}=\mathbf{\nabla}\cdot(D_{v}\mathbf{\nabla}v)+f_{v}, \tag{1b}\] where \(u,v\) are scalar fields defined on a domain \(\Omega\subset\mathbb{R}^{2}\), \(D_{u},D_{v}\) are given diffusion coefficients (often constants in classical applications) and \(f_{u},f_{v}\) are given functions of \(u\) and \(v\). An example of such a system is the Gray-Scott model1 (Gray and Scott, 1984). This model received huge interest from scientists and artistic amateurs alike following numerical experiments by Pearson (1993), which demonstrated a striking range of spatial and spatiotemporal phenomena by changing only two parameters. We note in particular the interactive WebGL simulators (pmneila, 2012; Sims, 2022), among others, that served as inspiration for VisualPDE. Our initial goal was to allow the user to type in values of the diffusion coefficients and kinetics and, hence, explore a larger class of reaction-diffusion systems, rather than hand code the WebGL as in these cited examples. Footnote 1: [https://visualpde.com/nonlinear-physics/gray-scott.html](https://visualpde.com/nonlinear-physics/gray-scott.html)

From this simple yet dynamically rich beginning, VisualPDE has been significantly extended. Currently, coupled systems or subsystems of four unknowns \((u,v,w,q)\) of the following general form can be posed in VisualPDE:

\[\frac{\partial u}{\partial t}=\mathbf{\nabla}\cdot(D_{uu}\mathbf{\nabla}u+D_{uv}\mathbf{\nabla}v+D_{uw}\mathbf{\nabla}w+D_{uq}\mathbf{\nabla}q)+f_{u}, \tag{2}\]

\[\text{one of }\left\{\begin{aligned} \frac{\partial v}{\partial t}&=\mathbf{\nabla}\cdot(D_{vu}\mathbf{\nabla}u+D_{vv}\mathbf{\nabla}v+D_{vw}\mathbf{\nabla}w+D_{vq}\mathbf{\nabla}q)+f_{v},\\ v&=\mathbf{\nabla}\cdot(D_{vu}\mathbf{\nabla}u+D_{vw}\mathbf{\nabla}w+D_{vq}\mathbf{\nabla}q)+f_{v},\end{aligned}\right.\]

\[\text{one of }\left\{\begin{aligned} \frac{\partial w}{\partial t}&=\mathbf{\nabla}\cdot(D_{wu}\mathbf{\nabla}u+D_{wv}\mathbf{\nabla}v+D_{ww}\mathbf{\nabla}w+D_{wq}\mathbf{\nabla}q)+f_{w},\\ w&=\mathbf{\nabla}\cdot(D_{wu}\mathbf{\nabla}u+D_{wv}\mathbf{\nabla}v+D_{wq}\mathbf{\nabla}q)+f_{w},\end{aligned}\right.\]

\[\text{one of }\left\{\begin{aligned} \frac{\partial q}{\partial t}&=\mathbf{\nabla}\cdot(D_{qu}\mathbf{\nabla}u+D_{qv}\mathbf{\nabla}v+D_{qw}\mathbf{\nabla}w+D_{qq}\mathbf{\nabla}q)+f_{q},\\ q&=\mathbf{\nabla}\cdot(D_{qu}\mathbf{\nabla}u+D_{qv}\mathbf{\nabla}v+D_{qw}\mathbf{\nabla}w)+f_{q}.\end{aligned}\right.\]

Each of the variables \(v\), \(w\) and \(q\) can either satisfy their own time-dependent PDE or can be specified directly in terms of the other variables. The elements \(D_{ij}\) form a matrix of diffusivities, and the \(f_{i}\) are given functions. Moreover, \(f_{i}\) and \(D_{ij}\) can be functions of the unknowns, the coordinates of the spatial domain, time, and any user-defined parameters. The functions \(f_{i}\) can also depend on first and second spatial derivatives of \(u\) (mixed spatial derivatives are not currently supported). This flexibility in form entails that VisualPDE is able to solve a broad class of differential-algebraic equations, including systems with nonlinear cross diffusion and advection. By exploiting zeros and algebraic variables, one can construct systems with up to 8th order spatial or 4th order temporal derivatives.
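To make the class of models in Eqs. (1)-(2) concrete, the following is a minimal NumPy sketch of the Gray-Scott system on a periodic grid, using one common parameterisation with feed rate \(F\) and kill rate \(k\). It is an offline illustration only, not VisualPDE's WebGL solver, and the grid size, parameters and seed perturbation are illustrative:

```python
# Offline NumPy sketch of the Gray-Scott reaction-diffusion system (Eq. 1 with
# f_u = -u v^2 + F(1-u), f_v = u v^2 - (F+k) v). Not VisualPDE's GPU solver;
# parameters and the seed perturbation are illustrative.
import numpy as np

def laplacian(z, dx):
    """Five-point periodic Laplacian."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z) / dx**2

def gray_scott(n=256, steps=10_000, Du=2e-5, Dv=1e-5, F=0.037, k=0.06, dt=1.0):
    dx = 2.5 / n                      # square domain of side 2.5
    u = np.ones((n, n))
    v = np.zeros((n, n))
    s = slice(n // 2 - 10, n // 2 + 10)
    u[s, s], v[s, s] = 0.5, 0.25      # small central seed
    for _ in range(steps):
        uvv = u * v * v
        u += dt * (Du * laplacian(u, dx) - uvv + F * (1.0 - u))
        v += dt * (Dv * laplacian(v, dx) + uvv - (F + k) * v)
    return u, v
```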
Many intricate, highly nonlinear PDEs can be cast in this general form. Examples on VisualPDE include the inhomogeneous wave equation2 Footnote 2: [https://visualpde.com/basic-pdes/inhomogeneous-wave-equation.html](https://visualpde.com/basic-pdes/inhomogeneous-wave-equation.html) \[\frac{\partial^{2}u}{\partial t^{2}}=\mathbf{\nabla}\cdot(f(x,y)\mathbf{\nabla}u) \quad\iff\quad\left\{\begin{aligned} \frac{\partial u}{\partial t}&=v,\\ \frac{\partial v}{\partial t}&=\mathbf{\nabla}\cdot(f(x,y)\mathbf{ \nabla}u),\end{aligned}\right. \tag{3}\] and Keller-Segel chemotaxis models3(Horstmann, 2003) Footnote 3: [https://visualpde.com/mathematical-biology/Keller-segel.html](https://visualpde.com/mathematical-biology/Keller-segel.html) \[\frac{\partial u}{\partial t} =\nabla^{2}u-\mathbf{\nabla}\cdot(\chi(u)\mathbf{\nabla}v)+f_{u}(u), \tag{4}\] \[\frac{\partial v}{\partial t} =D\nabla^{2}v+f_{v}(u,v).\] We remark that the user interface (UI) allows a user to select how many equations to solve, whether or not any of the cross diffusion terms (\(D_{ij}\) for \(i\neq j\)) appear, and how many algebraic variables there are. In these cases the notation simplifies to represent the simpler versions of the system, including a single subscript in the case without cross diffusion. Similarly, a user may relabel any of the unknown variables to suit their preferred notation. ### Numerical methods To solve equations cast in the general form of Eq. (2), VisualPDE employs a central finite difference scheme for all spatial derivatives (as well as some non-centred discretisations to accommodate 'upwinding' methods) coupled to one of several explicit timestepping schemes. This non-specialised approach reflects the intended generality of VisualPDE and facilitates its plug-and-play features. A natural caveat of this generality is that some features of special systems (such as those with conserved quantities) may not be captured as well as might be achieved with bespoke numerical methods, such as symplectic or geometric integrators for models with symmetries such as Hamiltonian systems (Hairer et al., 2006). Nevertheless, we have found that the implemented scheme successfully captures key solution behaviours for a diverse range of systems currently implemented on the site, including many sensible approximations to systems with infinitely many conserved quantities. A nontrivial example is the phenomenon of solitons passing through one another in the Korteweg-De Vries equation4(Miura, 1976), Footnote 4: [https://visualpde.com/nonlinear-physics/kdv.html](https://visualpde.com/nonlinear-physics/kdv.html) \[\frac{\partial\phi}{\partial t}=-\frac{\partial^{3}\phi}{\partial x^{3}}-6 \phi\frac{\partial\phi}{\partial x}\quad\iff\quad\left\{\begin{aligned} \frac{\partial\phi}{\partial t}&=-\frac{ \partial^{2}v}{\partial x^{2}}-6v\phi,\\ v&=\frac{\partial\phi}{\partial x}.\end{aligned}\right. \tag{5}\] The use of explicit, non-symplectic timestepping means that this scheme will not preserve any of the infinitely many conserved quantities of this model and, hence, will exhibit small fluctuations as it is only an approximation to a 'true' soliton-soliton solution. Nevertheless, after passing through one another, these solitons have visually identical heights and speeds in VisualPDE, indicating a good approximation of the behaviour of this model. The remainder of this subsection is a discussion of well-known standard finite difference methods, which we include for completeness. 
See LeVeque (2007) or most other texts on the numerical analysis of partial differential equations for a more complete overview. #### 3.2.1 Spatial discretisation In more detail, suppose that a user has taken the domain to be rectangular (the default for the majority of examples present on the site), with coordinates \((x,y)\) in \(\Omega=[0,L_{x}]\times[0,L_{y}]\) for \(L_{x},L_{y}>0\). For a given spatial step size \(\Delta x\), configurable by the user, we split the domain into an \((L_{x}/\Delta x,L_{y}/\Delta x)\) grid (rounding down where necessary). Spatial derivatives are computed on this grid using commonplace finite difference schemes, with first derivatives being computed using the central difference: \[\left.\frac{\partial u}{\partial x}\right|_{(t,x,y)}\approx\frac{u(t,x+ \Delta x,y)-u(t,x-\Delta x,y)}{2\,\Delta x}. \tag{6}\] The divergence terms of Eq. (2) are computed by approximating \[\left.\begin{aligned} \mathbf{\nabla}\cdot(D_{u}\mathbf{\nabla}u) \right|_{(t,x,y)}\approx\frac{1}{2\,\Delta x^{2}}&\{D_{u}(t,x,y)[u(t,x-\Delta x,y)-2u(t,x,y)+u(t,x+\Delta x,y)]\\ &+D_{u}(t,x-\Delta x,y)[u(t,x-\Delta x,y)-u(t,x,y)]\\ &+D_{u}(t,x+\Delta x,y)[u(t,x+\Delta x,y)-u(t,x,y)]\}\\ +&\frac{1}{2\,\Delta y^{2}}&\{D_{u}(t,x,y)[u(t,x,y-\Delta y)-2u(t,x,y)+u(t,x,y+\Delta y)]\\ &+D_{u}(t,x,y-\Delta y)[u(t,x,y-\Delta y)-u(t,x,y)]\\ &+D_{u}(t,x,y+\Delta y)[u(t,x,y+\Delta y)-u(t,x,y)]\}.\end{aligned} \tag{7}\] Here, \(D_{u}\) can depend on space (explicitly or implicitly as a function of the variables \(\mathbf{u}\)), though the approximation reduces to a standard second-order central difference approximation if \(D_{u}\) is independent of space. At boundaries, ghost nodes are introduced to allow for the enforcement of boundary conditions through modification of the finite difference schemes. VisualPDE accommodates periodic, Neumann and Robin boundary conditions in this way. Dirichlet boundary conditions are also supported, though these do not modify the finite difference operators and are instead implemented by fixing the values at these nodes. Boundary conditions can be fully customised by the user via the VisualPDE interface, including mixed conditions on different interfaces, and arbitrary inhomogeneous boundary terms (e.g. time dependent boundary conditions are obtained just by writing some function of \(t\) when specifying the boundary condition). Domains that are not rectangular are implemented by casting them as subsets of a rectangular domain, with individual points in the finite difference discretisation included or excluded via an indicator function. As this enables users to specify general domains that need not have smooth boundaries, boundary conditions on non-square domains involving derivatives should be interpreted with due care. #### 3.2.2 Timestepping With space discretised as above, the resulting system of ordinary differential equations are solved by discretising in time and employing one of four explicit, fixed-timestep finite difference schemes: forward Euler, two-step Adams-Bashforth, the midpoint method (also called the modified Euler method) and the four-step Runge-Kutta method (often known as RK4). These schemes are each associated with benefits and drawbacks, with RK4 conferring the broadest stability, the highest order accuracy and the greatest computational cost, while forward Euler represents the computationally simplest option but scales least favourably with the timestep \(\Delta t\). 
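For readers who want to experiment outside the browser, the spatial stencil of Eq. (7) combined with an explicit update can be mimicked in a few lines of NumPy. This sketch assumes a periodic square grid with \(\Delta y=\Delta x\) and is illustrative rather than a transcription of the WebGL shaders:

```python
# Sketch of the variable-coefficient divergence stencil (Eq. 7) and a forward
# Euler step for du/dt = div(D grad u) + f(u). Periodic boundaries via np.roll;
# illustrative only, not the VisualPDE implementation.
import numpy as np

def div_D_grad(u, D, dx):
    """Central-difference approximation of div(D grad u) on a periodic grid."""
    out = np.zeros_like(u)
    for axis in (0, 1):
        up, um = np.roll(u, -1, axis), np.roll(u, 1, axis)   # u(x +/- dx)
        Dp, Dm = np.roll(D, -1, axis), np.roll(D, 1, axis)   # D(x +/- dx)
        out += (D * (um - 2.0 * u + up) + Dm * (um - u) + Dp * (up - u)) / (2.0 * dx**2)
    return out

def forward_euler_step(u, D, f, dt, dx):
    """One explicit (forward Euler) timestep."""
    return u + dt * (div_D_grad(u, D, dx) + f(u))
```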
Forward Euler is the default option for the majority of the examples in VisualPDE due to its simplicity and somewhat surprising reliability in practice. For completeness, this scheme approximates \[\left.\frac{\partial u}{\partial t}\right|_{(t,x,y)}\approx\frac{u(t+\Delta t,x,y)-u(t,x,y)}{\Delta t}, \tag{8}\] which is first-order accurate in \(\Delta t\). At any point a user can select from any of the timestepping schemes available, including midway through a simulation, which might be needed to satisfy a user's requirements of accuracy, stability or curiosity. In this way, VisualPDE enables the simple, accessible exploration of the properties of numerical schemes in practice. #### 3.2.3 Validation We have validated the results from VisualPDE through a number of comparisons to published results, as well as our own finite difference, spectral, and finite element implementations. We have also included a 'Numerical Methods' section on the website, which we plan to expand with further examples discussing aspects of the methods used. Currently this section consists of an example5 that explores three equations with analytical solutions that can be quantitatively compared to numerical solutions. Importantly, this also invites the reader to explore changing the timestep or timestepping scheme, in particular in the context of the Schrodinger equation where a total mass of the wavefunction is only approximately conserved. Footnote 5: [https://visualpde.com/numerical-methods/validating-VisualPDE.html](https://visualpde.com/numerical-methods/validating-VisualPDE.html) ### WebGL implementation & technical considerations Solving the large system of ordinary differential equations (ODEs) that arises from the spatial discretisation and displaying their solution presents non-trivial computational challenges. This is compounded by the desire for VisualPDE to run in browsers across a broad range of devices, including mobile phones, with a high level of interactivity and speed. To overcome this challenge, VisualPDE exploits the significant and widespread capabilities of graphics processing units (GPUs), present in some form on essentially all modern computing devices. In essence, VisualPDE achieves this by casting the problem of timestepping ODEs as one of iterated image manipulation. Floating-point image textures are used to represent the computational domain, whereby each pixel corresponds to a point in the spatial discretisation. Taking advantage of the massively distributed parallel architecture of modern GPUs, the points in the spatial domain are updated in parallel every timestep. The speed of this image-based approach typically entails that VisualPDE can advance the solution a great many times each second, with recent devices (including mobile phones) being capable of upwards of 24,000 timesteps per second in representative simulations. This amounts to performing hundreds of timesteps every time the user's device requests an update to the solution being displayed on the screen, which typically occurs 30-60 times each second. We refer directly to the GitHub code (Walker et al., 2023) for further details on the implementation, which makes use of the Three.js library for interfacing with WebGL. The parallelism that drives the responsiveness and speed of VisualPDE does, however, come with its own limitations. 
One significant limitation is that the independent updating of each pixel prevents the simple, efficient implementation of implicit timestepping schemes, which typically update every point at once as the solution of a coupled system of equations. This is the reason why VisualPDE only makes use of explicit timestepping schemes, though we note that the conditional numerical stability associated with these schemes is largely mitigated by the frequency at which timesteps can be taken, with small \(\Delta t\) thereby not detracting from the user experience. Another barrier posed by the use of GPUs is the lack of widespread support for double-precision floating point textures. While these are available on some devices, they do not appear to be ubiquitously supported. Thus, VisualPDE makes exclusive use of single-precision arithmetic to ensure a consistent experience across devices, though we expect to transition to double-precision arithmetic as support widens. Simulations using the website do remarkably well in quantitatively matching more accurate numerical methods using standard double-precision methods, though we would advise some caution if an example demands particularly high accuracy and precision. Providing a consistent experience for all users has presented a number of additional challenges, not least of which are vendor-specific differences in the capabilities and behaviours of various devices. For instance, many mobile devices do not automatically interpolate low-resolution textures onto the display, while many laptop and desktop computers do. To circumvent this, VisualPDE attempts to detect your device and its capabilities and implements bespoke interpolation where necessary. Despite the varied idiosyncrasies of popular devices, extensive testing suggests that the functionality of VisualPDE is maintained across many kinds of devices, with the curated examples configured to provide a smooth, interactive experience on even relatively low-end hardware. ## 4 Interacting with VisualPDE ### Design of the user interface The VisualPDE user interface (UI) consists of two halves: a static website, with colourful tutorials, examples and user guides, and the interactive simulation page, where PDEs are solved in real time inside the browser window. Our guiding philosophy for both is to let intuition guide the user: to be minimal yet feature rich. The website is written using the static site generator Jekyll (Preston-Werner, 2023), which compiles HTML files from simple text-based markdown scripts. Combined with MathJax (Cervone, 2012) to enable LaTeX-style mathematical typesetting, it allows pages to be written by authors without technical HTML or CSS knowledge, scaling gracefully to different device sizes. The front page of the website presents links to a number of collections that each contain guided tutorials on a particular PDE system (see Fig. 1) or several related models. These collections start with basic linear PDEs, followed by more complicated systems in mathematical biology, nonlinear physics, and numerical analysis. The curated examples are largely chosen to demonstrate different phenomena that we feel benefit from being explored through interactive visualisations, as well as to illustrate different features of VisualPDE. We also include our innovative curiosity-driven 'Visual Stories', discussed in Section 5. Importantly, these collections of examples are not meant to be exhaustive, and features are provided to make crafting and sharing entirely new models seamless and simple. 
Each tutorial contains links to the simulation page, with preset equations, parameters, and other settings. The user can then change these on the simulation page and see how the system solution changes. Importantly, every example can be transformed into every other example on the website simply by clicking and typing within the interface. This includes changing the colour scheme and other visualisation options under the Views menu at the bottom of the left column of buttons (see Fig. 2e). The simulation page considers the entire device browser window as the domain on which to solve the PDE. Optimised for both mobile and static devices, the initial user experience is a blank coloured page with icons on the left and right. An invitation to click or tap (device depending) prompts the user to paint the screen using the cursor or their finger to set a forcing or initial condition of the system (Fig. 2a). Painting can be done at any time, live, with a configurable brush; using the pause and play icons allows for more complex painting that evolves live when the simulation is resumed (Fig. 2b). Although the static website links to many preset simulations, the equations are entirely configurable from within the simulation page. The equations panel (Fig. 2c-d) allows for live alteration of all the equations within the form discussed in Section 3.1. Expressions are instantly typeset in MathJax both for familiarity and to assure the user that VisualPDE has interpreted their expression correctly. As a user modifies a part of the equations, the relevant typeset portion of the equation is highlighted to guide the user. Sliders for parameters can be created easily by the user, similar to those seen in popular online 2D plotting software, allowing for interactive, intuitive parameter sweeps. The limits of these sliders can be fully customised so that, for example, educators can suggest ranges for students to look within. Boundary conditions for each species can be individually set to periodic, Dirichlet, Neumann, Robin or a specified combination through a drop-down menu. With these features, the UI is intended to make explorations of the impact of boundary conditions, or the presence of bifurcations, intuitive and simple to play with. VisualPDE presents many options for visualising solutions. The Views panel (Fig. 2e-f) allows users to specify which expression they want to see mapped onto the domain: this can be customised and can include any nonlinear combination of species or even explicit time or space dependence. Although the default for 2D systems is to display the solution as a 2D image, the solution can also be visualised in 3D on a surface, or for 1D models, on a line. An adaptive colour bar can be displayed and customised, with a wide variety of colour maps available to cater for user preference and enhance accessibility (Smith and van der Walt, 2015). For even richer visuals, contours and 3D-effect lighting can be added, along with custom, solution-dependent vector fields and graphical overlays. All these viewing options can be Figure 1: The VisualPDE static website. **Left:** The homepage presents several collections of examples organised by theme. **Right:** The mathematical biology tutorial collection starts with simple models and becomes increasingly complex. Attractive simulation screenshots indicate the behaviour the user might observe. Captured July 2023. Figure 2: The VisualPDE simulation page. 
**(a)** The user can pause the simulation and paint an initial condition on the screen using the cursor or their finger. **(b)** By clicking ’play’, the simulation starts (here, the Brusselator). **(c)** The equations panel allows users to edit the equations and parameters being simulated. **(d)** Editing the definitions and parameters in real time allows the user to see the effect immediately (here, in the Gray–Scott model, \(a\) has been increased.) **(e)** The Views panel allows for selection between (potentially preset) quantities to visualise, as well as providing a selection of colour maps, scales, lighting and other options. Here, we are simulating Keller–Segel chemotaxis with a 3D lighting effect and an overlaid vector field \((-\partial u/\partial x,-\partial u/\partial y)\). Some options in the panel require scrolling down to see. **(f)** The same solution at the same time as (e) but viewed as a surface plot, with a different colour map and with a colour bar. **(g)** The settings panel allows for changes in domain shape, timestepping scheme, brush sizes when painting with the cursor/the user’s finger, and any images used as input. Here we see FitzHugh–Nagumo on a rectangular domain. **(h)** The same system as (g) but on a circular domain. Captured July 2023 on a \(800\times 500\)px screen. saved as preset views and given a custom name (in Fig. 2e-f, 'From above' and '3D'), so that simulations can be shared and understood easily. Finally, the settings menu (Fig. 2e) contains slightly more advanced options. Numerical options include the dimension, step size and shape of the domain, as well as the timestepping scheme and timestep size. Presentation options include custom choices of letters for the species functions (by default \(u\), \(v\), \(w\) and \(q\)) to better match any source material; the shape and size of the brush for painting; labels showing the elapsed time and integral of the displayed solution. Checkpoints can be set so the user can restart the simulation to a given timestep, rather than to the beginning of the simulation. A particularly fun option here is to upload a photograph to use as a spatial function inside an equation (Fig. 3). By default, Sofya Kovalevskaya and Alan Turing are functions \(I_{S}(x,y)\) and \(I_{T}(x,y)\), but clicking or tapping on their photos prompts the user to either upload a different photo (on a computer) or to take a photo using their camera (on a phone). We consider this a fun hook into the software. User feedback regarding error handling proves a particular challenge in presenting a user-friendly interface, especially for users who may be unfamiliar with the underlying numerical methods. VisualPDE provides custom messages for syntax errors, and for solutions blowing up. Numerical solutions (Section 3.2) can be fragile, in strong contrast to 2D plotting software that novice users may bring their intuition from. Ideally users should be told whether an infinite solution has been reached because of numerical instability or because the solution is truly exponential growth. This is beyond the scope of VisualPDE at the moment, but the omnipresent help icon and the error messages both lead to discussions on numerics that may be helpful, including practical tips for improving numerical stability. Preset simulations where parameters are constrained by sliders and where equations are uneditable (as in the Visual Stories) may therefore be more beneficial to novice users. 
### Sharing VisualPDE models There are several different ways that users can share and extend the VisualPDE website and simulator, described in detail on the VisualPDE FAQs page10. These include sharing a screenshot, sharing a link to a user-crafted model, embedding a model inside another website, or forking the entire VisualPDE website or simulator via the GitHub code (Walker et al., 2023). We expect that most users will design a simulation, either by modifying an existing example or writing their own, and then share this via a direct link. This can be done from within the'share' menu, accessed by clicking the'share' icon on the right-hand side of the screen. This will generate a URL that corresponds exactly to the current simulation given the specified initial conditions (it will not include anything drawn with the brush, Figure 3: Turingify your friends: Users can take photos on their phone and use them as forcing in their PDEs. Here, the Turing on Turing6 example on VisualPDE was customised with happily coexisting red and grey squirrels. The user can also select a predefined image such as Sofya Kovalevskaya or Alan Turing (not shown). Original image of squirrels courtesy of Elizabeth Brocklebank. timesteps taken, or any changes to the image files \(I_{S}\) or \(I_{T}\)). We expect that this functionality meets the requirements of the vast majority of users, including those creating bespoke models for teaching or research. Individual simulations can be directly embedded within a webpage by clicking 'Embed' within the share menu, which generates a fragment of HTML that can be pasted into a user's own site. The result of this can be seen in action in the 'Visual Stories' collection on the website. One can even customise the level of UI elements which are displayed, depending on how they envision a user interacting with their model. We can see this being invaluable in designing webpages to describe models in more detail, such as in research pages or interactive lecture notes. Lastly, the entire code base is shared through a standard CC BY open source licence, meaning that users are allowed to directly fork the GitHub repository and design their own versions of the website, or even the simulator. The website has been designed with this idea of extensibility in mind, as the vast majority of it is written using high-level markdown, so that creating collections of examples like those currently on the site can be done very easily. We imagine this detailed modification being used by educators to design bespoke courses or lecture notes with VisualPDE at their core. ## 5 VisualPDE in teaching, research & knowledge exchange Here, we briefly outline a few current projects and activities involving VisualPDE in different settings, and discuss the possibilities of using it much more widely. Figure 4: Example simulations from teaching. **(a)–(b)** An example d’Alembert solution7 of the wave equation matched against the analytical solution in black. **(c)–(d)** An example of stripe to spot instabilities8 in the Gierer–Meinhardt system. **(e)–(f)** Image denoising9 via the Perona–Malik equation. ### Teaching through interactive simulations The systems presented in the basic PDEs section of the website include many of the classical linear PDEs studied in undergraduate courses, such as the heat and wave equations, in addition to relatively simple extensions including the convection-diffusion and Euler buckling models. 
These examples include demonstrations of the d'Alembert solution of the wave equation (see Fig. 4a-b), and the Fourier series solution of the heat equation. The overall goal is to provide some intuition for mathematical formulas obtained analytically (such as the role of the wavenumber in the decay of a cosine initial condition in the heat equation), as well as to go beyond what can be easily understood analytically, for instance exploring the impact of heterogeneous media on these simple models. Many of the PDEs that appear in Murray's classical textbooks on mathematical biology are included on the website (Murray, 2003). These books form the core of many mathematical biology courses, and cover a range of topics that can benefit from interactive visualisations. Two of the authors of this manuscript have begun making extensive use of VisualPDE in teaching a third year undergraduate mathematical biology course at Durham University, providing links and interactive demonstrations of VisualPDE simulations in lectures and via the course's content management system. Initial informal feedback from students suggests that VisualPDE has significantly enhanced students' understanding of a range of topics in the course. These include travelling waves in the Fisher-Kolmogorov equation11, impacts of Allee effects on spatial population invasion12, and pattern formation and stripe vs spot selection in the Gierer-Meinhardt system13. Footnote 11: [https://visualpde.com/mathematical-biology/travelling-wave.html](https://visualpde.com/mathematical-biology/travelling-wave.html) Footnote 12: [https://visualpde.com/mathematical-biology/bistable-travelling-waves.html](https://visualpde.com/mathematical-biology/bistable-travelling-waves.html) Footnote 13: [https://visualpde.com/mathematical-biology/gierer-neinhardt.html](https://visualpde.com/mathematical-biology/gierer-neinhardt.html) Importantly, all of these topics illustrate theory that can be developed and understood with pen and paper, while also highlighting phenomena that are analytically intractable at this level but easy to explore numerically. For example, we encourage students to explore how the initial mass and shape of an invasive population matters for persistence when subject to a strong Allee effect, and compare this with a spatially homogeneous model where persistence is much easier to understand due to the simplicity of an equilibrium acting as a separatrix. Fig. 4c-d shows an example simulation in Gierer-Meinhardt demonstrating how stripe-like solutions are unstable to small perturbations, breaking up into spots. Such a result is difficult to understand analytically (Kolokolnikov et al., 2006), but one can develop intuition through simulations. Along with using VisualPDE to enhance engagement and build intuition in a lecture setting, we have been including interactive exercises on homework sheets that encourage students to numerically verify their analytical calculations of, for example, thresholds for Turing instability or wavespeeds for travelling wave models. In future, we hope to further capitalise on the flexibility of VisualPDE and provide students with opportunities to exhibit creative expression in their mathematical explorations, such as by designing their own biologically meaningful systems (Woolley et al., 2021). In the broader context of undergraduate education, we hope that VisualPDE can be integrated into a wide range of courses and incorporated into a new generation of interactive course assessments. 
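One of the homework-style checks mentioned above, numerically verifying an analytical wavespeed, can equally be sketched in a few lines of Python. The script below integrates the one-dimensional Fisher–Kolmogorov equation \(u_t = Du_{xx} + ru(1-u)\) with an explicit finite-difference scheme, tracks the position of the invasion front and compares the fitted speed with the classical minimum wavespeed \(2\sqrt{rD}\). The grid, time step and front-tracking threshold are illustrative choices rather than values from any course material; agreement to within a few per cent is expected.

```python
import numpy as np

# Fisher-Kolmogorov equation u_t = D u_xx + r u (1 - u) on [0, L].
# Explicit finite differences; the front speed is fitted and compared with 2*sqrt(r*D).
D, r = 1.0, 1.0
L, n = 200.0, 2000
dx = L / n
dt = 0.2 * dx**2 / D                      # well inside the explicit stability limit
x = np.linspace(0.0, L, n)
u = np.where(x < 10.0, 1.0, 0.0)          # a step launches a rightward-moving front

def front_position(u):
    """Location of the u = 0.5 crossing, by linear interpolation."""
    i = np.argmax(u < 0.5)
    return x[i-1] + dx * (u[i-1] - 0.5) / (u[i-1] - u[i])

times, fronts, t = [], [], 0.0
while t < 50.0:
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2   # ends sit at the two equilibria
    u += dt * (D * lap + r * u * (1.0 - u))
    t += dt
    if t > 20.0:                          # discard the initial transient
        times.append(t)
        fronts.append(front_position(u))

speed = np.polyfit(times, fronts, 1)[0]
print(f"measured speed {speed:.3f}  vs  2*sqrt(r*D) = {2.0*np.sqrt(r*D):.3f}")
```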
Beyond introductory PDEs and mathematical biology, colleagues have reported using VisualPDE to give students access to visualisations in other topics. For example, the Korteweg-De Vries equation simulation mentioned in Section 3.2 exhibits an important phenomenon, in which two solitons pass through one another without interacting (noting that this is only captured approximately, as described in Section 3.2). The Perona-Malik equation15, used for image denoising, particularly benefits from interactive exploration, as the ability of the model to sharpen an image can depend somewhat sensitively on the parameters (see Fig. 4e-f for a successful denoising simulation). These are just two examples of what we consider to be important lessons that can often be lost in pen-and-paper analysis or the details of numerical implementations, but which can be immediately appreciated by simply trying it out using different parameters and images in VisualPDE. As noted in Section 2, we view such playing at the concept level as valuable scaffolding for in-depth courses in topics such as image analysis and numerical methods. Footnote 15: [https://visualpde.com/nonlinear-physics/perona-malik.html](https://visualpde.com/nonlinear-physics/perona-malik.html) In the UK context, it is common for lecturers to use bespoke lecture notes for a given course. In the past few years it has become standard to make these notes available to students, and it is becoming increasingly desirable to make them accessible in HTML, which has several advantages compared to PDF, especially for students with disabilities (Mejia et al., 2021). One advantage to web-based tools such as VisualPDE, GeoGebra and Desmos is that they can be natively embedded in such webpages, providing seamless connection with the material being described. There are currently several efforts to develop accessible and interactive lecture notes, and we are hopeful that VisualPDE can bring substantial value to these web-based formats. Of course, there are important challenges regarding accessibility of such visual tools themselves, and this is an ongoing area of current work. ### VisualPDE in research development and communication This project was initially conceptualised in terms of teaching. However, we think there is ample potential to use it for rapid prototyping and communication of a variety of spatial models, and we are presently using it in our own research. Below we showcase some examples of this where we discuss the ease with which spatial models can be developed and shared, and highlight the potential for VisualPDE to drastically alter how we communicate and interact with mathematics during the research cycle. #### 5.2.1 Spatially heterogeneous reaction-diffusion dynamics Page et al. (2003, 2005) explored models of spatially heterogeneous reaction-diffusion systems finding, among other things, that such heterogeneity can lead to spatiotemporal movement of spike solutions in 1D. This dynamic behaviour was later explored by Krause et al. (2018) and Kolokolnikov and Wei (2018) using different methods. Related to these studies, Dillon and Othmer (1999) explored mixed boundary conditions using continuation methods, and Krause et al. (2021) later justified these conditions in terms of asymptotic models of heterogeneous tissue, finding that certain combinations of mixed boundary Figure 5: Example simulations of the Swift–Hohenberg equation giving rise to translating localised solutions14, which can be explored with VisualPDE. 
**(a)–(b)** Localised patterns in the form from an initial condition in (a) to the stationary structure with \(D_{4}\) symmetry in (b). **(c)–(f)** Under sufficiently large advection to the right, the localised structure from (b) undergoes an instability forming a hexagonal pattern on its left side, breaking the \(D_{4}\) symmetry. Captured July 2023 on a \(1280\times 768\)px screen. conditions can isolate patterning away from the boundary of the domain. In all of these cases, the phenomenon was largely understood first by numerical exploration before any analytical results were available. VisualPDE allows for rapid implementations of every model studied in these papers, such as this example of dynamic bifurcations with heterogeneity22, including explorations of the dynamic spatiotemporal spike oscillations observed first by Page et al. (2005). While our implementation of these models in VisualPDE is some years after these papers were written, one could imagine developing these ideas substantially more quickly through the rapid prototyping and analysis made possible by VisualPDE. Footnote 2: [https://visualpde.com/mathematical-biology/heterogeneous-dynamics.html](https://visualpde.com/mathematical-biology/heterogeneous-dynamics.html) #### 5.2.2 Localised structures Localised patterns are steady states of PDEs with spatial structure only in a subset of the domain, with most of the domain at a homogeneous and stable steady state. They represent an interesting class of multistable solutions in PDEs, and substantial recent research has been undertaken to understand them (Burke and Knobloch, 2007; Knobloch et al., 2011; Champneys et al., 2021). Recently, Hill et al. (2023) explored a class of symmetric localised solutions in two spatial dimensions in the Swift-Hohenberg equation. The lead author of the cited study was able to quickly use these results to develop a VisualPDE model capable of generating several classes of two-dimensional solutions exhibiting different Figure 6: Example simulations of spatiotemporal solutions. **(a)** A heterogeneous Fisher–Kolmogorov equation16. **(b)** A three-species Lotka–Volterra system17. **(c)** The Kuramoto–Sivashinsky equation18. **(d)** An example of coarsening in the Cahn–Hilliard equation19. We recommend playing with the timescale \(r\) in this simulation. **(e)** Turing wave bifurcations20 in a hyperbolic reaction–diffusion system. **(f)** Irregular vegetation patterns in the Klausmeier model21. Captured July 2023 on a \(1280\times 768\)px screen. symmetries, one of which is shown in Fig. 5b and can be explored on VisualPDE23. Once these states are found numerically within VisualPDE, it is easy to subject them to various perturbations, both in terms of perturbing the solution state but also in terms of changing the model. For instance, an example exploring advection24 considers the effect of rotational and linear advection on these localised states. For small values of advection the states persist with minor changes but, as the advective velocity increases, they begin to change shape and lose symmetries. An example is shown in Fig. 5c-f, where a large rightward advection has changed the shape of the localised structure, with it developing a larger hexagonally-spaced 'tail'. Importantly, the solution no longer retains the \(D_{4}\) symmetry used to rigorously study its existence and stability. 
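For readers curious about how much code such an experiment involves outside the browser, here is a rough Python sketch of a quadratic–cubic Swift–Hohenberg equation, \(u_t = ru-(1+\nabla^2)^2u+\nu u^2-u^3\), with a rightward advection term \(-c\,u_x\) added in a single line, integrated by a first-order semi-implicit Fourier scheme. The choice of nonlinearity, the parameter values and the initial bump are assumptions made purely for illustration and are not the settings used in the study cited above.

```python
import numpy as np

# Quadratic-cubic Swift-Hohenberg with advection,
#   u_t = r u - (1 + Lap)^2 u + nu u^2 - u^3 - c u_x,
# integrated with a first-order semi-implicit Fourier scheme
# (stiff linear part implicit, nonlinearity and advection explicit).
n, L = 128, 32.0 * np.pi
r, nu, c = -0.2, 1.6, 0.5                  # illustrative values only
dt, steps = 0.05, 4000

x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
Lhat = r - (1.0 - K2)**2                   # Fourier symbol of the linear operator

u = 0.5 * np.exp(-((X - L/2)**2 + (Y - L/2)**2) / 8.0)   # localised initial bump

for _ in range(steps):
    ux = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u)))  # spectral x-derivative
    nonlin = nu * u**2 - u**3 - c * ux
    u_hat = (np.fft.fft2(u) + dt * np.fft.fft2(nonlin)) / (1.0 - dt * Lhat)
    u = np.real(np.fft.ifft2(u_hat))

print("solution range:", u.min(), u.max())  # plot u to inspect the (possibly drifting) state
```

Setting \(c=0\) leaves any localised state stationary; increasing \(c\) reproduces, qualitatively, the drift and symmetry breaking just described, although whether the bump decays, spreads or settles into a localised patch depends on \(r\) and \(\nu\); this is precisely the kind of exploration the interactive tool makes easy.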
This loss of symmetry immediately leads to questions of how to extend such analyses to consider more complicated settings where the originally assumed symmetries are not preserved in the model and, more broadly, to understand how robust such solutions are to changes in the model. Notably, extending this model within VisualPDE to include advection took only a matter of seconds, significantly less than the time needed to develop a numerical scheme to add advection in a programming language. Footnote 23: [https://visualpde.com/nonlinear-physics/swift-hohenberg.html](https://visualpde.com/nonlinear-physics/swift-hohenberg.html) Footnote 24: [https://visualpde.com/nonlinear-physics/advecting-patterns.html](https://visualpde.com/nonlinear-physics/advecting-patterns.html) #### 5.2.3 Spatiotemporal dynamics Another area in which rapid and interactive PDE simulations are extremely valuable is the domain of spatiotemporal solutions, such as solitons, travelling and spiral waves, and chaos. For such systems, 2D snapshots are often insufficient to really gain intuition about the dynamics compared to moving videos. Many such systems are also somewhat sensitive to initial conditions and other details of the model, so being able to tweak these interactively allows for a deeper understanding of the roles of such details in the resulting dynamics. Figure 6 presents some exemplar systems, including invasion in a spatially heterogeneous Fisher-Kolmogorov equation, spiral waves in a three-species 'rock-paper-scissors' Lotka-Volterra system (Reichenbach et al., 2007), spatiotemporal chaos in the Kuramoto-Sivashinsky equation (Kalogirou et al., 2015), coarsening in the Cahn-Hilliard equation (Miranville, 2019), Turing wave bifurcations in hyperbolic reaction-diffusion systems (Ritchie et al., 2022), and irregular vegetation patterns in the Klausmeier model (Klausmeier, 1999). These examples have many connections to contemporary research questions where we feel the lightning-fast interactivity of VisualPDE can be helpful in stimulating discussion and understanding broad qualitative features of different models. #### 5.2.4 Research communication Besides the examples above, the authors have also found the use of VisualPDE in interacting with collaborators, colleagues and industrial partners invaluable. Being able to rapidly test an idea and share a link directly to a simulation (with all parameters and functional forms easily accessible) has been greatly helpful in speeding up our own research efforts. A use case that we envisage becoming more and more common is for demonstrating prototypes to industrial and interdisciplinary partners, with VisualPDE enabling solution and visualisation within minutes of a model first being conceived. Even in reviewing papers, we have found the ability to quickly corroborate qualitative results useful. We suspect that as tools like VisualPDE become more widespread, it will become much more routine to send direct links to open-source implementations of models at high levels. We also hope that, as in this article, it will become commonplace to include links to interactive simulations in research articles, allowing the research community to engage with published science in an entirely new way. ### Knowledge exchange through 'Visual Stories' Beyond teaching or research, we think VisualPDE has marked potential for broad knowledge exchange and outreach activities. 
We have begun developing a'maths-free' collection of examples that focuses on models as ways to explore phenomena, doing so purely through interactive simulation with VisualPDE and a guiding narrative aimed at the general public. These can be found in the 'Visual Stories' collection25 on the website. Footnote 25: [https://visualpde.com/visual-stories.html](https://visualpde.com/visual-stories.html) One example of the use of VisualPDE for knowledge exchange is in exploring airborne virus transmission within a room, incorporating the effects of circulating airflow. Lau et al. (2022) developed a model of this situation, which eventually led to significant interaction with policymakers during the Covid-19 pandemic and the development of a web-based airborne virus risk calculator26. With support of the authors of the cited study, we extended this interface using VisualPDE to allow for more detailed spatial probabilities of infection in a Visual Story on virus transmission. This innovative medium complements the existing calculator by providing an interactive way for diverse audiences to engage with the various features of the model and develop their own intuition for complex mathematics. See Fig. 7 for some snapshots of this Story, or interact with the Story27 yourself. Footnote 26: [https://people.maths.ox.ac.uk/griffit4/Airborne_Transmission/index.html](https://people.maths.ox.ac.uk/griffit4/Airborne_Transmission/index.html) Footnote 27: [https://visualpde.com/visual-stories/airborne-infections.html](https://visualpde.com/visual-stories/airborne-infections.html) Notably, creating this accessible exposition of complex, cutting-edge mathematics was straightforward using VisualPDE. In particular, producing visually striking simulations that are woven into the narrative is simple, with interactive elements being embedded directly into the article. We hope that this inspires other researchers to explore this medium as a way to communicate their research and knowledge to a broad audience, with engagement and interaction at the heart of the discourse. ## 6 Summary In this paper, we have presented a web-based interactive PDE simulator capable of solving a large range of time-dependent PDEs in real time. We have discussed the historical and pedagogical context behind this tool, its technical and user-facing design philosophy, and examples of its use in teaching, research, and knowledge exchange, which we hope illustrate its broad, multi-audience potential. Looking ahead, we hope that VisualPDE proves to be useful across many mathematical communities, and that it facilitates and inspires interdisciplinary connections through interactive, accessible, shareable computing. We also hope that, alongside tailored numerical codes, future research articles across disciplines might include links to interactive, representative simulations like those presented in this article, engaging audiences in what we believe is an entirely new way. We are excited to continue developing VisualPDE and exploring its potential for changing how we interact with and communicate science and mathematics. We are hopeful that VisualPDE inspires further efforts in making mathematics more interactive, which we see as an exciting and important frontier in how we communicate and conceptualise science. ###### Acknowledgements. 
The ideas for this project originated in a Durham Centre for Academic Development collaborative innovation grant titled _Accessible interactive visualisations in mathematical biology_, which supported AKC in the initial version of the interactive PDE solver, based on the Gray-Scott reaction-diffusion simulator by pmneila (2012). BJW is supported by the Royal Commission for the Exhibition of 1851. Figure 7: Example simulations of the concentration of virus particles from a rotating source, from an adaptation of the model of Lau et al. (2022). **(a)** A circling source only. **(b)** A circling source subject to advection to the right due to an air conditioner. See the Visual Story27 for more details.
2310.04177
Global deceleration and inward movements of X-ray knots and rims of RCW 103
Kinematics of shocks, ejecta knots, and the compact remnant of a supernova remnant gives an insight into the nature of the progenitor and surrounding environment. We report on a proper motion measurement of X-ray knots and rims of the magnetar-hosting supernova remnant RCW 103. Chandra data obtained in three epochs, 1999, 2010, and 2016 are used. We find a global deceleration of 12 knots and rims both in northern and southern regions within the last $\sim 24$ yrs, even though its age is thought to be larger than 2 kyr. Some of them even changed their moving directions from outward ($\sim 1,000$ km s$^{-1}$) to inward ($\sim -2,000$ km s$^{-1}$). Our findings can be explained with a collision with a high-density medium both in the northern and southern edges of the remnant, although the remnant may still be expanding in the wind-blown cavity. The proper motion of the associated magnetar 1E161348$-$5055 is possibly detected with a velocity of $\approx 500$ km s$^{-1}$.
Hiromasa Suzuki, Takaaki Tanaka, Tsuyoshi Inoue, Hiroyuki Uchida, Takuto Narita
2023-10-06T11:47:21Z
http://arxiv.org/abs/2310.04177v1
# Global deceleration and inward movements of X-ray knots and rims of RCW 103 ###### Abstract Kinematics of shocks, ejecta knots, and the compact remnant of a supernova remnant gives an insight into the nature of the progenitor and surrounding environment. We report on a proper motion measurement of X-ray knots and rims of the magnetar-hosting supernova remnant RCW 103. Chandra data obtained in three epochs, 1999, 2010, and 2016 are used. We find a global deceleration of 12 knots and rims both in northern and southern regions within the last \(\sim 24\) yrs, even though its age is thought to be larger than 2 kyr. Some of them even changed their moving directions from outward (\(\sim 1,000\) km s\({}^{-1}\)) to inward (\(\sim-2,000\) km s\({}^{-1}\)). Our findings can be explained with a collision with a high-density medium both in the northern and southern edges of the remnant, although the remnant may still be expanding in the wind-blown cavity. The proper motion of the associated magnetar 1E 161348\(-\)5055 is possibly detected with a velocity of \(\approx 500\) km s\({}^{-1}\). Supernova remnants (1667); X-ray sources (1822); Shocks (2086); Circumstellar matter (241); Magnetars (992) + Footnote †: journal: ApJ 0000-0002-8880-708X]Hiromasa Suzuki 0000-0002-4070-387X]Takaaki Tanaka 0000-0002-4133-088X]Tsuyoshi Inoue 0000-0002-4133-088X]Hiroyuki Uchida ## 1 Introduction RCW 103 is a young or middle-aged supernova remnant (SNR) hosting the compact object 1E 161348\(-\)5055 (Tuohy & Garmire, 1980). Its age, i.e., elapsed time after the supernova explosion, is estimated to be 2.0-4.4 kyr (Carter et al., 1997; Braun et al., 2019). It has a nearly circular shape with a spatial extent of \(\sim 10^{\prime}\) or \(\sim 9\) pc at the estimated distance of 3.1 kpc (Reynoso et al., 2004). Interestingly, its morphologies are similar among radio, infrared, optical, and X-rays. All show bright emissions in the southern large area and northern small part. Radio continuum observations revealed a smooth structure without clear shells (Dickel et al., 1996). Paron et al. (2006) found an interacting \({}^{12}\)CO cloud in the southern area. Infrared observations also found interacting H\({}_{2}\) gas and other elements (Oliva et al., 1990, 1999; Rho et al., 2001; Reach et al., 2006; Pinheiro Goncalves et al., 2011). A 1720 MHz OH maser detection from the southern area also supports the cloud interaction (Frail et al., 1996). Carter et al. (1997) detected H\(\alpha\) filaments from both south and north, with the northern filament being much fainter. They estimated the age to be \(\sim 2\) kyr based on optical proper motions of \(\sim 1,100\) km s\({}^{-1}\). The compact object 1E 161348\(-\)5055 has been known as an extraordinary compact object with a very long periodicity \(\sim 6.67\) h (De Luca et al., 2006). In 2016, it exhibited a bursting activity and began to be recognized as a magnetar (D'Ai et al., 2016; Rea et al., 2016; Tendulkar et al., 2017). Previous X-ray observations shed light on the relation between the progenitor and magnetar (Nugent et al., 1984; Frank et al., 2015; Braun et al., 2019; Zhou et al., 2019). A common conclusion is that the supernova explosion was less energetic (with an explosion energy of \(10^{49}\)-\(10^{50}\) erg) and the progenitor was not very massive (\(\lesssim 13\) M\({}_{\odot}\)). Most recently, Narita et al. (2023) identified X-ray emission from shock-heated circumstellar medium (CSM) near the edges of RCW 103. 
They found an enhanced N/O abundance ratio (\(\sim 4\)) of the CSM, and suggested that the progenitor rotation was not rapid (\(\lesssim 100\) km s\({}^{-1}\)) and a magnetar formation by dynamo effects in massive stars (\(>\) 35 M\({}_{\odot}\)) is unlikely. From another aspect, constraining the X-ray kinematic properties including movements of forward shocks, ejecta knots, and the associated magnetar is of great importance as well to understand the nature of the progenitor and magnetar. In this paper, we report on proper motion measurements of X-ray bright knots and rims, and the associated magnetar. Our original purpose was to determine the explosion center and obtain tight constraints on the age and kinematics. However, we find a global deceleration and inward movements of the X-ray knots and rims. In Section 2, we summarize the observation log and data reduction processes. Our proper motion analysis and results are described in Section 3. We discuss the origin of the deceleration and inward movements in Section 4, and conclude in Section 5. ## 2 Observation and Data Reduction We use five Chandra ACIS-I (Garmire, 1997) observations of the RCW 103 region listed in Table 1, which consist of three epochs (1999, 2010, and 2016). The baselines for the proper motion study are \(\approx\) 11 yr and \(\approx\) 6 yr, for the first and second intervals, respectively. The observation log is summarized in Table 1. We use the analysis software CIAO (v4.15; Fruscione et al., 2006) and calibration database v4.10.2 for the data reduction and analysis. We process the raw data using the standard data reduction method (chandra_repro). ## 3 Analysis and Results We perform a proper motion study on RCW 103. The procedures and results are presented in this section. In our analysis, we use the software HEASoft (v6.30.1; HEASARC, 2014), XSPEC (v12.12.1; Arnaud, 1996), and AtomDB 3.0.9. Throughout the paper, uncertainties in the text, figures, and tables indicate 1\(\sigma\) confidence intervals. Figure 1 presents an X-ray image of the whole remnant with our analysis regions. We choose bright knots and sharp edges as our analysis regions. The profile extraction directions are determined by eye to roughly correspond to the directions perpendicular to the boundaries or toward the geometric center of the remnant. ### Aspect correction To maximize the reliability and accuracy of the proper motion measurement, we apply the aspect correction to individual observations. As a large fraction of the central region of the field of view is covered by the bright target source, we only find 7-9 point-like sources with off-axis angles of \(>4^{\prime}\). We perform the correction with the CIAO tools wcs_match and wcs_update. Considering the small number of detected point-like sources and their off-axis positions, we perform coordinate transformations without rotation and scaling. The resultant transformation parameters are listed in Table 1. The relative offsets of 0.1-0.6 pixels, which correspond to 0\(\farcs\)05-0\(\farcs\)3, are reasonable according to the pointing accuracy of Chandra ACIS-I, \(\approx\) 0\(\farcs\)67.1 Footnote 1: The reference for the pointing accuracy is [https://cxc.harvard.edu/cal/ASPECT/celmon/](https://cxc.harvard.edu/cal/ASPECT/celmon/). ### Proper motions of X-ray knots and rims We measure proper motions using radial profiles extracted from the regions indicated in Figure 1. The two observations in 2010 are merged after the aspect correction.
The same process is done for the observations in 2016. Thus, hereafter, we use three images obtained in 1999, 2010, and 2016. Flux profiles are extracted from the exposure-corrected images in the 0.5-5.0 keV energy range. Two examples are shown in Figure 2 (a-1) and (b-1). We use the same method as that taken in Tanaka et al. (2021) and Suzuki et al. (2022) in calculating the velocities. Two profiles obtained from two epochs are used. We artificially shift the second profile by \(\Delta x\) and evaluate the difference against the first one with \(\chi^{2}(\Delta x)\), which is defined as \[\chi^{2}(\Delta x)=\sum_{i}\frac{(f_{i}-g_{i}(\Delta x))^{2}}{\Delta f_{i}^{2} +\Delta g_{i}(\Delta x)^{2}}, \tag{1}\] where \(f_{i}\) and \(\Delta f_{i}\) indicate the flux and error of the bin number \(i\) in the first observation, and \(g_{i}\) and \(\Delta g_{i}\) indi Figure 1: Exposure-corrected Chandra image of RCW 103 in the energy band of 0.5–5.0 keV. The radial-profile extraction regions are indicated with the green boxes. cate those of the shifted second profile. This calculation is repeated with various values of \(\Delta x\) and we obtain \(\chi^{2}\) as a function of \(\Delta x\). The minimum \(\chi^{2}\) value (\(\chi^{2}_{\rm min}\)) and corresponding shift (\(\Delta x_{\rm min}\)) are determined by fitting the \(\chi^{2}\)-\(\Delta x\) plot with a parabola function. We calculate proper motion velocity from the best-fit \(\Delta x_{\rm min}\) and known baselines. The profile shift is not limited to an integer multiple of the bin width. We re-bin the shifted profile \(g(\Delta x)\) with the same bin arrangement as \(f\) with an assumption of a uniform probability distribution inside each bin. Then, the profile-shift ranges that give \(\chi^{2}(\Delta x)=\chi^{2}_{\rm min}+1\) are calculated from the best-fit parabola functions. These ranges are considered to be 1\(\sigma\) confidence ranges of the profiles shifts. The \(\chi^{2}\)-\(\Delta x\) plots of Reg. 5 and Reg. 6 are shown in Figure 2 (a-2) and (b-2), respectively. The \(\Delta x_{\rm min}\) values in the second interval (2010 to 2016) are negative, whereas those in the first (1999 to 2010) are positive. These indicate that their movements were outward before 2010 but they changed their moving directions to inward after 2010. We show the calculated velocities of all the analysis regions in the two intervals in Table 2 and Figure 3. One can see a global deceleration from the first to second interval. Among them, Regs. 5-8 are firmly found to be moving inward in the second interval. We here evaluate possible systematic uncertainties. The pointing accuracy of Chandra has to be considered, which would be \(<0\farcs 67\) after the aspect correction. Even if we assume that the astrometry offsets between the images are significant, some of the knots or rims still should be moving inward in the second interval, because the analysis regions include both northern and southern edges. The profile extraction directions are determined by eye, and thus the measured velocities will have some uncertainties due to deviations from the true moving directions. We evaluate this uncertainty by slightly changing the extraction direction (\(\pm 10\) deg) of Reg. 6, finding a \(\lesssim 100\) km s\({}^{-1}\) variation in velocities. This uncertainty is not significant because the statistical uncertainties are much larger. 
We also repeat the analysis procedure with 1) an alternative aspect correction with an optical source catalog and 2) different extraction energy ranges of 1.0-5.0 keV and 1.5-5.0 keV. The measured velocities are largely consistent with the ones obtained above with the same tendency (See Appendix A and B). ### Proper motion of the associated magnetar 1E 161348\(-\)5055 Using the aspect-corrected, 0.5-5.0 keV images, we measure the proper motion of the associated magnetar, 1E 161348\(-\)5055. For individual observations, we determine the positions of the magnetar and their statistical errors using the CIAO tool wavdetect. The determined locations are presented in Figure 4. The angular displacement between 1999 and 2016 (ObsID 18854) is measured to be \(0\farcs 585\pm 0\farcs 025\), which is converted to \(501\pm 21\) km s\({}^{-1}\) at a distance of 3.1 kpc (Reynoso et al., 2004). We note that this displacement might be insignif \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ ObsID} & R.A. (2000) & Dec. (2000) & Date & Exposure (ks) & \(\Delta x\) (pixel)\({}^{a}\) & \(\Delta y\) (pixel)\({}^{a}\) \\ \hline 123 & 244\(\fdg\)40550 & 51\(\fdg\)02022 & 1999 Sep. 26 & 13.4 & 0.51 & 0.39 \\ 11823 & 244\(\fdg\)41094 & 51\(\fdg\)02284 & 2010 Jun. 01 & 62.5 & \(\cdots\) & \(\cdots\) \\ 12224 & 244\(\fdg\)40770 & 51\(\fdg\)02419 & 2010 Jun. 27 & 17.8 & \(-\)0.10 & 0.13 \\ 18459 & 244\(\fdg\)41579 & 51\(\fdg\)04894 & 2016 May 23 & 25.8 & \(\cdots\) & \(b\) \\ 18854 & 244\(\fdg\)41333 & 51\(\fdg\)04969 & 2016 May 30 & 13.0 & \(-\)0.64 & 0.29 \\ \hline \end{tabular} \({}^{a}\)Coordinate transformation parameters with respect to those of ObsID 11823. Transformation directions \(+\Delta x\) and \(+\Delta y\) correspond to \(-\)R.A. and \(+\)Dec., respectively. \({}^{b}\)No correction is performed due to the low quality of point-like sources. \end{table} Table 1: Chandra observation log \begin{table} \begin{tabular}{l r r} \hline \hline \multicolumn{1}{c}{ Region} & \multicolumn{1}{c}{Velocity (km s\({}^{-1}\))} & \multicolumn{1}{c}{Velocity (km s\({}^{-1}\))} \\ & 1999–2010 & 2010–2016 \\ \hline Reg.1 & \(1000\pm 600\) & \(-1000\pm 1000\) \\ Reg.2 & \(1400\pm 300\) & \(-600\pm 500\) \\ Reg.3 & \(1600\pm 600\) & \(-1100\pm 800\) \\ Reg.4 & \(990\pm 400\) & \(-500\pm 400\) \\ Reg.5 & \(1600\pm 600\) & \(-2100\pm 700\) \\ Reg.6 & \(1100\pm 300\) & \(-1300\pm 500\) \\ Reg.7 & \(-10\pm 500\) & \(-1500\pm 800\) \\ Reg.8 & \(1100\pm 300\) & \(-1500\pm 500\) \\ Reg.9 & \(60\pm 400\) & \(-100\pm 500\) \\ Reg.10 & \(880\pm 400\) & \(-600\pm 600\) \\ Reg.11 & \(1400\pm 500\) & \(-70\pm 700\) \\ Reg.12 & \(1200\pm 400\) & \(-1200\pm 800\) \\ \hline \end{tabular} \({}^{a}\)A distance of 3.1 kpc is assumed. Minus velocities indicate inward movements (Same for Table 3 and 4). \end{table} Table 2: Proper motion velocities\({}^{a}\) icant if we consider systematic uncertainties due to the pointing accuracy. ## 4 Discussion We find a global deceleration of the X-ray knots and rims in RCW 103 in the last \(\sim\) 24 yrs, even though its age is thought to be larger than 2 kyr. Some of them were even moving inward in the second interval, from 2010 to 2016. We here discuss the origin of this sudden deceleration. Narita et al. (2023) proposed that X-ray emitting plasma near the outer edges are CSM dominated. They also suggested that the remnant is still expanding in the wind-blown bubble based on the derived progenitor properties. 
The X-ray bright southern and northern edges, on which this work focuses, coincide with the locations of H\(\alpha\) emission (Carter et al., 1997). The southern edge is thought to be interacting with a molecular cloud (Dickel et al., 1996). Considering these facts and suggestions, we propose a scenario that both northern and southern regions interact with molecular or atomic Figure 4: Locations of the magnet 1E 161348\(-\)5055 in 1999, 2010, and 2016. The central positions and radii of the ellipses indicate the estimated positions of the magnetar at different times and their statistical errors. Figure 3: Proper motion velocities of our analysis regions. Positive velocities indicate outward movements. A distance of 3.1 kpc is assumed in calculation of the velocities. Figure 2: Radial profiles and derived \(\chi^{2}\)–\(\Delta x\) (shift) plots extracted from Reg. 5 (a) and from Reg. 6 (b). In the radial profiles, negative directions correspond to outer regions. In the \(\chi^{2}\)–\(\Delta x\) plots, positive values indicate outward movements. clouds, although the remnant is still expanding in the wind-blown bubble. We assume that the northern part is also interacting with a high-density medium but it is yet to be detected. The weaker H\(\alpha\) emission and slower deceleration in the northern part support an interpretation that the interacting medium there has a lower density than the southern part. Regs. 5-8 are found to have decelerated from \(\sim+1,000\) km s\({}^{-1}\) (outward) to \(\sim-2,000\) km s\({}^{-1}\) (inward). This can be interpreted as a reflection of the shocks due to a collision with a high-density medium. The shock reflection by an interaction with a high density cloud is studied analytically by Miesch & Zweibel (1994) and Inoue et al. (2012). For a high Mach number incident shock, like an SNR blast wave shock, the relation between the incident/reflection shock velocities and density jump at a cloud surface is given by eq. (A3) of Inoue et al. (2012). In the case of an incident shock velocity \(\sim 1,000\) km s\({}^{-1}\) and a reflection shock velocity \(\sim-2,000\) km s\({}^{-1}\), the required density jump is calculated to be \(\sim 36\). This is consistent with a typical density jump between a diffuse ISM and HI clouds. We note that a shock wave can be reflected whereas shock-heated plasma will not be. The reflected shock enhances thermal X-rays while moving inward, which can be observed as an inward movement if the newly enhanced emission is bright enough. If the observed X-ray radial profiles originate from mixtures of outward- and inward-moving plasma, actual reflected-shock velocities in the observer's frame might be larger than the measured velocities. Inward moving filaments were found in a few other young SNRs, such as Cassiopeia A (Sato et al., 2018) and RCW 86 (Suzuki et al., 2022). The inward filaments are interpreted as reverse shocks for Cassiopeia A and reflection shocks for RCW 86. The case of RCW 103 is similar to RCW 86. A large difference is the locations where the inward movements are observed: in the present case, they are at the outer edges of the X-ray emission, whereas they are well behind outermost filaments in the case of RCW 86. This is consistent with our interpretation, a very recent collision in RCW 103. 
To summarize, the global deceleration can be understood as a result of a collision of the shocks with a high-density medium (molecular or atomic cloud), although the X-ray emitting plasma may still be expanding in the wind-blown bubble. ## 5 Conclusion We examined proper motions of X-ray knots and rims in the southern and northern edges of RCW 103. We found a global deceleration of them within the last \(\sim 24\) yrs, even though its age is thought to be larger than 2 kyr. Among them, Regs. 5-8 were found to have changed the moving directions from \(\sim+1,000\) km s\({}^{-1}\) (outward) to \(\sim-2,000\) km s\({}^{-1}\) (inward). We confirmed that the deceleration and inward movements are robust to the uncertainties in the moving directions and the pointing accuracy of Chandra. The inward movements can be understood as a shock reflection due to a collision with a high-density medium. As a conclusion, the global deceleration can be explained as due to a collision with a high-density medium both in the northern and southern regions, although the X-ray emitting plasma may still be expanding in the wind-blown bubble. We appreciate a fruitful discussion with H. Sano about the surrounding medium. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France (DOI : 10.26093/cds/vizier). The original description of the VizieR service was published in 2000, A&AS 143, 23. This work was partially supported by JSPS/MEXT grant Nos. JP21J00031 (HS), JP19H01936, JP21H04493 (TT), and JP22H01265 (HU). Chandra HEASoft (v6.30.1; HEASARC 2014), CIAO (v4.15; Fruscione et al. 2006) ## Appendix A Aspect Correction with the Nomad-1 Optical Source Catalog In order to evaluate systematic uncertainties associated with the aspect correction, we here apply another aspect correction. We use the NOMAD-1 optical source catalog (Zacharias et al., 2004) available via the VizieR service2(Ochsenbein et al., 2000) to register the point-like sources in the Chandra images. We find 4-6 X-ray sources (depending on observations) which match the catalog sources. After correcting all the images, we measure the proper motions in the same way as in Section 3. The resultant velocities are listed in Table 3. Overall, the velocities are consistent with the ones obtained in Section 3, suggesting that the systematic uncertainties due to the aspect correction are small compared to the statistical errors. ## Appendix B Proper motions in different energy ranges We here check for systematic uncertainties of the proper motions due to the extraction energy range. Because the detector calibration might be less reliable at low energies due to the contamination on the sensor surface (Marshall et al., 2004; O'Dell et al., 2015; Plucinsky et al., 2018), we test two additional cases where we use the 1.0-5.0 keV and 1.5-5.0 keV energy ranges. The measured velocities are listed in Table 4. In the latter case, velocities are constrained only for Regs. 6, 7, and 8, due to the limited statistics. In both cases, one can see that the results are mostly consistent with the ones in Section 3, showing a global deceleration and a change in moving directions.
2303.13533
Towards risk-informed PBSHM: Populations as hierarchical systems
The prospect of informed and optimal decision-making regarding the operation and maintenance (O&M) of structures provides impetus to the development of structural health monitoring (SHM) systems. A probabilistic risk-based framework for decision-making has already been proposed. However, in order to learn the statistical models necessary for decision-making, measured data from the structure of interest are required. Unfortunately, these data are seldom available across the range of environmental and operational conditions necessary to ensure good generalisation of the model. Recently, technologies have been developed that overcome this challenge, by extending SHM to populations of structures, such that valuable knowledge may be transferred between instances of structures that are sufficiently similar. This new approach is termed population-based structural health monitoring (PBSHM). The current paper presents a formal representation of populations of structures, such that risk-based decision processes may be specified within them. The population-based representation is an extension to the hierarchical representation of a structure used within the probabilistic risk-based decision framework to define fault trees. The result is a series consisting of systems of systems ranging from the individual component level up to an inventory of heterogeneous populations. The current paper considers an inventory of wind farms as a motivating example and highlights the inferences and decisions that can be made within the hierarchical representation.
Aidan J. Hughes, Paul Gardner, Keith Worden
2023-03-13T15:42:50Z
http://arxiv.org/abs/2303.13533v1
# Towards risk-informed PBSHM: Populations as hierarchical systems ###### Abstract The prospect of informed and optimal decision-making regarding the operation and maintenance (O&M) of structures provides impetus to the development of structural health monitoring (SHM) systems. A probabilistic risk-based framework for decision-making has already been proposed. The framework comprises four key sub-models: the utility model, the failure-modes model, the statistical classifier, and the transition model. The cost model consists of utility functions that specify the costs of actions and structural failures. The failure-modes model defines the failure modes of a structure as combinations of component and substructure failures via fault trees. The statistical classifier and transition model are models that predict the current and future health-states of a structure, respectively. Within the data-driven statistical pattern recognition (SPR) approach to SHM, these predictive models are determined using machine learning techniques. However, in order to learn these models, measured data from the structure of interest are required. Unfortunately, these data are seldom available across the range of environmental and operational conditions necessary to ensure good generalisation of the model. Recently, technologies have been developed that overcome this challenge, by extending SHM to _populations_ of structures, such that valuable knowledge may be transferred between instances of structures that are sufficiently similar. This new approach is termed population-based structural health monitoring (PBSHM). The current paper presents a formal representation of populations of structures, such that risk-based decision processes may be specified within them. The population-based representation is an extension to the hierarchical representation of a structure used within the probabilistic risk-based decision framework to define fault trees. The result is a series, consisting of systems of systems ranging from the individual component level up to an inventory of heterogeneous populations. The current paper considers an inventory of wind farms as a motivating example and highlights the inferences and decisions that can be made within the hierarchical representation. **Keywords: population-based structural health monitoring; risk; decision-making; value of information** ## 1 Introduction Structural health monitoring (SHM) is a technology that aims to detect damage within mechanical, civil and aerospace structures and infrastructure [1]. By inferring information about the health of a structure from discriminative features extracted from data acquired throughout a monitoring campaign, these systems can facilitate informed predictions relating to one or more of the following problems regarding the health of a structure, as summarised in Rytter's hierarchy [2]: * The presence of damage in a structure (detection). * The location of damage within a structure (localisation). * The type of damage present in a structure (classification). * The extent of damage in a structure (severity). * The remaining safe/useful life of a structure (prognosis). By informing predictions with data from a monitoring system, one can also inform decision-making regarding the operation and maintenance of structures and this can yield benefits such as improved safety, reduced operation costs and operational lifetime extension. Recent works have explicitly framed structural health monitoring in the context of decision-making [3, 4, 5]. 
The approach to decision-making for SHM presented in [5], adopts a probabilistic risk-based perspective. In this approach, probability distributions over structural health states are inferred from data via statistical classifiers. The distributions are then forecast via a transition model and mapped to probabilities of failure for specific failure modes of interest via Bayesian network representations of fault trees. Optimal decisions are found by maximising the expected utility when considering both the risk of structural failure and the cost of maintenance actions. Several submodels have been identified as elements that are required to sufficiently define SHM decision processes; these submodels include statistical classifiers for inferring health-states and health-state transition models. In order to achieve robust decision-making, these submodels require labelled data for learning and/or validation. A critical challenge associated with the development of SHM systems is the scarcity of the data necessary for the learning and validation of models. Prior to the implementation of a monitoring system, there is often a lack of comprehensive labelled data across the health-states of interest for a given structure as obtaining data corresponding to damage states tends to be prohibitively expensive or otherwise infeasible. One approach to circumvent this issue in the development of classification models is to utilise online active learning algorithms to preferentially obtain labelled data via structural inspections after a monitoring system is installed [6, 7, 8]. In [9], a methodology for determining a transition model using qualitative data from historical inspections is demonstrated. Population-based structural health monitoring (PBSHM), provides a holistic framework for overcoming data scarcity in the development of predictive models for SHM [10, 11, 12, 13]. The core principal of PBSHM is that predictions about individual structures can be improved with the use of information transferred from other similar structures. The current paper aims to further the core principal of PBSHM, such that _decisions_ about the operation of both individual structures and populations of structures can be improved via the transfer of information. This is achieved by extending the hierarchical representation of structures, used to develop fault trees in the risk-based approach to decision-making for traditional SHM, to hierarchical representations of populations of structures. Throughout the current paper, an inventory of offshore wind farms are referenced as a motivating example. The layout of the current paper is as follows. Background theory is provided for both PBSHM and risk-based SHM in Sections 2 and 3, respectively. Subsequently, a hierarchical representation of individual structures is presented in Section 4. This representation is extended to populations of structures in Section 5. Inferences and decisions within the population hierarchy are defined and discussed in Section 6. Finally, conclusions are provided in Section 7. ## 2 Population-based SHM The foundations of PBSHM have been presented in a series of journal papers, each detailing the fundamental concepts of the approach; homogeneous populations [10], heterogeneous populations [11], mapping and transfer [12], and the geometric spaces in which structures exist [13]. 
By adopting a population-based approach to SHM, such that knowledge and information can be transferred between similar structures, there is the potential for improved diagnostic and prognostic capabilities [14]. In the most general sense, a population can be considered to simply be a set of structures. Given the broad nature of this definition, in order to achieve useful transfer of knowledge and information between structures, it is discerning to consider specific classes of populations based upon the similarity of the constitutive structures. Thus, the notions of homogeneous and heterogeneous populations are introduced in [10, 11, 12]. ### Homogeneous and heterogeneous Populations Within a population, structures may share common characteristics such as geometries, topologies, materials, and boundary conditions. Consider a population of wind turbines in an offshore wind farm and suppose these turbines are of the same model; developed to the same ISO standards and possessing common components, materials, aerodynamic design and so on. Qualitatively, these structures can be regarded as nominally identical. Populations comprised exclusively of nominally-identical structures are termed _homogeneous populations_. Specific instances of structures in a homogeneous population can be considered to be perturbations of a population _form_[10]. For further discussions on population forms, the reader is directed to [10]. Other examples of homogeneous populations include a fleet of Airbus A380s, an array of small modular nuclear reactors, and the Global Positioning System (GPS) satellite constellation. Variation between structures in homogeneous populations may arise because of factors such as environmental conditions and manufacturing defects. Returning to the example of an offshore wind farm, one could imagine that two turbines at differing locations in the farm may experience different geotechnical conditions - perhaps as a result of varying geological composition in the seabed. Variability in such conditions could affect the boundary conditions of the monopile turbine towers and therefore modify the behaviours and data exhibited by these otherwise nominally-identical structures. In essence, _heterogeneous_ populations form the complement of the set of homogeneous populations [14]; that is, heterogeneous populations are not exclusively comprised of structures that are nominally identical. Heterogeneous populations represent more general sets of structures and allow for differing designs, large variability in boundary conditions, and even multiple types of structure. While there may be stark differences between individual structures in a heterogeneous population, there may nonetheless be similarities that can be exploited to achieve useful knowledge and information transfer. Consider again the offshore wind farm example and suppose that the population is comprised of wind turbines each with three blades. Suppose also that the operating company manage an additional wind farm in a distinct location, comprised of four-blade turbines. Useful inferences could be achieved by considering these wind farms as two homogeneous populations, however, further insights could also be gained by considering them as a single heterogeneous population. For example, similarities may be present in the tower design between both types of wind turbine; hence, by considering a larger population from which to make observations, improved predictive models can be developed for this specific substructure. 
Other types of heterogeneous populations that may be useful to consider include inventories of aircraft comprised of a variety of models, and multiple suspension bridges with differing designs (e.g. single-span, multi-span). Thus far, similarities between structures have been described somewhat qualitatively, however, to better indicate where information transfer may work, it is useful to quantify this similarity. ### Similarity between structures Graph theory provides a rigorous and rich framework for representing and comparing discrete structured objects and has proved to be an invaluable modelling tool in fields such as chemistry and proteomics. In [11], the notion of the irreducible element (IE) model for structures is introduced as a representation of structures with relatively low-dimension when compared to alternatives such as finite element, or CAD models. The IE representation involves abstracting a structure into discrete elements having geometrical and material information (e.g. beams, plates, shells) and relationships (e.g. joints) so as to sufficiently capture the nature of a structure. Here, the 'nature' one wishes to capture pertains to health monitoring problems associated with a structure. Once an IE representation of a structure has been obtained, the information can be encoded into an attributed graph (AG). Whereas the purpose of the IE model is to present key characteristics of a structure in a human-readable format, the purpose of the AG is to embed a structure space, so as to facilitate the efficient pair-wise comparison of structures. With structures embedded into a metric space via AGs, one can utilise graph-matching algorithms to find common subgraphs between sets of structures. These subgraphs indicate substructures that are common within sets of structures and can be used to inform where transfer may be applicable. Furthermore, measures of closeness within the space of AGs (or common subgraphs) can be used to quantify similarity; in [11] the Jaccard index is used and in [15] a variety of graph kernels are demonstrated. In summary, structures can be mapped into a graphical domain to facilitate comparison, identify common substructures and quantify similarity. By conducting this similarity assessment for structures within a population, one can determine where it is likely that information and knowledge can be successfully transferred between individual structures. ### Mapping and Transfer As mentioned previously, the primary benefit in taking a population-based approach to SHM is gaining the ability to transfer knowledge and information between sufficiently-similar individual structures; thereby overcoming issues associated with data scarcity. The sharing of knowledge and information between individual structures can be achieved via a number of methodologies. One manner in which this can be achieved is by having a statistical representation of the aforementioned population form, as demonstrated in [10]. Another approach, presented in [16], shares datasets in joint hierarchical statistical models of a population. Methodologies founded upon _transfer learning_ have also been successfully demonstrated [12]. The principal of transfer learning is closely aligned with the goals of PBSHM; specifically, a branch of transfer learning known as _domain adaptation_. In domain adaptation, datasets are adapted in a manner that allows a model constructed for a _source_ domain to generalise to a _target_ domain. 
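As a deliberately simplified illustration of the Jaccard-based similarity assessment described above, the Python sketch below reduces each structure's irreducible-element model to a flat set of labelled elements and relationships; real implementations compare attributed graphs and their common subgraphs, so the element labels and the resulting score are purely indicative.

```python
# Toy similarity between two irreducible-element (IE) descriptions, scored with the
# Jaccard index |A & B| / |A | B|. The element labels are invented for illustration.
three_blade_turbine = {
    ("element", "tower", "tubular"),
    ("element", "nacelle", "box"),
    ("element", "blade", "aerofoil"),
    ("joint", "tower-nacelle", "bolted"),
    ("joint", "nacelle-blade", "pitch bearing"),
    ("boundary", "tower-seabed", "monopile"),
}
four_blade_turbine = {
    ("element", "tower", "tubular"),
    ("element", "nacelle", "box"),
    ("element", "blade", "aerofoil"),
    ("joint", "tower-nacelle", "bolted"),
    ("joint", "nacelle-blade", "pitch bearing"),
    ("boundary", "tower-seabed", "jacket"),    # a different foundation, say
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

print(f"Jaccard similarity: {jaccard(three_blade_turbine, four_blade_turbine):.2f}")
```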
For knowledge/information transfer to be successful, it is imperative that these source and target domains are comparable. This constraint can be adhered to by employing the similarity assessment outlined in the previous section. Thus far, PBSHM has been considered with respect to predictions and inferences. Before incorporating decisions into the PBSHM framework, background on the risk-based approach to decision-making for traditional SHM is provided.

## 3 Probabilistic risk-based SHM

The probabilistic risk-based approach to SHM is founded on the notion that monitoring systems should be designed and developed with consideration for the specific decision-support applications motivating their implementation. In the SHM paradigm detailed in [1], monitoring campaigns begin with a process termed _operational evaluation_. This stage in the development of an SHM system is concerned with specifying the context for an SHM system, dealing with aspects such as the safety/economic justification and the environmental and operational conditions. In [5], it is proposed that the decision-making processes associated with an SHM campaign should be considered from the outset of a monitoring campaign as part of the operational evaluation. To begin defining the decision processes that one may wish to inform with a monitoring system, one must identify a set of failure modes or conditions for a structure that one may wish to prevent, in addition to a set of actions that can be executed to aid in the mitigation of failures. Furthermore, as part of the economic justification of a monitoring system, costs or utilities must be assigned to these failures and actions. The prediction of specific failure events and the informed selection of optimal mitigating actions should provide the basis for the development of monitoring systems, guiding choices for aspects of the monitoring system such as: sensors and their placement on the structure; data processing; and the discriminative features and models used to classify structural health states. Once a monitoring system developed with respect to decision-making is implemented, optimal strategies can be found by maximising expected utility with consideration for the risk of failure and the cost of mitigating actions. This can be achieved by representing the decision processes as a probabilistic graphical model (PGM). The following two subsections provide background on PGMs and the modelling of SHM decision processes as PGMs respectively.

### Probabilistic graphical models

Probabilistic graphical models (PGMs) are graphical representations of factorisations of joint probability distributions, and are a powerful tool for reasoning and decision-making under uncertainty. For this reason, they are apt for representing and solving decision problems in the context of SHM, where there is uncertainty in the health states of structures. While there exist multiple forms of probabilistic graphical model, the key types utilised for representing SHM decision processes are Bayesian networks (BNs) and influence diagrams (IDs) [17]. Bayesian networks are directed acyclic graphs (DAGs) comprised of nodes and edges. Nodes represent random variables, and edges connecting nodes represent conditional dependencies between variables. In the case where the random variables in a BN are discrete, the model is defined by a set of conditional probability tables (CPTs). For continuous random variables, the model is defined by a set of conditional probability density functions (CPDFs).
Figure 1 shows a simple Bayesian network comprised of three random variables \(X\), \(Y\) and \(Z\). \(Y\) is conditionally dependent on \(X\) and is said to be a _child_ of \(X\), while \(X\) is said to be a _parent_ of \(Y\). \(Z\) is conditionally dependent on \(Y\) and can be said to be a child of \(Y\) and a _descendant_ of \(X\), while \(X\) is said to be an _ancestor_ of \(Z\). The factorisation described by the Bayesian network shown in Figure 1 is given by \(P(X,Y,Z)=P(X)\cdot P(Y|X)\cdot P(Z|Y)\). Given observations on a subset of nodes in a BN, inference algorithms can be applied to compute posterior distributions over the remaining unobserved variables. Observations of random variables are denoted in a BN via grey shading of the corresponding nodes, as is demonstrated for \(X\) in Figure 1. Bayesian networks may be adapted into influence diagrams to model decision problems. This augmentation involves the introduction of two additional types of node, as shown in Figure 2: decision nodes, denoted as squares, and utility nodes, denoted as rhombi. For influence diagrams, edges connecting random variables to utility nodes denote that the utility function is dependent on the states of the random variables. Similarly, edges connecting decisions nodes to utility nodes denote that the utility function is dependent on the decided actions. Edges from decision nodes to random variable nodes indicate that the random variables are conditionally dependent on the decided actions. Edges from random variable or decision nodes to other decision nodes do not imply a functional dependence but rather order, i.e. that the observations/decisions must be made prior to the next decision being made. To gain further understanding of IDs, one can consider Figure 2. Figure 2 shows the ID for a simple binary decision; stay home and watch TV or go out for a walk, i.e. \(\mathrm{dom}(D)=\{\mathrm{TV},\ \mathrm{walk}\}\). Here, the agent tasked with making the decision has access to a weather forecast \(W_{f}\) which is conditionally dependent on the future weather condition \(W_{c}\). The weather forecast and future condition share the same possible states \(\mathrm{dom}(W_{f})=\mathrm{dom}(W_{c})=\{\mathrm{bad},\ \mathrm{good}\}\). The utility achieved \(U\), is then dependent on both the future weather condition and the decided action. For example, one might expect high utility gain if the agent decides to go for a walk and the weather condition is good. In general, a policy \(\delta\) is a mapping from all possible observations to possible actions. The problem of inference in influence diagrams is to determine an optimal strategy \(\mathbf{\Delta}^{*}=\{\delta_{1}^{*},\ldots,\delta_{n}^{*}\}\) given a set of observations on random variables, where \(\delta_{i}^{*}\) is the \(i^{th}\) decision to be made in a strategy \(\mathbf{\Delta}^{*}\) that yields the _maximum expected utility_ (MEU). For further details on the computation of the MEU for influence diagrams, the reader is directed to [18]. Defined as a product of probability and utility, the expected utility can be considered as a quantity corresponding to risk. ### Decision framework A probabilistic graphical model for a general SHM decision problem across a single time-slice is shown in Figure 3. Here, a maintenance decision \(d\) is shown for a simple fictitious structure \(\mathbf{S}\), comprised of two substructures \(\mathbf{s}_{1}\) and \(\mathbf{s}_{2}\), each of which are comprised of two components; \(c_{1,2}\) and \(c_{3,4}\), respectively. 
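To make the ideas above concrete, the following sketch evaluates the weather/walk influence diagram of Figure 2 numerically: Bayes' rule gives the posterior over the future weather condition from an observed forecast, and the action maximising expected utility is selected. All CPT entries and utilities are hypothetical values chosen for illustration, not taken from the paper.

```python
# Minimal numerical sketch of the influence diagram in Figure 2.
# All probabilities and utilities below are hypothetical.

# Prior over the future weather condition W_c, and the forecast model P(W_f | W_c).
p_wc = {"bad": 0.4, "good": 0.6}
p_wf_given_wc = {"bad": {"bad": 0.8, "good": 0.2},    # P(W_f | W_c = bad)
                 "good": {"bad": 0.1, "good": 0.9}}   # P(W_f | W_c = good)

# Utility U(W_c, d) for each action d in {TV, walk}.
utility = {("bad", "TV"): 40, ("bad", "walk"): 0,
           ("good", "TV"): 40, ("good", "walk"): 100}

def posterior_wc(observed_forecast):
    """Bayes' rule: P(W_c | W_f = observed_forecast)."""
    joint = {wc: p_wc[wc] * p_wf_given_wc[wc][observed_forecast] for wc in p_wc}
    evidence = sum(joint.values())
    return {wc: joint[wc] / evidence for wc in joint}

def optimal_decision(observed_forecast):
    """Return the maximum-expected-utility action and all expected utilities."""
    post = posterior_wc(observed_forecast)
    expected_utility = {d: sum(post[wc] * utility[(wc, d)] for wc in post)
                        for d in ("TV", "walk")}
    best = max(expected_utility, key=expected_utility.get)
    return best, expected_utility

# The policy delta maps each possible forecast observation to an action.
for forecast in ("bad", "good"):
    decision, eu = optimal_decision(forecast)
    print(f"forecast={forecast}: expected utilities={eu}, MEU action={decision}")
```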
The overall decision process model shown in Figure 3 is based upon a combination of three sub-models; a statistical classifier, a failure-mode model, and a transition model. The failure condition of the structure \(F_{\mathbf{S}}\) is represented as a random variable within the PGM. This failure condition \(F_{\mathbf{S}}\) can be expressed as a failure mode of the global structure that can be specified by a fault tree; a combination of local failures at a component, joint and substructure level related by Boolean logic gates. In order to fit into the PGM framework, fault trees must be mapped into Bayesian networks. Fortunately, there is a well-established mapping for this presented in [19, 20]. Essentially, components, joints and substructures receive random variables in the PGM corresponding to their respective local health states. The conditional probability tables defining the relationship between these random variables correspond to the Boolean truth tables for each of the logic gates in the fault tree defining the failure mode \(F_{\mathbf{S}}\).

Figure 1: An example Bayesian network.

Figure 2: An example influence diagram representing the decision of whether to go outside or stay in under uncertainty in the future weather condition given an observed forecast.

Figure 3 considers a failure mode dependent on two substructures \(\mathbf{s}_{1-2}\), which in turn are each dependent on two components \(c_{1,2}\) and \(c_{3,4}\). For the decision process shown in Figure 3, component health states are denoted by \(hc_{1-4}\), and substructure health states are denoted by \(hs_{1,2}\). An advantage of considering specific failure modes and their representations as fault trees is that doing so yields the health states that must be targeted by the monitoring system; the local health states of the components can be summarised in a global health-state vector. For the example shown in Figure 3, this health-state vector is given by \(\mathbf{H}=\{hc_{1},hc_{2},hc_{3},hc_{4}\}\). In essence, the purpose of this failure model is to map from a distribution over the global health states to a probability of structural failure. Finally, the failure states associated with the variable \(F_{\mathbf{S}}\) are given utilities via the function represented by the node \(U_{F}\). As it is necessary to consider the future risk of failure in the decision process, this failure-mode model and utility function are repeated for each time-step. As previously mentioned, a random variable denoted \(\mathbf{H}_{t}\) is used to represent the latent global health state of the structure at time \(t\). Within the decision process, the function of the statistical classifier is to provide a posterior probability distribution over the latent health state \(\mathbf{H}_{t}\), inferred via observations on a set of discriminative features \(\mathbf{\nu}_{t}\).

Figure 3: An influence diagram representing a partially-observable Markov decision process over one time-slice for determining the utility-optimal maintenance strategy for a simple structure comprised of four components. The fault-tree failure-mode model for time \(t+1\) has been represented as the node \(F^{\prime}_{t+1}\) for compactness.

This probability distribution over health-states may be obtained via a generative model \(P(\mathbf{\nu}|\mathbf{H})\) as shown in Figure 3, or obtained more directly via a discriminative classifier which yields \(P(\mathbf{H}|\mathbf{\nu})\).
Here, the use of a probabilistic classifier is vital to ensure decisions made are robust to uncertainty in the health state of the structure. Finally, a transition model is used to forecast the future health states, given the current health state and a decided action, i.e. \(P(\mathbf{H}_{t+1}|\mathbf{H}_{t},d_{t})\). The transition model considers the degradation of the structure under the various operational and environmental conditions a structure may experience, while accounting for uncertainties in each. By employing decision-process models such as the one presented here, one can obtain optimal strategies regarding the operation and maintenance of individual structures by maximising the expected utility.

## 4 Structures as hierarchies

A key assumption implicit in the development of the fault-tree failure models within the risk-based SHM decision framework is that structures can be represented as a hierarchy, or, in other terms, as a system of systems of systems. As it is outside the scope of the current paper, a comprehensive and consistent notation for referencing specific elements of a structure is not established here. Rather, the constituent levels and elements within the hierarchical representation are presented, in addition to the process by which one arrives at them. Consider a structure of interest \(S\). To obtain a hierarchical representation for \(S\), one must first decompose \(S\) into a discrete number of constituent elements, which are referred to as _substructures_. Substructures are considered to be entities which may, in principle, be assembled remotely or be available for independent testing prior to incorporation into the full-scale structure. Within the hierarchical representation, some substructures may be further decomposed up until the stage at which it would no longer be meaningful or useful to do so. Substructures at this stage are referred to as _components_. As such, components are considered to be substructures which cannot (or need not) be decomposed further; these are the smallest elements of a structure one might reasonably monitor. A notable sub-class of component is the _joint_. Joints are considered to be the physical mechanisms by which substructures are joined together. A diagram illustrating the hierarchical representation of a structure is shown in Figure 4. The levels in the hierarchy that specify the system of systems of systems shown are denoted as \(\mathcal{S}^{1}\), \(\mathcal{S}^{2}\), and \(\mathcal{S}^{3}\) - corresponding to the component, substructure and structure levels, respectively. Within each level of the hierarchy, elements can be listed. Returning to the example of a wind farm, it would be perfectly reasonable to consider a single turbine as an individual structure, representing the \(\mathcal{S}^{3}\) level in a hierarchy. In the \(\mathcal{S}^{2}\) level one may consider substructures such as the drive train, blades, or tower. Finally, in the \(\mathcal{S}^{1}\) level one may have components such as the gearbox or bearings comprising the drive train, or the web and shells comprising the blades. The hierarchical representation of structures facilitates the specification of the decision processes that motivate the development and implementation of SHM technologies. This facilitation is achieved by decomposing structures into constituent substructures and components which can then be used to define failure modes of the structure.
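As a small illustration of the failure-mode model described above, the sketch below maps a hypothetical fault tree (substructures failing if any of their components fail, and the structure failing if any substructure fails, i.e. OR gates throughout) into deterministic truth-table CPTs and propagates assumed component damage probabilities to a probability of structural failure. The gate logic and the probabilities are assumptions for illustration only.

```python
from itertools import product

# Hypothetical marginal probabilities that each component is in a damaged state,
# e.g. outputs of a statistical classifier P(hc_i = damaged | features).
p_damaged = {"c1": 0.05, "c2": 0.10, "c3": 0.02, "c4": 0.20}

def or_gate(*states):
    """Boolean truth table of an OR gate, used here as a deterministic CPT."""
    return any(states)

def probability_of_failure(p_damaged):
    """Enumerate the global health-state vector H = (hc1, ..., hc4) and
    accumulate the probability mass of states in which the structure fails
    (assumed fault tree: s1 = c1 OR c2, s2 = c3 OR c4, F_S = s1 OR s2)."""
    comps = list(p_damaged)
    p_fail = 0.0
    for states in product([False, True], repeat=len(comps)):
        prob = 1.0
        for comp, damaged in zip(comps, states):
            prob *= p_damaged[comp] if damaged else 1.0 - p_damaged[comp]
        hc = dict(zip(comps, states))
        s1 = or_gate(hc["c1"], hc["c2"])
        s2 = or_gate(hc["c3"], hc["c4"])
        if or_gate(s1, s2):
            p_fail += prob
    return p_fail

print(f"P(F_S = failed) = {probability_of_failure(p_damaged):.4f}")
```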
Given a finite set of failure modes of interest, one can then specify critical components, and therefore health states, to be targeted by a monitoring system.

Figure 4: A structure as systems of systems.

## 5 Populations of structures as hierarchies

A natural method for incorporating decision-making into PBSHM, is to extend the hierarchical representation of structures to hierarchical representations of populations. The number of levels required in a hierarchy is of course dependent on context. However, it is deemed that an additional three levels provide sufficient generality for most PBSHM applications, and indeed the discussions in the current paper. The additional levels necessary to extend the hierarchical representation to populations of structures can be summarised as follows:
* Type/Model Inventory: This level of the hierarchy corresponds to the lowest population level and represents an organisational grouping in which all individual structures in the population are of the same type/model and can be considered to be nominally identical. Thus, populations at this level in the hierarchy are homogeneous.
* Group Inventory: This next population level corresponds to a set of \(\mathcal{S}^{4}\) inventories for which it is necessary or convenient to consider as a group for operational reasons such as asset management. As a group inventory may be formed of disparate type/model inventories, in general, group inventories are heterogeneous populations.
* Inventory: This level of the hierarchy corresponds to the total set of structural assets operated or owned by an organisation or company. Again, this level will generally represent a heterogeneous population.

Figure 5 depicts the continuation of the hierarchical representation from \(\mathcal{S}^{3}\) to \(\mathcal{S}^{6}\). In Figure 5, an inventory \(I\) is considered as a system of systems of systems of systems. Once again, a list can be formed of the constituent elements for each level in the hierarchy. To further elucidate this extension of the hierarchy, once again, consider the example of an organisation operating offshore wind farms. As previously indicated, a wind farm comprised exclusively of turbines of a single type or model can form a homogeneous population; this corresponds to \(\mathcal{S}^{4}\) in the hierarchy. In the case that the organisation is responsible for multiple wind farms, or a single farm with a mixture of turbine types, one may wish to organise these type/model inventories into group inventories. For example, these group inventories may be formed from type inventories according to the geographical jurisdiction of sub-divisions within the organisation, or even formed from a collection of type inventories that are overseen by a single maintenance crew. Should these populations each be comprised of a different model of wind turbine, the group inventories formed would be heterogeneous populations and correspond to \(\mathcal{S}^{5}\) in the hierarchy. Alternatively, if all the wind farms consist of a single type of turbine, \(\mathcal{S}^{4}\) and \(\mathcal{S}^{5}\) can be merged and the group inventories are instead homogeneous populations. Finally, the group inventories owned by the wind farm organisation can be aggregated as an inventory in the \(\mathcal{S}^{6}\) level of the hierarchy.

Figure 5: An inventory as a system of systems of systems of systems.
This level would represent the organisation's total structural assets and could amount to, for example, multiple wind farms spread across the globe, maritime vessels, and aircraft that may be used for inspection, maintenance or other operational activities. As is the case for traditional SHM, the hierarchical representation of structures and populations of structures can help facilitate decision-making for PBSHM in several ways. These decision processes are discussed further in the following section. ## 6 Risk-informed PBSHM Numerous decisions must be made throughout the life cycle of a PBSHM system. Most obvious are the operation and maintenance decisions an organisation may have to make, following the installation of a monitoring system, such as inspections and repairs. Equally important, however, are the decisions that must be made prior to implementation such as those made in the operational evaluation stage of PBSHM. ### Operational evaluation One significant way in which adopting a hierarchical risk-based approach to PBSHM facilitates decision-making occurs very early on, in the operational evaluation stage. By considering specific failure modes and constructing fault trees for individual structures, one can decide the key elements of a structure which should be modelled in IEs and AGs. In other words, the specification of failure modes as combinations of component and substructure failures can be used to inform the granularity at which IEs and AGs are constructed. A further benefit of the population-based approach is that, as structures are considered nominally identical, large proportions of the fault trees may be mapped across a homogeneous population, with the exception of perhaps environment-specific failure modes. The extension of the hierarchy to represent populations of structures via the inclusion of levels \(\mathcal{S}^{4}\) to \(\mathcal{S}^{6}\) prompts one to consider how failures may be defined at the population level. One possible way to approach the failure of a population would be to consider the critical missions for the operating organisation. Depending on the nature of the organisation - whether they are non-commercial or commercial - these missions may be related to performance measures such as availability and/or profitability. Consider the wind farm example. Suppose that the operating organisation are required to supply energy from the wind farm to an electrical grid while maintaining a total population availability of 99%. This population can then be considered to have failed if the population structural availability falls below 99%. This population failure may be specified then by extending the fault tree; defining the population failures as a combination of individual failures. In addition, the organisation may wish to specify a failure condition based upon profitability, perhaps based upon a performance criterion related to a moving-average of the total power output. Again, this failure could be represented as a combination of individual structure failures and environmental conditions. This distinct failure mode is likely to be highly correlated with the availability failure mode; fortunately, the probabilistic graphical models employed in the risk-based approach can account for these 'common-cause' failures. This approach to defining population failures can be applied at any of the population levels within the hierarchical representation by considering combinations of failures in the levels below. 
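Following the availability example above, a brief sketch of how such a population-level failure probability might be evaluated: assuming (purely for illustration) independent, identical per-turbine outage probabilities and availability measured as the fraction of turbines in service, the probability that availability falls below the 99% threshold reduces to a binomial tail.

```python
from math import comb

def prob_availability_below(n_turbines, p_out, availability_threshold):
    """Probability that more than the allowed number of turbines are out of
    service, assuming independent, identical outage probabilities p_out.
    Availability is taken here as the fraction of turbines in service."""
    max_allowed_out = int(n_turbines * (1.0 - availability_threshold))
    p_ok = sum(comb(n_turbines, k) * p_out**k * (1 - p_out)**(n_turbines - k)
               for k in range(max_allowed_out + 1))
    return 1.0 - p_ok

# Hypothetical fleet: 100 turbines, 0.5% outage chance each, 99% availability target.
print(f"P(availability < 99%) = {prob_availability_below(100, 0.005, 0.99):.3f}")
```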
Defining failures at the population level within the hierarchy allows one to assign costs during the operational evaluation stage. Following on from this, population-scale actions can also be defined.

### Inferences and decisions

A fundamental process of decision-making for PBSHM is reasoning under uncertainty. This is typically achieved via inferences. Within the hierarchical framework for PBSHM, different types of inferences can be defined:
* I-inference: This type of inference corresponds to those usually made in traditional SHM, and occurs within the individual structure levels \(\mathcal{S}^{3}\) to \(\mathcal{S}^{1}\). An example of an I-inference is the process of determining a probability distribution over the health states of an individual structure using data acquired from that structure.
* L-inference: This type of inference occurs between levels in the hierarchical representation of structures. These may also be types of I-inference, for example determining the probability of failure for a (sub)structure given local component health states. Other L-inferences may include those relating to the validation and verification of predictive models (V&V). For example, one may be able to validate a predictive model for a structure at the \(\mathcal{S}^{3}\) level with data measured from substructures or components at the \(\mathcal{S}^{2}\) and \(\mathcal{S}^{1}\) levels, respectively.
* P-inference: This type of inference occurs across populations. If the inference is across a type inventory in \(\mathcal{S}^{4}\), i.e. a homogeneous population, it can be denoted as a HomP-inference. These inferences across populations may utilise technologies such as forms [10]. An example of a HomP-inference is inferring the health state of a member in a population using data aggregated across all members in the population. On the other hand, if a P-inference is between populations containing different types of structure, such as within a group inventory in \(\mathcal{S}^{5}\), then the inferences can be referred to as HetP-inferences. HetP-inferences may involve using transfer learning techniques such as domain adaptation [12]. An example of a HetP-inference is transferring the degradation (transition) model for a blade from a population of four-blade wind turbines to a population of three-blade wind turbines.

These inferences within the hierarchical representation of populations facilitate reasoning under uncertainty using PBSHM systems; this can naturally be extended to decision-making under uncertainty, by considering the following types of decision:
* I-decision: This type of decision is made at the individual structure levels in the hierarchy, \(\mathcal{S}^{1}\) to \(\mathcal{S}^{3}\). Again, this type of decision corresponds to decisions one may make with a traditional SHM system. An example of an I-decision is selecting a maintenance strategy for an individual structure, substructure, or component for repair. Unlike in traditional SHM, in the risk-informed PBSHM approach, I-decisions can be informed by I-, L- and P-inferences alike.
* L-decision: The actions selected via this type of decision operate between levels of the hierarchical representation. As with L-inferences, these decisions may pertain to the V&V of predictive models. For example, deciding whether one can proceed with using a structural model validated on substructures. Another example of this type of decision relates to resource allocation.
Suppose one has a limited budget to carry out some structural testing to acquire data for model updating. Under these circumstances, one should aim to decide on a set of tests, and the levels at which these tests are carried out, such that the largest improvement in model performance is obtained for the given budget.
* P-decision: This type of decision is made at the population levels in the hierarchy, \(\mathcal{S}^{4}\) to \(\mathcal{S}^{6}\). These actions may pertain to resource management. For example, one may decide to send a team of engineers to perform inspections on a type inventory based on the probability of failure for a population rather than the probability of failure of an individual structure. Scheduling inspections in this manner could save both time and expenditure. Again, these decisions may be informed via I-, L- and P-inferences.

To summarise, the hierarchical representation of populations of structures facilitates both making inferences and making decisions for PBSHM, by allowing for the definition of specific types of inferences and decisions.

### Value of information transfer

Value of information (VoI) is a concept in decision theory defined to be the amount of money/resource a decision-maker should be willing to pay in order to gain access to information prior to making a decision. The concept of VoI has seen some application to traditional SHM in recent works [7, 21]. Extending the risk-based approach to decision-making from traditional SHM to PBSHM opens up the possibility of value of information transfer, i.e. the price a decision-maker should be willing to pay in order to gain information via transfer, prior to making a decision. This value arises as a result of the change in maximum expected utility that can be achieved should a change in optimal policy occur as a result of the additional information made available via transfer. This notion of value of information transfer yields the thought-provoking implication that, in some contexts, it may be an optimal decision to allow a (sub)structure to fail, since the data obtained throughout the failure process may improve the management of the other individuals in a population.

## 7 Conclusions

To conclude, PBSHM provides a general framework for overcoming issues of data scarcity associated with developing predictive models for detecting and forecasting damage within structures. This advantage is achieved via technologies that allow for the transfer of information between individual structures within a population. Adopting a probabilistic risk-based approach to SHM allows inferences made about the health-states of individual structures to inform operation and maintenance decisions via the use of hierarchical representations of structures and fault trees. The current paper extends this hierarchical representation of structures to representations of populations, such that decision processes can be defined over populations. Other advantages can be gained by adopting a risk-based approach to PBSHM; for example, the identification of critical components and substructures can be used to inform the development of irreducible element models and the associated attributed graphs.

## Acknowledgements

The authors would like to gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) via grant references EP/W005816/1 and EP/R006768/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
KW would also like to acknowledge support via the EPSRC Established Career Fellowship EP/R003625/1.
2301.12732
Steady thermodynamic fundamental relation for the interacting system in a heat flow
There is a long-standing question of whether it is possible to extend the formalism of equilibrium thermodynamics to the case of non-equilibrium systems in steady states. We have made such an extension for an ideal gas in a heat flow [Ho\l{}yst \emph{et al.}, J. Chem. Phys. 157, 194108 (2022)]. Here we investigate whether such a description exists for the system with interactions: the Van der Waals gas in a heat flow. We introduce the parameters of state, each associated with a single way of changing energy. The first law of non-equilibrium thermodynamics follows from these parameters. The internal energy $U$ for the non-equilibrium states has the same form as in equilibrium thermodynamics. For the Van der Waals gas, $U(S^*, V, N, a^*,b^* )$ is a function of only 5 parameters of state (irrespective of the number of parameters characterizing the boundary conditions): the entropy $S^*$, volume $V$, number of particles $N$, and the rescaled Van der Waals parameters $a^*$, $b^*$. The state parameters, $a^*$, $b^*$, together with $S^*$, determine the net heat exchange with the environment.
Robert Hołyst, Karol Makuch, Konrad Giżyński, Anna Maciołek, Paweł J. Żuk
2023-01-30T08:56:00Z
http://arxiv.org/abs/2301.12732v1
# Steady thermodynamic fundamental relation for the interacting system in a heat flow ###### Abstract There is a long-standing question of whether it is possible to extend the formalism of equilibrium thermodynamics to the case of non-equilibrium systems in steady states. We have made such an extension for an ideal gas in a heat flow [Holyst _et al._, J. Chem. Phys. 157, 194108 (2022)]. Here we investigate whether such a description exists for the system with interactions: the Van der Waals gas in a heat flow. We introduce the parameters of state, each associated with a single way of changing energy. The first law of non-equilibrium thermodynamics follows from these parameters. The internal energy \(U\) for the non-equilibrium states has the same form as in equilibrium thermodynamics. For the Van der Waals gas, \(U(S^{*},V,N,a^{*},b^{*})\) is a function of only 5 parameters of state (irrespective of the number of parameters characterizing the boundary conditions): the entropy \(S^{*}\), volume \(V\), number of particles \(N\), and the rescaled Van der Waals parameters \(a^{*}\), \(b^{*}\). The state parameters, \(a^{*}\), \(b^{*}\), together with \(S^{*}\), determine the net heat exchange with the environment. ## I Introduction Determination of energy and its changes induced by heat or work are necessary to understand systems such as combustion engines or the earth's atmosphere with weather phenomena. When an equilibrium state approximates a system state, thermodynamics allows one to predict the system's behaviour by using energy as a function of a few parameters of state and a few principles. In particular, the first law of thermodynamics [1] represents a global energy conservation law. The energy, \(U(S,V,N)\) is a function of entropy, \(S\), volume, \(V\), and the number of molecules, \(N\). Each variable is related to one independent way of energy exchange: heat, work, and change in the amount of matter. However, a similarly simple theory does not exist for non-equilibrium systems in steady (stationary) states. There is no description similar to thermodynamics that grasps the energy transfer to the system in terms of a few global parameters. One of the most straightforward non-equilibrium cases is a steady heat flow. The appearance of the heat flow opens many research directions belonging to various fields of physics. Rational and extended thermodynamics focus on local transport equations [2]. Irreversible thermodynamics formulates thermo-hydrodynamic descriptions with local equations of state and mass, momentum, and energy balance [3]. Sometimes it is possible to represent governing equations in terms of variational principles [4; 5; 6; 7], which determine the profile of thermodynamic fields (such as temperature). The issue closely related to the studies mentioned above is whether we can represent the energy of the non-equilibrium system as a function of a few global parameters. The answer to this question would lead to a description similar to classical equilibrium thermodynamics. The existence of such a thermodynamic-like description for steady-state systems has been considered in various studies [8; 9; 10; 11; 5; 12]. The progress [13; 14; 15; 16] in this field is limited to small temperature differences and low heat fluxes. The recent papers on this topic carry the conviction that general rules exist in non-equilibrium thermodynamics. 
But scepticism regarding the usefulness of the equilibrium-based entropy [17] or even the existence of a description in terms of thermodynamic-like potentials [18] also appears. Lieb and Yngwasson [17] expressed scepticism regarding the use of entropy by suggesting heat as a primary quantity. It requires a generalization of heat for steady states. But how can it be generalized, e.g., for a steady gas between two plates with heat flow in a perpendicular direction? Thermo-hydrodynamic equations describe the system, so the heat flowing through the surface is well-defined. This applies both for a steady state and when the system passes from one stationary state to another. In a steady state, the same amount of heat enters through one plate and leaves on the opposite side. The net heat vanishes. But the net heat may flow to the system during the transition between steady states. This reasoning leads to a concept of heat measured in transition between stationary (steady) states. It is a particular case of the excess heat discussed by Oono and Paniconi [19]. In 2019 Nakagawa and Sasa [20] noticed that the excess heat concept defined by Oono and Paniconi had yet to be further utilized by other researchers. We adopt the term net (or excess) heat to name the heat that enters the system and changes its internal energy during the transition between steady states. We note that in literature, the excess heat has other meanings [21]. Our recent investigations of an ideal gas in a steady state with a heat flow showed a surprising result [22]. We proved that the net heat has an integrating factor and rigorously calculated non-equilibrium 'entropy' and non-equilibrium temperature. This entropy determines steady adiabatic insulation during transitions between stationary states. However, it is not clear whether the non-equilibrium entropy exists beyond the ideal gas approximation. We continue research to formulate global steady thermodynamics using Van der Waals gas as an example of an interacting system. First, from the thermo-hydrodynamic equations, we derive the global energy balance. Next, we show that it is possible to represent the non-homogeneous Van der Waals gas in a heat flow with equations formally identical to the equations of state for the Van der Waals gas in equilibrium. This procedure (named mapping) defines the parameters of the state for the non-equilibrium system in the steady state. We also show that the net heat does not have an integrating factor as proposed by Oono and Paniconi [19]. Instead, the net heat is represented by two independent thermodynamic parameters of state in the Van der Waals gas. ## V Van der Waals gas in equilibrium We consider the Van der Waals fluid described by the following fundamental thermodynamic relation [1] \[U=N\left(\frac{V}{N}-b\right)^{-\frac{1}{c}}\exp\left[\frac{S-Ns_{0}}{cNk_{B }}\right]-a\frac{N^{2}}{V}. \tag{1}\] It binds together thermodynamic state functions, i.e., energy \(U\), entropy \(S\), volume \(V\), and a number of particles \(N\), with two interaction parameters \(a\) and \(b\). The number of the degrees of freedom of a single molecule is given by constant \(c\) (\(c=3/2\) for single atoms), and \(k_{B}\) is the Boltzmann constant. In equilibrium thermodynamics, \(a\) and \(b\) are also parameters of state just like \(S\), \(V\) and \(N\)[23; 24; 25]. 
Therefore, for the Van der Waals gas they are present in the differential of energy (first law of thermodynamics) \[dU=TdS-pdV-\frac{N^{2}}{V}da+Nk_{B}T\left(\frac{V}{N}-b\right)^{-1}db \tag{2}\] with temperature \(T=\partial U\left(S,V,a,b\right)/\partial S\), pressure \(p=-\partial U\left(S,V,a,b\right)/\partial V\), \(\frac{N^{2}}{V}=-\partial U\left(S,V,a,b\right)/\partial a\) and \(Nk_{B}T\left(\frac{V}{N}-b\right)^{-1}=\partial U\left(S,V,a,b\right)/\partial b\)[1]. Each term in the above expression corresponds to one way the energy enters the Van der Waals gas. \(dQ=TdS\) is the heat, \(dW=-pdV\) is the elementary mechanical work when the volume changes, and the last two terms represent the work of external sources required to change the strength of interactions. Modifications of an interaction parameter are used, e.g., in the thermodynamic integration methods [26]. In the following sections, we will benefit from the equivalence between the fundamental thermodynamic relation for the Van der Waals fluid (1) and the energy differential (2) supplemented with the equations of state \[p =\frac{nk_{B}T}{1-nb}-an^{2}, \tag{3a}\] \[u =cnk_{B}T-an^{2}, \tag{3b}\] where \(n=N/V\) is particle density and \(u=U/V\) is energy density.

## VI Van der Waals gas in a heat flow

We discuss a simplified Van der Waals gas (\(b=0\)) first. Consider a system schematically shown in Fig. 1, a rectangular cavity with a constant amount of particles \(N\). We distinguish two parallel walls separated by a distance \(L\) in the \(z\) direction. The walls are kept at temperatures \(T_{1}\) and \(T_{2}\). In other directions, we assume the translational invariance, which constitutes a 1D problem. We assume the local equilibrium, that is, the dynamics of the gas density \(n\left(z\right)\) is governed by thermo-hydrodynamic equations: mass continuity, momentum balance and energy balance equations [3], which are supplemented with equations of states (3) \[p\left(z\right) =n\left(z\right)k_{B}T\left(z\right)-an\left(z\right)^{2}, \tag{4a}\] \[u\left(z\right) =cn\left(z\right)k_{B}T\left(z\right)-an\left(z\right)^{2} \tag{4b}\] valid for every coordinate \(z\).

Figure 1: The schematic of the Van der Waals gas between parallel walls separated by a distance \(L\). The walls are kept at temperatures \(T_{1}>T_{2}\), and the density of spheres represents the variation of the gas density in the temperature gradient.

In the steady state, inside the finite 1D segment, the velocity field has to be equal \(0\) everywhere. The constant pressure solution \(p\left(z\right)=\mathrm{const}\) follows. Another simplification resulting from the stationary condition is the Laplace equation for the temperature profile with linear solution \[T\left(z\right)=T_{1}+\left(T_{2}-T_{1}\right)\frac{z}{L}. \tag{5}\] To determine the concentration profile, we observe that equation (4a) written locally, \(p=nk_{B}T-an^{2}\), is quadratic in density. Thermodynamic stability conditions [1] requires that \(\left(\partial p/\partial n\right)_{T}\geq 0\), which gives \(k_{B}T-2an\geq 0\). Therefore, the only physical solution for the density that satisfies (4a) is given by, \[n\left(z\right)=\frac{k_{B}T\left(z\right)-\sqrt{\left(k_{B}T\left(z\right)\right)^{2}-4ap}}{2a}, \tag{6}\] and the stability condition, \(k_{B}T\left(z\right)-2an\left(z\right)\geq 0\), with the use of the above expression for \(n\left(z\right)\) is reduced to \(\left(k_{B}T\left(z\right)\right)^{2}\geq 4ap\).
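A short numerical sketch of the steady-state profiles in Eqs. (5) and (6) may be helpful here; the parameter values below are arbitrary nondimensional choices (assumptions for illustration only), checked against the stability condition \((k_{B}T)^{2}\geq 4ap\).

```python
import numpy as np

# Illustrative, nondimensional parameter choices (not taken from the paper):
kB, a, p, L = 1.0, 0.5, 1.0, 1.0
T1, T2 = 4.0, 2.0  # wall temperatures, T1 > T2

z = np.linspace(0.0, L, 201)
T = T1 + (T2 - T1) * z / L                    # Eq. (5): linear temperature profile

assert np.all((kB * T)**2 >= 4 * a * p), "stability condition (k_B T)^2 >= 4 a p violated"

n = (kB * T - np.sqrt((kB * T)**2 - 4 * a * p)) / (2 * a)   # Eq. (6): physical root

# Gas accumulates on the cold side: the density is largest where T is smallest.
print(f"n(0) = {n[0]:.3f} (hot wall),  n(L) = {n[-1]:.3f} (cold wall)")
```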
Because the pressure in the system is constant, and the temperature profile is known, eqs. (5) and (6) allow us to determine the total number of particles in the system, \[N\left(T_{1},T_{2},A,L,p\right)=A\int_{0}^{L}dz\,n\left(z\right) =\frac{ALk_{B}\left(T_{1}+T_{2}\right)}{2a}\times\] \[\times\left[\frac{1}{2}+\frac{4ap}{k_{B}^{2}\left(T_{2}^{2}-T_{1 }^{2}\right)}\int_{k_{B}T_{1}/\sqrt{4ap}}^{k_{B}T_{2}/\sqrt{4ap}}du\,\sqrt{u^ {2}-1}\right], \tag{7}\] where \(A\) is the surface area of the system in the direction of translational invariance. Similarly, from the eq. (4b) we determine the total internal energy \[U\left(T_{1},T_{2},A,L,p\right)=A\int_{0}^{L}dz\,u\left(z\right)\] \[=ALp\left[1+\frac{\left(c-1\right)\sqrt{4ap}}{k_{B}\left(T_{2}-T _{1}\right)}\left(g\left(\frac{k_{B}T_{2}}{\sqrt{4ap}}\right)-g\left(\frac{k_ {B}T_{1}}{\sqrt{4ap}}\right)\right)\right] \tag{8}\] with \(g\left(x\right)=\frac{1}{3}\left[x^{3}-\left(x^{2}-1\right)^{\frac{3}{2}}-1\right]\). ## III Net heat for van der walls gas and new parameter of state In a steady state, the same amount of heat enters through one wall and leaves through the other. However, during the transition from one steady state to another, e.g., by a slight change of temperature \(T_{2}\) or by a motion of the right wall changing \(L\) (see Fig. 1), this balance is, in general, disturbed and the net heat may flow to the system changing its internal energy [22]. In the case of a very slow transition between stationary states, the energy changes only by means of mechanical work and heat flow \[dU=dQ+dW. \tag{9}\] The mechanical work is given by \[dW=-pdV. \tag{10}\] and the energy balance during the transition between non-equilibrium steady states has the following form \[dU=dQ-pdV. \tag{11}\] The above equation reduces to the first law of thermodynamics in equilibrium. It has the same form, but here the \(dQ\) is the net heat transferred to the system during a small change between two stationary instead of equilibrium states. We obtain the formal analogy between equilibrium and stationary state for the Van der Waals gas by integrating the equations of state (4) over the volume \[pV =A\int_{0}^{L}dz\,n\left(z\right)k_{B}T\left(z\right)-Aa\int_{0}^ {L}dz\,n\left(z\right)^{2}, \tag{12a}\] \[U =\frac{3}{2}A\int_{0}^{L}dz\,n\left(z\right)k_{B}T\left(z\right)- Aa\int_{0}^{L}dz\,n\left(z\right)^{2}, \tag{12b}\] and by introducing average temperature \[T^{*}\equiv\frac{A\int_{0}^{L}dz\,n\left(z\right)T\left(z\right)}{A\int_{0}^ {L}dz\,n\left(z\right)} \tag{13}\] and the effective potential energy parameter \[a^{*}\equiv\frac{Aa\int_{0}^{L}dz\,n\left(z\right)^{2}}{AL\bar{n}^{2}}=\frac{ a\int_{0}^{L}dz\,n\left(z\right)^{2}}{L\bar{n}^{2}}, \tag{14}\] where \(\bar{n}=N/V\) is average particle density and \(\bar{u}=U/V\) is the total energy of the system divided by its volume. As a result, we obtain two relations \[p =\bar{n}k_{B}T^{*}-a^{*}\bar{n}^{2}, \tag{15a}\] \[\bar{u} =c\bar{n}k_{B}T^{*}-a^{*}\bar{n}^{2}, \tag{15b}\] which (for \(b=0\)) are formally identical to (3). Because the equations (15) have the same structure as the equilibrium equation of state, they relate to the fundamental relation (1) \[U\left(S^{*},V,N,a^{*}\right)=N\left(\frac{V}{N}\right)^{-\frac{1}{2}}\exp \left[\frac{S^{*}-Ns_{0}}{cNk_{B}}\right]-a^{*}\frac{N^{2}}{V}, \tag{16}\] but with effective parameters. 
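Before proceeding, a numerical spot check of the mapping defined by Eqs. (13)-(15): the sketch below integrates the local profiles to obtain \(N\), \(U\), \(T^{*}\) and \(a^{*}\), and verifies that the averaged equations of state (15a)-(15b) are recovered. The parameter values are the same illustrative nondimensional choices as in the previous sketch.

```python
import numpy as np

# Continuation of the nondimensional sketch above. All values are illustrative.
kB, a, p, c = 1.0, 0.5, 1.0, 1.5   # c = 3/2, as for a monatomic gas
A, L = 1.0, 1.0
T1, T2 = 4.0, 2.0

def trapezoid(y, x):
    """Composite trapezoidal rule (kept explicit to avoid API assumptions)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

z = np.linspace(0.0, L, 2001)
T = T1 + (T2 - T1) * z / L
n = (kB * T - np.sqrt((kB * T)**2 - 4 * a * p)) / (2 * a)
u = c * n * kB * T - a * n**2                       # Eq. (4b)

V = A * L
N = A * trapezoid(n, z)                             # Eq. (7) by direct quadrature
U = A * trapezoid(u, z)                             # Eq. (8) by direct quadrature
n_bar, u_bar = N / V, U / V

T_star = trapezoid(n * T, z) / trapezoid(n, z)      # Eq. (13)
a_star = a * trapezoid(n**2, z) / (L * n_bar**2)    # Eq. (14)

print(f"Eq. (15a): n_bar*kB*T* - a**n_bar^2 = {n_bar*kB*T_star - a_star*n_bar**2:.6f}  (imposed p = {p})")
print(f"Eq. (15b): c*n_bar*kB*T* - a**n_bar^2 = {c*n_bar*kB*T_star - a_star*n_bar**2:.6f}  (u_bar = {u_bar:.6f})")
```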
Moreover, the above equation defines \(S^{*}\) and it has a differential \[dU=T^{*}dS^{*}-pdV-\frac{N^{2}}{V}da^{*}, \tag{17}\] where \(T^{*}=\left(\partial U/\partial S\right)_{V,N,a^{*}}\), \(p=\left(\partial U/\partial V\right)_{S^{*},N,a^{*}}\) and \(\frac{N^{2}}{V}=-\partial U\left(S^{*},V,a^{*}\right)/\partial a^{*}\). The comparison of equations (17) and (11) gives the relation between the net heat in the system and the effective entropy, \[dQ=T^{*}dS^{*}-\frac{N^{2}}{V}da^{*}. \tag{18}\] The net heat flow during the transition between two steady states is a combination of the two exact differentials: effective entropy \(dS^{*}\), and effective interaction \(da^{*}\). It is contrary to equilibrium thermodynamics, where the heat is determined solely by the temperature and the change of entropy. ## IV The integrating factor for net heat in the van der Waals gas in steady states does not exist We rearrange Eq. (11) to get the net heat, \[dQ=dU+pdV. \tag{19}\] The energy and pressure can be determined from the stationary solution. Therefore we are in position to ask whether the heat differential \(dQ\) has an integrating factor in space \(T_{1},T_{2},V\). For the ideal gas (\(a=0\)) the integrating factor exists [22]. It follows that there exists a function of state, which is constant if the steady state system is "adiabatically insulated" (i.e. the net heat vanishes, \(dQ=0\)). We say that a differential form \(dF=f_{1}\left(x_{1},x_{2},x_{3}\right)dx_{1}+f_{2}\left(x_{1},x_{2},x_{3} \right)dx_{2}+f_{3}\left(x_{1},x_{2},x_{3}\right)dx_{3}\) has an integrating factor if there exists a function \(\phi\left(x_{1},x_{2},x_{3}\right)\) whose differential is related to \(dF\) by \[d\phi\left(x_{1},x_{2},x_{3}\right)\equiv dF/\mu\left(x_{1},x_{2},x_{3}\right).\] The function \(\mu\) is called the integrating factor and \(\phi\) is called the potential of the form \(dF\). The differential form may be considered in different variables, e.g. given by \(y_{i}=y_{i}\left(x_{1},x_{2},x_{3}\right)\) for \(i=1,2,3\). We will write shortly, \(Y\left(X\right)\). It is straightforward to check that when the differential form is transformed into new variables, the integrating factor is given by, \(\mu\left(X\left(Y\right)\right).\) We can choose the most convenient set of variables to find the integrating factor of a differential form. We considered the space of the control parameters, \(T_{1},T_{2},A,L,N\). It has been used to represent the number of particles, \(N=N\left(T_{1},T_{2},A,L,p\right)\) and the energy in the system, \(U=U\left(T_{1},T_{2},A,L,p\right)\), given by Eqs. (7) and (8). To simplify further considerations, let's notice that the surface area, \(A\), and the length of the system, \(L\), always appear in the above relations as a product, \(V=AL\). We can reduce the space of control parameters to \(T_{1},T_{2},V,N\). Because we confined our considerations to constant number of particles, \(N\), we have three parameters, \(T_{1},T_{2},V\). However, the natural variables of the differential form (19) are \(U\), \(V\). We will use them in the following considerations and we take \(\tau=T_{2}/T_{1}\) as the third parameter. Suppose that the net heat has the integrating factor. 
It means that there exists a potential \(\phi\left(U,V,\tau\right)\) which differential is related to the net heat differential by \[d\phi\left(U,V,\tau\right)\equiv dQ/\mu\left(U,V,\tau\right).\] By definition, \(d\phi=\frac{\partial\phi}{\partial U}dU+\frac{\partial\phi}{\partial V}dV+ \frac{\partial\phi}{\partial\tau}d\tau\). On the other hand the above relation with Eq. (19) gives, \(d\phi=1/\mu\left(U,V,\tau\right)dU+p\left(U,V,\tau\right)/\mu\left(U,V,\tau \right)dV.\) Equality of the second derivatives for all three independent variables \(U,V,\tau\) is a necessary condition for the existence of \(\phi\). It is easy to check that this condition is satisfied only if \(p\left(U,V,\tau\right)\) does not depend on \(\tau\), \[\left(\frac{\partial p}{\partial\tau}\right)_{U,V}=0.\] Equivalently, \(\left(\partial p/\partial\tau\right)_{U,V}\neq 0\), then the integrating factor of the net heat does not exist. The above condition requires the determination of \(p\left(U,V,\tau\right)\). The pressure can be determined from Eqs. (7) and (8), which have the following form, \(N=N\left(T_{1},T_{2},V,p\right)\), and, \(U=U\left(T_{1},T_{2},V,p\right)\). Inversion of the former relation would lead to the formula \(p=p\left(T_{1},T_{2},V,N\right)\), but we are not able to obtain explicit expression for \(p\) in terms of elementary functions. However, what we need is not the function itself, but its derivative over \(\tau\). Even if a function is given implicitly, its derivative can be explicitly determined with the use of the simple properties of derivatives [1]. We have a similar situation here: although \(p\left(U,V,\tau,N\right)\) with \(\tau=T_{2}/T_{1}\) cannot be explicitly determined from \(N=N\left(T_{1},T_{2},V,p\right)\), and, \(U=U\left(T_{1},T_{2},V,p\right)\), but its derivative, \(\left(\partial p/\partial\tau\right)_{U,V}\neq 0\), can be determined explicitly. By using properties of derivatives of functions \(U=U\left(T_{1},T_{2},V,p\right)\) and \(N=N\left(T_{1},T_{2},V,p\right)\) one shows the following property. The derivative \(\left(\partial p/\partial\tau\right)_{U,V}\neq 0\) does not vanishes, if the following conditions are satisfied: \[\left\{U,N\right\}_{T_{1},T_{2}}\neq 0 \tag{20}\] and \[\frac{T_{2}}{T_{1}}\left\{U,N\right\}_{p,T_{2}}+\left\{U,N\right\}_{p,T_{1}} \neq 0.\] In the above expressions the Poisson bracket is defined by \(\left\{f,g\right\}_{x,y}\equiv\partial f/\partial x\,\partial g/\partial y- \partial g/\partial x\,\partial f/\partial y\). The proof of the above property requires standard properties of derivatives under change of variables [1] and is omitted here. It can be directly checked whether the Poisson bracket (20) does not vanish for functions \(U=U\left(T_{1},T_{2},V,p\right)\) and \(N=N\left(T_{1},T_{2},V,p\right)\) given by Eqs. (7) and (8). Calculations are straightforward but cumbersome. To convince the reader that the Poisson bracket (20) does not vanish, we consider the limit \(T_{2}\to T_{1}.\) It gives the following expression, \[\lim_{T_{2}\to T_{1}}\frac{\partial}{\partial T_{2}}\left\{U,N \right\}_{T_{1},T_{2}}=\] \[=\frac{(c-1)k_{B}^{3}V^{2}\left(\frac{k_{B}T_{1}}{\sqrt{ap}}- \sqrt{\frac{(k_{B}T_{1})^{2}}{ap}-4}\right)}{8a^{2}\left(\frac{(k_{B}T_{1})^{2 }}{ap}-4\right)^{3/2}}. \tag{21}\] It follows that even in the neighborhood of the equilibrium state, \(T_{2}\approx T_{1},\) the above Poisson bracket does not vanish. As a consequence, the heat differential for Van der Waals gas has no integrating factor. 
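The argument above rests on the Poisson bracket (20) being nonzero. A numerical spot check is sketched below: \(N(T_{1},T_{2},V,p)\) and \(U(T_{1},T_{2},V,p)\) are evaluated by direct quadrature of the profiles (5)-(6), and \(\{U,N\}_{T_{1},T_{2}}\) at fixed \((V,p)\) is approximated by central finite differences. The parameter values are the same illustrative nondimensional choices as above; a nonzero result is consistent with the limit in Eq. (21).

```python
import numpy as np

# Same illustrative nondimensional parameters as in the previous sketches.
kB, a, c = 1.0, 0.5, 1.5
A, L = 1.0, 1.0

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def profiles(T1, T2, p):
    z = np.linspace(0.0, L, 2001)
    T = T1 + (T2 - T1) * z / L
    n = (kB * T - np.sqrt((kB * T)**2 - 4 * a * p)) / (2 * a)
    u = c * n * kB * T - a * n**2
    return z, n, u

def N_of(T1, T2, p):
    z, n, _ = profiles(T1, T2, p)
    return A * trapezoid(n, z)

def U_of(T1, T2, p):
    z, _, u = profiles(T1, T2, p)
    return A * trapezoid(u, z)

def bracket_UN(T1, T2, p, h=1e-4):
    """Central-difference estimate of {U, N}_{T1, T2} at fixed (V, p)."""
    dU_dT1 = (U_of(T1 + h, T2, p) - U_of(T1 - h, T2, p)) / (2 * h)
    dU_dT2 = (U_of(T1, T2 + h, p) - U_of(T1, T2 - h, p)) / (2 * h)
    dN_dT1 = (N_of(T1 + h, T2, p) - N_of(T1 - h, T2, p)) / (2 * h)
    dN_dT2 = (N_of(T1, T2 + h, p) - N_of(T1, T2 - h, p)) / (2 * h)
    return dU_dT1 * dN_dT2 - dN_dT1 * dU_dT2

print(f"{{U,N}}_(T1,T2) ~ {bracket_UN(4.0, 2.0, 1.0):.4e}")
```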
Thus a function that plays the role of entropy does not exist for Van der Waals gas in a steady state with heat flow. The representation \(dQ=T^{*}dS^{*}\) is impossible. ## VI Global Steady Thermodynamics for Van der Walls Gas with \(b\neq 0\) So far we have introduced global steady thermodynamic description for Van der Walls gas given by Eq. (1) with reduced parameter, \(b=0\). Here we consider \(b\neq 0\) case in which the following equations of state \[p=\frac{n\left(z\right)k_{B}T\left(z\right)}{1-bn\left(z\right)}-an\left(z \right)^{2}, \tag{22}\] \[u\left(z\right)=cn\left(z\right)k_{B}T\left(z\right)-an\left(z\right)^{2}, \tag{23}\] describe Van der Walls gas in a stationary state. As before, the pressure in the system is constant. Integration of the above equations over volume leads to the following relations, \[p=\frac{\bar{n}k_{B}T^{*}}{1-\bar{n}b^{*}}-a^{*}\bar{n}^{2}, \tag{24}\] \[\bar{u}=c\bar{n}k_{B}T^{*}-a^{*}\bar{n}^{2}, \tag{25}\] where \(T^{*}\) and \(a^{*}\) are defined by Eqs. (13) and (14) while \(b^{*}\) is defined by the following formula \[\frac{\bar{n}k_{B}T^{*}}{1-\bar{n}b^{*}}=\frac{1}{L}\int_{0}^{L}dz\frac{n\left( z\right)k_{B}T\left(z\right)}{1-bn\left(z\right)}. \tag{26}\] Eqs. (24) and (25) show that the nonhomogeneous Van der Waals gas in a stationary state with a heat flow can be mapped on the homogeneous Van der Waals gas with effective temperature and interaction parameters, \(T^{*},a^{*},b^{*}\). Therefore it has the following fundamental relation (1), \[U=N\left(\frac{V}{N}-b^{*}\right)^{-\frac{1}{2}}\exp\left[\frac{S^{*}-Ns_{0}} {cNk_{B}}\right]-a^{*}\frac{N^{2}}{V}, \tag{27}\] with partial derivatives, \(T^{*}=\partial U\left(S^{*},V,a^{*},b^{*}\right)/\partial S^{*}\) and \(p=-\partial U\left(S^{*},V,a^{*},b^{*}\right)/\partial V\). Differential of the above fundamental equation gives, \[dU=T^{*}dS^{*}-pdV-\frac{N^{2}}{V}da^{*}+Nk_{B}T^{*}\left(\frac{V}{N}-b^{*} \right)^{-1}db^{*}. \tag{28}\] Using also the expression for the net heat (19), we identify the heat differential, \[dQ=T^{*}dS^{*}-\frac{N^{2}}{V}da^{*}+Nk_{B}T^{*}\left(\frac{V}{N}-b^{*}\right) ^{-1}db^{*}.\] The above equations describe the energy balance for Van der Walls gas with a heat flow and they correspond to the first law in equilibrium thermodynamics when the heat flow vanishes. The parameters \(T^{*},a^{*},b^{*}\) defined by Eqs. (13-26) are not independent. To explain it, we keep in mind that for a given number of particles, three control parameters \(T_{1},T_{2},V\) are sufficient to determine the system's energy, work, and net heat differential. On the other hand, the energy differential in Eq. (28) is given by four parameters, \(S^{*},V,a^{*},b^{*}\). It follows that \(S^{*},V,a^{*},b^{*}\) are dependent. Consequently, one of these parameters should be determined by the others, e.g. \(b^{*}=b^{*}\left(S^{*},V,a^{*}\right)\). In the above considerations, Van der Waals gas was enclosed between two parallel walls. Control parameters \(T_{1}\), \(T_{2}\), \(V\), and \(N\) determine the steady state. In a more practical situation, the system does not need to be rectangular, and several temperature parameters, \(T_{1},\ldots,T_{N}\), determine the boundary conditions. The same procedure determines the fundamental relation (27) because it applies to any density and temperature profile. 
Even in a situation with an arbitrary number of control parameters (\(N>2\)), the five parameters of states \(S^{*}\), \(V\), \(N\), \(a^{*}\) and \(b^{*}\) are sufficient to determine the energy exchange in the system. ## VII Summary A fundamental relation such as Eq. (1) plays a key role in equilibrium thermodynamics. The fundamental relation, by definition, is a relation between parameters of the system's state, from which one can ascertain all relevant thermodynamic information about the system [1]. It includes the identification of different forms of energy exchange of the system with the environment. In equilibrium thermodynamics the particular terms of the energy differential correspond to heat, mechanical work, or chemical work. In the same spirit, Eq. (27) is the fundamental relation for the Van der Waals gas in a steady state with a heat flow. Its differential (28) gives information about the net heat and the work performed on the system. Eq. (28) directly reduces to the first law of thermodynamics when the heat flow vanishes. It represents the first law of the global steady thermodynamic description of an interacting system subjected to heat flow. The integrating factor for the heat differential in the case of the ideal gas discussed previously [22] allowed us to introduce the non-equilibrium entropy and use it to construct the minimum energy principle beyond equilibrium. This principle generalizes thermodynamics' second law beyond equilibrium. Here we showed that the net heat has no integrating factor. It excludes a direct generalization of the second law along the line proposed in [22]. However, it does not exclude a possibility that such a principle also exists in the case of an interacting gas. This paper suggests a general prescription for formulating the fundamental relation of global nonequilibrium steady thermodynamics. First, we identify equilibrium equations of state. Next, we write the local equations of state. Whether these equations are in the same form in equilibrium thermodynamics or some other form remains to be found. Next, we average these local (or non-local) equations of the state over the entire system. We insist that the global equations of a nonequilibrium state should have the same form as at equilibrium but with new state parameters. These parameters emerge after averaging the local equations over the entire system. In the case of Van der Waals, the new state parameters emerged, \(a^{*}\) and \(b^{*}\). These parameters are constant at equilibrium since they are material parameters that define interactions in a particular system. This result suggests that, in general, all material parameters in the equilibrium equations of states will become parameters of state in the nonequilibrium systems. ## Acknowledgements P. J. Z. would like to acknowledge the support of a project that has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 847413 and was a part of an international co-financed project founded from the program of the Minister of Science and Higher Education entitled 'PMW' in the years 2020-2024; agreement No. 5005/H2020-MSCA-COFUND/2019/2.
2310.04611
Rapid Dust Growth During Hydrodynamic Clumping Due to Streaming Instability
Streaming instability is considered to be one of the dominant processes to promote planetesimal formation by gravitational collapse of dust clumps. The development of streaming instability is expected to form dust clumps in which the local dust density is strongly enhanced and even greater than the Roche density. The resulting clumps can collapse to form planetesimals. Recent simulations conducted long-term simulations and showed that such strong clumping occurs in a wider parameter space than previously expected. However, the indicated timescale for strong clumping can be on the order of tens to hundreds Keplerian periods. In this paper, we estimate the growth time of dust grains during the pre-clumping phase. We find that the dust growth considerably proceeds before the strong clumping because even the moderate clumping due to streaming instability increases the local dust-to-gas ratio $\gtrsim10$. Depending on the gas sound speed, the dust collision velocity can be kept below $\sim 1\;\mathrm{m/s}$ once sufficiently strong dust clumping occurs. Thus, even silicate grains might have the potential to grow safely toward the size whose Stokes number is unity during the clumping. Our results demonstrate the importance of local dust coagulation during the dust clumping due to streaming instability.
Ryosuke T. Tominaga, Hidekazu Tanaka
2023-10-06T22:07:07Z
http://arxiv.org/abs/2310.04611v1
# Rapid dust growth during hydrodynamic clumping due to streaming instability ###### Abstract Streaming instability is considered to be one of the dominant processes to promote planetesimal formation by gravitational collapse of dust clumps. The development of streaming instability is expected to form dust clumps in which the local dust density is strongly enhanced and even greater than the Roche density. The resulting clumps can collapse to form planetesimals. Recent simulations conducted long-term simulations and showed that such strong clumping occurs in a wider parameter space than previously expected. However, the indicated timescale for strong clumping can be on the order of tens to hundreds Keplerian periods. In this paper, we estimate the growth time of dust grains during the pre-clumping phase. We find that the dust growth considerably proceeds before the strong clumping because even the moderate clumping due to streaming instability increases the local dust-to-gas ratio \(\gtrsim 10\). Depending on the gas sound speed, the dust collision velocity can be kept below \(\sim 1\) m/s once sufficiently strong dust clumping occurs. Thus, even silicate grains might have the potential to grow safely toward the size whose Stokes number is unity during the clumping. Our results demonstrate the importance of local dust coagulation during the dust clumping due to streaming instability. hydrodynamics -- instabilities -- protoplanetary disks 0000-0002-4880-8880]Ryosuke T. Tominaga 0000-0002-2886-7880]Hidekazu Tanaka ## 1 Introduction There are two promising processes for planetesimal formation: direct collisional growth of dust grains (e.g., Ormel et al., 2007; Okuzumi et al., 2012; Kataoka et al., 2013; Krijt et al., 2016; Arakawa and Nakamoto, 2016; Garcia and Gonzalez, 2020; Kobayashi and Tanaka, 2021) and gravitational collapse of a dust layer (e.g., Safronov, 1972; Goldreich and Ward, 1973; Sekiya, 1983; Youdin and Shu, 2002). The latter process is expected to occur once a sufficient amount of dust is concentrated against turbulent diffusion in a disk (e.g., Cuzzi et al., 1993; Sekiya, 1998). Some dust-driven instabilities have been proposed as the dust concentration mechanism (e.g., Youdin and Goodman, 2005; Johansen and Youdin, 2007; Youdin, 2011; Takahashi and Inutsuka, 2014; Tominaga et al., 2021, 2022, 2022, 2023). Among the dust-driven instabilities, streaming instability has been extensively studied analytically and numerically (e.g., Youdin and Goodman, 2005; Youdin and Johansen, 2007; Johansen and Youdin, 2007; Johansen et al., 2007; Krapp et al., 2019; Chen and Lin, 2020; Umurhan et al., 2020; Paardekooper et al., 2020, 2021; McNally et al., 2021; Zhu and Yang, 2021; Yang and Zhu, 2021; Carrera et al., 2021, 2022). Streaming instability occurs on a spatial scale much smaller than the gas scale height. The nonlinear evolution causes turbulent motion and transient/intermittent formation of azimuthally elongated filamentary structures (e.g., Johansen and Youdin, 2007). The resulting dust-dense regions collapse self-gravitationally once the local dust density exceeds the Roche density \(\rho_{\rm R}\) (e.g., Johansen et al., 2007; Simon et al., 2016) and the self-gravity overcomes turbulent diffusion (Gerbig et al., 2020; Klahr and Schreiber, 2020, 2021). 
This combined process of streaming instability and the subsequent self-gravitational collapse is expected to explain planetesimal formation (e.g., Drazkowska et al., 2016; Nesvorny et al., 2019; Gole et al., 2020; Gerbig et al., 2020). Previous studies investigated the condition for streaming instability to cause strong dust clumping where the maximum dust density \(\rho_{\rm d,max}\) exceeds the Roche density (e.g., \(\sim\) several hundred times gas density for low mass disks)1, which is necessary for the self-gravitational collapse of the dust clump (Carrera et al., 2015; Yang et al., 2017; Li & Youdin, 2021). Carrera et al. (2015) and Yang et al. (2017) found that the dust-to-gas surface density ratio should be a few times higher than 0.01 for the strong clumping, which also depends on the Stokes number of dust, St. Gerbig et al. (2020) discussed that the critical value for the gravitational collapse to proceed also depends on the gas pressure gradient and the Toomre's \(Q\) value of the gas disk. Recently, Li & Youdin (2021) revisited the condition by performing high-resolution long-term simulations. They found that the strong clumping occurs even for lower dust-to-gas ratios during the time evolution over ten to hundreds Keplerian periods (see their Tables 1 and 2). The critical dust-to-gas ratio at the midplane is about \(\simeq 0.35-2.5\) in their simulations for \(10^{-3}\lesssim\mathrm{St}\lesssim 1\) (see their Figure 4). This may indicate that streaming instability and subsequent clumping operate more easily than previously thought, leading to planetesimal formation. Footnote 1: The Roche density is \(\rho_{\mathrm{R}}\equiv 9\Omega^{2}/4\pi G\simeq 3\times 10^{2}\rho_{\mathrm{g}} \times(Q/55)\), where \(\Omega\) is the Keplerian angular velocity, \(G\) is the gravitational constant, \(\rho_{\mathrm{g}}\) is the midplane gas density, and \(Q\) is the Toomre’s \(Q\) value of the gas disk. In such a long-term evolution, dust coagulation may be more important. The coagulation timescale is only a few tens of Keplerian periods for the dust-to-gas surface density ratio of 0.01 (e.g., Brauer et al., 2008), which can be shorter than the time required for the strong clumping to occur. Besides, the coagulation may be effective even in moderately high density regions around much denser clumps. However, a possible combined process of coagulation and streaming instability is not well studied. We note that there are studies that investigated how disk evolution with coagulation produces a region where streaming instability operates (e.g., Drazkowska et al., 2016; Carrera et al., 2017). What they considered is that dust coagulation and streaming instability occur one after the other: the dust sizes are determined by dust coagulation through the global disk evolution and are independent from the onset of streaming instability and the resulting local turbulence/clumping. In contrast to these studies, we investigate the possible impact of dust growth during the local process of streaming instability, which could change the planetesimal formation efficiency assumed in the previous models. In this paper, we demonstrate that dust coagulation considerably proceeds in moderately dense regions or clumps before the strong clumping. We consider dust growth toward the size of \(\mathrm{St}=1\). The collision velocity provides the insight into the growth efficiency. 
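Footnote 1 above ties the Roche density to the midplane gas density through Toomre's \(Q\). A minimal numerical sketch of that conversion (Python), assuming the standard definitions \(Q=c_{\rm s}\Omega/\pi G\Sigma_{\rm g}\) and a Gaussian vertical gas profile so that \(\rho_{\rm g}=\Sigma_{\rm g}\Omega/\sqrt{2\pi}c_{\rm s}\) (neither relation is spelled out in the text), reproduces the quoted "several hundred times the gas density":

```python
import numpy as np

# Roche density from footnote 1: rho_R = 9 Omega^2 / (4 pi G).
# With Q = c_s * Omega / (pi * G * Sigma_g) and a Gaussian vertical profile,
# rho_g(midplane) = Sigma_g * Omega / (sqrt(2 pi) * c_s), the ratio rho_R / rho_g
# depends only on Toomre's Q (assumed standard definitions, not stated in the paper).

def roche_to_gas_ratio(Q):
    """rho_R / rho_g(midplane) as a function of Toomre's Q."""
    return 9.0 * np.sqrt(2.0 * np.pi) / 4.0 * Q

for Q in (10, 55, 100):
    print(f"Q = {Q:5.1f}  ->  rho_R / rho_g ~ {roche_to_gas_ratio(Q):7.1f}")
# Q = 55 gives ~310, i.e. ~3e2 rho_g as quoted in footnote 1.
```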
When dust grains are so large that the Brownian motion is insignificant, the collision velocity is determined by the drift motion and the turbulent motion. The drift-induced collision velocity of similar-sized dust is on the order of \(\mathrm{St}\eta v_{\mathrm{K}}\), where \(v_{\mathrm{K}}\) is the Keplerian velocity, \(\eta\sim(c_{\mathrm{s}}/v_{\mathrm{K}})^{2}\) is a measure of the radial pressure gradient, and \(c_{\mathrm{s}}\) is the gas sound speed (e.g., Weidenschilling, 1977). For \(\mathrm{St}\sim 1\), this velocity is \(\sim 54\) m/s in the minimum mass solar nebula (Hayashi, 1981) and lower in a disk that is optically thick to the stellar irradiation (e.g., Chiang & Goldreich, 1997). The turbulence-induced collision velocity depends on \(\mathrm{St}\) and the turbulence strength \(\alpha\)(Shakura & Sunyaev, 1973). When dust grains are large enough, the velocity is on the order of \(\sqrt{\alpha\mathrm{St}}c_{\mathrm{s}}\) for similar-sized grains (Ormel & Cuzzi, 2007). The collision velocity is thus \(\sim 10-30\) m/s for \(\alpha=10^{-3}\), \(\mathrm{St}=0.1-1\), and \(c_{\mathrm{s}}=1\) km/s. It has been suggested that such high velocity collisions can lead to fragmentation that limits the dust growth (e.g., Weidenschilling & Cuzzi, 1993; Blum & Wurm, 2008). Even when the fragmentation is less efficient, the radial drift can limit the dust growth: the drift time is so short that dust grains reach the central star before they grow in size significantly. Before streaming instability operates, we naively expect that dust grains grow to the fragmentation- or drift-limited sizes. We thus address whether or not the clumping due to streaming instability can assist further dust growth. An advantage of dust coagulation in the clumping regions is lower collision velocity (Johansen et al., 2009; Bai & Stone, 2010; Schreiber & Klahr, 2018). Schreiber & Klahr (2018) showed that the local velocity dispersion of the same sized dust grains can be on the order of \(10^{-3}c_{\mathrm{s}}\) or much lower in the clumping regions for \(\mathrm{St}=0.1\) (e.g., see Figure 11 therein). Bai & Stone (2010) also showed low collision velocities in the simulations where dust clumping is efficient (their R10Z3 run, see Figures 6 and 10 therein). It is also indicated that large-scale turbulence is ineffective on clump scales once dust grains are locally concentrated (Klahr & Schreiber, 2020). This means that, even if the dust growth is limited by fragmentation before the clumping, dense regions shielded from the large-scale turbulence enable further dust growth owing to the lower-speed collisions. Besides, the clumping leads to the reduced drift velocity (see Figures 7 and 8 in Bai & Stone, 2010)2, which is due to the aerodynamical backreaction from dust to gas (Nakagawa et al., 1986). Thus, such dense regions are also preferable for dust grains to overcome the drift barrier. Footnote 2: Bai & Stone (2010) also found that the multiple-species effect reduces the drift velocity of each dust species (see solid and dashed (dash-dotted) lines in their Figure 8 for 2D (3D) simulations). We focus on the effect of high dust densities due to clumping (see the panels of R10Z1 and R10Z3 runs as well as Figure 5 therein). This paper is organized as follows. We describe our model in Sections 2 and 3. In Section 2, we describe the duration time before the strong clumping via streaming instability (a pre-clumping period) and the coagulation timescale. 
In Section 3, we describe models for dust collision velocities in clumping regions, which is based on the previous simulations of streaming instability. We then compare the coagulation timescale and the pre-clumping period in Section 4. We show that coagulation proceeds faster than the clumping. In Section 5, we give conclusions and brief discussion. ## 2 Timescale Model ### Pre-clumping periods We consider a situation where streaming instability has developed into the nonlinear phase and the dust clumping gradually and/or intermittently increases the local dust density as observed in numerical simulations (e.g., Johansen & Youdin, 2007; Johansen et al., 2007; Bai & Stone, 2010; Yang & Johansen, 2014; Yang et al., 2018; Schaffer et al., 2018; Xu & Bai, 2022). As mentioned in the previous section, such evolution is expected when the dust-to-gas mass ratio at the midplane is \(\gtrsim 0.35-2.5\)(Li & Youdin, 2021). Li & Youdin (2021) found the critical dust-to-gas mass ratio of \(\simeq 0.35-1\) for \(\mathrm{St}\gtrsim 0.02\) (see their Figure 4 and the sudden change of the critical value at \(0.01<\mathrm{St}<0.02\)). For smaller \(\mathrm{St}\), larger dust-to-gas ratio is required for the strong clumping, and it takes longer time than for \(\mathrm{St}=0.1-1\) according to Li & Youdin (2021) (see their Tables 1 and 2). Such situations will be more preferable for coagulation to operate before the strong clumping. We thus focus on the dust evolution for \(\mathrm{St}\gtrsim 0.01\). To investigate the possible effect of dust coagulation in clumps before the strong clumping, we estimate the coagulation timescale and compare it to the duration time of the pre-clumping phase (pre-clumping periods). We represent the pre-clumping period as \(\tau_{\mathrm{SI,sat}}\Omega^{-1}\), where \(\tau_{\mathrm{SI,sat}}\) is a numerical factor, and \(\Omega\) is the Keplerian angular velocity. In this period, we assume that turbulence driven by streaming instability is nearly saturated and that azimuthally elongated filaments have developed. Li & Youdin (2021) defined the pre-clumping phase as a phase between the first sedimentation 3 (\(t<t_{\mathrm{sedi-tr}}\), where \(t\) here denotes their simulation time) and the strong clumping phase (\(t>t_{\mathrm{pre-cl}}\))4. Table 2 in Li & Youdin (2021) shows \(t_{\mathrm{sedi-tr}}\) and \(t_{\mathrm{pre-cl}}\) for each Figure 1: Duration time of the pre-clumping phase reported in Li & Youdin (2021). They define the pre-clumping phase using (1) the time at which the first sedimentation phase ends (\(t_{\mathrm{sedi-tr}}\)) and (2) the time at which the dust density exceeds \(2\rho_{\mathrm{R}}/3\) (\(t_{\mathrm{pre-cl}}\)). These values are collected from their Table 2. The difference \(t_{\mathrm{pre-cl}}-t_{\mathrm{sedi-tr}}\) that we plot is the pre-clumping period. We refer to the simulations that adopt \(\mathrm{St}\geq 10^{-2}\) and show \(t_{\mathrm{pre-cl}}>t_{\mathrm{sedi-tr}}\). Color of filled circles represents St adopted in their simulations (see Table 1 in Li & Youdin, 2021). In this figure, the pre-clumping periods are given by \(t_{\mathrm{sim}}-t_{\mathrm{sedi-tr}}\) for weak/moderate clumping cases (see text for details, and see also Table 1 in Li & Youdin (2021)). 
Li & Youdin (2021) also conducted simulations with \(\mathrm{St}=10^{-3}\) (their Z4t0.1, Z3t0.1, and Z2t0.1 runs) and show the pre-clumping periods \(\gtrsim 475\Omega^{-1}\) (see their Tables 1 and 2), which is longer than \(\tau_{\mathrm{SI,sat}}\Omega^{-1}\) in our fiducial model. run. In Figure 1, we plot the pre-clumping period, \(t_{\rm pre-cl}-t_{\rm scdi-tr}\), for \({\rm St}\geq 0.01\) using the reported values in their table. We refer to the runs that show strong clumping and \(t_{\rm pre-cl}>t_{\rm scdi-tr}\) (below the gray line). We also refer to their simulations that only show weak/moderate clumping (i.e., \(\rho_{\rm d,max}<\rho_{\rm R}\); above the gray dashed line). For such cases, we plot \(t_{\rm sim}-t_{\rm scdi-tr}\), where \(t_{\rm sim}\) is the simulation time reported in Table 1 of Li and Youdin (2021) (e.g., Z0.3t30 run). Although Li and Youdin (2021) call them non-clumping cases (or weak clumping), the local dust-to-gas ratio increases to \(\sim 10\) in some of them (e.g., their Z0.3t30 and Z0.4t100 runs; see their interactive figure of the online version of Li and Youdin (2021)). We thus call them as the moderate clumping in the present paper. Such moderate clumping is enough for dust growth to proceed efficiently, which is shown in Section 4. According to Li and Youdin (2021) and Figure 1, the pre-clumping period ranges between \(\sim 10\Omega^{-1}-1000\Omega^{-1}\). For \({\rm St}<0.3\), the pre-clumping period is greater than \(100\Omega^{-1}\). Although \(\tau_{\rm SI,sat}\) should depend on the background \(\rho_{\rm d}/\rho_{\rm g}\) and \({\rm St}\), we just assume it to be a constant for simplicity. In this paper, we adopt \(\tau_{\rm SI,sat}\Omega^{-1}=100\Omega^{-1}\) as the fiducial pre-clumping period. ### Dust growth time If we assume the perfect sticking, the coagulation timescale is given by \[t_{\rm coag}=a\left(\frac{da}{dt}\right)^{-1} =3m\left(\frac{dm}{dt}\right)^{-1}, \tag{1}\] \[=3\frac{m}{\rho_{\rm d}4\pi a^{2}\Delta v}, \tag{2}\] where \(a\) and \(m\) denote the size and the mass of a spherical dust grain, \(\rho_{\rm d}\) is the dust density, \(\Delta v\) is the dust-dust collision velocity. Here, we assume equal-mass collisions for simplicity (see also Section 5). We focus on the mass-dominating dust sizes. If the number density distribution of dust \(dn(a)/da\) is proportional to \(a^{-3.5}\) or shallower, the mass-dominating dust size roughly corresponds to the largest dust size. Adopting the size distribution of \(d\ln n/d\ln a=-3.5\), Yang and Zhu (2021) conducted numerical simulations of polydisperse streaming instability and found dust segregation that concentrates large dust grains in dense regions. Since we focus on the dust growth in dense regions resulting from streaming instability, the assumption of the equal-mass collisions of the mass-dominating dust grains may be valid. In the Epstein regime, the Stokes number, \({\rm St}\equiv t_{\rm stop}\Omega\), is given as follows: \[{\rm St}=\sqrt{\frac{\pi}{8}}\frac{\rho_{\rm int}a}{\rho_{\rm g}c_{\rm s}}\Omega, \tag{3}\] where \(\rho_{\rm g}\) is the gas density. We then rewrite the coagulation timescale in terms of \({\rm St}\): \[\tau_{\rm coag}\equiv t_{\rm coag}\Omega=\sqrt{\frac{8}{\pi}}\frac{\rho_{\rm g }}{\rho_{\rm d}}\frac{{\rm St}c_{\rm s}}{\Delta v}. \tag{4}\] The coagulation timescale depends on the dust-to-gas ratio \(\rho_{\rm d}/\rho_{\rm g}\). An increase in \(\rho_{\rm d}/\rho_{\rm g}\) reduces \(\tau_{\rm coag}\), i.e. accelerates dust coagulation. 
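Equations (3)-(4) can be sketched directly; in the minimal Python example below the input numbers are purely illustrative (they are not taken from any particular simulation), and the collision velocity corresponds to the \(10^{-3}c_{\rm s}\) level reported for dust clumps in Section 3 below:

```python
import numpy as np

def stokes_number_epstein(a, rho_int, rho_g, c_s, Omega):
    """Equation (3): St of a grain of radius a in the Epstein drag regime."""
    return np.sqrt(np.pi / 8.0) * rho_int * a * Omega / (rho_g * c_s)

def tau_coag(St, dust_to_gas, c_s, dv):
    """Equation (4): dimensionless coagulation timescale t_coag * Omega."""
    return np.sqrt(8.0 / np.pi) * St / dust_to_gas * c_s / dv

# Illustrative numbers: St = 0.1 grains, a local dust-to-gas ratio of 10,
# c_s = 0.4 km/s, and a collision velocity of 1e-3 c_s = 0.4 m/s.
print(tau_coag(St=0.1, dust_to_gas=10.0, c_s=400.0, dv=0.4))   # ~16, i.e. ~16 / Omega
```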
If we assume no clumping and that large scale turbulence determines \(\Delta v\), one obtains \(\tau_{\rm coag}\sim\Sigma_{\rm g}/\Sigma_{\rm d}\),5 where \(\Sigma_{\rm g}\) and \(\Sigma_{\rm d}\) are the surface densities of gas and dust (e.g., Brauer et al., 2008). Footnote 5: We also assumed the following: (1) the vertical density distributions of gas and dust are the Gaussian profile, (2) the dust scale height \(H_{\rm d}\) is determined by the large scale turbulence, \(H_{\rm d}\sim H\sqrt{\alpha/{\rm St}}\), where \(H\equiv c_{\rm s}/\Omega\) is the gas scale height (e.g., Dubrulle et al., 1995), and (3) \(\rho_{\rm d}/\rho_{\rm g}\) in Equation (4) is the mid-plane dust-to-gas ratio. To take the effect of fragmentation into account, we also utilize the sticking efficiency \(p_{\rm eff}\) introduced in Okuzumi and Hirose (2012) and Okuzumi et al. (2016): \[p_{\rm eff}\equiv\min\left(1,-\frac{\ln\left(\Delta v/v_{\rm frag}\right)}{\ln 5 }\right), \tag{5}\] where \(v_{\rm frag}\) is the critical fragmentation velocity. We then scale the coagulation timescale with the sticking efficiency as follows: \[\tau_{\rm coag,eff}=\tau_{\rm coag}/p_{\rm eff}. \tag{6}\] The critical fragmentation velocity has been investigated by both laboratory experiments and numerical simulations (e.g., Blum and Wurm, 2000, 2008; Wada et al., 2009, 2013; Hasegawa et al., 2021). However, the value of \(v_{\rm frag}\) still seems uncertain. We thus adopt \(v_{\rm frag}=10\) m/s as a fiducial case (see also Section 5). This value has been used in the model treating coagulation and planetesimal formation due to streaming instability (e.g., Drazkowska et al., 2016). ## 3 Collision velocity model Once we model \(\Delta v\) as a function of dust properties, we can estimate how efficiently coagulation proceeds using Equation (6). As a simple model, we consider the following form of the collision velocity: \[\Delta v\propto c_{\rm s}{\rm St}^{A}(\rho_{\rm d}/\rho_{\rm g})^{B}, \tag{7}\] where \(A\) and \(B\) are constant. We assume the velocity dispersion of dust grains to be \(\Delta v\) in this work. The velocity dispersion has been measured in the previous numerical simulations (e.g., Johansen et al., 2009; Bai and Stone, 2010; Schaffer et al., 2018; Schreiber and Klahr, 2018; Yang & Zhu, 2021). We follow Schreiber & Klahr (2018) and Bai & Stone (2010) in this work since (1) Schreiber & Klahr (2018) showed the dependence of the velocity dispersion on the dust-to-gas ratio in small-domain simulations (e.g., see Figure 5 therein), and (2) Bai & Stone (2010) showed the velocity dispersion for larger \(\mathrm{St}\) with the \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}\)-dependence (e.g., see Figure 13 therein). ### Model 1: Schreiber & Klahr (2018) Schreiber & Klahr (2018) conducted 2D simulations with \(\mathrm{St}=0.01\) and \(0.1\) and with the background dust-to-gas ratio ranging from \(0.1\) to \(10^{3}\). Some of their simulations adopted very small domain size of \(10^{-3}H\), where \(H\) is the gas scale height. Velocity dispersion measured in such a small domain is suitable to discuss dust collisions at small spatial scales. They derived velocity dispersion with respect to the domain-averaged velocity and one with respect to the cell-averaged velocity. The former could include relative velocities between individual dust clumps. We refer to the latter, which seems to be more important for dust collisions. 
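Before the specific velocity fits are quoted, the sticking correction of Equations (5)-(6) can be made concrete. A short Python sketch follows; the collision velocities fed in are illustrative values, not simulation data, and the effective timescale is only meaningful for \(\Delta v<v_{\rm frag}\), where \(p_{\rm eff}>0\):

```python
import numpy as np

def p_eff(dv, v_frag):
    """Equation (5): sticking efficiency for collision velocity dv."""
    return min(1.0, -np.log(dv / v_frag) / np.log(5.0))

def tau_coag_eff(St, dust_to_gas, c_s, dv, v_frag):
    """Equation (6): coagulation timescale corrected for imperfect sticking
    (use only for dv < v_frag, where p_eff is positive)."""
    tau = np.sqrt(8.0 / np.pi) * St / dust_to_gas * c_s / dv   # Equation (4)
    return tau / p_eff(dv, v_frag)

# Behaviour of p_eff for v_frag = 10 m/s (illustrative dv values): sticking is
# perfect for dv <= v_frag/5 and drops to zero as dv approaches v_frag.
for dv in (0.5, 2.0, 5.0, 9.0):
    print(f"dv = {dv:4.1f} m/s  ->  p_eff = {p_eff(dv, v_frag=10.0):.2f}")
```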
We note that their simulations treat a single dust species, and thus the measured velocity dispersion is for the same dust sizes. This is sufficient for the purpose of this work since the previous coagulation simulations showed that the dust growth is dominated by similar-sized dust collisions (e.g., Okuzumi et al., 2012). They found that the velocity dispersion is almost constant for \(1\lesssim\rho_{\mathrm{d}}/\rho_{\mathrm{g}}\lesssim 10\), and \(\sim 10^{-3}c_{\mathrm{s}}\) for \(\mathrm{St}=0.1\). On the other hand, the velocity dispersion is proportional to \((\rho_{\mathrm{d}}/\rho_{\mathrm{g}})^{-1}\) for \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}\gtrsim 10\) (Figures 5, 11, 16 and 21 therein). As for the \(\mathrm{St}\)-dependence, their simulations show a relatively weak dependence with \(0<A<1\) (e.g., see their Figures 11 and 21 for the runs with the radial and vertical dimensions). Based on these results, we model the velocity dispersion (\(\Delta v=\Delta v_{1}\)) as follows: \[\Delta v_{1}=\begin{cases}10^{-3}c_{\mathrm{s}}\left(\frac{\mathrm{St}}{10^{- 1}}\right)^{0.5}&\left(\frac{\rho_{\mathrm{d}}}{\rho_{\mathrm{g}}}<10\right)\\ 10^{-3}c_{\mathrm{s}}\left(\frac{\mathrm{St}}{10^{-1}}\right)^{0.5}\left(\frac {\rho_{\mathrm{d}}/\rho_{\mathrm{g}}}{10}\right)^{-1}.&\left(\frac{\rho_{ \mathrm{d}}}{\rho_{\mathrm{g}}}\geq 10\right)\end{cases} \tag{8}\] This collision velocity \(\Delta v_{1}\) is lower than the maximum drift speed \(\eta v_{\mathrm{K}}\): \[\eta\equiv-\frac{1}{2}\left(\frac{c_{\mathrm{s}}}{v_{\mathrm{K}}}\right)^{2} \frac{d\ln P}{d\ln r}, \tag{9}\] where \(P\) is the gas pressure and \(r\) is the radial distance from the central star (e.g., Adachi et al., 1976; Weidenschilling, 1977). The maximum drift velocity is thus \(\eta v_{\mathrm{K}}\sim(c_{\mathrm{s}}/v_{\mathrm{K}})\times c_{\mathrm{s}}\). The disk aspect ratio \(c_{\mathrm{s}}/v_{\mathrm{K}}\) is \(\sim 10^{-2}-10^{-1}\) (e.g., D'Alessio et al., 1999). Thus, \(\Delta v_{1}\) is about ten times lower than the maximum drift speed. This collision velocity becomes even lower in the clumping regions of \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}>10\). This low collision velocity is favorable for dust growth without significant fragmentation. ### Model 2: Bai & Stone (2010) The simulations in Schreiber & Klahr (2018) are limited to the cases of \(\mathrm{St}=0.01\) and \(0.1\). As shown in the subsequent section, the clumping due to streaming instability can promote further dust growth: dust grains accumulated locally in a clump collide with each other, and their Stokes number will increase further. Thus, in addition to Model 1, we also refer to Bai & Stone (2010) who performed multiple-dust-size simulations with larger dust grains. The simulations of Bai & Stone (2010) also treats the vertical stratification in 2D and 3D domains while the simulations of Schreiber & Klahr (2018) considered unstratified disks with smaller domains. They assumed the uniform dust mass distribution over their assumed size ranges (see Section 2.3 therein). Among their simulations with different size ranges and total metallicities, we refer to the R10Z3 run, where the size range is \(0.1\leq\mathrm{St}\leq 1\) and the metallicity is \(0.03\). The reasons for referring to this run are as follows. First, the previous studies on dust coagulation in Figure 2: Comparison of our collision velocity model \(\Delta v_{2}\) and the simulation results of Bai & Stone (2010) (\(c_{\mathrm{s}}=0.99\) km/s). 
Solid and dash-dotted lines show the collision velocities measured in the R10Z3 run of Bai & Stone (2010) with 2D and 3D domains, respectively (see their Figure 13). Dashed lines show our model (Equation (10)). We show the collision velocities of each \(\mathrm{St}\) with different color. a disk show that the dust mass distribution tends to be top-heavy (e.g., Brauer et al., 2008; Birnstiel et al., 2012; Okuzumi et al., 2012). The uniform mass distribution in the numerical simulation of streaming instability would be valid if the size range is narrow and covers the mass-dominating sizes, as noted in Bai & Stone (2010). The R10Z3 run is one of the runs with the narrowest size range in the logarithmic space. Second, the focus of our study is the dust coagulation in the pre-clumping phase, and thus we need the collision velocity in dust dense regions. The R10Z3 run shows efficient dust concentration in both 2D and 3D simulations while the other runs labeled R21Z3 and R30Z3 show very small volume fraction of the dust dense region in 3D (see Figure 6 therein). The high volume fraction of the dust dense region in the R10Z3 run will ensure better statistics of the collision velocity. The dust collision velocity in the R10Z3 run is shown in their Figure 13. We focus on the equal mass collision for clear comparison with Model 1. According to their results, the equal mass collision velocity is larger for larger dust (Figure 2). As in Schreiber & Klahr (2018), Bai & Stone (2010) also show the decrease of the collision velocity in the dust-dense region. Based on their data, we construct the following model as Model 2 (\(\Delta v=\Delta v_{2}\)): \[\Delta v_{2}=\begin{cases}1.7\times 10^{-2}c_{\rm s}\left(\frac{\rm St}{1} \right)&\left(\frac{\rho_{\rm d}}{\rho_{\rm g}}<20\right)\\ 1.7\times 10^{-2}c_{\rm s}\left(\frac{\rm St}{1}\right)\left(\frac{\rho_{\rm d }/\rho_{\rm g}}{20}\right)^{-0.4}.&\left(\frac{\rho_{\rm d}}{\rho_{\rm g}}\geq 2 0\right)\end{cases} \tag{10}\] We derive the prefactor of \(1.7\times 10^{-2}\) and the \(\rho_{\rm d}/\rho_{\rm g}\)-dependence by fitting \(\Delta v_{2}/\rm St\) to the collision velocities of \(\rm St=10^{-0.5}\) and \(1\) in the high-density regions of \(\rho_{\rm d}/\rho_{\rm g}>20\). We note that their simulations show higher abundance of particles of \(\rm St=10^{-0.5}\) and \(1\) than particles of \(\rm St=10^{-1}\) (see Figure 6 of Bai & Stone, 2010), which may indicate better statistics for larger particles. We thus use the larger particle data for the fitting. In Figure 2, we compare \(\Delta v_{2}\) and the simulation data of Bai & Stone (2010) for \(\rm St=10^{-1},\ 10^{-0.5}\), and \(1\). Our model underestimates the collision velocity for \(\rm St=10^{-1}\) and \(\rho_{\rm d}/\rho_{\rm g}>10^{2}\). Thus, the collisional growth of dust particles of \(\rm St=10^{-1}\) is more efficient than our model predicts unless the critical fragmentation velocity is low. Our model overestimates the collision velocity of \(\rm St=1\) for \(\rho_{\rm d}/\rho_{\rm g}\lesssim 10\), and thus the coagulation efficiency is overestimated by a factor of a few. The collision velocity of \(\rm St=10^{-0.5}\) is better represented by our model. The collision velocity \(\Delta v_{2}\) is greater than \(\Delta v_{1}\), which may be due to the vertical motion in the stratified disk. The collision velocity can be comparable to the maxi Figure 3: Ratio of the pre-clumping period \(\tau_{\rm SI,sat}\) and the coagulation timescale \(\tau_{\rm coag,eff}\). 
We assume the gas temperature of \(50\) K (\(c_{\rm s}\simeq 0.4\) km/s) as an example. We note that we take into account the effect of the possible imperfect sticking (Equation (6)). The blue (red) region indicates that the growth time is shorter (longer) than the pre-clumping period before the strong clumping. We naively expect that the dust evolves along the white region (\(\tau_{\rm SI,sat}\sim\tau_{\rm coag,eff}\)) on this \(\rho_{\rm d}/\rho_{\rm g}-\rm St\) plane (see also Figure 4). mum drift velocity for \(\rho_{\rm d}/\rho_{\rm g}<20\) or only slightly lower. Nevertheless, the dust enhancement of \(\rho_{\rm d}/\rho_{\rm g}>20\) reduces the collision velocity and makes fragmentation less efficient. This is thus favorable for dust growth (see also Johansen et al., 2009). ## 4 Results ### Dust evolution on \(\rho_{\rm d}/\rho_{\rm g}-{\rm St}\) plane Figure 3 shows \(\tau_{\rm SI,sat}/\tau_{\rm coag,eff}\) as a function of \(\rho_{\rm d}/\rho_{\rm g}\) and \({\rm St}\). The collision velocity used to plot the left panel is the Model 1 while that for the right panel is the Model 2. We assume the gas temperature of 50 K and thus \(c_{\rm s}\simeq 0.4\) km/s as an example. We note that the gas temperature affects the coagulation timescale only through \(p_{\rm eff}\) (see also Equation (4)). We find that, for the high dust-to-gas ratio, coagulation proceeds before the strong clumping (\(\tau_{\rm SI,sat}>\tau_{\rm coag,eff}\)). The strong clumping dominates over coagulation when \(\rho_{\rm d}/\rho_{\rm g}\) is relatively small or \({\rm St}\) is sufficiently large. We note that, regardless of low collision velocities, the dust growth timescale in clumps is comparable to or shorter than the growth timescale in the non-clumping case where \(t_{\rm coag}\sim 100\Omega^{-1}\) for \(\Sigma_{\rm d}/\Sigma_{\rm g}=0.01\)(Brauer et al., 2008). One can see how the dust properties evolve with time on the \(\rho_{\rm d}/\rho_{\rm g}-{\rm St}\) plane (see also Figure 4). In the blue region (\(\tau_{\rm SI,sat}>\tau_{\rm coag,eff}\)), the coagulation is more efficient than the dust clumping. This means that \({\rm St}\) increases faster than \(\rho_{\rm d}/\rho_{\rm g}\), and thus the dust evolution is in the vertical direction and upward on the \(\rho_{\rm d}/\rho_{\rm g}-{\rm St}\) plane. On the other hand, in the red region, the dust clumping is faster than the coagulation, which means the efficient increase of \(\rho_{\rm d}/\rho_{\rm g}\). The dust evolution is then in the horizontal direction and rightward. In this way, we naively expect that the dust moves first to the critical line of \(\tau_{\rm SI,sat}\sim\tau_{\rm coag,eff}\) and then roughly along it, which is independent from initial values of \({\rm St}\) and \(\rho_{\rm d}/\rho_{\rm g}\). In the case of the Model 1 (the left panel of Figure 3), the critical line is inclined for \(\rho_{\rm d}/\rho_{\rm g}<10\), and thus both coagulation and clumping appear to proceed. Once the dust-to-gas ratio becomes larger than 10, the coagulation dominates over the clumping for \(10^{-2}\leq{\rm St}\leq 1\). The coagulation timescale in this regime is given by \[\tau_{\rm coag,eff}=p_{\rm eff}^{-1}\sqrt{\frac{8}{\pi}}10^{1.5}\sqrt{{\rm St}}, \tag{11}\] which is independent from \(\rho_{\rm d}/\rho_{\rm g}\). 
Thus, the coagulation timescale is comparable to \(\tau_{\rm SI,sat}\) for the following stopping time: \[{\rm St}=p_{\rm eff}^{2}\frac{\pi}{8}10^{-3}\tau_{\rm SI,sat}^{2}\simeq 4p_{ \rm eff}^{2}\left(\frac{\tau_{\rm SI,sat}}{100}\right)^{2}. \tag{12}\] For the perfect sticking case (\(p_{\rm eff}=1\)), the clumping dominates over the coagulation for \({\rm St}>0.36\) and \({\rm St}>4\) when \(\tau_{\rm SI,sat}\) is 30 and 100, respectively. In this region, the dust density increases efficiently (see also Figure 4). In the case of the Model 2, the coagulation timescale is comparable to \(\tau_{\rm SI,sat}=100\) for \(\rho_{\rm d}/\rho_{\rm g}\sim 1\) when dust grains are relatively small (\({\rm St}\lesssim 0.3\)). Thus, coagulation dominates the dust evolution, and the dust moves along the vertical white region. As the dust becomes larger (\({\rm St}\gtrsim 0.3\)), the collision velocity becomes high, and collisions lead to the imperfect sticking. As a result, the clumping dominates for \(\rho_{\rm d}/\rho_{\rm g}\sim 1\) and \({\rm St}\gtrsim 0.3\), leading to an increase in the dust density. For high dust-to-gas ratios (\(\rho_{\rm d}/\rho_{\rm g}>10\)), the coagulation again becomes faster than the clumping. Therefore, we expect coagulation to be important for Model 2 as well. Bai & Stone (2010) also discussed the combined effect of coagulation and the clumping, indicating the positive feedback between them (see Section 6 therein). Our results are consistent with their discussion. In Figure 5, we show the \(\tau_{\rm SI,sat}\)-dependence of the critical line (\(\tau_{\rm SI,sat}=\tau_{\rm coag,eff}\)) for both Model 1 (solid lines) and Model 2 (dotted lines). As shown by Equation Figure 4: Schematic figure to show dust evolution on the \(\rho_{\rm d}/\rho_{\rm g}-{\rm St}\) plane. We assume Model 1 for the collision velocity as an example. The dust moves upward if the coagulation timescale is shorter than \(t_{\rm SI,sat}\) (the blue region). For \(\tau_{\rm SI,sat}<\tau_{\rm coag,eff}\), the clumping increases the dust density and thus the dust moves rightward on the \(\rho_{\rm d}/\rho_{\rm g}-{\rm St}\) plane (the red region). Therefore, we naively expect that the dust first moves to the boundary of the blue and red regions and then moves along it, which is independent from initial values of \({\rm St}\) and \(\rho_{\rm d}/\rho_{\rm g}\). The gray region indicates the maximum density that can be expected via the strong clumping due to streaming instability. (12), the critical line for the Model 1 becomes horizontal at \(\mathrm{St}\simeq 0.36\) and \(\mathrm{St}=1\) for \(\tau_{\mathrm{SI,sat}}=30\) and \(50\), respectively. Thus, the clumping takes over the coagulation earlier in the smaller-\(\tau_{\mathrm{SI,sat}}\) cases. In the Model 2, larger \(\tau_{\mathrm{SI,sat}}\) leads to wider ranges of \(\mathrm{St}\) where coagulation dominates the clumping (see the regions where the dotted lines are inclined). Thus, even in the presence of the imperfect sticking, coagulation can be the mechanism to govern the dust evolution. According to Schreiber and Klahr (2018), the dust-to-gas ratio increases by a factor of \(\sim\)10-100 from the background value via the dust clumping in the nonlinear phase of turbulence due to streaming instability. Other studies also show a similar increase (e.g., Yang and Johansen, 2014; Yang et al., 2017). 
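The ratio mapped in Figure 3 can be reproduced directly from Equations (4)-(6), (8) and (10). The Python sketch below evaluates \(\tau_{\rm SI,sat}/\tau_{\rm coag,eff}\) for both velocity models on a few illustrative \((\rho_{\rm d}/\rho_{\rm g},\ \mathrm{St})\) points of the fiducial case (\(c_{\rm s}\simeq 0.4\) km/s, \(v_{\rm frag}=10\) m/s, \(\tau_{\rm SI,sat}=100\)) and checks the prefactor of Equation (12):

```python
import numpy as np

def dv_model1(St, eps, c_s):
    """Equation (8): cell-level velocity dispersion after Schreiber & Klahr (2018)."""
    dv = 1.0e-3 * c_s * np.sqrt(St / 0.1)
    return dv if eps < 10.0 else dv * (eps / 10.0) ** -1.0

def dv_model2(St, eps, c_s):
    """Equation (10): fit to the R10Z3 run of Bai & Stone (2010)."""
    dv = 1.7e-2 * c_s * St
    return dv if eps < 20.0 else dv * (eps / 20.0) ** -0.4

def tau_coag_eff(St, eps, c_s, dv, v_frag):
    """Equations (4)-(6): effective growth time in units of 1/Omega."""
    peff = min(1.0, -np.log(dv / v_frag) / np.log(5.0))
    return np.sqrt(8.0 / np.pi) * St / eps * c_s / dv / peff

# Fiducial case of Figure 3: T = 50 K (c_s ~ 0.4 km/s), v_frag = 10 m/s, tau_SI,sat = 100.
c_s, v_frag, tau_SI = 0.4e3, 10.0, 100.0
for eps in (1.0, 10.0, 100.0):
    for St in (0.01, 0.1, 1.0):
        r1 = tau_SI / tau_coag_eff(St, eps, c_s, dv_model1(St, eps, c_s), v_frag)
        r2 = tau_SI / tau_coag_eff(St, eps, c_s, dv_model2(St, eps, c_s), v_frag)
        print(f"eps={eps:5.0f} St={St:4.2f}  tau_SI/tau_coag:  M1={r1:6.1f}  M2={r2:6.1f}")

# Consistency check of Equation (12): with Model 1 and eps >= 10, the critical
# Stokes number where tau_coag,eff = tau_SI,sat is St ~ 4 * p_eff^2 * (tau_SI/100)^2.
print((np.pi / 8.0) * 1.0e-3 * tau_SI**2)   # ~3.9, i.e. St_crit ~ 4 for p_eff = 1
```

Values above unity correspond to the blue (coagulation-dominated) region of Figure 3, and values below unity to the red (clumping-dominated) region.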
Although Li and Youdin (2021) show that the maximum dust density \(\rho_{\mathrm{d,max}}\) can be \(\sim 10^{3}\rho_{\mathrm{g}}\) in their simulations, the dust-to-gas ratio has a time variation and oscillates in \(\sim 10^{2}\rho_{\mathrm{g}}-10^{3}\rho_{\mathrm{g}}\) (e.g., see Figures 3 and 12 therein). Thus, it would be reasonable to expect \(\rho_{\mathrm{d,max}}/\rho_{\mathrm{g}}\) to be \(\sim 10^{2}\) on average if the clumping starts at \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}\sim 1\). This means that, even if the dust moves to the clumping regime (e.g., the red region in Figure 4), the dust density stops increasing around \(\rho_{\mathrm{d,max}}\) (the gray region in Figure 4) unless many resulting filaments merge so that the density increases further (see also Li et al., 2018, for the dependence of the maximum density on the mass reservoir). The fate of the dust clumps depends on whether or not \(\rho_{\mathrm{d,max}}\) is greater than the Roche density \(\rho_{\mathrm{R}}\equiv 9\Omega^{2}/4\pi G\), where \(G\) is the gravitational constant. On the one hand, the dust clumps have the potential to collapse self-gravitationally for \(\rho_{\mathrm{d,max}}>\rho_{\mathrm{R}}\). For the self-gravitational collapse, it is also required that the self-gravity of the dust clump dominates over the turbulent diffusion (e.g., Gerbig et al., 2020; Klahr and Schreiber, 2020, 2021). Klahr and Schreiber (2020) found that dust clumps larger than \(60-120\) km can collapse, which is based on the solar nebula model of Lenz et al. (2020). On the other hand, for \(\rho_{\mathrm{d,max}}<\rho_{\mathrm{R}}\), the dust evolves only through coagulation. Although the coagulation in such a regime takes a longer time than \(\tau_{\mathrm{SI,sat}}\Omega^{-1}\), the large dust-to-gas ratio leads to lower collision velocity, and dust grains will avoid significant fragmentation (see also Tominaga et al., 2022, for a similar process). Besides, the dust drift becomes slower for \(\mathrm{St}>1\), which also helps dust grains to grow beyond the drift barrier. In the above estimate, we assume \(T=50\) K, and thus the results are applicable in the context of icy planetesimal formation. To investigate the temperature dependence, we also estimate \(\tau_{\mathrm{SI,sat}}/\tau_{\mathrm{cong,eff}}\) and the collision velocity for \((T,v_{\mathrm{frag}})=(100\ \mathrm{K},\ 10\ \mathrm{m/s})\) and \((200\ \mathrm{K},\ 3\ \mathrm{m/s})\). We adopt smaller \(v_{\mathrm{frag}}\) in the case of \(T=200\) K since silicate grains are often assumed to be more fragile than water ice (but see also Kimura et al., 2015; Steinpilz et al., 2019). Figure 6 shows the critical lines and the collision velocities. In the case of the Model 1 with \(T=100\) K (the top left panel), the critical line is the same as in Figure 3, since \(p_{\mathrm{eff}}\) is unity in the plotted region and \(\tau_{\mathrm{cong}}\) is independent of \(c_{\mathrm{s}}\) (see Equations (6) and (8)). In the case of the Model 2 (the top right panel), the region where the imperfect sticking occurs is larger than in Figure 3. Nevertheless, once \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}\) increases beyond \(\simeq 20\), the coagulation dominates over the clumping even for large dust of \(\mathrm{St}\sim 1\). 
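The warmer cases just introduced can be checked numerically. The sketch below converts temperature to sound speed assuming a mean molecular weight \(\mu=2.34\) (an assumption on our part; the text only quotes the resulting \(c_{\rm s}\) values) and evaluates the two velocity models against the adopted \(v_{\rm frag}\):

```python
import numpy as np

k_B, m_H, mu = 1.381e-23, 1.673e-27, 2.34   # mu = 2.34 is an assumed mean molecular weight

def c_s(T):
    """Isothermal sound speed in m/s; gives ~0.4/0.6/0.8 km/s for 50/100/200 K."""
    return np.sqrt(k_B * T / (mu * m_H))

def dv1(St, eps, cs):   # Equation (8)
    return 1e-3 * cs * np.sqrt(St / 0.1) * (1.0 if eps < 10 else (eps / 10) ** -1.0)

def dv2(St, eps, cs):   # Equation (10)
    return 1.7e-2 * cs * St * (1.0 if eps < 20 else (eps / 20) ** -0.4)

# The three temperature cases discussed in the text, with their adopted v_frag.
for T, v_frag in ((50, 10.0), (100, 10.0), (200, 3.0)):
    cs = c_s(T)
    print(f"T={T:3d} K, c_s={cs/1e3:.2f} km/s, v_frag={v_frag:4.1f} m/s: "
          f"dv1(St=0.5, eps=10)={dv1(0.5, 10, cs):.2f} m/s, "
          f"dv2(St=1, eps=100)={dv2(1.0, 100, cs):.2f} m/s")
```

For the 200 K case this gives Model 1 collision velocities below the adopted 3 m/s in moderately dense clumps, while Model 2 still exceeds it at \(\rho_{\rm d}/\rho_{\rm g}=100\), in line with the discussion that follows.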
In the cases of \((T,v_{\mathrm{frag}})=(200\ \mathrm{K},\ 3\ \mathrm{m/s})\), dust grains suffer fragmentation even with the Model 1 since \(\Delta v_{1}\) becomes high (e.g., \(\simeq 1\ \mathrm{m/s}\) for \(\mathrm{St}>0.1\) and \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}<10\); see the bottom left panel). The dust growth is thus delayed compared to the perfect sticking case. Nevertheless, the collision velocity is still lower than \(3\ \mathrm{m/s}\) by a factor of \(\lesssim 0.6\) (e.g., \(\Delta v_{1}/v_{\mathrm{frag}}\simeq 0.6\) for \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}=10\) and \(\mathrm{St}=0.5\); see Equation (8)). For the Model 2, the collision velocity is very close to \(v_{\mathrm{frag}}\) for \(\mathrm{St}\gtrsim 0.2\) along the critical line (the bottom right panel). Fragmentation significantly slows the dust growth down, and thus the increase in \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}\) via streaming instability dominates over coagulation in a wider parameter space than in the top panels and in Figure 3. If the strong clumping or the merger of filaments leads to \(\rho_{\mathrm{d}}/\rho_{\mathrm{g}}\sim 10^{3}\), silicate dust grains can grow toward \(\mathrm{St}=1\) for the Model 2 (see Section 4.2). We also estimate the drift timescale during the dust evolution along the white line in Figure 6. The drift Figure 5: Dependence of the critical line (\(\tau_{\mathrm{SI,sat}}=\tau_{\mathrm{cong,eff}}\)) on \(\tau_{\mathrm{SI,sat}}\). The solid and dotted lines represent cases where we adopt Model 1 and Model 2, respectively. As in Figure 3, we assume the gas temperature of \(50\ \mathrm{K}\) to plot this figure. The critical \(\mathrm{St}\) for which the solid line becomes horizontal (see the case of \(\tau_{\mathrm{SI,sat}}=30,\ 50\)) is given by Equation (12). timescale \(t_{\rm drift}\) is given by \[t_{\rm drift}\equiv\frac{r}{\left|v_{\rm drift}\right|}=\frac{\left(1+\epsilon \right)^{2}+\rm St^{2}}{2\rm St\eta\Omega}, \tag{13}\] where we use the drift velocity \(v_{\rm drift}\) that depends on both \(\rm St\) and \(\epsilon\equiv\rho_{\rm d}/\rho_{\rm g}\)(Nakagawa et al., 1986). The timescale of the dust evolution along the white line is on the order of \(\tau_{\rm SI,sat}\Omega^{-1}\) since the coagulation timescale is comparable to the clumping timescale. Streaming instability and the coagulation in clumps will be ineffective if \(t_{\rm drift}\) is smaller than \(\tau_{\rm SI,sat}\Omega^{-1}\) from the beginning. We thus consider \(t_{\rm drift}>\tau_{\rm SI,sat}\Omega^{-1}\) in the initial state (\(\epsilon,\ \rm St\)) = (\(\epsilon_{0},\ \rm St_{0}\)). Assuming the change in \(\eta\) to be insignificant compared to \(\rm St\) and \(\rho_{\rm d}/\rho_{\rm g}\), we obtain an increasing factor of the drift timescale with respect to the initial value: \[\frac{\tau_{\rm drift}(\epsilon,\ \rm St)}{\tau_{\rm drift}( \epsilon_{0},\ \rm St_{0})} = \frac{\rm St_{0}}{\rm St}\frac{\left(1+\epsilon\right)^{2}+\rm St ^{2}}{\left(1+\epsilon_{0}\right)^{2}+\rm St_{0}^{2}}, \tag{14}\] \[\sim \frac{\rm St_{0}}{\rm St}\left(\frac{1+\epsilon}{1+\epsilon_{0} }\right)^{2}.\] Figure 6: Collision velocities as a function of \(\rm St\) and \(\rho_{\rm d}/\rho_{\rm g}\) for \(T=100\) K and \(200\) K, for which \(c_{\rm s}\) is \(\simeq 0.6\) km/s and \(\simeq 0.8\) km/s, respectively. 
The adopted critical fragmentation velocity in each case is \(v_{\rm frag}=10\) m/s and \(v_{\rm frag}=3\) m/s, which is based on the fact that silicate grains are often assumed to be more fragile than water ice (but see also Kimura et al., 2015; Steinpilz et al., 2019). The white line marks the critical line (\(\tau_{\rm SI,sat}=\tau_{\rm coag,eff}\)). The arrows show the direction of the dust evolution on \(\rho_{\rm d}/\rho_{\rm g}-\rm St\) plane. Dust grains suffer fragmentation in a wider parameter space than in Figure 3 because of higher temperature and smaller \(v_{\rm frag}\). Nevertheless, dust growth time is shorter than the pre-clumping period space once the local \(\rho_{\rm d}/\rho_{\rm g}\) increases beyond \(15\) in the Model 1 for \(T=200\) K (20 in the Model 2 for \(T=100\) K). In the Model 2 with \(T=200\) K (the bottom right panel), the critical line is close to the fragmentation-limited line (\(\Delta v=v_{\rm frag}\)) for \(\rm St\gtrsim 0.2\). Figure 7: Isolines of the collision velocity of the Model 2 \(\Delta v_{2}\) with \(\rm St=1\) on \(r-\rho_{\rm d}\) plane. We adopt 120 K(\(r/1\) au)\({}^{-3/7}\)(Chiang & Youdin, 2010) and plot lines of 3 m/s, 6 m/s and 10 m/s. Note that the collision velocity decreases as \(\rho_{\rm d}/\rho_{\rm g}\) increases for \(\rho_{\rm d}/\rho_{\rm g}\geq 20\) (Equation (10)). where \(\tau_{\rm drift}\equiv t_{\rm drift}\Omega\) and we assumed \((1+\epsilon)^{2}\gg{\rm St}^{2}\) and \((1+\epsilon_{0})^{2}\gg{\rm St}_{0}^{2}\) considering the dust evolution along the white lines in Figure 6. In the case of the top left panel of Figure 6, the white line shows \({\rm St}\propto\epsilon^{2}\), and the drift timescale is almost constant along the white line. The slopes of the white lines on the other panels are shallower for \(\Delta v/v_{\rm frag}>0.2\), meaning \(\tau_{\rm drift}(\epsilon,\ {\rm St})>\tau_{\rm drift}(\epsilon_{0},\ {\rm St}_{0})\) for \(\epsilon>\epsilon_{0}\). Therefore, the drift speed of dust grains does not increase during their growth in clumps and the clumping. The clumping can help dust grains to overcome the drift barrier (see also Bai and Stone, 2010).6 Footnote 6: Because of the \(r\)-dependence of \(\Omega\), \(t_{\rm drift}(\epsilon,\ {\rm St})\) can be shorter than \(t_{\rm drift}(\epsilon_{0},\ {\rm St}_{0})\). However, the coagulation timescale and the clumping timescale also scale with \(\Omega^{-1}\)(see Brauer et al., 2008, for the coagulation timescale). We thus ignore the factor of \(\Omega\). ### On the possible fragmentation limit during streaming instability For the Model 1, we find that fragmentation affects the dust growth, but it just moderately slows the dust growth down. The collision velocity of the Model 2 can be, however, very close to \(v_{\rm frag}=3\ {\rm m/s}\) even for \(\rho_{\rm d}/\rho_{\rm g}\gg 10\). This indicates that, in the case of the Model 2, efficient fragmentation regulates or limits the dust growth unless the clumping increases \(\rho_{\rm d}/\rho_{\rm g}\) sufficiently. To see where and when the fragmentation prevents the dust growth beyond \({\rm St}=1\), we adopt a simple temperature profile, \(T=120\ {\rm K}(r/1\ {\rm au})^{-3/7}\)(Chiang and Youdin, 2010), and see the \(r-\)dependence of the collision velocity of the Model 2. Since the velocity also depends on \(\rho_{\rm d}/\rho_{\rm g}\), we assume \({\rm St}=1\) and plot isolines of the collision velocity. 
From Equation (10) with \(\rho_{\rm d}/\rho_{\rm g}\geq 20\) and \({\rm St}=1\), one gets a formula of an isoline, \(\Delta v_{2}({\rm St}=1)=\Delta v\), \[\frac{\rho_{\rm d}}{\rho_{\rm g}}\simeq 75\left(\frac{c_{\rm s}}{1\ {\rm km/s}} \right)^{2.5}\left(\frac{\Delta v}{10\ {\rm m/s}}\right)^{-2.5}. \tag{15}\] In Figure 7, we plot three isolines of the collision velocity of the Model 2 \(\Delta v_{2}({\rm St}=1)\) on the \(r-\rho_{\rm d}\) plane. We note that the collision velocity decreases as \(\rho_{\rm d}/\rho_{\rm g}\) increases, and also note that the vertical axis of Figure 7 represents the dust-to-gas ratio in dust clumps. For the adopted temperature model, the collision velocity can be lower than \(3\ {\rm m/s}\) at \(r\lesssim 0.5\ {\rm au}\) (\(T\gtrsim 160\ {\rm K}\)) if the dust-to-gas ratio in a clump exceeds \(\sim 800\). If \(v_{\rm frag}\) is \(\simeq 6\ {\rm m/s}\) for silicate dust with \(0.1\ {\rm\mu m}\)-sized monomers (Wada et al., 2009), the required \(\rho_{\rm d}/\rho_{\rm g}\) is about five times smaller (see also Equation (15)). Growth of silicate dust grains is even easier if their surface energy is ten times higher than previously assumed, which is indicated in Kimura et al. (2015) and Steinpiz et al. (2019). For the temperature profile we adopted, the collision velocity of the Model 2 for \(\rho_{\rm d}/\rho_{\rm g}\geq 20\) is lower than \(10\ {\rm m/s}\) beyond \(\simeq 1.6\ {\rm au}\). Thus, regardless of higher collision velocity than the Model 1, the growth of icy dust grains will not be inhibited in the turbulent state due to streaming instability. ## 5 Conclusions and Discussion Based on the previous numerical studies of streaming instability, we model the collision velocity and compare the coagulation timescale and the duration time of the pre-clumping phase (the pre-clumping period). We show that even moderately increased dust density due to streaming instability promotes dust coagulation (Figure 3). It is expected that dust evolves roughly along the line of \(\tau_{\rm coag,eff}\sim\tau_{\rm SI,sat}\) on the \(\rho_{\rm d}/\rho_{\rm g}-{\rm St}\) plane (Figure 4). The combination of dust growth and the clumping might allow dust growth toward and beyond \({\rm St}=1\) if the clumping leads to sufficient deceleration of dust drift. Our results highlight the importance of numerical simulations that consider both coagulation and streaming instability. Once dust grains grow beyond \({\rm St}=1\) via the rapid coagulation, streaming instability becomes inefficient to sustain the clumping state because of weak dust-gas coupling. Such large dust grains will then settle toward the midplane, gradually increasing the midplane dust density. If a sufficient amount of large dust grains form and settle, planetesimal formation via gravitational instability will occur (e.g., Michikoshi et al., 2010; Michikoshi and Kokubo, 2017). If the increase of the midplane dust density is too slow, the evolution of the large dust grains might be regulated by erosion or mass transfer through collisions with remaining smaller dust grains (Krijt et al., 2015; Hasegawa et al., 2021). One important parameter in this study is the pre-clumping period \(\tau_{\rm SI,sat}\). Although we assume \(\tau_{\rm SI,sat}\) to be constant, their simulations show shorter periods for larger dust sizes (see their Tables 1 and 2, and Figure 1 of the present paper). Therefore, the dust growth during the pre-clumping phase might accelerate the clumping. 
This positive feedback process was also indicated in Bai and Stone (2010) and will operate even if \(\rho_{\rm d}/\rho_{\rm g}\) increases only up to a few or \(\sim 10\) by the moderate clumping (e.g., Z0.6t2 and Z0.3t30 runs in Li and Youdin (2021)) since dust grains should grow gradually. The efficiency of the feedback will depend on a production rate of larger dust grains due to the coagulation in clumps, which we will address in our future work. In the present model calculations, we assume equal-mass collisions, where the collision velocity is primarily due to turbulent motion. If we consider unequal-mass collisions, the coagulation timescale may become shorter since dust grains also collide through relative drift motion. We note that collisions in dense clumps are still necessary to avoid fragmentation since collision velocity of unequal-mass dust grains is higher outside the clumps (see Johansen et al., 2009; Bai and Stone, 2010). Dense clumps will also help dust grains to overcome the drift barrier since the drift velocity is reduced in dense regions (Nakagawa et al., 1986; Bai and Stone, 2010). On the other hand, recent studies show that streaming instability is less efficient when one includes a dust size distribution (Krapp et al., 2019; Paardekooper et al., 2020; McNally et al., 2021; Zhu and Yang, 2021; Yang and Zhu, 2021). This may indicate that it takes longer time to achieve the strong clumping. Therefore, coagulation will be even more effective in this case than the clumping, which further motivates us to consider coagulation during the pre-clumping phase. Further quantitative studies are necessary since the impact of the size distribution depends on its slope and shape (Zhu and Yang, 2021; McNally et al., 2021). In our fiducial case, we assume \(T=50\) K and consider icy dust grains (see also the top panels of Figure 6 for \(T=100\) K). Numerical simulations of dust aggregate collisions showed \(v_{\rm frag}\simeq 30-100\) m/s for water ice with a monomer size of \(0.1~{}\mu\)m (e.g., Wada et al., 2009; Hasegawa et al., 2021). The fragmentation velocity can be about two times smaller than these values for a monomer size of \(\simeq 0.2~{}\mu\)m, which is in the possible range of monomer sizes observationally indicated by Tazaki and Dominik (2022). The assumed \(v_{\rm frag}\) in this work is lower than these values, meaning that the efficiency of coagulation will be higher than we estimated (see also Figure 7). Therefore, coagulation should be taken into account when one investigates icy planetesimal formation via streaming instability. In the case of silicate grains, the coagulation efficiency will be lower (the bottom panels of Figure 6) since usually assumed \(v_{\rm frag}\) for silicate is lower. Regardless of small \(v_{\rm frag}\), we show that the collision velocity of the Model 1 is kept below 3 m/s for \(c_{\rm s}\simeq 0.8\) km/s in moderately clumping regions with \(\rho_{\rm d}/\rho_{\rm g}>10\). This suggests that even silicate dust grains can grow toward the size of \({\rm St}=1\). In the case of the Model 2 (the bottom right panel of Figure 6), silicate dust grains can grow toward \({\rm St}=1\) at the strong clumping phase with \(\rho_{\rm d}/\rho_{\rm g}\sim 10^{3}\). This required value of \(\rho_{\rm d}/\rho_{\rm g}\) strongly depends on \(v_{\rm frag}\) (see Figure 7 and Equation (15)). 
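As a quick illustration of that dependence, Equation (15) can be evaluated for the temperature profile adopted in Section 4.2. The sketch below assumes \(\mu=2.34\) for the sound speed (so the numbers differ slightly from the \(\sim 800\) quoted above), and the result is only meaningful where it is \(\geq 20\), the dense branch of Equation (10):

```python
import numpy as np

k_B, m_H, mu = 1.381e-23, 1.673e-27, 2.34          # mu is an assumed mean molecular weight

def required_dust_to_gas(r_au, dv, T1au=120.0, q=3.0 / 7.0):
    """Equation (15): clump dust-to-gas ratio needed so that dv2(St=1) <= dv,
    for the temperature profile T = 120 K (r/au)^(-3/7) of Section 4.2."""
    T = T1au * r_au ** (-q)
    cs_km_s = np.sqrt(k_B * T / (mu * m_H)) / 1.0e3
    return 75.0 * cs_km_s ** 2.5 * (dv / 10.0) ** -2.5

# Results below ~20 mean the condition already holds at rho_d/rho_g = 20,
# i.e. outside the validity range of Equation (15).
for r in (0.5, 1.0, 1.6, 3.0):
    print(f"r = {r:3.1f} au: dv2(St=1) <= 3 m/s needs rho_d/rho_g >~ "
          f"{required_dust_to_gas(r, 3.0):7.0f}; "
          f"<= 10 m/s needs >~ {required_dust_to_gas(r, 10.0):5.1f}")
```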
The silicate dust will grow more easily during the clumping if \(v_{\rm frag}\) is \(\simeq 6\) m/s for silicate dust with \(0.1~{}\mu\)m-sized monomers (Wada et al., 2009) or higher (Kimura et al., 2015; Steinpilz et al., 2019). In our future work, we will address the time evolution of dust sizes and collision velocities during the clumping in more detail. We thank Xuening Bai for providing us with their simulation data that we used to verify our collision velocity model and for helpful comments. We also thank the anonymous referee for constructive comments that helped us to improve the manuscript. This work was supported by JSPS KAKENHI Grant Nos. 21K20385 (R.T.T.) and 19K03941 (H.T.). R.T.T. is also supported by the RIKEN Special Postdoctoral Researchers Program.
2303.13255
ReLo: a Dynamic Logic to Reason About Reo Circuits
Critical systems require high reliability and are present in many domains. They are systems in which failure may result in financial damage or even loss of lives. Standard techniques of software engineering are not enough to ensure the absence of unacceptable failures and/or that critical requirements are fulfilled. Reo is a component-based modelling language that aims to provide a framework to build software based on existing pieces of software, which has been used in a wide variety of domains. Its formal semantics provides grounds to certify that systems based on Reo models satisfy specific requirements (i.e., absence of deadlocks). Current logical approaches for reasoning over Reo require the conversion of formal semantics into a logical framework. ReLo is a dynamic logic that naturally subsumes Reo's semantics. It provides a means to reason over Reo circuits. This work extends ReLo by introducing the iteration operator, and soundness and completeness proofs for its axiomatization.
Erick Grilo, Bruno Lopes
2023-03-23T13:38:07Z
http://arxiv.org/abs/2303.13255v1
# _ReLo_: a Dynamic Logic to Reason About Reo Circuits+ ###### Abstract Critical systems require high reliability and are present in many domains. They are systems in which failure may result in financial damage or even loss of lives. Standard techniques of software engineering are not enough to ensure the absence of unacceptable failures and/or that critical requirements are fulfilled. Reo is a component-based modelling language that aims to provide a framework to build software based on existing pieces of software, which has been used in a wide variety of domains. Its formal semantics provides grounds to certify that systems based on Reo models satisfy specific requirements (i.e., absence of deadlocks). Current logical approaches for reasoning over Reo require the conversion of formal semantics into a logical framework. _ReLo_ is a dynamic logic that naturally subsumes Reo's semantics. It provides a means to reason over Reo circuits. This work extends _ReLo_ by introducing the iteration operator, and soundness and completeness proofs for its axiomatization.The core aspects of this logic are also formalized in the Coq proof assistant. ## 1 Introduction In software development, service-oriented computing [32] and model-driven development [7] are examples of techniques that take advantage of software models. The first technique advocates computing based on preexisting systems (services) as described by Service-Oriented Architecture (SOA), while the latter is a development technique that considers the implementation of a system based on a model. A model is an abstraction of a system (or some particular portion of it) in a specific language, which will be used as a specification basis for the system's implementation. It can be specified in languages such as Unified Modeling Language (UML) or formal specification languages like B [2] or Alloy [17]. Researchers also have applied approaches such as formal methods in software development to formalize and assure that certain (critical) systems have some required properties [20, 31]. Reo [3] is a prominent modelling language, enabling coordination of communication between interconnected systems without focusing on their internal properties. Reo models are compositionally built from base connectors, where each connector in Reo stands for a specific communication pattern. Reo has proven to be successful in modeling the organization of concurrent systems' interaction, being used in a variety of applications, from process modeling to Web-Services integration [5] and even in the construction of frameworks to verify specifications in Reo [23, 35]. Reo's ability to model communication between software interfaces has also attracted research on verification of Reo circuits, resulting in many different formal semantics [18] like automata-based models [4, 8, 24], coalgebraic models [3], Intuitionistic Logic with Petri Nets [13] (to name a few), and some of their implementations [23, 34, 36, 20, 27, 28, 24]. However, as far as the authors are concerned, there is no logic apart from _ReLo_[14] to specific reason about Reo models naturally, where the usage of other logic-based approaches requires conversion between different formal semantics. This work extends _ReLo_[14] by introducing an iteration operator and the soundness and completeness proofs of its axiomatic system. 
A prototypical implementation of this framework in Coq proof assistant, enabling the verification of properties of Reo programs in _ReLo_ within a computerized environment is available at [http://github.com/frame-lab/ReLogicCoq](http://github.com/frame-lab/ReLogicCoq). This work is structured as follows. Section 3 discusses briefly a related logic formalism with the one hereby proposed and introduces Reo modelling language, along with some examples. Section 4 discuss _ReLo_'s main aspects, from its core definitions (such as language, models, transitions firing) and its soundness and completeness proofs. Finally, Section 5 closes the work by discussing the obtained results and assessing possible future work. ## 2 Related Work The fact that Reo can be used to model many real-world situations has attracted attention from researchers all around the world, resulting in a great effort directed in formalizing means to verify properties of Reo models [19, 33, 21, 22, 29, 28, 18]. Such effort also resulted in the proposal of several formal semantics for this modelling language [18], varying from operational semantics to coalgebra models. One of the most known formal semantics for Reo consists of Constraint Automata [9], an operational semantic in which Reo connectors are modelled as automata for _TDS_-languages [6]. It enables reasoning over the data flow of Reo connectors and when they happened. Constraint Automata have been extended to some variants which aim to enrich the reasoning process by capturing properties like the timing of the data flows or possible actions over the data, respectively as Timed Constraint Automata [23] and Action Constraint Automata [22]. Some of them are briefly discussed below, along with other formal semantics for Reo. The approach presented by Klein et al. [19] provides a platform to reason about Reo models using Vereofy,1 a model checker for component-based systems, while Pourvatan et al. [33] propose an approach to reason about Reo models employing symbolic execution of Constraint Automata. Kokash & Arbab [21] formally verify Long-Running Transactions (LRTs) modelled as Reo connectors using Vereofy, enabling expressing properties of these connectors in logics such as Linear Temporal Logic (LTL) or a variant of Computation Tree Logic (CTL) named Alternating-time Stream Logic (ASL). Kokash et al. [23] use mCRL2 to encode Reo's semantics in Constraint Automata and other automata-based semantics, encoding their behaviour as mCRL2 processes and enabling the expression of properties regarding deadlocks and data constraints which depend upon time. mCRL2 also supports model-checking of Reo in a dynamic logic (with fixed points), where modalities are regular expressions, atomic actions are sets of nodes that fire at the same time. Mouzavi et al. [29] propose an approach based on Maude to model checking Reo models, encoding Reo's operational semantics of the connectors. Footnote 1: [http://www.vereofy.de](http://www.vereofy.de) Proof assistants have been used to reason about Reo connectors [26, 27, 28, 36, 15]. The approaches adopted by Li et al. [26, 34, 15] are among the ones that employ Coq to verify Reo models formally. In [26] a formalization of four of the Reo canonical connectors (Sync, FIFO1, SyncDrain, and LossySync) along with an LTL-based language defined as an inductive type in Coq is presented, while [34] proposes the formalization of five Reo canonical channels and a procedure that creates composite channels by logical conjunction of the connectors modelled. 
In [15], a framework that provides means to graphically model Reo connectors and validate the generated Constraint Automata model using Coq and NuSMV2 is discussed. It also enables the automatic generation of a Haskell model from the Coq code, employing the Coq code extraction apparatus. When restricting attention to works combining logics and Reo, as far as the authors know there is only the work by [12], which focuses on formalizing the semantics of the Reo connectors Sync, LossySync, FIFO1, SyncDrain, AsyncDrain, Filter, Transform, Merger, and Replicator in terms of zero-safe Petri nets [11], a special class of Petri nets with two types of places: zero and stable places. This encoding is then converted to terms in Intuitionistic Temporal Linear Logic, enabling reasoning about Reo connectors in this logic. ## 3 Background This section provides a succinct overview of Reo [2, 3], considering its main characteristics and a modelling example, as Reo is the target language over which _ReLo_ provides a formal semantics to reason. ### The Reo Modelling Language As a coordination model, Reo focuses on connectors, their composition, and how they behave, not focusing on particular details regarding the entities that are connected, communicate, and interact through those connectors. Connected entities may be modules of sequential code, objects, agents, processes, web services, or any other software component whose integration with other software can be used to build a system [2]. Such entities are defined as component instances in Reo. A channel in Reo is defined as a point-to-point link between two distinct nodes, where each channel has its unique predefined behaviour. Each channel in Reo has exactly two ends, which can be of the following types: the source end, which accepts data into the channel, and the sink end, which dispenses data out of the channel. Channels are used to compose more complex connectors, making it possible to combine user-defined channels amongst themselves and with the canonical connectors provided by Baier et al. [8]. Figure 1 shows the basic set of connectors as presented by Kokash et al. [22]. Channel ends can be used by any entity to send/receive data, given that the entity belongs to an instance that knows these ends. Entities may use channels only if the instance they belong to is connected to one of the channel ends, enabling either sending or receiving data (depending on the kind of channel end the entity has access to). The binding between a software instance and a channel end is a logical connection that does not rely on properties such as the location of the involved entities. Channels in Reo have the sole objective of enabling data exchange following the behaviour of the connectors composing the channel, utilizing I/O operations predefined for each entity in an instance. A channel can be known by zero or more instances at a time, but its ends can be used by at most one entity at the same time. Figure 1: Canonical Reo connectors Figure 2 introduces a Reo connector known as Sequencer3. It models the data flow between three entities sequentially. The data flows from the first FIFO connector (a buffer), which will be sequentially synchronized with entities in port names A, B, and C. The Sequencer can be used to model scenarios where processes interact sequentially among themselves.
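To give a feel for the coordination the Sequencer enforces, the following is a minimal Python sketch (purely illustrative and unrelated to the Coq formalization of _ReLo_; the class and port names are our own assumptions): a single token circulates through the buffers, so the ports A, B, and C can only fire in that cyclic order.

```python
from itertools import cycle

class Sequencer:
    """Toy model of the Reo Sequencer from Figure 2: a token cycling through
    the FIFO buffers dictates which of the ports A, B, C may fire next."""

    def __init__(self, ports=("A", "B", "C")):
        self._order = cycle(ports)
        self.enabled = next(self._order)  # the token starts in the first FIFO

    def fire(self, port):
        if port != self.enabled:
            raise RuntimeError(f"port {port} is blocked: {self.enabled} must fire first")
        self.enabled = next(self._order)  # the token moves on to the next buffer

seq = Sequencer()
for p in ["A", "B", "C", "A"]:
    seq.fire(p)  # fires in sequence; calling seq.fire("B") first would raise
```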
Footnote 3: [http://arcatools.org/reo](http://arcatools.org/reo) In short, Reo circuits may be understood as data flowing from different interfaces (i.e., port names connected to a node), where the connector itself models the communication pattern between two of these interfaces. A _ReLo_ program is composed of one or more Reo connectors as introduced in Figure 1. ## 4 A _ReLo_ Primer _ReLo_ [14] was tailored to subsume Reo models' behaviour naturally in a logic, without needing any mechanism to convert a Reo model denoted by one of its formal semantics to some logical framework. Each basic Reo connector is modelled in the logic's language, which is defined as follows. **Definition 1** (_ReLo_'s language).: The language of _ReLo_ consists of the following: * An enumerable set of propositions \(\Phi\). * Reo channels as denoted by Figure 1. * A set of port names \(\mathcal{N}\). * A sequence \(Seq_{\Pi}=\{\varepsilon,s_{1},s_{2},\dots\}\) of data flows in ports of a _ReLo_ program \(\Pi\) (defined below). We define \(s_{i}\leq s_{j}\) if \(s_{i}\) is properly contained in \(s_{j}\) (i.e., \(s_{j}\) contains all of \(s_{i}\)'s data). Each sequence \(s_{i}\) denotes the data flow of the Reo program \(\Pi\) (i.e., all ports that have data synchronized at a specific moment in time) and \(\varepsilon\) is the empty sequence. * The program composition symbol \(\odot\). * A sequence \(t\) of data flows of ports \(p\) with data values {0,1}, which denotes whether \(p\) contains a data item. This describes a data flow occurring in the Reo channel. A BNF describing \(t\) is defined as follows: \(\langle t\rangle::=\ \langle\mathit{portName}\rangle\ \langle\mathit{data}\rangle\,\ \langle t \rangle\ |\ \langle\mathit{data}\rangle\ \langle\mathit{portName}\rangle\ \langle\mathit{data}\rangle\,\ \langle t \rangle\) In a _ReLo_ program \(\pi=(f,b)\), the set \(f\) is the set of connectors \(p\) of the model where data flows in and out of the channel (the connector has at least a source node and a sink node), namely Sync, LossySync, FIFO, Filter, Transform, Merger and Replicator. The set \(b\) is the set of blocking channels (channels without sink nodes whose inability to fire prevents the remainder of connectors related to their port names from firing), namely SyncDrain and AsyncDrain. The following is a simple yet intuitive example of the structure of data flows in \(ReLo\). Let the sequence \(t\) be \(t=\{A1,B1C\}\). It states that the port \(A\) has the data item 1 in its current data flow, while there is a data item 1 in the FIFO between \(B\) and \(C\). **Definition 2** (\(ReLo\) formulae).: We define formulae in \(ReLo\) as follows: \(\phi=p\mid\top\mid\neg\phi\mid\phi\land\psi\mid\langle t,\pi\rangle\phi\), such that \(p\in\Phi\). We use the standard abbreviations \(\bot\equiv\neg\top,\phi\lor\psi\equiv\neg(\neg\phi\land\neg\psi),\phi\to\psi\equiv \neg\phi\lor\psi\) and \([t,\pi]\phi\equiv\neg\langle t,\pi\rangle\neg\phi\), where \(\pi\) is some Reo program and \(t\) a data flow. The connectors in Figure 3 exemplify compound Reo connectors. The model SyncFIFO is composed of a FIFO and a Sync connector in which the data leaving the FIFO is sent from \(B\) to \(C\) synchronously. Suppose that there is data in the FIFO and in port \(B\) (\(t=\{A1B,B0\}\)). If the FIFO from \(A\) to \(B\) is processed before the Sync between \(B\) and \(C\), the data flow in \(B\) will be overwritten before it is sent to \(C\), which is not the correct behaviour. The Sync from \(B\) to \(C\) must fire before the FIFO from \(A\) to \(B\).
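To make the evaluation-order issue concrete, the following is a small Python sketch (an illustration only; its dictionary encoding of data markups is our own assumption, not part of _ReLo_'s definitions): with \(t=\{A1B,B0\}\), processing the Sync between \(B\) and \(C\) before the FIFO from \(A\) to \(B\) preserves \(B\)'s data item, whereas the opposite order overwrites it.

```python
# A data markup is encoded as a dict; the key "AB" stands for the buffer of
# the FIFO from A to B, so {"AB": 1, "B": 0} corresponds to t = {A1B, B0}.
def step(markup, order):
    """Process the SyncFIFO connectors in the given order and return the new markup."""
    m = dict(markup)
    for connector in order:
        if connector == "sync_BC" and "B" in m:     # Sync: B's item flows to C
            m["C"] = m.pop("B")
        elif connector == "fifo_AB" and "AB" in m:  # FIFO: buffered item flows to B
            m["B"] = m.pop("AB")
    return m

t = {"AB": 1, "B": 0}
print(step(t, ["sync_BC", "fifo_AB"]))  # {'C': 0, 'B': 1}: correct, nothing is lost
print(step(t, ["fifo_AB", "sync_BC"]))  # {'C': 1}: B's original item 0 was overwritten
```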
Another example is denoted by the model Sync2Drain. Suppose there is data only in port name \(A\) (\(t=\{A1\}\)). If the Sync from \(B\) to \(A\) is evaluated before the SyncDrain between \(B\) and \(C\), the restriction imposed by the fact that the condition required for the SyncDrain to fire was not met (as \(C\)'s data flow differs from \(B\)'s at this moment) is not considered, and data will wrongly flow from \(B\) to \(A\). The SyncDrain must therefore be evaluated before all other flows, as such channels may block the flow of data from their ports to other channels. The next definition maps each canonical connector that composes a Reo model to a \(ReLo\) program. The left hand side of each mapping rule in Definition 3 is the atomic Reo connector, while the right hand side is the resulting \(ReLo\) atomic program \(\pi_{i}=(f_{i},b_{i})\), with the same behaviour as the Reo connector. **Definition 3** (\(parse\) base cases).: Each canonical Reo connector is mapped to a \(ReLo\) program in \(parse\): * Sync from \(A\) to \(B\) to \(A\to B\) * LossySync from \(A\) to \(B\) to \((A,A\to B)\) * FIFO from \(A\) to \(B\) to \(fifo(A,B)\) * SyncDrain between \(A\) and \(B\) to \(SBlock(A,B)\) * AsyncDrain between \(A\) and \(B\) to \(ABlock(A,B)\) * Transform from \(A\) to \(B\) to \(Transform(f,A,B)\), \(f\colon Data\to Data\) is a transformation function. * Filter from \(A\) to \(B\) to \(Filter(P,A,B)\), \(P\) is a logical predicate over the data item in \(A\). * Merger of \(A\) and \(B\) into \(C\) to \((A\to C,B\to C)\) * Replicator from \(A\) to \(B\) and \(C\) to \((A\to B,A\to C)\) Figure 3: Examples of Reo models Considering that each \(ReLo\) program \(\Pi\) is the composition of programs \(\pi_{1}\odot\pi_{2}\odot\cdots\odot\pi_{n}\), \(\pi_{i}=(f_{i},b_{i})\) as Reo programs, _parse_ is formalized in Definition 4. The symbol \(\circ\) denotes the addition of an element to \(s\), the resulting set of _parse_'s processing. **Definition 4** (_parse_ function).: The function that interprets the execution of a \(ReLo\) program is defined as \(parse(f,b,s)\). We define \(\epsilon\) as an abbreviation to denote when there is no \(ReLo\) program left to process (i.e. the base case when no program is parametrized). Its outcome is detailed as below. * \(s\), if \(f=b=\epsilon\) * \(parse(f_{j},b,s\circ A\to B),\) if \(f=\ A\)\(B\) item in A) to \(B\), and the filtering of data flow by some predicate as \(Filter(P,A,B)\), \(P\) as a quantifier-free predicate over the data item seen in \(A\). Therefore, data will flow to \(B\) only if \(P(D_{A})\) is satisfied. After processing \(\pi\) with \(parse\), the interpretation of the execution of \(\pi\) is given by \(go(t,s,acc)\), \(go\colon s\times s\to s\), where \(s\) is a string denoting the processed program \(\pi\) as the one returned by \(parse\), and \(t\) is the initial data flow of ports of the Reo program \(\pi\). The parameter \(acc\) holds all connectors of the Reo circuit that satisfy their respective required conditions for data to flow. In what follows we define \(ax\prec t\) as an operator which states that \(ax\) is in \(t\), \(ax\) a single data item of a port and \(t\) a structure containing data flows for ports \(p\in\mathcal{N}\). Example 1 shows how \(parse\) functions and illustrates why it is necessary. The programs that depict the FIFO connectors from Fig. 2 are the last programs to be executed, while the ones that denote "immediate" flow (the Sync channels) come first. This is done to preserve the data when these connectors fire (if eligible). Suppose that there is a data item in the buffer between X and Y and a data item in Y (i.e., \(t=X1Y,Y0\)).
If the data item leaves the buffer first then the data item in Y, the latter will be overwritten and the information is lost. **Example 1**.: let \(\pi\) be the Reo program corresponding to the circuit in Fig. 2: \(\pi=\)\(\times\)\ * \(go(t,s^{\prime},(acc\circ(A\to B))\setminus s^{\prime}_{j})\cup go(t,s^{\prime}, acc),\;\mathrm{iff}\;\begin{cases}Ax\prec t,\\ (A\to B)\nptifs^{\prime}\\ \exists s^{\prime}_{j}\in acc\mid sink(s^{\prime}_{j})=B\end{cases}\) * \(go(t,s^{\prime},acc)\), otherwise * \(s=fifo(A,B)\circ s^{\prime}:\) * \(go(t,s^{\prime},acc\circ(AxB)),\;\mathrm{iff}\;Ax\prec t,fifo(A,B)\nptifs^{ \prime},(AxB)\nptifs\) * \(go(t,s^{\prime},acc\circ(AxB\to Bx)),\;\mathrm{iff}\;AxB\prec t,fifo(A,B)\nptifs^{\prime}\) * \(go(t,s^{\prime},(acc\circ(AxB\to Bx))\setminus s^{\prime}_{j})\cup go(t,s^{ \prime},acc),\;\mathrm{iff}\;\begin{cases}AxB\prec t,\\ fifo(A,B)\nptifs^{\prime},\\ \exists s^{\prime}_{j}\in acc\mid sink(s^{\prime}_{j})=B\end{cases}\) * \(go(t,s^{\prime},acc)\), otherwise * \(s=Sblock(A,B)\circ s^{\prime}:\) * \(go(t,s^{\prime},acc),\;\mathrm{iff}\;\begin{cases}(Ax\prec t\wedge Bx\prec t )\vee(Ax\nptifs\wedge Bx\nptifs)\\ Sblock(A,B)\nptifs^{\prime}\end{cases}\) * \(go(t,halt(A,B,s^{\prime}),acc),\;\mathrm{iff}\;\begin{cases}(Ax\prec t\wedge Bx \nptifs)\vee(Ax\nptifs\wedge Bx\prec t)\\ Sblock(A,B)\nptifs^{\prime}\end{cases}\) * \(s=Ablock(A,b)\circ s^{\prime}:\) * \(go(t,s^{\prime},acc),\;\mathrm{iff}\;\begin{cases}(Ax\nptifs\wedge Bx\prec t )\vee(Ax\prec t\wedge Bx\nptifs)\vee\\ (Ax\nptifs\wedge Bx\nptifs),Alblock(A,B)\nptifs^{\prime}\end{cases}\) * \(go(t,halt(A,B,s^{\prime}),acc),\;\mathrm{iff}\;\begin{cases}(Ax\prec t\wedge Bx \nptifs),\\ Alblock(A,B)\nptifs^{\prime}\end{cases}\) * \(s=Transform(f,A,B)\circ s^{\prime}:\) * \(go(t,s^{\prime},acc\circ(f(D_{A})\to B)),\;\mathrm{iff}\;\begin{cases}ax \prec t\\ Transform(f,A,B)\nptifs^{\prime}\end{cases}\) * \(go(t,s^{\prime},(acc\circ(f(D_{A})\to B))\setminus s^{\prime}_{j})\cup go(t,s ^{\prime},acc),\;\mathrm{iff}\;\begin{cases}Ax\prec t,\\ Transform(f,A,B)\nptifs^{\prime}\\ \exists s^{\prime}_{j}\in acc\mid sink(s^{\prime}_{j})=B\end{cases}\) * \(go(t,s^{\prime},acc)\), otherwise * \(s=Filter(f,A,B)\circ s^{\prime}:\) * \(go(t,s^{\prime},acc\circ(A\to B)),\;\mathrm{iff}\;\begin{cases}Ax\prec t \\ P(D_{A})\;\mathrm{holds}\\ Filter(f,A,B)\nptifs^{\prime}\end{cases}\) * \(go(t,s^{\prime},acc)\), otherwise The existing condition after each return condition of \(go\) denotes the case where two or more Reo connectors within a circuit have the same sink node. This implies that if both of their respective source nodes have data flowing simultaneously, their sink nodes will have data flowing nondeterministically. Such condition models this scenario, considering when both cases may happen as two nondeterministic "distinct" possible executions. Therefore, the operation \(acc\circ(X\to Y))\setminus s^{\prime}_{j}\) removes every interpretation of \(s^{\prime}\) which sink node equals \(Y\), while \(go(t,s^{\prime},acc)\) denotes an execution containing the removed \(s^{\prime}_{j}\) but not considering \(X\to Y\). The return condition \(s=\epsilon\) denotes that the program as a whole has already been processed. Considering the cases including block programs induced by SyncDrain and AsyncDrain connectors, \(halt(A,B,s^{\prime})\) is defined as a supporting function that will be used in the case the block program conditions fail. 
Then, data flow that was in the ports of the SyncDrain/AsyncDrain evaluated cannot be further considered in this execution steps: channels that have their sink node pointed to \(A\) or \(B\). Intuitively, \(go\) is a function that processes a program \(\pi\) with input \(t\) as the program's data initially available at ports \(p\in\pi\) and returns the next data configuration after processing all connectors and verifying whether they are eligible for data to flow. The return of \(go\) depends on a function \(fire\) which is bound to return the final configuration of the Reo circuit after an iteration (i.e., the last ports that data flow). We define \(sink(s^{\prime}_{j})\) as the sink node of a connector, in this case, the port name where a data item flowing into a Reo connector is bound to. The operation denoted by \(\cup\) is the standard set union. Definition \(go\) employs a function named \(fire\colon T\times s\to T\) which returns the firing of all possible data flows in the Reo connector, given the Reo program \(\pi\) and an initial data flow on ports of \(\pi\). The set \(T\) is the set of possible data flows as constructed by the BNF grammar in Definition 1. The function \(fire\) returns the resulting data flow of this execution step by considering the program processed by \(go\) as \(s\) and the current step's data flow \(t\). Parameter \(s\) contains \(ReLo\) programs as yielded by _parse_. **Definition 6** (Data marking relation \(fire\)).: \[fire(t,s)=\begin{cases}\epsilon,\text{ if }s=\epsilon\\ AxB\circ fire(t,s^{\prime}),\text{ if }s=(AxB)\circ s^{\prime}\text{ and }Ax\prec t\\ B(f(a))\circ fire(t,s^{\prime}),\text{ if }s=(f(D_{A})\to B)\circ s^{\prime}\text{ and }Ax\prec t\\ Bx\circ fire(t,s^{\prime}),\text{ if }\begin{cases}s=(A\to B)\circ s^{\prime} \text{ and }Ax\prec t,or\\ s=(AxB\to Bx)\circ s^{\prime}\text{ and }axb\prec t\end{cases}\end{cases}\] (1) We define \(f_{ReLo}\) as the transition relation of a \(ReLo\) model. It denotes how the transitions of the model fire, i.e., given an input \(t\) and a program \(\pi\) denoting a Reo circuit, \(f_{ReLo}(t,\pi)\) interfaces with \(go\) to return the resulting data flow of \(\pi\) given that data depicted by \(t\) are flowing in the connector's ports. **Definition 7**.: Transition relation \(f_{ReLo}(t,\pi)=go(t,(parse(\pi,[])),[])\)__ We define \(f_{ReLo}(t,\pi^{*})\) as the application of \(f_{ReLo}(t,\pi)\) iteratively for the (nondeterministic finite) number of steps denoted by \(\star\), starting with \(t\) with \(\pi\), and considering the obtained intermediate \(t^{\prime}\) in the steps. A \(ReLo\) frame is a structure based on Kripke frames [25] formally defined as a tuple \(\mathcal{F}=\langle S,\Pi,R_{\Pi},\delta,\)\(\lambda\rangle\), where each element of \(\mathcal{F}\) is described by Definition 8. **Definition 8** (\(ReLo\) frame).: \(S\) is a non-empty enumerable set of states and \(\Pi\) a Reo program. * \(R_{\Pi}\subseteq S\times S\) is a relation defined as follows. * \(R_{\pi_{i}}=\{uR_{\pi_{i}}v\mid f_{ReLo}(t,\pi_{i})\prec\delta(v),\)\(t\prec\delta(u)\}\), \(\pi_{i}\) is any combination of any atomic program which is a subprogram of \(\Pi\). * \(R_{\pi_{i}^{*}}=R_{\pi_{i}}^{*}\), the reflexive transitive closure (RTC) of \(R_{\pi_{i}}\). * \(\lambda\colon S\times\mathcal{N}\to\mathbbm{R}\) is a function that returns the time instant a data item in a data markup flows through a port name of \(\mathcal{N}\). 
* \(\delta\colon S\to T\) is a function that returns data in ports of the circuit in a state \(s\in S\), \(T\) being the set of possible data flows in the model. From Definition 8, a \(ReLo\) model is formally defined as a tuple \(\mathcal{M}=\langle\mathcal{F},\mathbf{V}\rangle\) by Definition 9. Intuitively, it is a tuple consisting of a \(ReLo\) frame and a valuation function, which given a state \(w\) of the model and a propositional symbol \(\varphi\in\Phi\), maps to either \(true\) or \(false\). **Definition 9** (\(ReLo\) models).: A model in \(ReLo\) is a tuple \(\mathcal{M}=\langle\mathcal{F},\mathbf{V}\rangle\), where \(\mathcal{F}\) is a \(ReLo\) frame and \(V\colon S\times\Phi\to\{true,false\}\) is the model's valuation function. **Definition 10** (Satisfaction notion).: * \(\mathcal{M},s\Vdash p\) iff \(V(s,p)=true\) * \(\mathcal{M},s\Vdash\top\) always * \(\mathcal{M},s\Vdash\neg\varphi\) iff \(\mathcal{M},s\nVdash\varphi\) * \(\mathcal{M},s\Vdash\varphi_{1}\wedge\varphi_{2}\) iff \(\mathcal{M},s\Vdash\varphi_{1}\) and \(\mathcal{M},s\Vdash\varphi_{2}\) * \(\mathcal{M},s\Vdash\langle t,\pi\rangle\varphi\) iff there exists a state \(w\in S\) such that \(sR_{\pi}w\) and \(\mathcal{M},w\Vdash\varphi\) We denote by \(\mathcal{M}\Vdash\varphi\) that \(\varphi\) is satisfied in all states of \(\mathcal{M}\). By \(\Vdash\varphi\) we denote that \(\varphi\) is valid in any state of any model. We recover the circuit in Fig. 2 as an example. Let us consider s = \(D_{X}\) (i.e., \(t=X1\)) and the Sequencer's corresponding model \(\mathcal{M}\). Therefore, \(\mathcal{M},D_{X}\Vdash\langle t,\pi\rangle p\) holds if \(V(D_{XfifoY},p)=true\), as \(D_{XfifoY}\) is the only state where \(D_{X}R_{\Pi}D_{XfifoY}\). For example, one might state \(p\) as "There is no port with any data flow", hence \(V(D_{XfifoY},p)=true\). As another usage example, we formalize some properties which may be interesting for this connector to have. Let us consider that the data markup is \(t=X1\), \(\mathcal{M}\) the model regarding the Sequencer, and the states' subscript denoting which part of the connector has data. The following example states that for this data flow, after every single execution of \(\pi\), it is not the case that the three connected entities have their data equal to 1 simultaneously, but there is data in the buffer from \(X\) to \(Y\). **Example 2**.: \([X1,\pi]\neg(D_{A}=1\wedge D_{B}=1\wedge D_{C}=1)\wedge t^{\prime}=X1Y\), where \(t^{\prime}=f_{ReLo}(t,\pi)\) \(\mathcal{M},D_{X}\Vdash[X1,\pi]\neg(D_{A}=1\wedge D_{B}=1\wedge D_{C}=1)\wedge t^{\prime}=X1Y\). \(\mathcal{M},D_{XfifoY}\Vdash\neg(D_{A}=1\wedge D_{B}=1\wedge D_{C}=1)\wedge t^{\prime}=X1Y\). \(\mathcal{M},D_{XfifoY}\Vdash\neg(D_{A}=1\wedge D_{B}=1\wedge D_{C}=1)\) and \(\mathcal{M},D_{XfifoY}\Vdash t^{\prime}=X1Y\). The notion \(\mathcal{M},D_{X}\Vdash\langle t,\pi^{\star}\rangle p\) holds if a state \(s\) is reached from \(D_{X}\) by means of \(R^{\star}_{\pi}\) with \(V(s,p)=\top\). If we state \(p\) as "the data item of port \(X\) equals 1", it holds because \(D_{X}R^{\star}_{\pi}D_{X}\) and \(V(D_{X},p)=\top\). If there is an execution of \(\pi\) that lasts a nondeterministic finite number of iterations, and the data in \(C\) equals 1, then there is an execution under the same circumstances where the same data has been in \(B\).
**Example 3**.: \(\langle t,\pi^{\star}\rangle D_{C}=1\to\langle t,\pi^{\star}\rangle D_{B}=1\) \(\mathcal{M},D_{X}\Vdash\langle t,\pi^{\star}\rangle D_{C}=1\to\langle t,\pi^{ \star}\rangle D_{B}=1\) \(\mathcal{M},D_{X}\Vdash\neg(\langle t,\pi^{\star}\rangle D_{C}=1)\lor \langle t,\pi^{\star}\rangle D_{B}=1\) \(\mathcal{M},D_{X}\Vdash[t,\pi^{\star}]\neg D_{C}=1\lor\langle t,\pi^{\star} \rangle D_{B}=1\) \(\mathcal{M},D_{X}\Vdash[t,\pi^{\star}]\neg D_{C}=1\) or \(\mathcal{M},D_{X}\Vdash\langle t,\pi^{\star}\rangle D_{B}=1\) \(\mathcal{M},D_{X}\Vdash\langle t,\pi^{\star}\rangle D_{B}=1\), because \(\mathcal{M},D_{B}\Vdash D_{B}=1\) and \(D_{X}R_{\pi^{\star}}D_{B}\). ### Axiomatic System We define an axiomatization of \(ReLo\) and discuss its soundness and completeness. **Definition 11** (Axiomatic System).: * **(PL)** Enough Propositional Logic tautologies * **(K)** \([t,\pi](\varphi\to\psi)\to([t,\pi]\varphi\to[t,\pi]\psi)\) * **(And)** \([t,\pi](\varphi\wedge\psi)\leftrightarrow[t,\pi]\varphi\wedge[t,\pi]\psi\) * **(Du)** \([t,\pi]\varphi\leftrightarrow\neg\langle t,\pi\rangle\neg\varphi\) * **(R)** \(\langle t,\pi\rangle\varphi\leftrightarrow\varphi\) iff \(f_{ReLo}(t,\pi)=\varepsilon\) * **(It)** \(\varphi\wedge[t,\pi][t_{(f,b)},\pi^{*}]\varphi\leftrightarrow[t,\pi^{*}]\varphi\), \(t_{(f,b)}=f_{ReLo}(t,\pi)\) * **(Ind)** \(\varphi\wedge[t,\pi^{*}](\varphi\to[t_{(f,b)^{*}},\pi]\varphi)\to[t,\pi^{*}]\varphi\), \(t_{(f,b)^{*}}=f_{ReLo}(t,\pi^{*})\) **Lemma 1** (Soundness).: _The axiomatic system of Definition 11 is sound with respect to \(ReLo\) models._ _Proof._ Axioms **(PL)**, **(K)**, **(And)** and **(Du)** are standard in the Modal Logic literature, along with rules **(MP)** and **(Gen)**[16]. Axioms **(It)** and **(Ind)** are similar to those of PDL. **(R)**: \(\langle t,\pi\rangle\varphi\leftrightarrow\varphi\) iff \(f_{ReLo}(t,\pi)=\varepsilon\) Suppose by contradiction that there exists a state \(s\) of a model \(\mathcal{M}=\langle S,\Pi,R_{\Pi},\delta,\lambda,V\rangle\) where **(R)** does not hold. There are two possible cases. (\(\Rightarrow\)) Suppose by contradiction \(\mathcal{M},s\Vdash\langle t,(f,b)\rangle\varphi\) and \(\mathcal{M},s\nVdash\varphi\). \(\mathcal{M},s\Vdash\langle t,(f,b)\rangle\varphi\) iff there is a state \(v\in S\) such that \(sR_{\pi}v\) and \(\mathcal{M},v\Vdash\varphi\). Because \(f_{ReLo}(t,(f,b))=\varepsilon\), \(s=v\) (i.e., in this execution no other state is reached from \(s\)). Therefore, \(\mathcal{M},s\Vdash\varphi\), contradicting \(\mathcal{M},s\nVdash\varphi\). (\(\Leftarrow\)) Suppose by contradiction \(\mathcal{M},s\Vdash\varphi\) and \(\mathcal{M},s\nVdash\langle t,(f,b)\rangle\varphi\). In order to have \(\mathcal{M},s\nVdash\langle t,(f,b)\rangle\varphi\), for every state \(v\in S\) such that \(sR_{\pi}v\), \(\mathcal{M},v\nVdash\varphi\). Because \(f_{ReLo}(t,(f,b))=\varepsilon\), \(s=v\) (i.e., in this execution no other state is reached from \(s\)). Therefore, \(\mathcal{M},s\nVdash\varphi\), contradicting \(\mathcal{M},s\Vdash\varphi\). ### Completeness We start by defining the Fischer-Ladner closure of a formula as the set closed under all of its subformulae, following the idea employed in other modal logic works [16, 10], as follows. **Definition 12** (Fischer-Ladner Closure).: Let \(\Phi\) be the set of all formulae in \(ReLo\). The Fischer-Ladner closure of a formula, notation \(FL(\varphi)\), is inductively defined as follows: * \(FL\): \(\Phi\to 2^{\Phi}\) * \(FL_{(f,b)}\colon\{\langle t,(f,b)\rangle\varphi\}\to 2^{\Phi}\), where \((f,b)\) is a \(ReLo\) program and \(\varphi\) a \(ReLo\) formula.
These functions are defined as * \(FL(p)=\{p\}\), \(p\) an atomic proposition; * \(FL(\varphi\to\psi)=\{\varphi\to\psi\}\cup FL(\varphi)\cup FL(\psi)\) * \(FL_{(f,b)}(\langle t,(f,b)\rangle\varphi)=\{\langle t,(f,b)\rangle\varphi\}\) * \(FL(\langle t,(f,b)\rangle\varphi)=FL_{(f,b)}((\langle t,(f,b)\rangle\varphi) \cup FL(\varphi)\) * \(FL_{(f,b)}(\langle t,(f,b)^{*}\rangle\varphi)=\{\langle t,(f,b)^{*}\rangle \varphi\}\cup FL_{(f,b)}(\langle t,(f,b)\rangle\langle t,(f,b)^{*}\rangle\varphi)\) * \(FL(\langle t,(f,b)^{*}\rangle\varphi)=FL_{(f,b)}((\langle t,(f,b)^{*}\rangle \varphi)\cup FL(\varphi)\) From the definitions above, we prove two lemmas that can be understood as properties that formulae need to satisfy to belong to their Fisher-Ladner closure. **Lemma 2**.: _If \(\langle t,(f,b)\rangle\psi\in FL(\varphi)\), then \(\psi\in FL(\varphi)\)_ **Lemma 3**.: _If \(\langle t,(f,b)^{*}\rangle\psi\in FL(\varphi)\), then \(\langle t,(f,b)\rangle\langle t,(f,b)^{*}\rangle\psi\in FL(\varphi)\)_ The proofs for Lemmas 2 and 3 are straightforward from Definition 12. The following definitions regard the definitions of maximal canonical subsets of \(ReLo\) formulae. We first extend Definition 12 to a set of formulae \(\Gamma\). The Fisher-Ladner closure of a set of formulae \(\Gamma\) is \(FL(\Gamma)=\bigcup_{\varphi\in\Gamma}FL(\varphi)\). Therefore, \(FL(\Gamma)\) is closed under subformulae. For the remainder of this section, we will assume that \(\Gamma\) is finite. **Lemma 4**.: _If \(\Gamma\) is a finite set of formulae, then \(FL(\Gamma)\) also is a finite set of formulae_ Proof.: The proof is standard in literature [11]. Intuitively, because \(FL\) is defined recursively over a set of formulae \(\Gamma\) into formulae \(\psi\) of a formula \(\varphi\in\Gamma\), \(\Gamma\) being finite leads to the resulting set of \(FL(\Gamma)\) also being finite (at some point, all atomic formulae composing \(\varphi\) will have been reached by \(FL\)). **Definition 13** (Atom).: Let \(\Gamma\) be a set of consistent formulae. An atom of \(\Gamma\) is a set of formulae \(\Gamma^{\prime}\) that is a maximal consistent subset of \(FL(\Gamma)\). The set of all atoms of \(\Gamma\) is defined as \(At(\Gamma)\). **Lemma 5**.: _Let \(\Gamma\) a consistent set of formulae and \(\psi\) a ReLo formula. If \(\psi\in FL(\Gamma)\), and \(\psi\) is satisfiable then there is an atom of \(\Gamma\), \(\Gamma^{\prime}\) where \(\psi\in\Gamma^{\prime}\)._ Proof.: The proof follows from Lindembaum's lemma. From Lemma 4, as \(FL(\Gamma)\) is a finite set, its elements can be enumerated from \(\gamma_{1},\gamma_{2},\ldots,\gamma_{n},n=|FL(\Gamma)|\). The first set, \(\Gamma^{\prime}_{1}\) contains \(\psi\) as the starting point of the construction. Then, for \(i=2,\ldots,n\), \(\Gamma^{\prime}_{i}\) is the union of \(\Gamma^{\prime}_{i-1}\) with either \(\{\gamma_{i}\}\) or \(\{\neg\gamma_{i}\}\), respectively whether \(\Gamma^{\prime}_{i}\cup\{\gamma_{i}\}\) or \(\Gamma^{\prime}_{i}\cup\{\neg\gamma_{i}\}\) is consistent. In the end, we make \(\Gamma^{\prime}=\Gamma^{\prime}_{n}\) as it contains the union of all \(\Gamma_{i},1\leq i\leq n\). 
This is summarized in the following bullets: * \(\Gamma^{\prime}_{1}=\{\psi\}\); * \(\Gamma^{\prime}_{i},=\begin{cases}\Gamma^{\prime}_{i-1}\cup\{\gamma_{i}\}, \text{ if }\Gamma_{n-1}\cup\{\gamma_{n}\}\text{ is consistent}\\ \Gamma^{\prime}_{i-1}\cup\{\neg\gamma_{i}\},\text{ otherwise}\end{cases}\) for \(1<i<n\); * \(\Gamma=\bigcup_{i=1}^{n}\Gamma_{i}\) **Definition 14** (Canonical relations over \(\Gamma\)).: Let \(\Gamma\) a set of formulae, \(A,B\) atoms of \(\Gamma\) (\(A,B\in At(\Gamma)\)), \(\Pi\) a \(ReLo\) program and \(\langle t,(f,b)\rangle\varphi\in At(\Gamma)\). The canonical relations on \(At(\Gamma)\) is defined as \(S^{\Gamma}_{\Pi}\) as follows: \(AS^{\Gamma}_{\Pi}B\leftrightarrow\bigwedge A\wedge\langle t,(f,b)\rangle \bigwedge B)\) is consistent, \(AS^{\Gamma}_{\Pi}B\leftrightarrow\bigwedge A\wedge\langle t,(f,b)^{*} \rangle\bigwedge B)\) is consistent Definition 14 states that the relation between two atoms of \(\Gamma\), \(A\) and \(B\) is done by the conjunction of the formulae in \(A\) with all formulae in \(B\) which can be accessed from \(A\) with a diamond formula, such that this conjunction is also a consistent formula. Intuitively, it states that \(A\) and \(B\) are related in \(S^{\Gamma}_{\Pi}\) by every formula \(\varphi\) of \(B\) which conjunction with \(A\) by means of a diamond results in a consistent scenario. The following definition is bound to formalize the canonical version of \(\delta\) as the data markup function. **Definition 15** (Canonical data markup function \(\delta^{\Gamma}_{c}\)).: Let \(F=\{\langle t_{1},(f_{1},b_{1})\rangle\varphi_{1},\langle t_{2},(f_{2},b_{2}) \rangle\varphi_{2},\ldots,\langle t_{n},(f_{n},b_{n})\rangle\varphi_{n}\}\) be the set of all diamond formula occurring on an atom \(A\) of \(\Gamma\). The canonical data markup is defined as \(\delta^{\Gamma}_{c}\colon At(\Gamma)\to T\) as follows: * The sequence \(\{t_{1},t_{2},\ldots,t_{n}\}\subseteq\delta(A)\) Therefore, \(\{t_{1},t_{2},\ldots,t_{n}\}\subseteq\delta^{\Gamma}_{c}(A)\). Intuitively, this states that all the data flow in the set of formulae must be valid data markups of \(A\), which leads to them to also be valid data markups of \(\delta^{\Gamma}_{c}\) following Definition 14. * for all programs \(\pi=(f,b)\in\Pi\), \(f_{ReLo}((\delta^{\Gamma}_{c}(A)),(f,b))\prec\delta^{\Gamma}_{c}(B)\leftrightarrow AS^{\Gamma}_{\Pi}B\). **Definition 16** (Canonical model).: A canonical model over a set of formulae \(\Gamma\) is defined as a _ReLo_ model \(\mathcal{M}_{c}^{\Gamma}=\langle\mathcal{At}(\Gamma),\Pi,S_{\Pi}^{\Gamma}, \delta_{c}^{\Gamma},\lambda_{c},V_{c}^{\Gamma}\rangle\), where: * \(At(\Gamma)\) is the set of states of the canonical model; * \(\Pi\) is the model's _ReLo_ program; * \(S_{\Pi}^{\Gamma}\) are the canonical relations over \(\Gamma\); * \(\delta_{c}^{\Gamma}\) is the canonical markup function; * \(\lambda_{c}\colon At(\Gamma)\times\mathcal{N}\to\mathbb{R}\); * \(V_{c}^{\Gamma}:At(\Gamma)\times\varphi\to\{true,false\}\), namely \(V_{c}^{\Gamma}(A,p)=\{A\in At(\Gamma)\mid p\in A\}\); **Lemma 6**.: _For all programs \(\pi=(f,b)\) that compose \(\Pi\), \(t=\delta_{c}^{\Gamma}(A)\):_ 1. _If_ \(f_{ReLo}(t,(f,b))\neq\varepsilon\)_, then_ \(f_{ReLo}(t,(f,b))\prec\delta_{c}^{\Gamma}(B)\) _iff_ \(AS_{\Pi}^{\Gamma}B\)_._ 2. _If_ \(f_{ReLo}(t,(f,b))=\varepsilon\)_, then_ \((A,B)\notin S_{\Pi}^{\Gamma}\)_._ Proof.: The proof for 1. is straightforward from Definition 15. The proof for 2. follows from axiom \(R\). 
Because \(f_{ReLo}(t,(f,b))=\varepsilon\), no other state is reached from the current state, hence no state \(B\) related with \(A\) by \(R_{\Pi}^{\Gamma}\) can be reached. The following lemma states that canonical models always exists if there is a formula \(\langle t,(f,b)\varphi\rangle\in FL(\Gamma)\), a set of formulae \(\Gamma\) and a Maximal Consistent Set \(A\in At(\Gamma)\). This assures that given the required conditions, a canonical model can always be built. **Lemma 7** (Existence Lemma for canonical models).: _Let \(A\) be an atom of \(At(\Gamma)\) and \(\langle t,(f,b)\rangle\varphi\in FL(\Gamma)\). \(\langle t,(f,b)\rangle\varphi\in A\iff\exists\) an atom \(B\in At(\Gamma)\) such that \(AS_{\Pi}^{\Gamma}B\), \(t\prec\delta_{c}^{\Gamma}(A)\) and \(\varphi\in B\)._ Proof.: \(\Rightarrow\) Let \(A\in At(\Gamma)\)\(\langle t,(f,b)\rangle\varphi\in FL(\Gamma)\) and \(\langle t,(f,b)\rangle\varphi\in A\). Because \(A\in At(\Gamma)\), from Definition 15 we have \(t\prec\delta_{c}^{\Gamma}(A)\). From Lemma 5 we have that if \(\psi\in FL(\Gamma)\) and \(\psi\) is consistent, then there is an atom of \(\Gamma\), \(\Gamma^{\prime}\) where \(\psi\in\Gamma^{\prime}\). Rewriting \(\varphi\) as \((\varphi\wedge\gamma)\vee(\varphi\wedge\neg\gamma)\) (a tautology from Propositional Logic), an atom \(B\in At(\Gamma)\) can be constructed, because either \(\langle t,(f,b)\rangle(\varphi\wedge\gamma)\) or \(\langle t,(f,b)\rangle(\varphi\wedge\neg\gamma)\) is consistent. Therefore, considering all formulae \(\gamma\in FL(\Gamma)\), \(B\in At(\Gamma)\) is constructed with \(\varphi\in B\) and \(A\wedge(\langle t,(f,b)\rangle\varphi\bigwedge B\). From Definition 14, \(AS_{\Pi}^{\Gamma}B\). \(\Leftarrow\) Let \(A\in At(\Gamma)\) and \(\langle t,(f,b)\rangle\varphi\in FL(\Gamma)\). Also, let \(B\in At(\Gamma)\), \(AS_{\Pi}^{\Gamma}B\), \(t\prec\delta_{c}^{\Gamma}(A)\), and \(\varphi\in B\). As \(AS_{\Pi}^{\Gamma}B\), from Definition 14, \(AS_{\Pi}^{\Gamma}B\leftrightarrow(A\wedge\langle t,(f,b)\rangle\bigwedge B)\), \(\forall\varphi_{i}\in B\) is consistent. From \(\varphi\in B\), \((A\wedge\langle t,(f,b)\rangle\varphi)\) is also consistent. As \(A\in At(\Gamma)\) and \(\langle t,(f,b)\varphi\in FL(\Gamma)\), by Definition 13, as \(A\) is maximal, then \(\langle t,(f,b)\rangle\varphi\in A\). The following lemma formalizes the truth notion for a canonical model \(\mathcal{M}_{c}^{\Gamma}\), given a state \(s\) and a formula \(\varphi\). It formalizes the semantic notion for canonical models in _ReLo_. **Lemma 8** (Truth Lemma).: _Let \(\mathcal{M}_{c}^{\Gamma}=\langle At(\Gamma),\Pi,S_{\Pi}^{\Gamma},\delta_{c}^{ \Gamma},\lambda,V_{c}^{\Gamma}\rangle\) be a canonical model over a formula \(\gamma\). Then, for every state \(A\in At(\Gamma)\) and every formula \(\varphi\in FL(\gamma)\colon\mathcal{M}_{c}^{\Gamma},A\Vdash\varphi\iff \varphi\in A\)._ Proof.: The proof proceeds by induction over the structure of \(\varphi\). * Induction basis: suppose \(\varphi\) is a proposition \(p\). Therefore, \(\mathcal{M}_{c}^{\Gamma},A\Vdash p\). From Definition 16, \(\mathcal{M}_{c}^{\Gamma}\)'s valuation function is \(V_{c}^{\Gamma}(p)=\{A\in At(\Gamma)\mid p\in A\}\). Therefore, \(p\in A\). * Induction Hypothesis: Suppose \(\varphi\) is a non atomic formula \(\psi\). Then, \(\mathcal{M}_{c}^{\Gamma},A\Vdash\psi\iff\psi\in A\), \(\psi\) a strict subformula of \(\varphi\). * Inductive step: Let us prove it holds for the following cases (we ommit propositional operators): Case \(\varphi=\langle t,(f,b)\rangle\phi\). 
Then, \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)\rangle\phi\iff\langle t,(f,b) \rangle\phi\in A\): \(\Rightarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)\rangle\phi\). From Definition 14, there is a state \(B\) where \(AS_{\Pi}^{\Gamma}B\) and \(\phi\in B\). By Lemma 7, \(\langle t,(f,b)\rangle\phi\in A\). Therefore, it holds. \(\Leftarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)\rangle\phi\). From Definition 16's valuation function \(V_{c}^{\Gamma}\) and Lemma 5, we have \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)\rangle\phi\). Therefore, for every \(B\) where \(AS_{\Pi}^{\Gamma}B,\mathcal{M}_{c}^{\Gamma},B\Vdash\neg\phi\). From the induction hypothesis, \(\phi\notin B\). Hence, From Lemma 7, \(\langle t,(f,b)\rangle\phi\notin A\). * Case \(\varphi=\langle t,(f,b)^{*}\rangle\phi\). Then, \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)^{*}\rangle\phi\iff\langle t,(f,b)^{*}\rangle\phi\in A\): \(\Rightarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)^{*}\rangle\phi\). From Definition 14, there is a state \(B\) where \(AS_{\Pi^{\Gamma}}^{\Gamma}B\) and \(\phi\in B\). By Lemma 7, \(\langle t,(f,b)^{*}\rangle\phi\in A\). Therefore, it holds. \(\Leftarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)^{*}\rangle\phi\). From Definition 16's valuation function \(V_{c}^{\Gamma}\) and Lemma 5, we have \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)^{*}\rangle\phi\). Therefore, for every \(B\) where \(AS_{\Pi^{\Gamma}}^{\Gamma}B,\mathcal{M}_{c}^{\Gamma},B\Vdash\neg\phi\). From the induction hypothesis, \(\phi\notin B\). Hence, From Lemma 7, \(\langle t,(f,b)^{*}\rangle\phi\notin A\). We proceed by formalizing the following lemma, which is bound to show that the properties that define \(\star\) for regular \(ReLo\) models also holds in \(ReLo\) canonical models. **Lemma 9**.: _Let \(A,B\in At(\Gamma)\) and \(\Pi\) a ReLo program. If \(AS_{\Pi^{\Gamma}}B\) then \(AS_{\Pi}^{\star}B\)_ Proof.: Suppose \(AS_{\Pi^{\Gamma}}B\). Define \(C=\{C^{\prime}\in At(\Gamma)\mid AS_{\Pi}^{\star}C\}\) as the set of all atoms \(C^{\prime}\) which \(A\) reaches by means of \(S_{\Pi^{\star}}\). We will show that \(B\in C\). Let \(C_{c}\) be the maximal consistent set obtained by means of Lemma 5, \(C_{c}=\{\bigwedge C_{1}\lor C_{2}\vee\ldots\bigwedge C_{n}\}\), where the conjunction of each \(C_{i}\) is consistent, and each \(C_{i}\) is a maximal consistent set. Also, define \(t=\delta_{c}^{\Gamma}(C_{c})\) as the canonical markup of \(C_{c}\). Note that \(C_{c}\wedge\langle t,(f,b)\rangle\neg C_{c}\) is inconsistent: if it was consistent, then for some \(D\in At(\Gamma)\) which \(A\) cannot reach, \(C_{c}\wedge\langle t,(f,b)\rangle\bigwedge D\) would be consistent, which leads to \(\bigwedge C_{1}\lor C_{2}\vee\cdots\lor C_{i}\vee\langle t,(f,b)\rangle\bigwedge D\) also being consistent, for some \(C_{i}\). By the definition of \(C_{c}\), this means that \(D\in C\) but that is not the case (because \(D\in C_{c}\) contradicts \(D\) not being reached from \(A\) and consequently \(C_{c}\)'s definition, as \(D\in C_{c}\) leads to D being reachable from \(A\)). Following a similar reasoning, \(\bigwedge A\wedge\langle t,(f,b)\rangle C_{c}\) is also inconsistent and therefore its negation, \(\bigwedge\neg(A\wedge\langle t,(f,b)\rangle C_{c})\) is consistent, which can be rewritten as \(\bigwedge A\rightarrow[t,(f,b)]C_{c}\). 
Because \(C_{c}\wedge\langle t,(f,b)\rangle\neg C_{c}\) is inconsistent, its negation \(\neg(C_{c}\wedge\langle t,(f,b)\rangle\neg C_{c})\) is valid, which can be rewritten to \(\vdash C_{c}\rightarrow[t,(f,b)]C_{c}\) (I). Therefore, by applying generalization we have \(\vdash[t,(f,b)^{\star}](C_{c}\rightarrow[t,(f,b)]C_{c})\). By axiom **(It)**, we derive \(\vdash[t,(f,b)]C_{c}\rightarrow[t,(f,b)^{\star}]C_{c}\) (II). By rewriting (II) in (I) we derive \(C_{c}\rightarrow[t,(f,b)^{\star}]C_{c}\). As \(\bigwedge A\rightarrow[t,(f,b)]C_{c}\) is valid, from (II) \(\bigwedge A\rightarrow[t,(f,b)^{\star}]C_{c}\) also is valid. From the hypothesis \(AS_{\pi^{\star}}B\) and \(C_{c}\)'s definition, \(\bigwedge A\wedge\langle t,(f,b)^{\star}\rangle B\) and \(\bigwedge B\wedge C_{c}\) are consistent (the latter from \(C_{c}\)'s definition). Then, there is a \(C_{i}\in C_{c}\) such that \(\bigwedge B\wedge\bigwedge C\) is consistent. But because each \(C_{i}\) is a maximal consistent set, it is the case that \(B=C_{i}\), which by the definition of \(C_{c}\) leads to \(AS_{\Pi}^{\star}B\). **Definition 17** (Proper Canonical Model).: The proper canonical model over a set of formulae \(\Gamma\) is defined as a tuple \(\langle At(\Gamma),\Pi,R_{\Pi}^{\Gamma},\delta_{\Gamma}^{\Gamma},\lambda_{c},V_{ \Pi}^{\Gamma}\rangle\) as follows: * \(At(\Gamma)\) as the set of atoms of \(\Gamma\); * \(\Pi\) as the \(ReLo\) program; * The relation \(R\) of a \(ReLo\) program \(\Pi\) is inductively defined as: * \(R_{\pi}=S_{\pi}\) for each canonical program \(\pi\); * \(\Pi=\pi_{1}\odot\pi_{2}\odot\cdots\odot\pi_{n}\) a \(ReLo\) program, \(R_{\Pi}\subseteq S\times S\) as follows: \(*\)\(R_{\pi_{i}}=\{uR_{\pi_{i}}v\mid f_{ReLo}(t,\pi_{i})\prec\delta(v)\}\), \(t\prec\delta(u)\) and \(\pi_{i}\) is any combination of any atomic programs which is a subprogram of \(\Pi\). * \(\delta_{\Pi}^{\Gamma}\) as the canonical markup function; * \(\lambda_{c}\colon At(\Gamma)\times\mathcal{N}\to\mathbb{R}\); * \(V_{c}^{\Gamma}(A,p)=\{A\in At(\Gamma)\mid p\in A\}\) as the canonical valuation introduced by Definition 16. **Lemma 10**.: _Every canonical model for \(\Pi\) has a corresponding proper canonical model: for all programs \(\Pi\), \(S_{\Pi}^{\Gamma}\subseteq R_{\Pi}^{\Gamma}\)_ Proof.: The proof proceeds by induction on \(\Pi\)'s length * For basic programs \(\pi\), it follows from Definition 17: * \(\Pi^{*}\): From Definition 8, \(R_{\pi^{*}}=R_{\pi^{*}}^{*}\). By the induction hypothesis, \(S_{\Pi}^{\Gamma}\subseteq R_{\Pi}^{\Gamma}\), also from the definition of RTC, we have that if \((S_{\Pi}^{\Gamma})\subseteq(R_{\Pi}^{\Gamma})\), then \((S_{\Pi}^{\Gamma})^{*}\subseteq(R_{\pi}^{\Gamma})^{*}\) (i). From Lemma 9, \(S_{\Pi^{*}}^{\Gamma}\subseteq(S_{\Pi}^{\Gamma})^{*}\), which leads to \((S_{\Pi}^{\Gamma})^{*}\subseteq(R_{\Pi}^{\Gamma})^{*}\) by (i). Finally, \((R_{\Pi}^{\Gamma})^{*}=(R_{\Pi^{*}}^{\Gamma})\). Hence, \((S_{\Pi^{*}}^{\Gamma})\subseteq(R_{\Pi^{*}}^{\Gamma})\) **Lemma 11** (Existence Lemma for Proper Canonical Models).: _Let \(A\in At(\Gamma)\) and \(\langle t,(f,b)\rangle\varphi\in FL(\Gamma)\). Then, \(\langle t,(f,b)\rangle\varphi\in A\leftrightarrow\) exists \(B\in At(\Gamma),AR_{\Pi}^{\Gamma}B,t\prec\delta_{c}^{\Gamma}(A)\) and \(\varphi\in B\)._ Proof.: \(\Rightarrow\) Let \(\langle t,(f,b)\rangle\varphi\in A\). From Lemma 7 (Existence Lemma for canonical models), there is an atom \(B\in At(\Gamma)\) where \(AS_{\Pi}^{\Gamma}B\), \(t\prec\delta_{c}^{\Gamma}(A)\) and \(\varphi\in B\). 
From Lemma 10, \(S_{\Pi}^{\Gamma}\subseteq R_{\Pi}^{\Gamma}\). Therefore, there is an atom \(B\in At(\Gamma)\) where \(AR_{\Pi}^{\Gamma}B\), \(t\prec\delta_{c}^{\Gamma}(A)\) and \(\varphi\in B\). \(\Leftarrow\) Let \(B\) an atom, \(B\in At(\Gamma),AR_{\Pi}B,t\prec\delta_{c}^{\Gamma}(A)\) and \(\varphi\in B\). The proof follows by induction on the program \(\Pi=(f,b)\) as follows: * a canonical program \(\pi_{i}\): this case is straightforward as from Definition 17, \(S_{\pi_{i}}=R_{\pi_{i}}\), and consequently \(AS_{\pi}B,t\prec\delta_{c}^{\Gamma}(A)\) and (i) \(\varphi\in B\). From Lemma 7 and (i), \(\langle t,(f,b)\rangle\varphi\in A\). * \(\Pi^{*}\): from Definition 17, \(R_{\Pi^{*}}=R_{\Pi}^{*}\). Then, let \(B\in At(\Gamma),AR_{\Pi^{*}}B,t\prec\delta_{c}^{\Gamma}(A)\) and \(\varphi\in B\). This means that there is a finite nondeterministic number \(n\) where \(AR_{\Pi^{*}}B=AR_{\Pi}A_{1}R_{\Pi}A_{2}\ldots R_{\Pi}A_{n}\), where \(A_{n}=B\). The proof proceeds by induction on \(n\): * \(n=1\): \(AR_{\Pi}B\) and \(\varphi\in B\). Therefore, from Lemma 7,\(\langle t,(f,b)\rangle\varphi\in A\). From axiom Rec, one may derive \(\Vdash\langle t,(f,b)\rangle\varphi\to\langle t,(f,b)\rangle\varphi\). By the definition of FL and \(A\)'s maximality (as it is an atom of \(\Gamma\)) \(\langle t,(f,b)^{*}\rangle\varphi\in A\). * \(n>1\): From the previous proof step and the induction hypothesis, \(\langle t,(f,b)^{*}\rangle\in A_{2}\) and \(\langle t,(f,b)\rangle\langle t,(f,b)^{*}\rangle\in A_{1}\). From axiom Rec, one can derive \(\Vdash\langle t,(f,b)\rangle\langle t,(f,b)^{*}\rangle\varphi\to\langle t,(f,b )^{*}\rangle\varphi\). By the definition of \(FL\), and \(A\)'s maximality (as it is an atom of \(\Gamma\)), \(\langle t,(f,b)^{*}\rangle\varphi\in A\). **Lemma 12** (Truth Lemma for Proper Canonical Models).: _Let \(\mathcal{M}_{c}^{\Gamma}=\langle At(\Gamma),\Pi,R_{\Pi}^{\Gamma},\delta_{\Pi}^{ \Gamma},\lambda_{c},V_{\Pi}^{\Gamma}\rangle\) a proper canonical model constructed over a formula \(\gamma\). For all atoms \(A\) and all \(\varphi\in FL(\gamma)\). \(\mathcal{M},A\Vdash\varphi\leftrightarrow\varphi\in A\)._ Proof.: The proof proceeds by induction over \(\varphi\). * Induction basis: \(\varphi\) is a proposition p. Therefore, \(\mathcal{M}_{c}^{\Gamma},A\Vdash p\) holds from Definition 17 as \(V_{c}^{\Gamma}(p)=\{A\in At(\Gamma)\mid p\in A\}\). * Induction hypothesis: suppose \(\varphi\) is a non atomic formula \(\psi\). Then, \(\mathcal{M},A\Vdash\varphi\iff\varphi\in A\), \(\psi\) a strict subformula of \(\varphi\). * Inductive step: let us prove it holds for the following cases (we show only for modal cases): * Case \(\varphi=\langle t,(f,b)\rangle\phi\). Then, \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)\rangle\phi\iff\langle t,(f,b) \rangle\phi\in A\): \(\Rightarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)\rangle\phi\). From Definition 14, there is an atom \(B\) where \(AS_{\Pi}^{\Gamma}B\) and \(\phi\in B\). By Lemma 11, \(\langle t,(f,b)\rangle\phi\in A\). Therefore, it holds. \(\Leftarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)\rangle\phi\). From Definition 16's valuation function \(V_{c}^{\Gamma}\) and Lemma 5, we have \(\mathcal{M}_{c}^{\Gamma},A\Vdash\neg\langle t,(f,b)\rangle\phi\). Therefore, for every \(B\) where \(AS_{\Pi}^{\Gamma}B,\mathcal{M}_{c}^{\Gamma}\Vdash\neg\phi\). From the induction hypothesis, \(\phi\notin B\). Hence, from Lemma 11\(\langle t,(f,b)\rangle\phi\notin A\). * Case \(\varphi=\langle t,(f,b)^{\star}\rangle\phi\). 
Then, \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)^{\star}\rangle\phi\iff \langle t,(f,b)^{\star}\rangle\phi\in A\): \(\Rightarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)^{\star}\rangle\phi\). From Definition 14, there is a state \(B\) where \(AS_{\Pi}^{\Gamma}B\) and \(\phi\in B\). By Lemma 7, \(\langle t,(f,b)^{\star}\rangle\phi\in A\). Therefore, it holds. \(\Leftarrow\) Let \(\mathcal{M}_{c}^{\Gamma},A\Vdash\langle t,(f,b)^{\star}\rangle\phi\). From Definition 16's valuation function \(V_{c}^{\Gamma}\) and Lemma 5, we have \(\mathcal{M}_{c}^{\Gamma},A\Vdash\neg\langle t,(f,b)^{\star}\rangle\phi\). Therefore, for every \(B\) where \(AS_{\Pi}^{\Gamma}B,\mathcal{M}_{c}^{\Gamma},B\Vdash\neg\phi\). From the induction hypothesis, \(\phi\notin B\). Hence, From Lemma 7, \(\langle t,(f,b)^{\star}\rangle\phi\notin A\). **Theorem 1** (Completeness of _ReLo_).: _Proof._ For every consistent formula \(A\), a canonical model \(\mathcal{M}\) can be constructed. From Lemma 5, there is an atom \(A^{\prime}\in At(A)\) with \(A\in A^{\prime}\), and from Lemma 12, \(\mathcal{M},A^{\prime}\Vdash A\). Therefore, _ReLo_'s modal system is complete with respect to the class of proper canonical models as Definition 17 proposes. ## 5 Conclusions and Further Work Reo is a widely used tool to model new systems out of the coordination of already existing pieces of software. It has been used in a variety of domains, drawing the attention of researchers from different locations around the world. This has resulted in Reo having many formal semantics proposed, each one employing different formalisms: operational, co-algebraic, and coloring semantics are some of the types of semantics proposed for Reo. This work extends _ReLo_, a dynamic logic to reason about Reo models. We have discussed its core definitions, syntax, semantic notion, providing soundness and completeness proofs for it. _ReLo_ naturally subsumes the notion of Reo programs and models in its syntax and semantics, and implementing its core concepts in Coq enables the usage of Coq's proof apparatus to reason over Reo models with _ReLo_. Future work may consider the integration of the current implementation of _ReLo_ with ReoXplore4, a platform conceived to reason about Reo models, and extensions to other Reo semantics. Investigations and the development of calculi for _ReLo_ are also considered for future work. Footnote 4: [https://github.com/frame-lab/ReoXplore2](https://github.com/frame-lab/ReoXplore2)
2301.02521
SAIDS: A Novel Approach for Sentiment Analysis Informed of Dialect and Sarcasm
Sentiment analysis becomes an essential part of every social network, as it enables decision-makers to know more about users' opinions in almost all life aspects. Despite its importance, there are multiple issues it encounters like the sentiment of the sarcastic text which is one of the main challenges of sentiment analysis. This paper tackles this challenge by introducing a novel system (SAIDS) that predicts the sentiment, sarcasm and dialect of Arabic tweets. SAIDS uses its prediction of sarcasm and dialect as known information to predict the sentiment. It uses MARBERT as a language model to generate sentence embedding, then passes it to the sarcasm and dialect models, and then the outputs of the three models are concatenated and passed to the sentiment analysis model. Multiple system design setups were experimented with and reported. SAIDS was applied to the ArSarcasm-v2 dataset where it outperforms the state-of-the-art model for the sentiment analysis task. By training all tasks together, SAIDS achieves results of 75.98 FPN, 59.09 F1-score and 71.13 F1-score for sentiment analysis, sarcasm detection, and dialect identification respectively. The system design can be used to enhance the performance of any task which is dependent on other tasks.
Abdelrahman Kaseb, Mona Farouk
2023-01-06T14:19:46Z
http://arxiv.org/abs/2301.02521v1
# SAIDS: A Novel Approach for ###### Abstract Sentiment analysis becomes an essential part of every social network, as it enables decision-makers to know more about users' opinions in almost all life aspects. Despite its importance, there are multiple issues it encounters like the sentiment of the sarcastic text which is one of the main challenges of sentiment analysis. This paper tackles this challenge by introducing a novel system (SAIDS) that predicts the sentiment, sarcasm and dialect of Arabic tweets. SAIDS uses its prediction of sarcasm and dialect as known information to predict the sentiment. It uses MARBERT as a language model to generate sentence embedding, then passes it to the sarcasm and dialect models, and then the outputs of the three models are concatenated and passed to the sentiment analysis model. Multiple system design setups were experimented with and reported. SAIDS was applied to the ArSarcasm-v2 dataset where it outperforms the state-of-the-art model for the sentiment analysis task. By training all tasks together, SAIDS achieves results of 75.98 FPN, 59.09 F1-score and 71.13 F1-score for sentiment analysis, sarcasm detection, and dialect identification respectively. The system design can be used to enhance the performance of any task which is dependent on other tasks. ## 1 Introduction Sentiment analysis (SA) is one of the main tasks in the natural language processing (NLP) field. It is used for opinion mining which supports decision-makers. Working on sentiment analysis starts relatively early, for example, Pang et al. (2002) analysed the sentiment to positive and negative in movie reviews. Following this paper, sentiment analysis becomes one of the most important topics in NLP, especially with the increasing number of reviews on websites and social media platforms. Since then, a lot of work has been done in English sentiment analysis, while Arabic has relatively much less. Since Abbasi et al. (2008) started their work on Arabic SA, multiple researchers also began theirs. Now there are well-known Arabic SA models like (Alayba et al., 2018; Abdulla et al., 2013; Abu Farha and Magdy, 2021; Elshakankery and Farouk, 2019). Of course, working with Arabic has many challenges, one of the most challenging issues is the complex morphology of the Arabic language (Kaseb and Farouk, 2016; Abdul-Mageed, 2019). Another challenge is the variety of Arabic dialects (Abdul-Mageed, 2019). Moreover, one of the well-known challenges in SA for all languages is sarcasm, as the sarcastic person uses words and means the opposite of it. For example, "I'd really truly love going out in this weather!", does it reflect a positive or negative sentiment? because of the sarcasm, we cannot judge the sentiment correctly. Several related works tackle English sarcasm detection with sentiment analysis (Oprea and Magdy, 2020; Abercrombie and Hovy, 2016; Barbieri et al., 2014). On the other hand, there are only a few works on both sentiment and sarcasm in Arabic. There are two shared tasks on sarcasm detection (Ghanem et al., 2019), but for both sarcasm and sentiment there was only one shared task Abu Farha et al. (2021) but each sub-task is independent, meaning that participating teams can submit a different model for each task. Some participants used the same model for both sentiment and sarcasm (El Mahdaouy et al., 2021). Instead of training sentiment independently of sarcasm, this work introduces a new model architecture that works with multi-task training which trains both at the same time. 
There are other additions to the proposed architecture; firstly, it also trains with the dialect identification task. Secondly, the sarcasm and dialect that are initially predicted are used in the prediction of the sentiment. In other words, the sentiment model is informed by the sarcasm and dialect model outputs. The contributions offered by this work are: * Design a novel model architecture that can be used for a complicated task that is dependent on another task, e.g. sentiment analysis which is dependent on sarcasm detection. * Investigate the design setups for the new architecture and find the best setup that could be used. * Train the model on the ArSarcasm-v2 dataset and achieve state-of-the-art results of 75.98 FPN on sentiment analysis. This paper is organized as follows. Section 2 shows the related work on sentiment analysis, sarcasm detection, and dialect identification. Section 3 describes the dataset used in this work and shows data statistics. Section 4 describes the SAIDS model and all the design setups. Section 5 shows the experimental results and finally Section 6 concludes the work. ## 2 Related Work SAIDS works on three tasks: sentiment analysis, sarcasm detection, and dialect identification. In this section, the existing methods for each task are discussed. ### Sentiment Analysis Arabic sentiment analysis started with the work of Abbasi et al. (2008). Since then, it has been developed by multiple researchers. In the beginning, the main focus was on modern standard Arabic (MSA), but over time researchers started to focus on dialectal Arabic Mourad and Darwish (2013); Kaseb and Farouk (2021). Regarding the datasets, based on Alyafeai et al. (2021), there are more than fifty datasets for sentiment analysis, including the Elshakankery et al. (2021); Kaseb and Farouk (2019); Kiritchenko et al. (2016); Rosenthal et al. (2017); Elmadany et al. (2018) datasets. Because of the massive number of datasets, there is a massive number of system approaches for Arabic sentiment analysis Abu Farha and Magdy (2019); Alayba et al. (2018); El-Beltagy et al. (2017). Based on the comparative study of Abu Farha and Magdy (2021), word embeddings with deep learning models outperform the classical machine learning models, and transformer-based models outperform both of them. There is a reasonable number of Arabic transformer-based models, like AraBERT Antoun et al. (2020) and MARBERT Abdul-Mageed et al. (2021), which are used by most Arabic sentiment analysis papers. ### Sarcasm Detection Unlike Arabic sentiment analysis, Arabic sarcasm detection has not gotten much attention yet. Only a few research works tackle the problem, and there is still an obvious shortage of Arabic sarcasm datasets, like Karoui et al. (2017); Abu Farha et al. (2022). Abbes et al. (2020) collected a dataset of sarcastic tweets; they used hashtags, for example #sarcasm, to collect the dataset. Then, they built multiple classical machine learning models (SVM, Naive Bayes, and Logistic Regression); the best F1-score was 0.73. After that, Ghanem et al. (2019) organized a shared task in a workshop on Arabic sarcasm detection. They built the dataset by collecting tweets on different topics and using hashtags to set the class. An additional step was added, by sampling some of the data and manually annotating it. In this shared task, eighteen teams were working on sarcasm detection. Khalifa and Hussein (2019) was the first team and achieved a 0.85 F1-score. Then Abu Farha et al.
(2021) organized two tasks based on the ArSarcasm-v2 dataset: sentiment analysis and sarcasm detection. They had 27 teams participating in the workshop; the top teams achieved a 62.25 F1-score and 74.80 FPN for sarcasm detection and sentiment analysis respectively. ### Dialect Identification Arabic dialect identification is an NLP task to identify the dialect of a written text. It can be addressed on three levels. The first level is to distinguish MSA, classical Arabic (CA), and dialectal Arabic McWhorter (2004). The second level is to identify the dialect based on five main Arabic dialect groups: EGY, LEV, NOR, Gulf, and MSA El-Haj (2020); Khalifa et al. (2016); Sadat et al. (2014); Al-Sabbagh and Girju (2012); Egan (2010). The third level is to identify the country-level dialect Abdul-Mageed et al. (2020). Regarding the datasets, there are more than twenty Arabic datasets labeled with dialect. One of the most popular datasets is MADAR Bouamor et al. (2018), where the data is labeled at the city level for 25 Arab cities. Abdul-Mageed et al. (2020) built a shared task to detect the dialect; they published three different shared tasks. In the 2020 task, sixty teams participated, and the best results were 26.78 and 6.39 F1-score at the country level and the city level respectively. ## 3 Dataset ArSarcasm-v2 Abu Farha et al. (2021) is the main dataset used in this work; it was released in the WANLP 2021 shared task for two tasks: sarcasm detection and sentiment analysis. It has about 15k tweets and is divided into 12k for training and 3k for testing; the same test set as released on WANLP 2021 was used. Each tweet was labelled for sentiment (positive (POS), neutral (NEU), and negative (NEG)), sarcasm (true and false), and dialect (MSA, Egypt (EGY), Levantine (LEV), Maghreb (NOR), and Gulf). The authors of the dataset annotated it using a crowd-sourcing platform. This dataset originally consisted of a combination of two datasets: the first one is ArSarcasm Abu Farha and Magdy (2020) and the second one is DAICT Abbes et al. (2020); Abu Farha et al. (2021) merged the two datasets. ### Dataset Statistics In this subsection, we introduce some dataset statistics that motivated us to work on SAIDS. The ArSarcasm-v2 dataset has 15,548 tweets; 3,000 tweets are kept for testing and the rest of the tweets for training. Table 1 shows the number of examples for all task labels on the training set. As we can see, most of the data is labeled as MSA and non-sarcastic for dialect and sarcasm respectively. The relationship between sentiment labels and both sarcasm and dialect independently can be seen in Table 2. For the sentiment/sarcasm part, we can see that about 90 percent of sarcastic tweets are sentimentally labeled as negative, and about 50 percent of non-sarcastic tweets are sentimentally labeled as neutral. On the other hand, for the sentiment/dialect part, we can see that about 50 percent of MSA tweets are sentimentally labeled as neutral and about 50 percent of EGY tweets are sentimentally labeled as negative. From this table, we can conclude that the information we can get on sarcasm and dialect will benefit the sentiment analysis task. Table 3 shows the percentage of sarcastic tweets for each dialect. As the number of NOR tweets is limited, its percentage is not reliable; apart from NOR, we can see that Egyptian tweets are the most sarcastic. This supports the observation from Table 2 that most EGY tweets are negative and most of the sarcastic tweets are negative tweets.
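The statistics above are straightforward to reproduce; below is a small pandas sketch, assuming the ArSarcasm-v2 training split is available as a CSV with `sentiment`, `sarcasm`, and `dialect` columns (the file name and column names are our assumptions, not the dataset's documented format).

```python
import pandas as pd

# Assumed file/column names; adjust them to the actual ArSarcasm-v2 release.
train = pd.read_csv("arsarcasm_v2_train.csv")

# Label counts per task (cf. Table 1).
for col in ["sentiment", "sarcasm", "dialect"]:
    print(train[col].value_counts(), "\n")

# Cross tabulations of sentiment against sarcasm and dialect (cf. Table 2).
print(pd.crosstab(train["sarcasm"], train["sentiment"]))
print(pd.crosstab(train["dialect"], train["sentiment"]))

# Percentage of sarcastic tweets per dialect (cf. Table 3),
# assuming the sarcasm column holds booleans or 0/1 values.
print(train.groupby("dialect")["sarcasm"].mean() * 100)
```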
## 4 Proposed System This section presents a detailed description of the proposed system. SAIDS learns sentiment analysis, sarcasm detection, and dialect identification at the same time (multi-task training); in addition, it uses the sarcasm detection and dialect outputs as additional inputs to the sentiment analysis model, which we call the "informed decision". SAIDS decides the sentiment class using the information of the sarcasm and dialect classes, which are themselves outputs of the model. The main idea behind SAIDS is based on the analysis of the dataset statistics, as shown in Section 3, which shows that most sarcastic tweets are classified as negative and most MSA tweets are classified as neutral. \begin{table} \begin{tabular}{l l|c} \hline **Task** & **Label** & **Count** \\ \hline \hline **Sentiment** & Positive & 2,180 \\ & Neutral & 5,747 \\ & Negative & 4,621 \\ \hline \hline **Sarcasm** & Sarcastic & 2,168 \\ & Non-sarcastic & 10,380 \\ \hline \hline **Dialect** & MSA & 8,562 \\ & EGY & 2,675 \\ & Gulf & 644 \\ & LEV & 624 \\ & NOR & 43 \\ \hline \hline **Total** & & 12,548 \\ \hline \end{tabular} \end{table} Table 1: Number of labels of sentiment, sarcasm and dialect on the training set \begin{table} \begin{tabular}{l|c c c} \hline & **POS** & **NEU** & **NEG** \\ \hline \hline **Non-sarcastic** & 2,122 & 5,576 & 2,682 \\ **Sarcastic** & 58 & 171 & 1,939 \\ \hline \hline **MSA** & 1,405 & 4,486 & 2,671 \\ **EGY** & 506 & 793 & 1,376 \\ **Gulf** & 121 & 259 & 264 \\ **LEV** & 142 & 197 & 285 \\ **NOR** & 6 & 12 & 25 \\ \hline \end{tabular} \end{table} Table 2: Cross tabulation between sentiment labels and both sarcasm and dialect labels on the training set \begin{table} \begin{tabular}{l|c} \hline **Dialect** & **Sarcasm percentage** \\ \hline \hline **MSA** & 10.83 \% \\ **EGY** & 34.77 \% \\ **Gulf** & 24.38 \% \\ **LEV** & 22.12 \% \\ **NOR** & 34.88 \% \\ \hline \end{tabular} \end{table} Table 3: Percentage of sarcastic tweets for each dialect on the training set ### System Architecture Figure 1 shows the SAIDS architecture. The architecture consists of four main modules. The first module is MARBERTv2 Abdul-Mageed et al. (2021), a transformer-based model; its input is the tweet, and its output is a sentence embedding, which is a vector of length 768. The second module is the "Sarcasm Model", a binary classifier for sarcasm; its input is the sentence embedding, and its output is two values, one for the sarcastic class and another for the non-sarcastic class. The third module is the "Dialect Model", which is identical to the "Sarcasm Model" except that it outputs five classes (EGY, LEV, NOR, Gulf, and MSA). The fourth module is the "Sentiment Model", a classifier for sentiment; its input is the concatenation of the sentence embedding, the sarcasm model outputs, and the dialect model outputs. The loss function used is Cross-Entropy for sentiment and dialect; since sarcasm is binary, we used binary Cross-Entropy for it. ### Training Setups This subsection describes the multiple setups, covering both the architecture and the training strategies, that were explored to arrive at the best model performance. **Modules Architecture** Multiple architectures were tested for the "Sentiment Model", "Sarcasm Model" and "Dialect Model". As a proof of concept for the idea, we first built a simple random forest model for each task model (random forest version). For the real scenario, we used multi-layer neural network (MNN) models; a minimal code sketch of the overall wiring is given below, before the MNN variants are enumerated. 
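The following is a minimal PyTorch sketch of the informed-decision wiring described in Section 4.1. It is an illustration under assumptions rather than the authors' released code: the MARBERTv2 checkpoint name, the use of the [CLS] token as the sentence embedding, and the `detach()` call implementing the "full limit" backpropagation variant (Section 4.2) are our assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class SAIDSSketch(nn.Module):
    def __init__(self, encoder_name="UBC-NLP/MARBERTv2", hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.sarcasm_head = nn.Linear(hidden, 2)            # sarcastic / non-sarcastic
        self.dialect_head = nn.Linear(hidden, 5)            # MSA, EGY, LEV, NOR, Gulf
        # The sentiment head sees the sentence embedding plus both predictions.
        self.sentiment_head = nn.Linear(hidden + 2 + 5, 3)  # POS / NEU / NEG

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        emb = out.last_hidden_state[:, 0]                   # [CLS] embedding (length 768)
        sarcasm_logits = self.sarcasm_head(emb)
        dialect_logits = self.dialect_head(emb)
        # Softmax turns the auxiliary logits into probabilities before they inform
        # the sentiment head; detach() blocks the sentiment loss from flowing back
        # into the auxiliary heads ("full limit" backpropagation).
        sarcasm_p = torch.softmax(sarcasm_logits, dim=-1).detach()
        dialect_p = torch.softmax(dialect_logits, dim=-1).detach()
        sentiment_logits = self.sentiment_head(
            torch.cat([emb, sarcasm_p, dialect_p], dim=-1))
        return sentiment_logits, sarcasm_logits, dialect_logits
```

A joint training step would then combine cross-entropy losses on the sentiment and dialect logits with the binary cross-entropy loss the paper uses for sarcasm.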
The first and simplest setup is a single output layer with zero hidden layers. The second uses one or two hidden layers followed by the output layer. In the third, with one or two hidden layers, the output of the module is the output of the last hidden layer, which means that the "Sentiment Model" input is not the output layer of the "Sarcasm Model" but its last hidden layer. The fourth setup concatenates the last hidden layer with the output layer and then passes it to the "Sentiment Model". **What Should Be Informed** The SAIDS architecture in Figure 1 shows that the "Sentiment Model" inputs are the "Sarcasm Model" and "Dialect Model" outputs, but we experimented with multiple settings in this part: sentiment analysis informed of sarcasm only, of dialect only, and of both sarcasm and dialect. **Limited Backpropagation** We limited the backpropagation over the dotted lines in Figure 1. This is used to ensure that the "Sarcasm Model" and the "Dialect Model" learn their main targets correctly. When the model predicts sentiment incorrectly, its loss propagates directly to the MARBERTv2 model via the solid line and does not propagate via the dotted lines. We also evaluate SAIDS without limiting backpropagation, which means the loss propagates everywhere, and with partial limiting. Partial limiting can only be set when the "Sarcasm Model" has hidden layers; we then limit the backpropagation through the sarcasm model's output layer but propagate it through the hidden layers. **Activation Function** The experiments were carried out with Softmax as the activation function for the output of all modules. However, for the sake of comparison, we also ran the training without Softmax on the module outputs, which means that the values are not constrained between zero and one. **Task By Task Training** As we train all three tasks together with the same model, we experimented with training the first-layer models, the "Sarcasm Model" and "Dialect Model", for some epochs first, and then training the full system together for multiple epochs. The motivation behind this idea is that as long as the first-layer models work correctly, the sentiment analysis will correspondingly work correctly. We train in multiple orders, for example alternating between the first-layer models and the full system. Figure 1: SAIDS architecture **Other Training Parameters** In our experiments, we built SAIDS and used the MARBERTv2 model provided by HuggingFace's transformers library Wolf et al. (2020). Most of the experiments trained for five epochs, except for the low learning rate runs, where it was twenty epochs. For the learning rate, we used a range from \(1e^{-4}\) to \(1e^{-6}\). The sequence was truncated to a maximum length of 128 tokens. Adam Kingma and Ba (2015) was used as the optimizer for all models. ## 5 Results In this section, the results achieved with SAIDS are discussed. For the sake of comparison, baselines were built for the system. To initially evaluate the idea itself, a random forest baseline was built and compared with the random forest version of SAIDS. The baselines for the real scenario are: baseline one (B1), which is identical to the BERTModelForSequenceClassification class in HuggingFace's transformers Wolf et al. (2020) and takes the MARBERTv2 sentence embedding and passes it to the output layer for classification; baseline two (B2), which uses two hidden layers before the classification layer, with a hidden layer size equal to the "Sentiment Model" hidden layer size; and baseline three (B3), which uses a larger hidden layer size to match the total number of trained parameters of the SAIDS model. For evaluation, we used the original metrics described for the dataset Abu Farha et al. (2021). For sentiment analysis, the metric is the average of the F1-score for the negative and positive classes (FPN). For sarcasm detection, the metric is the F1-score for the sarcastic class only (FSar). For dialect identification, we used the weighted average of the F1-score over all dialects (WFS). 
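As a point of reference, these three metrics can be computed with scikit-learn roughly as follows; this is a sketch, and the label strings and the `y_true_*`/`y_pred_*` arrays are assumptions that should match the dataset's actual label values:

```python
from sklearn.metrics import f1_score

# FPN: unweighted mean of the F1-scores of the positive and negative classes only.
fpn = f1_score(y_true_sent, y_pred_sent, labels=["POS", "NEG"], average="macro")

# FSar: F1-score of the sarcastic class only.
fsar = f1_score(y_true_sarc, y_pred_sarc, labels=["sarcastic"], average="macro")

# WFS: weighted average of the per-dialect F1-scores.
wfs = f1_score(y_true_dial, y_pred_dial, average="weighted")
```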
### Results of Different Training Setups This subsection presents the results of the training setups and describes the best setup that was chosen for the proposed model. For each part of this subsection, every other setup was held fixed to make the comparison fair. **Modules Architecture** As a proof of concept for our system, the random forest (RF) baseline was compared with the informed random forest (IRF), which is the random forest version of SAIDS. Table 4 shows that IRF outperforms RF, with the FPN improved by 3 percent, which is due to the proposed architecture. The information gained from the new inputs, the outputs of the sarcasm model and the outputs of the dialect model, was 5 and 4 percent respectively. This means that about 10 percent of the sentiment analysis decision came from the newly added information. For the MNN architecture of the modules, multiple numbers of hidden layers were trained. In each experiment, all the modules have the same number of hidden layers. Table 5 shows that using zero hidden layers gives the best results, so the no-hidden-layer setup was used in SAIDS. **What Should Be Informed** Experiments were also done to find the best features to use while analysing sentiment. Table 6 shows that using both dialect and sarcasm is better than using only one of them, and of course better than not using any of them, which is the baseline. We found that the dialect benefits the sentiment more than the sarcasm; this is intuitive for MSA tweets, because most of them are labeled as neutral for sentiment. Accordingly, both sarcasm and dialect information were used in SAIDS. \begin{table} \begin{tabular}{l|c} \hline \hline **Model** & **FPN** \\ \hline \hline **Not Informed (B1)** & 72.40 \\ **Informed of sarcasm** & 73.67 \\ **Informed of dialect** & 74.41 \\ **Informed of sarcasm and dialect** & 75.23 \\ \hline \hline \end{tabular} \end{table} Table 6: Performance comparison for what should be informed on the validation set \begin{table} \begin{tabular}{l|c} \hline \hline **Model** & **FPN** \\ \hline \hline **Random Forest** & 59.36 \\ **Informed Random Forest** & 62.34 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison for the proof of concept on the validation set \begin{table} \begin{tabular}{l|c} \hline \hline **Model** & **FPN** \\ \hline \hline **0 Hidden Layer** & 75.23 \\ **1 Hidden Layer** & 74.90 \\ **2 Hidden Layer** & 74.89 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance comparison for the number of hidden layers in modules on the validation set **Limited Backpropagation** Experiments were also done to find the best path for backpropagation. "Full limit" is when the loss does not propagate through the "Sarcasm Model" and "Dialect Model", "Partial limit" is when it propagates through some layers, and "Unlimited" is when it propagates through all layers. The model was composed of two hidden layers while running these experiments. 
Table 7 shows that "Partial limit" gets better results than the others, but we did not use it in SAIDS because we adopted the no-hidden-layer setup; we therefore used "Full limit" backpropagation. **Activation Function** For the sake of comparison, the Softmax layer was removed from the output layer of the model in these experiments. Table 8 compares both setups and shows that, as expected, using Softmax is better than not using it, as it quantifies the probability of a tweet being sarcastic or belonging to a certain dialect. Therefore, in SAIDS, Softmax was used on each module. **Task By Task Training** Experiments were also done with training the three tasks together at the same time (All tasks), and with multiple training sequences. The first is one epoch of training for sarcasm and dialect, and the rest for the full system (Seq 1). The second is odd epochs for sarcasm and dialect and even epochs for the full system (Seq 2). The third is two epochs of training for sarcasm and dialect and the rest for sentiment only (Seq 3). Table 9 shows that Seq 1 performs better than the other sequences, so we used it for the final model training. **Summary of Used Setups** SAIDS uses information from the sarcasm and dialect models, which are both a single classification layer with no hidden layers; the sentiment loss does not propagate through the sarcasm and dialect models, and the Softmax activation function is used on each model output. The training sequence used is one epoch of training for sarcasm and dialect, and the remaining epochs for the full system. ### Results comparison with literature SAIDS was trained and compared to the baselines we built as well as to the state-of-the-art models. Table 10 shows that SAIDS outperforms the existing state-of-the-art models on the sentiment analysis task. SAIDS's main task is sentiment analysis; sarcasm detection and dialect identification are considered secondary outputs. Although the FSar score for SAIDS is considerably high, it ranks third among the state-of-the-art models. On the other hand, most works that achieve state-of-the-art results use a different model for each task, whereas in the proposed architecture one model is used for all three. The model also outputs the dialect, achieving 71.13 percent on the weighted F1-score metric, but the literature has not reported dialect performance, so it is not included in the table. ## 6 Conclusion Sentiment analysis is an important tool that is used extensively in decision-making, though it has drawbacks such as dealing with sarcastic sentences. In this work, we propose SAIDS, a novel model architecture to tackle this problem. SAIDS improves the sentiment analysis results by being informed of the sarcasm and dialect of the sentence. 
This was achieved by training on the ArSarcasm-v2 dataset, which is labeled for sentiment, sarcasm, and dialect. SAIDS's main target is to predict the sentiment of a tweet. It is trained to predict dialect and sarcasm, and then to make use of them to predict the sentiment of the tweet. This means that while the model is predicting the sentiment, it is informed by its own sarcasm and dialect predictions. SAIDS achieved state-of-the-art performance on the ArSarcasm-v2 dataset for predicting the sentiment: a 75.98 percent average F1-score for negative and positive sentiment. For sarcasm detection, SAIDS achieved a 59.09 percent F1-score for the sarcastic class, whereas for dialect identification it achieved a 71.13 percent weighted F1-score over all dialects. We believe that this model architecture could be used as a starting point to tackle other challenges in sentiment analysis. Beyond sentiment analysis, this is a general architecture that can be used in any context where the prediction of one task depends on other tasks. The idea behind the architecture is intuitive: train for all tasks and inform the model of the dependent task with the outputs of the independent tasks. \begin{table} \begin{tabular}{l|l} \hline \hline **Model** & **FPN** \\ \hline \hline **Full limit** & 74.23 \\ **Partial limit** & 74.89 \\ **Unlimited** & 72.31 \\ \hline \hline \end{tabular} \end{table} Table 7: Performance comparison for limiting backpropagation on the validation set \begin{table} \begin{tabular}{l|l} \hline \hline **Model** & **FPN** \\ \hline \hline **With Softmax** & 75.23 \\ **Without Softmax** & 72.15 \\ \hline \hline \end{tabular} \end{table} Table 8: Performance comparison for the activation function setting on the validation set \begin{table} \begin{tabular}{l|l} \hline \hline **Model** & **FPN** \\ \hline \hline **All tasks** & 74.35 \\ **Seq 1** & 75.23 \\ **Seq 2** & 73.49 \\ **Seq 3** & 73.01 \\ \hline \hline \end{tabular} \end{table} Table 9: Performance comparison for different model training sequences on the validation set
2304.07287
Optimizing the Resolution of Hydrodynamic Simulations for MCRaT Radiative Transfer Calculations
Despite their discovery about half a century ago, the Gamma-ray burst (GRB) prompt emission mechanism is still not well understood. Theoretical modeling of the prompt emission has advanced considerably due to new computational tools and techniques. One such tool is the PLUTO hydrodynamics code, which is used to numerically simulate GRB outflows. PLUTO uses Adaptive Mesh Refinement to focus computational efforts on the portion of the grid that contains the simulated jet. Another tool is the Monte Carlo Radiation Transfer (MCRaT) code, which predicts electromagnetic signatures of GRBs by conducting photon scatterings within a jet using PLUTO. The effects of the underlying resolution of a PLUTO simulation with respect to MCRaT post-processing radiative transfer results have not yet been quantified. We analyze an analytic spherical outflow and a hydrodynamically simulated GRB jet with MCRaT at varying spatial and temporal resolutions and quantify how decreasing both resolutions affect the resulting mock observations. We find that changing the spatial resolution changes the hydrodynamic properties of the jet, which directly affect the MCRaT mock observable peak energies. We also find that decreasing the temporal resolution artificially decreases the high energy slope of the mock observed spectrum, which increases both the spectral peak energy and the luminosity. We show that the effects are additive when both spatial and temporal resolutions are modified. Our results allow us to understand how decreased hydrodynamic temporal and spatial resolutions affect the results of post-processing radiative transfer calculations, allowing for the optimization of hydrodynamic simulations for radiative transfer codes.
Jose Arita-Escalante, Tyler Parsotan, S. Bradley Cenko
2023-04-14T17:57:41Z
http://arxiv.org/abs/2304.07287v2
# Optimizing the Resolution of Hydrodynamic Simulations for MCRaT Radiative Transfer Calculations ###### Abstract Despite their discovery about half a century ago, the Gamma-ray burst (GRB) prompt emission mechanism is still not well understood. Theoretical modeling of the prompt emission has advanced considerably due to new computational tools and techniques. One such tool is the PLUTO hydrodynamics code, which is used to numerically simulate GRB outflows. PLUTO uses Adaptive Mesh Refinement to focus computational efforts on the portion of the grid that contains the simulated jet. Another tool is the Monte Carlo Radiation Transfer (MCRaT) code, which predicts electromagnetic signatures of GRBs by conducting photon scatterings within a jet using PLUTO. The effects of the underlying resolution of a PLUTO simulation with respect to MCRaT post-processing radiative transfer results have not yet been quantified. We analyze an analytic spherical outflow and a hydrodynamically simulated GRB jet with MCRaT at varying spatial and temporal resolutions and quantify how decreasing both resolutions affect the resulting mock observations. We find that changing the spatial resolution changes the hydrodynamic properties of the jet, which directly affect the MCRaT mock observable peak energies. We also find that decreasing the temporal resolution artificially decreases the high energy slope of the mock observed spectrum, which increases both the spectral peak energy and the luminosity. We show that the effects are additive when both spatial and temporal resolutions are modified. Our results allow us to understand how decreased hydrodynamic temporal and spatial resolutions affect the results of post-processing radiative transfer calculations, allowing for the optimization of hydrodynamic simulations for radiative transfer codes. 0000-0002-4880-7880]Jose Arita-Escalante 0000-0002-4882-7880]Tyler Parsotan 0000-0002-1888-7880]S. Bradley Cenko ## 1 Introduction A number of different theories have been created to understand the phenomena of Gamma-ray bursts (GRBs) since their initial discovery in the 1960's (Klebesadel et al., 1973). One of the earliest models used to explain GRB prompt emission was the Synchrotron Shock Model (SSM)(Rees & Meszaros, 1994), which considers radiation generated when shells with different Lorentz factors collide with each other outside of the photospheric region (Daigne et al., 2011). The collisions of these shells create perturbations of the magnetic fields that lead to the excitation of leptons which then emit synchroton radiation. The SSM can naturally explain GRB properties such as lightcurve variability and the observed nonthermal spectra. Nevertheless, it fails to agree with observed correlations of GRBs such as the Amati and Yonetoku relations (Amati et al., 2002; Yonetoku et al., 2004; Zhang & Yan, 2011). Another model explaining the prompt emission mechanism is the photospheric model, which explains the phenomenon by describing thermal radiation that originates deep within a relativistic jet (Rees & Meszaros, 2005). The radiation is initially in a part of the jet with a high optical depth, leading to many interactions between the photons and the matter in the jet. As the jet expands, it becomes optically thin, allowing photons to leave the jet's photosphere and travel to the observer without additional interactions with the GRB jet. 
The photospheric model is able to reproduce correlations that the SSM cannot, but is unable to replicate non-thermal spectral low and high-energy tails without the consideration of the photospheric region (Beloborodov 2010; Pe'er, 2008; Pe'er & Ryde, 2011) and subphotospheric dissipation events (Chhotray & Lazzati, 2015). With the aid of computational tools, we have been better able to understand the physics of GRBs. Previous studies have conducted rigorous radiative transfer calculations, however; they have assumed that the jet structure has been simplified into an analytic profile (Ito et al., 2013, 2014; Vurm & Beloborodov, 2016). In contrast to radiative transfer calculations, other studies have utilized hydrodynamic (HD) calculations to simulate complex jet structures, but these only provide information about the matter within the jet (Lazzati et al., 2009, 2013; Lopez-Camara et al., 2014), which leads to a lack of information regarding the evolution of the radiation. The state of the art method to account for both of these assumptions is to perform post-processing radiative transfer calculations on a hydrodynamic (HD) simulated jet using Monte Carlo methods. There have been tools developed to perform post-processing radiative transfer calculations such as the ones developed by Ito et al. (2015, 2019) and the Monte Carlo Radiation Transfer (MCRaT) code (Lazzati, 2016; Parsotan & Lazzati, 2018; Parsotan et al., 2018; Parsotan & Lazzati, 2021; Parsotan & Lazzati, 2022). MCRaT was developed to conduct radiative transfer calculations on HD simulations to generate mock observations of simulated GRBs using the photospheric model. The impact that HD simulation resolutions have on MCRaT post-processing radiative transfer calculations has not been studied yet. Ensuring that radiative transfer calculations are converged and accurate is critical to testing GRB prompt emission theories against observations. Here, we present an analysis of HD resolution and its effect on post-processing radiative transfer calculations for simulated GRB mock observables. Section 2 outlines the code used to create the HD jet, the code used to perform the radiative transfer calculations, and the way in which resolutions are quantified. Sections 3 and 4 show the effect that HD simulation resolutions have on radiative transfer calculations and the physical implications of these results. ## 2 Methods In this section, we outline the methods to our analysis. In Section 2.1, we discuss the codes used in our analysis. In Section 2.2, we quantify convergences in our simulations. ### Codes Used Here, we discuss the codes used in our study. Section 2.1.1 highlights the tools used to create the numerical HD GRB simulation. In Section 2.1.2, we discuss the tools used to conduct post-processing radiative transfer calculations and analyze the results to generate mock observables for simulated GRBs. #### 2.1.1 Pluto PLUTO is a numerical solver for systems of partial differential equations in the context of astrophysical fluid dynamics (Mignone et al., 2007). PLUTO uses CHOMBO Adaptive Mesh Refinement (AMR) to focus computational efforts on the most relevant parts of the HD simulation (Mignone et al., 2012). Here, we used the PLUTO hydrodynamics code with AMR to simulate the propagation of a long gamma-ray burst (LGRB) jet from a 16TI stellar progenitor, taken from Woosley & Heger (2006). The stellar progenitor profile was interpolated onto the PLUTO grid using the code's capability to do such operations. 
Following the prescription provided by Lazzati et al. (2013), we inject the jet with a constant luminosity of \(5.33\times 10^{50}\) erg s\({}^{-1}\) for 100 s from an injection radius of \(1\times 10^{9}\) cm, with an initial Lorentz factor of 5, an opening angle \(\theta_{0}=10^{\circ}\), and an internal over rest-mass energy ratio, \(\eta=80\). The simulation domain in PLUTO is logarithmic in radius from \(1\times 10^{9}\) cm to \(5.6\times 10^{14}\) cm although the simulation is only carried out until the jet head reaches \(\sim 2\times 10^{13}\) cm. It also extends in polar angle from \(0^{\circ}\) to \(90^{\circ}\). The AMR refinement is set such that the jet is followed with a resolution of at least \(1\times 10^{9}\) cm along the jet axis. The state of the jet is saved with a frame rate of 5 frames per second. This refinement and the convergence of the hydrodynamic properties of this simulation at the initial moment of photon injection can be seen in Figure 1. We select a shell of grid cells around a radius of \(1.3\times 10^{12}\) cm at 50 s in the simulation and show the spatial resolution that is achieved at each refinement level at that time. We also show the convergence of the bulk Lorentz factor, \(\Gamma\), density, \(\rho\), and temperature, \(T^{1}\), as we traverse the different refinement levels. This convergent behavior is present throughout the whole simulation, and can be seen in Figure 1, in which we show the HD properties of a cell located at \(1.5\times 10^{13}\) cm at 527.6 s in the simulation, which is the last frame of our simulation. #### 2.1.2 MCRaT and ProcessMCRaT The MCRaT2 code conducts radiative transfer calculations to compute the electromagnetic (EM) signature of HD simulated GRB jets. MCRaT reads in HD simu lations of GRB jets and performs Compton scatterings between the injected photons and matter in the jet. MCRaT can run two different radiative transfer calculations. The first one is based off of reading in an HD numerical simulation of a GRB jet. In our study, we use the PLUTO 16TI simulation mentioned in Section 2.1.1. The outflow given by this numerical simulation introduces numerically induced errors in the HD properties of the grid, so the effect of HD resolution on the post-processing radiative transfer calculations can be assessed. The other type of radiative transfer calculation MCRaT is capable of running is one of an analytic spherical outflow. In the spherical outflow case, MCRaT takes the HD simulation files and overwrites the HD properties with those of an analytic outflow with only outward radial velocity components. This analytic outflow is a function of cell radius in which the spherical outflow is accelerated until an asymptotic Lorentz factor is reached. By using an analytic spherical outflow, we can understand how just the HD resolution has an effect on MCRaT mock observables. We set up our spherical outflow to have an asymptotic Lorentz factor \(\Gamma_{\infty}=100\), luminosity \(L=10^{54}\) erg s\({}^{-1}\) and saturation radius \(r_{0}=10^{8}\) cm. Spatial resolution for the HD simulation takes the form of various AMR refinement levels. Since the PLUTO AMR simulation that we conducted dynamically changes the number of refinement levels in order to maintain a resolution element size of \(\sim 10^{9}\) cm, MCRaT reads in the \(n\)th highest refinement level at any given frame. 
Thus, level 5 is the highest refinement level at any time in the HD simulation, level 4 is the second highest level, and so on down to level 1, which is the lowest refinement level. As outlined in Section 2.1.1, the PLUTO simulation we use has a framerate of 5 frames per second (fps). For the context of our analysis, this is our highest temporal refinement level. We artificially vary the frame rate of our PLUTO simulation by telling MCRaT to only read every \(n\)th frame. With this method, we can achieve our desired framerate while maintaining the total simulation time for the HD simulation. In order to keep the same simulation time, each step in time (\(\Delta t_{sim}\)) has a specific number of frames assigned to it. As the resolution is lowered, the first and last frame in each \(\Delta t_{sim}\) stay the same while the number of frames in between these two is lowered. This leads to a "choppy" simulation. In order to keep the simulation realistic as a function of time, each pair of subsequent frames varies more significantly as the temporal resolution is lowered. Reducing the framerate by a factor of 2 at each step is analogous to the HD cell radius increasing by a factor of 2 between contiguous spatial refinement levels. Therefore, 5 fps would be analogous to spatial refinement level 5, 2.5 fps would be analogous to spatial refinement level 4, and so on. This gives us a way to align the spatial refinement levels and a clear way to mix and match temporal and spatial refinement levels, allowing us to investigate the effects of these changes combined with and independent of one another. For all our MCRaT simulations, we kept our parameters as constant as possible. We injected photons into the HD simulation over an angle range of \(0^{\circ}-9^{\circ}\) and at a radius of \(\sim 10^{12}\) cm. We simulate photons within the first 100 s of the PLUTO 16TI simulation, the time for which the GRB jet is active. Additionally, we simulated \(\sim 10^{5}-10^{6}\) photons per MCRaT simulation. In order to analyze the output of MCRaT's simulations, we used ProcessMCRaT3(Parsotan, 2021). ProcessMCRaT is a Python library developed to analyze and manipulate the output of MCRaT radiative transfer calculations. ProcessMCRaT fits the mock observed spectrum with a Band function (Band et al., 1993) to calculate its low and high energy slopes, \(\alpha\) and \(\beta\) respectively, and its peak energy, \(E_{\rm pk}\). ProcessMCRaT also has the capability to create mock lightcurves for MCRaT simulated GRBs. Figure 1: HD properties at different refinement levels for the PLUTO 16TI simulation as functions of refinement level at the first moment of photon injection and final frame of the simulation. We took the initial frame measurements of a shell located in radius \(r=1.3\times 10^{12}\) cm and time \(t_{sim}=\frac{\rm frame}{\rm fps}=50\) s. We took the final frame measurements of a shell located in radius \(r=1.5\times 10^{13}\) cm and time \(t_{sim}=\frac{\rm frame}{\rm fps}=527.6\) s. Panel (a) shows the average HD cell radius size (\(\Delta r\)) in cm. Panel (b) shows the average density in g cm\({}^{-3}\). Panel (c) shows the average temperature in K. Panel (d) shows the average bulk Lorentz factor. The early-time conditions are shown as red triangle markers and the final conditions are shown in blue circle markers. 
In producing our mock observables, we placed a mock observer at \(r_{\rm obs}=10^{14}\) cm at various angles \(\theta_{\rm obs}=1^{\circ},3^{\circ},5^{\circ}\) and \(8^{\circ}\) from the GRB jet axis. The opening angle of the area in which the observer detects photons is \(\Delta\theta_{\rm obs}=4^{\circ}\). We set the spectral fit for the observables to be that of a Band function including all photons in the energy range 0.1 - 4000 keV. We numerically integrated spectra with respect to energy to get luminosities, \(L_{\rm iso}\). We also numerically integrated lightcurves with respect to time to obtain total isotropic energies, \(E_{\rm iso}\). ### Quantifying Convergence Within Radiative Transfer Calculations Since there are two dimensions of change in refinement (spatial and temporal), we populate a \(5\times 5\) matrix that contains entries for its spatial and temporal refinement levels. In order to quantify convergence in MCRaT mock observables between one spatial/temporal level and another, we define the percent change variable \(\zeta_{\rm Prop}\) as: \[\zeta_{\rm Prop}(\text{lev}(n),\text{fps})=\left|\frac{\text{Prop}(\text{lev}(n),\text{fps})-\text{Prop}(\text{lev}(5),5\text{ fps})}{\text{Prop}(\text{lev}(5),5\text{ fps})}\right|. \tag{1}\] Equation 1 represents a comparison of any level of temporal and spatial refinement with the highest combination of both (spatial refinement 5 and 5 fps in our context) for any particular property (called Prop in Equation 1) of the GRB EM signature, such as \(\alpha\), \(E_{\rm pk}\), or \(L_{\rm iso}\)4. Footnote 4: The calculations can be found here: [https://github.com/jaritases99/MCRaT-resolution](https://github.com/jaritases99/MCRaT-resolution) This gives us a way to quantify the deviation at each level compared to the highest level for each GRB EM property. For the analysis of our results, the quantity \(\zeta_{\rm Prop}\) will be used to quantify deviations in resulting mock observables at different refinement levels. 
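As a concrete illustration (not the repository's actual code), the deviation matrix defined in Equation 1 can be assembled as follows, assuming each mock observable has already been reduced to a single value per (spatial level, framerate) combination:

```python
import numpy as np

levels = [1, 2, 3, 4, 5]                       # spatial refinement levels
framerates = [0.3125, 0.625, 1.25, 2.5, 5.0]   # frames per second

def zeta_matrix(prop):
    """prop[(level, fps)] -> value of a mock observable (e.g. E_pk, L_iso, alpha).

    Returns the 5x5 matrix of fractional deviations from the highest-resolution run,
    i.e. Equation 1 evaluated for every (level, fps) combination.
    """
    reference = prop[(5, 5.0)]
    zeta = np.empty((len(levels), len(framerates)))
    for i, lev in enumerate(levels):
        for j, fps in enumerate(framerates):
            zeta[i, j] = abs((prop[(lev, fps)] - reference) / reference)
    return zeta
```

Multiplying the result by 100 gives the percent changes quoted in Section 3.2.3.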
## 3 Results Here, we outline the results of our findings for the spherical outflow case and the 16TI simulation as described in Section 2. Our analysis shows the same trends for our mock observer angle \(\theta_{\rm obs}\) at all angles mentioned in Section 2.1.2. As a result, this section will only focus on \(\theta_{\rm obs}=1^{\circ}\). ### Spherical Outflow #### 3.1.1 Spectra Figure 2 shows spectra of a spherical outflow at different spatial and temporal resolutions. Figure 2(a) shows spectra at the highest temporal resolution and varying spatial resolutions. Figure 2(b) shows spectra at the highest spatial resolution and varying temporal resolutions. Figure 2(c) shows spectra at matching temporal and spatial resolution levels. When reducing the spatial resolution, we see an artificial increase of the peak energy of the spectrum. The higher HD cell sizes in lower resolutions cause the injection coordinates for photons to have different HD properties. Figure 2: Spectra of a spherical outflow for different refinement levels. The solid purple line is a blackbody spectrum that peaks at each spectrum set's highest refinement level. Panel (a) shows the spectra of a spherical outflow profile, but at different spatial refinement levels while maintaining the same highest temporal resolution constant. Panel (b) shows the spectra generated with analytic outflow at different temporal resolutions, maintaining the highest spatial resolution constant. Panel (c) shows the spectra with matching temporal and spatial resolution levels. As seen in Figure 1, lower spatial resolution levels have higher temperatures, which means the injected photons will have higher energies, causing a higher \(E_{\mathrm{pk}}\). Figure 3 shows the spectral peak energies at varying temporal and spatial resolutions. The effect of lowering the spatial resolution on spectral peak energies can be seen in Figure 3(a). Reducing the spatial resolution causes an increase in the luminosity of the lightcurves. This happens since there is now more energy in the spectrum. The spectra are then shifted up in luminosity and to the right in energy, while still maintaining a blackbody shape. This effect can also be seen in Figure 3(a). Reducing the temporal resolution of the simulation does not affect the spectrum of the spherical outflow in any significant manner. Since there is no change in spatial resolution, the injected photons read in the same HD values regardless of the temporal resolution. This effect can be seen in Figure 2(b). For this reason, the peak energies are not affected as the temporal resolution is decreased, as seen in Figure 3(b). The luminosity of the spectra seems to be slightly decreased as the temporal resolution decreases. There is not a lot of variation in the spectral shape and properties since the homologous expansion present in a spherical outflow does not depend on time. Mixing spatial and temporal resolutions shows similar trends to only changing spatial resolutions. This is to be expected, since the analytic spherical outflow is defined to be time-independent. This effect can be seen in Figure 2(c). Analyzing the peak energies \(E_{\mathrm{pk}}\) for different refinement levels confirms what we observed in the spectra in Figure 2. As seen in Figure 3(a), there is an increase in peak energy as the spatial refinement level is decreased. This is seen as a shift to the right in the spectra in Figure 2(a). For temporal resolutions, as seen in Figure 2(b), there is no significant change between levels. This can be seen in the \(E_{\mathrm{pk}}\) values in Figure 3(b), where the values are similar to one another. When mixing spatial and temporal resolutions, there is an additive behavior in the differences at various levels. One other important quantity to observe is the luminosity at different spatial and temporal refinement levels. Figure 4 shows the spectral luminosity at varying spatial and temporal resolutions with a spherical outflow. For differing spatial refinement levels, there is an artificial increase in luminosity, seen as a shift upwards in the spectra in Figure 2(a). This effect can be seen in Figure 4(a). For differing temporal resolutions, luminosities tend to oscillate not too far away from each other, as seen in Figure 4(b). This aligns with the very similar spectra seen in Figure 2(b). When there is a mix of temporal and spatial refinement levels, luminosities are also artificially increased as the resolution goes down. This is the effect of the same phenomenon happening with spatial resolutions. The additive effect of combining spatial and temporal resolutions can be seen in Figure 4(c). #### 3.1.2 Lightcurves Figure 5 shows lightcurves of a spherical outflow at different spatial and temporal resolutions. Figure 5(a) shows lightcurves at the highest temporal resolution and varying spatial resolutions. Figure 5(b) shows lightcurves at the highest spatial resolution and varying temporal resolutions. 
Figure 5(c) shows lightcurves at matching temporal and spatial resolution levels. Figure 3: Spectral Peak energies of a spherical outflow for different refinement levels. Panel (a) shows the \(E_{\mathrm{pk}}\) of the same spherical outflow simulation, but at different spatial refinement levels while maintaining the same highest temporal resolution constant. Panel (b) shows the \(E_{\mathrm{pk}}\) generated with the same analytic outflow simulation at different temporal resolutions, maintaining the highest spatial resolution constant. Panel (c) shows the \(E_{\mathrm{pk}}\) with matching temporal and spatial resolution levels. The error bars in panels (a), (b) and (c) are present, but are encompassed within the markers. In a spherical outflow, the lightcurves at different temporal and spatial resolutions have roughly the same shape. If the spatial resolution is decreased, as seen in Section 3.1.1, the luminosity of the spectrum is increased by an upwards shift of the spectrum. This causes lightcurves to have a higher luminosity. Analytic spherical outflows should have relatively "constant" lightcurves. This can be seen in Figure 5(a). Reducing the temporal resolution does not change the luminosity of the lightcurves like spatial resolutions do. All temporal resolutions seem to roughly have the same luminosity. Analyzing the lightcurves, there is an increase in variability as the lightcurves oscillate around one "average" value of the lightcurve. This variation is due to the fact that the photons are not being smoothly injected in a thin shell. Instead, the MCRaT algorithm has to determine which HD cells are the most energetic within a larger set of HD cells and correspondingly place more photons in those photon dense regions of the HD simulation. This leads to us only probing the portions of the outflow with the largest energies. This causes us to no longer get a smooth stream of photons that are detected as a function of time. This effect can be seen in Figure 5(b). Like with the spectra, varying both resolutions at the same time has an additive effect on the changed properties of the lightcurves. Figure 5(c) shows how decreasing both the temporal and spatial resolution increases the luminosity of the lightcurve as well as the variability in the form of an oscillation around the "average" value of the lightcurve. ### 16TI HD Simulation #### 3.2.1 Spectra Figure 4: Spectral luminosities of a spherical outflow for different refinement levels. Panel (a) shows the luminosities using the same analytic outflow simulation, but at different spatial refinement levels while maintaining the same highest temporal resolution constant. Panel (b) shows the luminosities generated with the same spherical outflow simulation at different temporal resolutions while holding the highest spatial resolution constant. Panel (c) shows the luminosities with matching temporal and spatial resolution levels. The error bars in panels (a), (b) and (c) are present, but are encompassed within the markers. Figure 5: Lightcurves of simulated spherical outflow for different refinement levels. Panel (a) shows the lightcurves using the same analytic outflow simulation, but at different spatial refinement levels while maintaining the same highest temporal resolution constant. Panel (b) shows the lightcurves generated with the same spherical outflow simulation at different temporal resolutions, maintaining the highest spatial resolution constant. 
Panel (c) shows the lightcurves with matching temporal and spatial resolution levels. Figure 6 shows spectra of a GRB simulated with a 16TI stellar progenitor model at different spatial and temporal resolutions. Figure 6(a) shows the GRB spectra at the highest temporal resolution and varying spatial resolutions. Figure 6(b) shows GRB spectra at the highest spatial resolution and varying temporal resolutions. Figure 6(c) shows GRB spectra at matching temporal and spatial resolution levels. Changing spatial resolutions artificially increases the spectral high energy tail when higher framerates are held constant. We see a less pronounced artificial increase in the high energy tail at constant lower framerates. The introduction of a lower spatial resolution leads to a sudden change in the HD properties of each grid cell with respect to its neighbors. The advantage of higher spatial resolutions is the presence of a smoother, more gradual change from cell to cell since there is a higher number of smaller cells at these higher resolutions. A sudden change in the HD properties as the photons propagate through the HD medium and scatter with it leads to them being upscattered to higher energies. This abrupt change in HD properties makes it hard for the photons to be in equilibrium with the medium and leads to them being artificially upscattered. This effect is decreased as the spatial resolution is increased since the smoother HD behavior makes it easier for the photons to stay in equilibrium with the jet. The artificially increased spectral high-energy tail due to lower spatial resolutions and spurious upscatterings can be seen in Figure 6(a). Decreasing the temporal resolution leads to the photons being artificially upscattered into higher energies, causing the spectrum to have an increased high energy tail. Lower temporal resolutions also lead to an abrupt change in the HD properties of the simulation. The abrupt change due to lower temporal resolutions is different in nature than the change in spatial resolutions, although the end result is the same. The lower framerate leads to photons scattering in the same HD frame for longer periods of time. Once the next frame is reached, the gradient in the HD properties is more pronounced, leading to the photons upscattering to higher energies. The artificially increased spectral high-energy tail due to lower temporal resolutions can be seen in Figure 6(b). Combining changes in both temporal and spatial resolutions leads to this effect being additive. There is upscattering due to a large gradient in the jet's properties in both space and time. This additive effect can be seen in Figure 6(c). #### 3.2.2 Lightcurves Figure 7 shows lightcurves of a GRB simulated with a 16TI stellar progenitor model at different spatial and temporal resolutions. Only the first \(\sim 10\) seconds of the lightcurve are shown to emphasize the effects of lowering temporal and/or spatial resolutions. Figure 7(a) shows the GRB lightcurves at the highest temporal resolution and varying spatial resolutions. Figure 7(b) shows GRB lightcurves at the highest spatial resolution and varying temporal resolutions. Figure 7(c) shows the GRB lightcurves at matching temporal and spatial resolution levels. Decreasing the spatial resolution for the outflow given by the 16TI simulation has a similar effect to that of decreasing the spatial resolution for a spherical outflow simulation. 
Because of the increase in the high energy tail of the spectrum, there is an increased luminosity that becomes more pronounced as the spatial resolution is decreased. This can be seen in Figure 7(a). Figure 6: Spectra of a 16TI HD simulated GRB at different refinement levels. The solid purple line is a blackbody spectrum that peaks at each spectrum set’s highest refinement level. Panel (a) shows the spectra of a simulated GRB using the same PLUTO 16TI simulation, but at different spatial refinement levels while maintaining the same highest temporal resolution constant. Panel (b) shows the spectra generated with the same 16TI simulation at different temporal resolutions, maintaining the highest spatial resolution constant. Panel (c) shows the spectra of a simulated GRB with matching temporal and spatial resolution levels. Changing the temporal resolution for 16TI HD simulated GRBs affects the lightcurve differently than doing so in a spherical outflow. This is due to the time dependence now present in the simulated GRB jet. Since the jet changes more drastically from frame to frame, there is an artificially enhanced high-energy tail in the spectrum, leading to a higher luminosity in the lightcurve. Not only is there a higher luminosity but the variation present in lower temporal resolutions is also present. This variation can be seen in Figure 7(b). Changing both temporal and spatial resolutions leads to the combination of their individual effects. There is an increase in luminosity and in the variability of the lightcurve. The presence of the increasingly high lightcurve luminosity and variability can be seen in Figure 7(c). #### 3.2.3 Errors in Mock Observables To better visualize the two degrees of freedom for the change in resolution, we use a color gradient matrix for all combinations of spatial and temporal resolutions. Darker colors show larger deviations from the highest level of refinement, both spatial and temporal. We find the biggest change in the lowest spatial and temporal resolutions. As we get closer to the highest resolution (refinement level 5, 5 fps), we find a smaller change in the error. Luminosity shows the smallest deviation from the highest resolution at 17%, in the (level 4, 5 fps) entry in the resolution matrix. As we deviate from the highest resolution value in the lower right corner, we start seeing our deviation change more drastically. The largest deviation comes from the (level 1, 0.3125 fps) entry in the matrix, at a \(\sim 5000\%\) change. This result is expected, since this is the lowest resolution case in both space and time. In Figure 8(a), we see \(\zeta_{L_{\rm iso}}\) for all refinement levels in the resolution matrix. The isotropic luminosity comes from integrating the spectrum with respect to energy. By looking at Figure 6, we can see a clear reason why these values are changing the way that they are. The high energy tails present at lower spatial and temporal resolutions give this integral a higher value. Since there is a larger change in the spectral shape with respect to temporal resolutions, we see more similar values between spatial resolutions at constant frames. We see an identical trend with isotropic energies \(E_{\rm iso}\). This is expected since this is an integration of the lightcurve with respect to time. Peak energies see deviations at around \(6-55\%\), as long as we stay within the lower right \(3\times 3\) block in our matrix. As we step outside of this block (lower than level 3 or 1.25 fps), we start seeing deviations of up to 83%. 
Figure 8(b) shows \(\zeta_{E_{\rm pk}}\) for all refinement levels in the resolution matrix for 16TI HD simulated GRBs. There is a clear trend of more variation as we get further away from the lower right (level 5, 5 fps) entry. This implies that both spatial and temporal resolutions affect the resulting peak energies \(E_{\rm pk}\). The spatial resolution effect is due to the change in temperatures between levels seen in Figure 1. Since Equation 1 calculates the magnitude of the deviation and not its direction, these figures do not encompass this information. The peak energy values for different refinement levels are decreased as the spatial resolution is decreased. When lowering spatial and temporal resolutions, the low energy slope, \(\alpha\), varies by as little as 0.05% and as much as 67%. If we stay in the lower right \(3\times 3\) block of the matrix, we see a deviation of around \(1-13\%\). As we step outside of this block (lower than level 3 or 1.25 fps), we start seeing higher deviations. The largest deviation comes from the (level 4, 0.3125 fps) entry in the matrix, at a 67% change. Figure 7: Lightcurves of the 16TI HD simulated GRB for different refinement levels. Only the first \(\sim\) 10 seconds of the simulation are shown to better visualize the qualitative effects of reducing spatial and/or temporal resolutions. Panel (a) shows the lightcurves of a simulated GRB using the same PLUTO 16TI simulation, but at different spatial refinement levels while maintaining the same highest temporal resolution constant. Panel (b) shows the lightcurves generated with the same 16TI simulation at different temporal resolutions, maintaining the highest spatial resolution constant. Panel (c) shows the lightcurves of a simulated GRB with matching temporal and spatial resolution levels. Figure 8(c) shows \(\zeta_{\alpha}\) for all refinement levels in the resolution matrix. Figure 6 shows some of the spectra that were evaluated here, but the trend of change in the low-energy slope \(\alpha\) is not as clear by just looking at the plotted spectra. This figure shows that the lower energy slope is conserved as resolutions change, showing that lower spatial and temporal resolutions do not have a large impact on the lower energy subset of photons. When lowering spatial and temporal resolutions, the high energy slope, \(\beta\), is impacted more than the low energy slope \(\alpha\). From Figure 6, we can see that the temporal resolution affects \(\beta\) more than the spatial resolution. Figure 8(d) shows \(\zeta_{\beta}\) for all refinement levels in the resolution matrix. There is a decrease in the high energy slope \(\beta\) as the temporal resolutions are decreased. \(\beta\) shows similar values between spatial resolutions when holding temporal resolutions constant. This effect is due to photons being upscattered to higher energies as they are shocked by drastically changing jet properties as new HD frames are loaded. The fitted Band spectrum attempts to account for the higher amount of high-energy photons by making the high energy slopes flatter. ## 4 Discussions We have used MCRaT and PLUTO to quantify the effect that a simulation's spatial and temporal resolution has on post-processing radiative transfer calculations. We show that lower spatial and temporal resolutions affect both the shape of the GRB's spectrum and its lightcurve. The presence of high-energy spectral tails leads to an increase in isotropic luminosities as resolutions are decreased. 
Additionally, lower spatial resolutions lead to higher temperatures, causing photon energies to be increased, leading an increase in the normalization of the mock observed spectra. Figure 8: Matrices containing \(\zeta_{\rm Prop}\) for different EM properties of the 16TI simulated GRB. Panel (a) shows the deviation \(\zeta_{L_{\rm in}}\) for all spatial and temporal refinement levels. Panel (b) shows the deviation \(\zeta_{\rm E_{pk}}\). Panel (c) shows the deviation \(\zeta_{\alpha}\). Panel (d) shows the deviation \(\zeta_{\beta}\). With the lower spatial and temporal resolutions, photons propagate and scatter within a choppier HD simulation. As seen in Figure 1, lower spatial resolutions have higher temperatures, causing injected photons to have slightly higher energies and increased normalization for their spectrum. As temporal resolution is decreased, the lightcurves face an increased variability, and the smoothness of this mock observed property is gone. This is due to the fact that lower framerates lead to photons not being injected as smoothly as a function of time. Higher framerates lead to a more continuous injected photon flow, while lower framerates have more spaced out photon injections. This leads to the lightcurve having more peaks and troughs compared to the higher temporal resolution simulations. At the end of the HD simulation, photons have ideally reached the photosphere. At this point, photons are expected to scatter into an angle of \(\theta=1/\Gamma^{2}\)(Lazzati, 2016). Figure 1(a) shows the radial resolution at the photon position \(r=1.5\times 10^{13}\)cm, and Figure 1(d) Shows the bulk Lorentz factor at this radius. The quantity \(\Delta r/r=\sin\theta\approx\theta\) can be used to compare to \(\theta=1/\Gamma^{2}\). If the HD resolution is too low, the HD cell sizes are larger, which means that the mock observed properties of the photons are inaccurate due to the detected photons not being able to fully probe the angular scale visible to the observer. Instead photons are only probing the properties of just one large HD cell. Figure 9 shows how at higher refinement levels, the ratio \(\frac{1/\Gamma^{2}}{\Delta r/r}\) is large enough to easily probe the visible portion of the GRB jet regime since this angle encompasses multiple HD cells. At lower resolutions, we get closer but larger than unity, as seen for levels 2 and 3. Once level 1 is reached, our ratio is smaller than 1. At this resolution, the photons are not properly probing the visible angular jet region as they only probe one HD cell. If we consider the lightcrossing defined by \(c/\text{fps}=c\Delta t\), we get the distance traveled by a photon in between two frames at any given framerate. When comparing this distance to the size of HD cells at any given time, optimally we want the photons to scatter within multiple HD elements as they move within a given frame. For this to be true, the ratio \(\frac{c\Delta t}{\Delta r}\) must be larger than unity. This would mean that the photons are able to travel through more than one HD cell during the duration of any given frame. This allows the photons to properly probe the GRB jet properties. Figure 10(a) shows the ratio \(\frac{c\Delta t}{\Delta r}\) at the beginning of the 16TI simulation. We can see that higher resolutions with lower framerates lead to photons probing multiple cells in each individual frame as they travel through the jet. The lower left corner of the matrix shows framerates and resolutions in which only one cell is probed in each frame. 
This phenomenon is also present and amplified at the end of the simulation, when the HD cell size is increased even in the presence of higher refinement levels. This can be seen in Figure 10(b). We can make an analogy with polar coordinates where the photon propagates both radially, as it diffuses outward with the jet, and in polar angle, where the observer sees some amount of photons that are propagating towards them if the observer is located within the photons' local \(1/\Gamma^{2}\) angle. As the MCRaT photon travels outward within a single simulation frame time step, we want it to interact with many cells of the GRB jet. This will allow the photon to change its properties as the jet's HD properties change radially; see for example the non-thermal spectra obtained by Parsotan et al. (2018). Once at the photosphere, we want to receive many photons that are properly probing the angular size of the jet that the observer is able to see. This can lead to the appearance of non-thermal spectra, such as the multi-color blackbody (Pe'er and Ryde, 2011), which is typically seen in GRBs. This can only be seen if the photons are able to interact with and probe the properties of many different HD fluid elements in the polar angle direction. HD simulation resolution needs to be accounted for to retrieve accurate post-processing radiative transfer calculations. When deciding the HD simulation resolution for these calculations, one must look at multiple factors to make an informed decision. These factors include limitations such as storage and computational resources. Another factor that needs to be considered is what margin of error is acceptable for one's particular analysis. In order to save time, storage, and computational resources, if a particular analysis allows for it, a lower resolution can be chosen, assuming the loss of accuracy in the mock EM observables is acceptable. Figure 9: \(\frac{1/\Gamma^{2}}{\Delta r/r}\) at the final frame of the 16TI HD simulation. Quantities \(>>1\) mean that detected photons are sufficiently able to probe the smallest spatial scales of the jet. The material is based upon work supported by NASA under award number 80GSFC21M0002. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center.
2308.01523
Miss It Like Messi: Extracting Value from Off-Target Shots in Soccer
Measuring soccer shooting skill is a challenging analytics problem due to the scarcity and highly contextual nature of scoring events. The introduction of more advanced data surrounding soccer shots has given rise to model-based metrics which better cope with these challenges. Specifically, metrics such as expected goals added, goals above expectation, and post-shot expected goals all use advanced data to offer an improvement over the classical conversion rate. However, all metrics developed to date assign a value of zero to off-target shots, which account for almost two-thirds of all shots, since these shots have no probability of scoring. We posit that there is non-negligible shooting skill signal contained in the trajectories of off-target shots and propose two shooting skill metrics that incorporate the signal contained in off-target shots. Specifically, we develop a player-specific generative model for shot trajectories based on a mixture of truncated bivariate Gaussian distributions. We use this generative model to compute metrics that allow us to attach non-zero value to off-target shots. We demonstrate that our proposed metrics are more stable than current state-of-the-art metrics and have increased predictive power.
Ethan Baron, Nathan Sandholtz, Devin Pleuler, Timothy C. Y. Chan
2023-08-03T03:53:50Z
http://arxiv.org/abs/2308.01523v2
# Miss It Like Messi: ###### Abstract Measuring soccer shooting skill is a challenging analytics problem due to the scarcity and highly contextual nature of scoring events. The introduction of more advanced data surrounding soccer shots has given rise to model-based metrics which better cope with these challenges. Specifically, metrics such as expected goals added, goals above expectation, and post-shot expected goals all use advanced data to offer an improvement over the classical conversion rate. However, all metrics developed to date assign a value of zero to off-target shots, which account for almost two-thirds of all shots, since these shots have no probability of scoring. We posit that there is non-negligible shooting skill signal contained in the trajectories of off-target shots and propose two shooting skill metrics that incorporate the signal contained in off-target shots. Specifically, we develop a player-specific generative model for shot trajectories based on a mixture of truncated bivariate Gaussian distributions. We use this generative model to compute metrics that allow us to attach non-zero value to off-target shots. We demonstrate that our proposed metrics are more stable than current state-of-the-art metrics and have increased predictive power. _Keywords:_ generative model; mixture model; shot trajectories; player valuation; Bayesian hierarchical model; spatial data ## 1 Introduction Effectively evaluating shooting skill is a key challenge in soccer analytics. Historically, the finishing skill of players has been compared using classical statistics such as goals and shots on target, or more advanced models derived from these statistics (e.g., McHale and Szczepanski 2014). These statistics are easily obtainable but suffer from small sample sizes and a lack of comparability across players, since the probability of scoring a goal or landing a shot on-target depends heavily on the situation in which the shot was taken. Thus, a large focus in soccer analytics is the development of advanced shooting metrics that are both stable over time and comparable across players. Much research to date has focused on the issue of comparability. For example, expected goals models address the issue of comparability by measuring a shot's value within the context of a game situation. The _pre-shot expected goals_ (PreXg) metric estimates the probability of a shot scoring given contextual variables at the time of the shot, such as the shot location, the proximity of other players, and the body part used (i.e., head or foot) (Rathke 2017; Brechot and Flepp 2018; Lucey et al. 2015; Rowlinson 2020; Anzer and Bauer 2021). Subtracting a player's expected goals scored from their actual goals scored results in _goals above expectation_ (GAX), which provides a measure of shooting skill that is comparable across players. Unfortunately, this metric offers limited empirical stability, as defined by Franks et al. (2016); a player's GAX in one season is poorly predictive of their GAX in the next season (Pleuler 2014b; 11tegen11 2014). An improvement over GAX relies on _post-shot expected goals_ (PostXg), which considers the probability of a shot scoring conditional on its spatial trajectory after being struck (Goodman 2018). Note that PostXg models assign a value of zero to all off-target shots. Subtracting a shot's post-shot expected goals value from its pre-shot expectation results in a metric known as _expected goals added_ (EGA). While the additional
2303.15624
Feynman Integrals from Positivity Constraints
We explore inequality constraints as a new tool for numerically evaluating Feynman integrals. A convergent Feynman integral is non-negative if the integrand is non-negative in either loop momentum space or Feynman parameter space. Applying various identities, all such integrals can be reduced to linear sums of a small set of master integrals, leading to infinitely many linear constraints on the values of the master integrals. The constraints can be solved as a semidefinite programming problem in mathematical optimization, producing rigorous two-sided bounds for the integrals which are observed to converge rapidly as more constraints are included, enabling high-precision determination of the integrals. Positivity constraints can also be formulated for the $\epsilon$ expansion terms in dimensional regularization and reveal hidden consistency relations between terms at different orders in $\epsilon$. We introduce the main methods using one-loop bubble integrals, then present a nontrivial example of three-loop banana integrals with unequal masses, where 11 top-level master integrals are evaluated to high precision.
Mao Zeng
2023-03-27T22:41:06Z
http://arxiv.org/abs/2303.15624v2
# Feynman Integrals from Positivity Constraints ###### Abstract We explore inequality constraints as a new tool for numerically evaluating Feynman integrals. A convergent Feynman integral is non-negative if the integrand is non-negative in either loop momentum space or Feynman parameter space. Applying various identities, all such integrals can be reduced to linear sums of a small set of master integrals, leading to infinitely many linear constraints on the values of the master integrals. The constraints can be solved as a semidefinite programming problem in mathematical optimization, producing rigorous two-sided bounds for the integrals which are observed to converge rapidly as more constraints are included, enabling high-precision determination of the integrals. Positivity constraints can also be formulated for the \(\epsilon\) expansion terms in dimensional regularization and reveal hidden consistency relations between terms at different orders in \(\epsilon\). We introduce the main methods using one-loop bubble integrals, then present a nontrivial example of three-loop banana integrals with unequal masses, where 11 top-level master integrals are evaluated to high precision. + Footnote †: institutetext: \({}^{\dagger}\)_Mathematisches Institut, Universitat Mainz,_ [MISSING_PAGE_POST] of Feynman integrals still presents challenges in the ongoing quest for higher precisions in perturbative calculations, and new explorations are warranted. A fruitful recent idea in theoretical physics is the use of positivity constraints, e.g. arising from the unitarity of a Hilbert space, to constrain unknown parameters from first principles, sometimes reaching great accuracy. Prominent examples include the conformal bootstrap [33], the non-perturbative S-matrix bootstrap (see review [34] and references within), and EFT positivity bounds [35; 36; 37; 38; 39; 40; 41]. Some of the predictions in the S-matrix and EFT contexts have been checked against explicit perturbative calculations involving Feynman integrals [42; 43; 44], so it is natural to explore positivity properties for Feynman integrals themselves. What directly inspired this paper, though, is recent applications of positivity constraints to bootstrapping simple quantum mechanical systems [45; 46; 47; 48] and lattice models [49; 50; 51], which crucially use various linear identities that are reminiscent of integration-by-parts identities for Feynman integrals [52] as well as a numerical technique known as semidefinite programming [53] which will be adapted to our calculations. Semidefinite programming was introduced to theoretical physics by Ref. [54] in the conformal bootstrap and subsequently applied to wider contexts. In this work, we will formulate positivity constraints for _Euclidean Feynman integrals_, which can be either (1) integrals in Euclidean spacetime or can be rewritten in Euclidean spacetime after a trivial Wick rotation, or (2) integrals in Minkowskian spacetime but with kinematics in the so called _Euclidean region_, i.e. with center-of-mass energy of incoming momenta below the particle production threshold. In case (1), the loop integrand in Euclidean momentum space has non-negative propagator denominators, and the integrand is non-negative as long as the numerator is non-negative. In case (2), the integral is real due to the absence of Cutkosky cuts. 
After Feynman parametrization, the parametric integrand involves non-negative graph polynomials and stays non-negative when multiplied by a positive function of the Feynman parameters. In both cases, a non-negative integrand implies a non-negative integral as long as the integral is convergent, i.e. has no ultraviolet or infrared divergences. The initial restriction to convergent integrals is not a fundamental limitation, as divergent integrals can be rewritten as linear sums of convergent integrals multiplied by divergent coefficients [55], and then it suffices to evaluate the convergent integrals as a Taylor series in the dimensional regularization parameter \(\epsilon\). We will see that positivity constraints, expressed in the language of semidefinite programming, are strong enough to precisely determine the values of Feynman integrals. Moreover, rigorous error bounds can be obtained. Our machinery relies on linear relations between Feynman integrals, including integration-by-parts identities [52] and dimension-shifting identities [56], to express all positivity constraints as linear constraints on the values of a set of master integrals. The outline of the paper is as follows. In Section 2, we introduce the main methods using the simple example of massive one-loop bubble integrals. Specifically, Subsection 2.1 gives a short review of two kinds of linear identities for Feynman integrals, integration-by-parts identities and dimension-shifting identities. Subsection 2.2 discusses positivity constraints in Euclidean momentum space, starting from ad hoc constraints that narrow down the allowed value of the bubble master integral to \(\sim 50\%\) accuracy at an example kinematic point, then developing the machinery of semidefinite programming to reach an accuracy of \(10^{-14}\). We switch to Feynman parameter space in Subsection 2.3, which allows calculating the integrals in a wider region of kinematic parameters below the two-particle cut threshold, while many steps of the calculations are unchanged from the momentum-space case. Subsection 2.4 formulates positivity constraints for \(\epsilon\) expansions of Feynman integrals in dimensional regularization. Subsection 2.5 presents an alternative method for calculating the \(\epsilon\) expansion based on numerical differentiation of results w.r.t. the spacetime dimension. Having laid out most of the methods, in Section 3, we present a nontrivial application to three-loop banana integrals with four unequal masses. Due to a large number of undetermined master integrals, semidefinite programming becomes essential in efficiently solving the positivity constraints. We calculate all master integrals at an example kinematic point up to the second order in \(\epsilon\) expansion in \(d=2-2\epsilon\), and a detailed account of the numerical accuracies is given. We end with some discussions in Section 4. ## 2 One-loop bubble integrals We consider the following family of Feynman integrals in Minkowski spacetime parametrized by two integers \(a_{1}\) and \(a_{2}\), \[I^{d}_{a_{1},a_{2}}\equiv\int\frac{d^{d}l\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\frac{1}{(-l^{2}+m^{2})^{a_{1}}[-(p+l)^{2}+m^{2}]^{a_{2}}}, \tag{1}\] which correspond to the diagram in Fig. 1 but with the propagator denominators raised to general integer powers. The external mass is \(\sqrt{p^{2}}\) and the internal mass for the two propagators is \(m\), with the two propagators raised to powers \(a_{1}\) and \(a_{2}\). If either \(a_{1}\) or \(a_{2}\) is non-positive, Eq.
(1) becomes a massive tadpole integral possibly with a numerator. If \(a_{1}\) and \(a_{2}\) are both non-positive, the integral vanishes in dimensional regularization. The spacetime dimension \(d\) is equal to \(4-2\epsilon\), with \(\epsilon\) being the dimensional regularization parameter. Eq. (1) can be re-written, by Wick rotation of the integration contour, as the following integrals in _Euclidean_ spacetime, \[I^{d}_{a_{1},a_{2}}\equiv\int\frac{d^{d}\mathbf{l}\,e^{\gamma_{E}\epsilon}}{\pi^{d/2}}\frac{1}{(\mathbf{l}^{2}+m^{2})^{a_{1}}[(\mathbf{p}+\mathbf{l})^{2}+m^{2}]^{a_{2}}}\,, \tag{2}\] where \(\mathbf{p}^{2}\equiv-p^{2}\). We will use positivity constraints to numerically evaluate bubble integrals Eq. (1) for \(p^{2}<4m^{2}\), i.e. below the two-particle production threshold. We first consider the case \(p^{2}<0\), i.e. \(\mathbf{p}^{2}>0\). Figure 1: The one-loop bubble integral with external legs of virtuality \(p^{2}\) and two internal massive lines with the same squared mass \(m^{2}\). In this case, \(\mathbf{p}\) can be literally embedded in Euclidean spacetime as a vector with real-valued components, and we will derive positivity constraints starting from the loop momentum integral Eq. (2). We will subsequently present a treatment applicable to all \(p^{2}<4m^{2}\) by using Feynman parameter representations of the integrals. Before actual calculations, we first briefly review linear identities for the Feynman integrals involved, arising from _integration by parts_ and _dimension shifting_. Efficiently solving IBP identities for more complicated Feynman integrals is a major research problem, and we refer readers to Refs. [57, 58, 59, 60, 61, 62, 63, 64] for relevant computational algorithms and software. ### Review: integration-by-parts (IBP) and dimension-shifting identities \(I^{d}_{a_{1},a_{2}}\) at different values of \(a_{1}\) and \(a_{2}\), as defined in Eq. (1), are related through integration-by-parts (IBP) identities [52], as total derivatives integrate to zero without boundary terms in dimensional regularization. To derive the identities, we will write dot products as linear combinations of denominators and constants, \[l^{2}=-(-l^{2}+m^{2})+m^{2},\qquad 2p\cdot l=-[-(p+l)^{2}+m^{2}]+[-l^{2}+m^{2}]-p^{2}\,. \tag{3}\] The actual identities are \[\begin{split}&\int\frac{d^{d}l\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\frac{\partial}{\partial l^{\mu}}\frac{p^{\mu}}{(-l^{2}+m^{2})^{a_{1}}[-(p+l)^{2}+m^{2}]^{a_{2}}}=0\\ &\implies-a_{1}p^{2}I^{d}_{a_{1}+1,a_{2}}+a_{2}p^{2}I^{d}_{a_{1},a_{2}+1}-a_{1}I^{d}_{a_{1}+1,a_{2}-1}+(a_{1}-a_{2})I^{d}_{a_{1},a_{2}}+a_{2}I^{d}_{a_{1}-1,a_{2}+1}=0\,,\end{split} \tag{4}\] and \[\begin{split}&\int\frac{d^{d}l\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\frac{\partial}{\partial l^{\mu}}\frac{l^{\mu}}{(-l^{2}+m^{2})^{a_{1}}[-(p+l)^{2}+m^{2}]^{a_{2}}}=0\\ &\implies 2a_{1}m^{2}I^{d}_{a_{1}+1,a_{2}}+a_{2}(-p^{2}+2m^{2})I^{d}_{a_{1},a_{2}+1}+(d-2a_{1}-a_{2})I^{d}_{a_{1},a_{2}}-a_{2}I^{d}_{a_{1}-1,a_{2}+1}=0\,,\end{split} \tag{5}\] We also need the diagram reflection symmetry relations from relabeling \(l\to-(p+l)\), \[I^{d}_{a_{1},a_{2}}=I^{d}_{a_{2},a_{1}}\,, \tag{6}\] and the boundary condition in dimensional regularization, \[I^{d}_{a_{1},a_{2}}=0,\text{ when both }a_{1}\leq 0\text{ and }a_{2}\leq 0\,. \tag{7}\] Solving eqs. (4)-(7), e.g.
by iteratively eliminating \(I^{d}_{a_{1},a_{2}}\) with the largest values of \(|a_{1}|+|a_{2}|\), allow us to rewrite any \(I^{d}_{a_{1},a_{2}}\) with fixed spacetime dimension \(d\) in terms of two _master integrals_, \[I^{d}_{1,1},\quad I^{d}_{1,0}, \tag{8}\] i.e. a bubble integral and a tadpole integral. We will then change to an alternative basis \(I_{1}\) and \(I_{2}\), which is ultraviolet finite as \(d\) approaches \(4\) and in fact for any \(d<6\), \[\begin{split}& I^{d}_{2,1}=\frac{d-3}{p^{2}-4m^{2}}I^{d}_{1,1}+ \frac{d-2}{2m^{2}(p^{2}-4m^{2})}I^{d}_{1,0}\,,\\ & I^{d}_{3,0}=\frac{(d-2)(d-4)}{8m^{4}}I^{d}_{1,0}\,.\end{split} \tag{9}\] _For the rest of this paper, we will impose \(d<6\) in the treatment of the bubble integrals and pay special attention to values of \(d\) near \(4\)._ The process of carrying out the above calculation and rewriting any integral as linear sums of master integrals is referred to as _IBP reduction_. Here are two examples showing how other integrals are expressed as linear combinations of the basis Eq. (9), \[I_{2,2}^{d} =\frac{1}{p^{2}(4m^{2}-p^{2})}\Big{(}[(6-d)p^{2}-4m^{2}]I_{2,1}^{ d}+4m^{2}I_{3,0}^{d}\Big{)} \tag{10}\] \[I_{3,1}^{d} =I_{1,3}^{d}=\frac{1}{2p^{2}(4m^{2}-p^{2})}\Big{(}[(4-d)p^{2}+4m^{ 2}]I_{2,1}^{d}+2(p^{2}-2m^{2})I_{3,0}^{d}\Big{)}\,. \tag{11}\] The tadpole integral \(I_{3,0}^{d}\) is well known in textbooks. The bubble integral with one of the propagator raised to a higher power, \(I_{2,1}^{d}\), is also known analytically, but we will use positivity constraints to numerically evaluate it for the purpose of demonstrating our methods. We will write \[I_{2,1}^{d}=\hat{I}_{2,1}^{d}\cdot I_{3,0}^{d}\,, \tag{12}\] and treat the normalized bubble integral \[\hat{I}_{2,1}^{d}=I_{2,1}^{d}/I_{3,0}^{d} \tag{13}\] as an unknown quantity to be bounded by positivity constraints. More generally, we define a "hatted" notation for integrals normalized against the finite tadpole integral, \[\hat{I}_{a_{1},a_{2}}^{d}=I_{a_{1},a_{2}}^{d}/I_{3,0}^{d}, \tag{14}\] under which Eqs. (10) and (11) become \[\hat{I}_{2,2}^{d} =\frac{1}{p^{2}(4m^{2}-p^{2})}\Big{(}[(6-d)p^{2}-4m^{2}]\hat{I}_{ 2,1}^{d}+4m^{2}\Big{)} \tag{15}\] \[\hat{I}_{3,1}^{d} =I_{1,3}^{d}=\frac{1}{2p^{2}(4m^{2}-p^{2})}\Big{(}[(4-d)p^{2}+4m^ {2}]\hat{I}_{2,1}^{d}+2(p^{2}-2m^{2})\Big{)}\,. \tag{16}\] We also need dimensional-shifting identities. Using the Schwinger parametrization, Eq. (1) is rewritten as \[I_{a_{1},a_{2}}^{d} =\frac{e^{\gamma_{E}\epsilon}}{\Gamma(a_{1})\Gamma(a_{2})}\int_{ 0}^{\infty}dx_{1}\int_{0}^{\infty}dx_{2}\,x_{1}^{a_{1}-1}x_{2}^{a_{2}-1}(x_{1} +x_{2})^{-d/2}\exp(-i\mathcal{U}/\mathcal{F})\,, \tag{17}\] where \(\mathcal{U}\) and \(\mathcal{F}\) are graph polynomials depending on \(x_{1}\) and \(x_{2}\), \[\mathcal{U}(x_{1},x_{2})=x_{1}+x_{2},\quad\mathcal{F}(x_{1},x_{2})=m^{2}(x_{1} +x_{2})^{2}-p^{2}x_{1}x_{2}-i0^{+}\,. 
\tag{18}\] Therefore, \((d-2)\) dimensional integrals can be written as \[I_{a_{1},a_{2}}^{d-2} =\frac{e^{\gamma_{E}\epsilon}}{\Gamma(a_{1})\Gamma(a_{2})}\int_{ 0}^{\infty}dx_{1}\int_{0}^{\infty}dx_{2}\,x_{1}^{a_{1}-1}x_{2}^{a_{2}-1}(x_{1} +x_{2})^{-(d-2)/2}\exp(-iU/F)\] \[=\frac{e^{\gamma_{E}\epsilon}}{\Gamma(a_{1})\Gamma(a_{2})}\int_{ 0}^{\infty}dx_{1}\int_{0}^{\infty}dx_{2}\,x_{1}^{a_{1}-1}x_{2}^{a_{2}-1}(x_{1} +x_{2})(x_{1}+x_{2})^{-d/2}\exp(-iU/F)\] \[=\frac{e^{\gamma_{E}\epsilon}}{\Gamma(a_{1})\Gamma(a_{2})}\int_{ 0}^{\infty}dx_{1}\int_{0}^{\infty}dx_{2}\,\big{(}x_{1}^{a_{1}}x_{2}^{a_{2}-1}+ x_{1}^{a_{1}-1}x_{2}^{a_{2}}\big{)}(x_{1}+x_{2})^{-d/2}\exp(-iU/F)\] \[=a_{1}I_{a_{1}+1,a_{2}}^{d}+a_{2}I_{a_{1},a_{2}+1}^{d}\,, \tag{19}\] where the last line used Eq. (18) to map different monomials in \(x_{1}\) and \(x_{2}\) to bubble integrals with different propagator powers. Applying Eq. (19) to the finite bubble master integral \(I_{2,1}^{d}\) on the LHS of Eq. (9) in \((d-2)\) spacetime dimensions and then performing IBP reduction, we obtain \[I_{2,1}^{d-2}=\frac{2(d-5)}{p^{2}-4m^{2}}I_{2,1}^{d}-\frac{2}{p^{2}-4m^{2}}I_{3,0}^{d}\,. \tag{20}\] The dimension-shifting formula for the tadpole integral can be obtained easily, e.g. by using closed-form results for tadpole integrals. The result is \[I_{3,0}^{d-2}=\frac{6-d}{2m^{2}}I_{3,0}^{d}\,. \tag{21}\] Eqs. (20) and (21) are the dimension-shifting identities that express the two master integrals in Eq. (9) in lower spacetime dimensions to the same master integrals in higher spacetime dimensions. By inverting Eqs. (20) and (21), we obtain dimension shifting identities in the reverse direction, i.e. expressing master integrals in higher spacetime dimensions in terms of master integrals in lower spacetime dimensions, \[I_{2,1}^{d+2} =\frac{p^{2}-4m^{2}}{2(d-3)}I_{2,1}^{d}-\frac{2m^{2}}{(d-3)(d-4)} I_{3,0}^{d}\,, \tag{22}\] \[I_{3,0}^{d+2} =\frac{2m^{2}}{4-d}I_{3,0}^{d}\,. \tag{23}\] ### Positivity constraints in loop momentum space Here we focus on the case \(p^{2}<0\) and use the Wick-rotated expression for the integrals, Eq. (2), with \[\mathbf{p}^{2}=M^{2}=-p^{2}\,. \tag{24}\] We will later present a Feynman parameter-space treatment that seems more powerful and in particular works for all \(p^{2}<4m^{2}\), but the loop momentum space treatment here will help build up intuitions. #### 2.2.1 A crude first attempt We first consider the following two convergent integrals with non-negative integrands in loop momentum space, \[I_{2,2}^{d} =\int\frac{d^{d}\mathbf{l}\,e^{\gamma E\epsilon}}{\pi^{d/2}}\frac{1}{ (\mathbf{l}^{2}+m^{2})^{2}[(\mathbf{p}+\mathbf{l})^{2}+m^{2}]^{2}}\,, \tag{25}\] \[I_{3,1}^{d}+I_{1,3}^{d}-2I_{2,2}^{d} =\int\frac{d^{d}\mathbf{l}\,e^{\gamma E\epsilon}}{\pi^{d/2}}\frac{1}{ (\mathbf{l}^{2}+m^{2})[(\mathbf{p}+\mathbf{l})^{2}+m^{2}]}\] \[\times\left(\frac{1}{\mathbf{l}^{2}+m^{2}}-\frac{1}{(\mathbf{p}+\mathbf{l})^ {2}+m^{2}}\right)^{2}\,.\] We have used the notation Eq. (1) and the subsequent Wick rotation Eq. (2). Since the integrals are massive and have no infrared divergence, a simple ultraviolet power-counting shows that the above integrals are convergent near 4 dimensions. These integrals therefore have finite non-negative values, \[I_{2,2}^{d}\geq 0,\qquad I_{3,1}^{d}+I_{1,3}^{d}-2I_{2,2}^{d}\geq 0\,. \tag{26}\] By IBP reduction as described in Section 2.1, the above inequalities translate into constraints on the finite master integrals on the LHS of Eq. (9). The needed IBP reduction results are shown in Eqs. 
(10) and (11), in which we will rewrite \(p^{2}=-M^{2}\). We use the parametrization Eq. (13) to factor out the positive tadpole integral \(I_{3,0}^{d}\), finally arriving at \[\frac{I_{3,0}^{d}}{M^{2}(M^{2}+4m^{2})}\big{[}\left((6-d)M^{2}+4m^ {2}\right)\hat{I}_{2,1}^{d}-4m^{2}\big{]}\geq 0\,,\] \[\frac{I_{3,0}^{d}}{M^{2}(M^{2}+4m^{2})}\big{[}-\left((8-d)M^{2}+1 2m^{2}\right)\hat{I}_{2,1}^{d}+(2M^{2}+12m^{2})\big{]}\geq 0\,,\] \[\text{for any }d<6\,, \tag{27}\] i.e. \[\frac{4m^{2}}{(6-d)M^{2}+4m^{2}}\leq\hat{I}_{2,1}^{d}=I_{2,1}^{d}/I_{3,0}^{d} \leq\frac{2M^{2}+12m^{2}}{(8-d)M^{2}+12m^{2}}\,,\quad\text{for any }d<6\,. \tag{28}\] It is easily shown that the LHS of the inequality above is always less than the RHS when \(d<6\), so the normalized bubble integral \(\hat{I}_{2,1}^{d}=I_{2,1}^{d}/I_{3,0}^{d}\) is bounded in a finite range. When \(d\) tends to 6 from below, the LHS and RHS of the inequality both approach 1, leading to the prediction that \(I_{2,1}^{d}/I_{3,0}^{d}\to 1\) as \(d\to 6\); this is exactly as expected since both \(I_{2,1}^{d}\) and \(I_{3,0}^{d}\) have an ultraviolet pole \(1/(d-6)\) with the same coefficient. Specializing to the case \(d=4\), Eq. (28) becomes \[\frac{2m^{2}}{M^{2}+2m^{2}}\leq\hat{I}_{2,1}^{d=4}\leq\frac{M^{2}+6m^{2}}{2M^ {2}+6m^{2}}\,,\quad\text{for }d=4\,. \tag{29}\] Let us arbitrarily choose an example numerical point, \[M^{2}=2,\quad m^{2}=1\,, \tag{30}\] At this point, Eq. (29) becomes \[0.5\leq\hat{I}_{2,1}^{d=4}\leq 0.8\,,\quad\text{for }M^{2}=2,m^{2}=1\,. \tag{31}\] This is consistent with the analytic result given in Appendix A with \(p^{2}=-M^{2}\), \[\hat{I}_{2,1}^{d=4} =I_{2,1}^{d=4}/I_{3,0}^{d=4}=\frac{2m^{2}}{\beta M^{2}}\log\frac{ \beta+1}{\beta-1}\,, \tag{32}\] \[\beta \equiv\sqrt{1+\frac{4m^{2}}{M^{2}}}\, \tag{33}\] which evaluates to \[\hat{I}_{2,1}^{d=4}\approx 0.7603459963\,,\quad\text{at }M^{2}=2,m^{2}=1\,. \tag{34}\] ince our crude positivity bound Eq. (31) and the analytic result Eq. (32) only depend on the dimensionless ratio \(M^{2}/m^{2}\), we plot them in Fig. 2. It is not surprising that all three curves in the plot tend to 1 as \(M^{2}/m^{2}\to 0\), since in this case we can set the external momenta to 0, and the bubble and tadpole integrals in Eq. (9) then become identical. The general observation is that our positivity bounds can become exact in special limits of kinematics or the spacetime dimension. #### 2.2.2 Formulation in terms of matrix eigenvalues In preparation for the introduction of semidefinite programming, we will first recast positivity constraints in terms of eigenvalues of an appropriate matrix. To simplify the notation of Eq. (1), let us define the following shorthand notations for the denominators, \[\rho_{1}=-l^{2}+m^{2},\qquad\rho_{2}=-(p+l)^{2}+m^{2}\,. 
\tag{35}\] Let us consider a class of positive-semidefinite integrals of the bubble family, parametrized by three real numbers \(\alpha_{1},\alpha_{2},\alpha_{3}\), \[\int\frac{d^{d}\mathbf{l}\,e^{\gamma E\epsilon}}{i\pi^{d/2}}\frac{1}{\rho_{1}^{2} \rho_{2}}\left(\alpha_{1}+\frac{\alpha_{2}}{\rho_{1}}+\frac{\alpha_{3}}{\rho_ {2}}\right)^{2}=\sum_{i,j}\alpha_{i}M_{ij}\alpha_{j}=\vec{\alpha}^{T}\,\mathbb{ M}\,\vec{\alpha}\,, \tag{36}\] where the last line switched to a notation involving a length-3 column vector \[\vec{\alpha}=\begin{pmatrix}\alpha_{1}\\ \alpha_{2}\\ \alpha_{3}\end{pmatrix} \tag{37}\] and a \(3\times 3\) symmetric matrix \(\mathbb{M}\), given by \[\mathbb{M}=\begin{pmatrix}I_{2,1}^{d}&I_{3,1}^{d}&I_{2,2}^{d}\\ I_{3,1}^{d}&I_{4,1}^{d}&I_{3,2}^{d}\\ I_{2,2}^{d}&I_{3,2}^{d}&I_{2,3}^{d}\end{pmatrix}\,, \tag{38}\] Figure 2: Comparison between ad hoc positivity bounds Eq. (29) and the analytic result for the bubble integral \(\tilde{I}_{2,1}^{d=4}\) normalized according to Eq. (14). using the index notation Eq. (1). The expression Eq. (36) is non-negative for any choice of \((\alpha_{1},\alpha_{2},\alpha_{3})\) because after Wick rotation, \(\rho_{1}\) and \(\rho_{2}\) are non-negative and the squared expression is also non-negative. Therefore, the symmetric matrix \(\mathbb{M}\) must be positive-semidefinite, represented by the shorthand notation \[\mathbb{M}\succcurlyeq 0\,,\,. \tag{39}\] By IBP reduction, the matrix entries of \(\mathbb{M}\) are integrals which can be re-expressed as linear sums of the two finite master integrals on the LHS of Eq. (9). As an example, \[\mathbb{M}_{23}=\mathbb{M}_{32}=\int\frac{d^{d}\mathbf{l}\,e^{\gamma_{E}\epsilon}}{ i\pi^{d/2}}\frac{1}{\rho_{1}^{2}\rho_{2}^{2}}=I_{2,2}^{d}, \tag{40}\] which is then reduced to the finite master integrals according to Eq. (10). Therefore \(\mathbb{M}\) can be written as the sum of two individual master integral contributions, \[\mathbb{M}=I_{3,0}^{d}\mathbb{M}_{1}+I_{2,1}^{d}\mathbb{M}_{2}=I_{3,0}^{d} \left(\mathbb{M}_{1}+\hat{I}_{2,1}^{d}\mathbb{M}_{2}\right)\,, \tag{41}\] using the notation \(\hat{I}_{2,1}^{d}=I_{2,1}^{d}/I_{3,0}^{d}\) introduced in Eq. (13). Since \(I_{3,0}\) is itself positive, the positive-semidefiniteness of \(\mathbb{M}\) implies \[\widetilde{\mathbb{M}}\equiv\mathbb{M}/I_{3,0}^{d}=\mathbb{M}_{1}+\hat{I}_{2, 1}^{d}\mathbb{M}_{2}\succcurlyeq 0\,, \tag{42}\] again employing the shorthand notation introduced in Eq. (39) to indicate positive-semidefiniteness. Equivalently, all the eigenvalues of \(\mathbb{M}_{1}+\hat{I}_{2,1}^{d}\mathbb{M}_{2}\) must be non-negative. In Eq. (41), the matrices \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\) contain entries that are rational functions of the spacetime \(d\) and kinematic variables \(p^{2},m^{2}\), since IBP reduction always produces rational coefficients for master integrals. 
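As a quick numerical illustration of the statement \(\mathbb{M}\succcurlyeq 0\) in Eqs. (38), (39) and (42), one can fill in the entries of \(\widetilde{\mathbb{M}}\) directly, bypassing IBP reduction: at \(d=4\) every entry is a convergent one-dimensional Feynman-parameter integral (the representation written down later in Eq. (56)), normalized by the tadpole, which in the conventions of Eq. (1) equals \(I^{d=4}_{3,0}=1/(2m^{2})\). A minimal sketch, assuming SciPy is available, at the kinematic point Eq. (30):

```python
# Sketch: fill the matrix of Eq. (38), normalized as in Eq. (42), by direct
# numerical integration at d = 4 and the point of Eq. (30), then inspect its spectrum.
import numpy as np
from math import gamma
from scipy.integrate import quad

p2, m2 = -2.0, 1.0                       # M^2 = -p^2 = 2, m^2 = 1 (Eq. (30))
F = lambda x: m2 - p2 * x * (1.0 - x)    # Feynman-parameter polynomial, positive here

def Ihat(a1, a2):
    """I^{d=4}_{a1,a2} / I^{d=4}_{3,0} for a1, a2 >= 1 and a1 + a2 >= 3,
    using the parametric form of Eq. (56) and I^{d=4}_{3,0} = 1/(2 m^2)."""
    integrand = lambda x: x**(a1 - 1) * (1 - x)**(a2 - 1) / F(x)**(a1 + a2 - 2)
    val, _ = quad(integrand, 0.0, 1.0)
    return 2.0 * m2 * gamma(a1 + a2 - 2) / (gamma(a1) * gamma(a2)) * val

Mtilde = np.array([[Ihat(2, 1), Ihat(3, 1), Ihat(2, 2)],
                   [Ihat(3, 1), Ihat(4, 1), Ihat(3, 2)],
                   [Ihat(2, 2), Ihat(3, 2), Ihat(2, 3)]])

print(Ihat(2, 1))                   # ~ 0.76035, cf. Eq. (34)
print(np.linalg.eigvalsh(Mtilde))   # all eigenvalues non-negative, as Eq. (42) demands
```

Of course, the point of the method is the reverse direction: only \(\hat{I}_{2,1}^{d}\) is treated as unknown, and positive-semidefiniteness of \(\mathbb{M}_{1}+\hat{I}_{2,1}^{d}\mathbb{M}_{2}\) then constrains it, as done next.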
Although the general \(d\) dependence is not complicated, we will present \(\mathbb{M}_{1}\) and \(\mathbb{M}_{2}\) in the \(d=4\) case for brevity of presentation, \[\left.\mathbb{M}_{1}\right|_{d=4} =\begin{pmatrix}0&\frac{2m^{2}+M^{2}}{M^{2}(4m^{2}+M^{2})}&-\frac {4}{M^{2}(4m^{2}+M^{2})}\\ \frac{2m^{2}+M^{2}}{M^{2}(4m^{2}+M^{2})}&\frac{\left(m^{2}+M^{2}\right)\left(6 m^{2}+M^{2}\right)}{3m^{2}M^{2}(4m^{2}+M^{2})^{2}}&-\frac{2m^{2}-M^{2}}{m^{2}M^{2}(4 m^{2}+M^{2})^{2}}\\ -\frac{4m^{2}}{M^{2}(4m^{2}+M^{2})}&\frac{M^{2}-2m^{2}}{M^{2}(4m^{2}+M^{2})^{2} }&-\frac{2m^{2}-M^{2}}{m^{2}M^{2}(4m^{2}+M^{2})^{2}}\end{pmatrix}\,, \tag{43}\] \[\left.\mathbb{M}_{2}\right|_{d=4} =\begin{pmatrix}1&-\frac{2m^{2}}{M^{2}(4m^{2}+M^{2})}&\frac{2 (2m^{2}+M^{2})}{m^{2}M^{2}(4m^{2}+M^{2})}\\ -\frac{2m^{2}}{M^{2}(4m^{2}+M^{2})}&-\frac{2m^{2}}{M^{2}(4m^{2}+M^{2})^{2}}& \frac{2\left(m^{2}+M^{2}\right)}{m^{2}M^{2}(4m^{2}+M^{2})^{2}}\\ \frac{2\left(2m^{2}+M^{2}\right)}{M^{2}(4m^{2}+M^{2})}&\frac{2\left(m^{2}+M^{ 2}\right)}{M^{2}(4m^{2}+M^{2})^{2}}&\frac{2\left(m^{2}+M^{2}\right)}{m^{2}M^{2 }(4m^{2}+M^{2})^{2}}\end{pmatrix}\,. \tag{44}\] Now we treat \(\hat{I}_{2,1}^{d}\) as an undetermined parameter to be constrained by Eq. (42). Again we look at the example numerical point as in Eq. (30), \[M^{2}=2,\quad m^{2}=1\,, \tag{45}\] and plot the three eigenvalues of the \(3\times 3\) matrix \(\widetilde{\mathbb{M}}=\mathbb{M}_{1}+\hat{I}_{2,1}^{d}\mathbb{M}_{2}\) in Fig. 3. It can be seen in the figure that most of the parameter range shown is ruled out due to the presence of a negative eigenvalue indicated by the lowest orange curve. In Fig. 4, we zoom in to a smaller parameter region and only plot the smallest eigenvalue, since the matrix is positive semidefinite as long as the smallest eigenvalue is non-negative. The allowed parameter region shown in the plot is \[0.630\leq\hat{I}_{2,1}^{d=4}\leq 0.847\,, \tag{46}\] which provides a more stringent lower bound than the previous result Eq. (31). In fact, far better lower and upper bounds can be achieved in this approach by using an ansatz larger than the one in Eq. (36). For example, let us replace the squared term in Eq. (36) by an arbitrary degree-3 polynomial in \(1/\rho_{1}\) and \(1/\rho_{2}\), parametrized by 10 undetermined free coefficients. We obtain the constraint, \[0<\int\frac{d^{d}\mathbf{l}\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\frac{1}{\rho_{1}^ {2}\rho_{2}}\left(\alpha_{1}+\frac{\alpha_{2}}{\rho_{1}}+\frac{\alpha_{3}}{ \rho_{2}}+\frac{\alpha_{4}}{\rho_{1}^{2}}+\frac{\alpha_{5}}{\rho_{1}\rho_{2}}+ \frac{\alpha_{6}}{\rho_{2}^{2}}+\frac{\alpha_{7}}{\rho_{1}^{3}}+\frac{\alpha_{ 8}}{\rho_{1}^{2}\rho_{2}}+\frac{\alpha_{9}}{\rho_{1}\rho_{2}^{2}}+\frac{\alpha _{10}}{\rho_{2}^{3}}\right)^{2}\,. \tag{47}\] Repeating the above analysis, we obtain a constraint for \(\hat{I}_{2,1}^{d}\) similar to Eq. (42), except that the matrices involved have \(10\times 10\) sizes. The ten eigenvalues as functions of \(\hat{I}_{2,1}^{d}\) are plotted in Fig. 5, as an "upgraded" version of Fig. 3 with a larger ansatz size. Some of the ten eigenvalues are too close to each other on the plot to be seen individually, but only the smallest eigenvalue (represented by the lowest curve in orange color) matters as Figure 4: Magnified version of the vicinity of a small region of Fig. 3 in which the lowest eigenvalue of \(\tilde{\mathbb{M}}\), shown in the curve, is non-negative. Figure 3: Three eigenvalues of \(\widetilde{\mathbb{M}}\) defined in Eq. 
(42) as a function of \(\hat{I}_{2,1}^{d}\), at the kinematic point Eq. (30). The lowest eigenvalue corresponds to the bottom orange curve and the remaining two eigenvalues correspond to the upper black curves. it determines whether we can satisfy the constraint that all eigenvalues are positive. In Fig. (6), we zoom in to the small allowed range for the parameter \(\hat{I}_{2,1}^{d}\). The allowed region, as shown in the plot, is \[0.7598\leq\hat{I}_{2,1}^{d=4}\leq 0.7610\,, \tag{48}\] which tightly constrains \(\hat{I}_{2,1}^{d=4}\) around its true value \(\hat{I}_{2,1}^{d=4}\approx 0.7603\) from evaluating the known analytic result at \(d=4,M^{2}=-p^{2}=2\), with a relative error of around \(10^{-3}\). The above two-sided bounds are rigorous, but we will also explore a prescription to assign a "central value", or "best estimate", of the value of the integrals. The prescription described below, though not justified from first principles, empirically achieve a closer agreement with true values of the integrals than the rigorous bounds in the examples in this paper. The prescription is simply finding the value of \(\hat{I}_{2,1}^{d}\) which maximizes the smallest eigenvalue of the matrix that is required to be positive semidefinite, e.g. the matrix \(\widetilde{M}\) of Eq. (41). For the positivity constraint Eq. (47) with 10 free parameters, the prescription picks the value of \(\hat{I}_{2,1}^{d}\) corresponding to the maximum of the curve in Fig. 6, which deviates from the exact result, again at the example point \(d=4,m=1,M^{2}=-p^{2}=2\), by only a relative error of about \(10^{-6}\). In this case, the prescription happens to produce a value that is very close to the exact result, but typically we observe the prescription to give a "central value" that is one to two orders of magnitude better than the accuracy indicated by rigorous bounds. Figure 5: Eigenvalues as a function \(\hat{I}_{2,1}^{d}\), for the \(10\times 10\) symmetric matrix that represent the quadratic dependence of the RHS of Eq. (47) on the \(\alpha_{i}\) parameters after factoring out \(\hat{I}_{3,0}^{d}\). Figure 6: A magnified version of Fig. 5 around the small region of \(\hat{I}_{2,1}^{d}\) where all eigenvalues are positive, showing only the smallest eigenvalue. #### 2.2.3 High-precision evaluation using semidefinite programming We have formulated positivity constraints in terms of eigenvalues of a matrix which, in the toy example of the one-loop bubble integrals, depends linearly on only one undetermined parameter, as shown in Eq. (42). For more complicated Feynman integrals, there will be more than one master integrals to be evaluated and all of them will be considered as undetermined parameters. So a search in higher-dimensional space is needed to locate the region in which all eigenvalues of the matrix are non-negative, and this can become computationally challenging. Fortunately, very efficient algorithms exist to solve _semidefinite programming problems_ in mathematical optimization [53]. Loosely speaking, semidefinite programs are generalizations of linear programs allowing not only linear constraints but also positive semidefiniteness constraints on matrices that have linear dependence on the optimization variables. Here we show how our problem of constraining unknown master integrals can be stated as semidefinite programming problems. 
Following the treatment of Section 2.2.2 above, finding the minimum allowed value of \(\hat{I}^{d}_{2,1}\) can be formulated as \[\begin{split}\text{minimize}&\quad\hat{I}^{d}_{2,1}\,,\\ \text{subject to}&\quad\mathbb{M}_{1}+\hat{I}^{d}_{2,1}\cdot\mathbb{M}_{2}\succcurlyeq 0\,,\end{split} \tag{49}\] which is in the form of a semidefinite program. We used the \(\succcurlyeq 0\) notation, already introduced in Eq. (39), to indicate that the matrix on the LHS must be positive semidefinite. Similarly, finding the maximum allowed value of \(\hat{I}^{d}_{2,1}\) can be formulated as \[\begin{split}\text{maximize}&\quad\hat{I}^{d}_{2,1}\,,\\ \text{subject to}&\quad\mathbb{M}_{1}+\hat{I}^{d}_{2,1}\cdot\mathbb{M}_{2}\succcurlyeq 0\,.\end{split} \tag{50}\] Finally, to implement our prescription of maximizing the smallest eigenvalue to find the "central value" of the undetermined master integrals, we introduce an additional undetermined parameter \(\lambda\) and formulate the problem as \[\begin{split}\text{maximize}&\quad\lambda\,,\\ \text{subject to}&\quad\mathbb{M}_{1}+\hat{I}^{d}_{2,1}\cdot\mathbb{M}_{2}-\lambda\mathbb{I}\succcurlyeq 0\,.\end{split} \tag{51}\] This is again in the form of a semidefinite program, where both \(\hat{I}^{d}_{2,1}\) and \(\lambda\) are undetermined parameters whose values will be fixed to satisfy the optimization objective, namely to maximize \(\lambda\). Note that in this case, finding an optimal solution to the semidefinite program does not guarantee \(\mathbb{M}_{1}+\hat{I}^{d}_{2,1}\cdot\mathbb{M}_{2}\succcurlyeq 0\), _unless_ the value of \(\lambda\) in the solution is non-negative. The value of \(\hat{I}^{d}_{2,1}\) in the solution is then taken as the central value for this undetermined free parameter. There exist many computer codes that specialize in solving semidefinite programs. Wolfram Mathematica has supported semidefinite programming since version 12 with the SemidefiniteOptimization function, and the default backend (which can be changed by the Method option) is the open source library CSDP [65], at least for the problems we deal with, working with double-precision floating-point numbers, i.e. the standard machine precision on current hardware. The SDPA family [66] of computer programs supports computation at double precision as well as a variety of extended precisions, e.g. double-double precision with SDPA-DD, quadruple-double precision with SDPA-QD, and arbitrary precision with SDPA-GMP. The SDPB solver by Simmons-Duffin [67] specializes in polynomial programming problems in the conformal bootstrap and works in arbitrary precision. Most of the work in this paper will make use of the SDPA family, while some results from Mathematica / CSDP will also be shown for the purpose of comparison. We go on to discuss how to achieve higher numerical precision for the one-loop bubble integral. We enlarge the ansatz for positive integrals in Eqs. (36) and (47) to have more parameters, as \[0<\int\frac{d^{d}l\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\frac{1}{\rho_{1}^{2}\rho_{2}}P(1/\rho_{1},1/\rho_{2})^{2}\,, \tag{52}\] where \(P\) is an arbitrary polynomial with a maximum degree \(N\), i.e. an arbitrary linear sum of all monomials in \(1/\rho_{1}\) and \(1/\rho_{2}\), with each monomial multiplied by a free parameter \(\alpha_{i}\). The \(N=1\) and \(N=3\) cases are shown previously in Eqs. (36) and (47), respectively. Generally, the number of possible monomials in two variables with maximum degree \(N\) is equal to \((N+1)(N+2)/2\).
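For readers who prefer an open-source scripting route, the three programs (49)-(51) map almost verbatim onto a generic SDP modelling layer. The sketch below assumes the Python package cvxpy (with any SDP-capable backend) is available; this is not the toolchain used in this paper, and the matrices \(\mathbb{M}_{1}\), \(\mathbb{M}_{2}\) are placeholders standing for the numerical output of the IBP reduction at the chosen kinematic point (e.g. Eqs. (43)-(44) evaluated at Eq. (45)).

```python
# Minimal sketch (not the setup used in the paper): Eqs. (49)-(51) via cvxpy.
import cvxpy as cp
import numpy as np

# Placeholder input: symmetric matrices M1, M2 from IBP reduction at the chosen
# kinematic point, stored e.g. in plain text files (hypothetical file names).
M1 = np.loadtxt("M1_d4.dat")
M2 = np.loadtxt("M2_d4.dat")

Ihat = cp.Variable()                      # the unknown ratio I_{2,1}^d / I_{3,0}^d
M = M1 + Ihat * M2                        # affine in the unknown, as in Eq. (42)
M = (M + M.T) / 2                         # explicit symmetrization (no-op for symmetric M1, M2)
psd = [cp.lambda_min(M) >= 0]             # smallest eigenvalue >= 0, i.e. M is PSD

lower = cp.Problem(cp.Minimize(Ihat), psd).solve()   # Eq. (49): rigorous lower bound
upper = cp.Problem(cp.Maximize(Ihat), psd).solve()   # Eq. (50): rigorous upper bound

cp.Problem(cp.Maximize(cp.lambda_min(M))).solve()    # Eq. (51): maximize the smallest eigenvalue
central = Ihat.value                                 # "central value" prescription

print(lower, central, upper)
```

Double precision is enough for small cutoff degrees; as stressed below, extended-precision solvers such as the SDPA variants become necessary once the matrices grow.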
For each value of \(N\) from 1 to 10, we again consider \(\hat{I}_{2,1}^{d}=I_{2,1}^{d}/I_{3,0}^{d}\) at \(d=4,m=1,M^{2}=-p^{2}=2\) and solve the semidefinite programs in Eqs. (49) to (51) to obtain the lower bound \((\hat{I}_{2,1}^{d})_{\rm min}\), upper bound \((\hat{I}_{2,1}^{d})_{\rm max}\), and central value \((\hat{I}_{2,1}^{d})_{\rm central}\) for the undetermined parameter \(\hat{I}_{2,1}^{d}\). To ensure numerical stability, we use the SDPA-DD solver working at double-double precision. Then we compare with the exact result \((\hat{I}_{2,1}^{d})_{\rm exact}\) to find the relative error in the best estimate value, defined as \[\left|\frac{(\hat{I}_{2,1}^{d})_{\rm central}}{(\hat{I}_{2,1}^{d})_{\rm exact}}-1\right| \tag{53}\] as well as the relative error of the rigorous bounds, defined as \[\left|\frac{(\hat{I}_{2,1}^{d})_{\rm max}-(\hat{I}_{2,1}^{d})_{\rm min}}{2(\hat{I}_{2,1}^{d})_{\rm exact}}\right| \tag{54}\] In Fig. 7, we plot the relative errors against the cutoff degree \(N\) of the polynomial \(P\) in Eq. (52). The plot is on a log scale, and we can see that the numerical results appear to converge exponentially to the true value, reaching a precision as high as \(10^{-14}\) with a cutoff degree of 10. The central value from our somewhat arbitrary prescription is seen to be consistently more precise than the rigorous bounds. We revisit the issue of numerical stability. In Fig. 8, we compare the accuracy of the central values obtained by SDPA-DD with double-double precision, which was used above, and by Mathematica / CSDP with double precision. The two results visibly deviate from each other once the cutoff degree is 5 or above, and only the double-double precision computation continues to exhibit an exponential reduction in errors as the cutoff degree is increased. This signals that numerical instability has occurred if one computes at double precision only. We have checked that the accuracies do not improve further when computing with quadruple-double precision using SDPA-QD. ### Positivity constraints in Feynman parameter space In Section 2.2, we used positivity constraints in loop momentum space to evaluate bubble integrals defined in Eq. (2.1) in the case \(p^{2}<0\), when it is possible to Wick-rotate the integrals into Euclidean spacetime with real-valued external momenta. We now use positivity constraints in Feynman parameter space instead to evaluate bubble integrals for any value of \(p^{2}\) less than \(4m^{2}\), which is what is commonly referred to as the "Euclidean region" where the bubble integrals have no imaginary parts. Figure 8: Relative errors in numerical results for the central values of \(\hat{I}_{2,1}^{d=4}\) at the kinematic point Eq. (2.30), obtained with semidefinite programming solvers working at two different numerical precisions, namely Mathematica / CSDP working at double precision and SDPA-DD working at double-double precision. As the plot shows, double-double precision is needed for the relative errors to improve exponentially beyond a cutoff degree of 5. Figure 7: Relative errors in numerical results for \(\hat{I}_{2,1}^{d=4}\) at the kinematic point Eq. (2.30) from solving positivity constraints Eq. (2.52) using semidefinite programming. The horizontal axis is the cutoff degree of the polynomial \(P\). The relative errors are defined by Eq. (2.54) for the rigorous bounds and Eq. (2.53) for the central values. We write down the Feynman parameter representation of bubble integrals defined in Eq.
(1), \[I^{d}_{a_{1},a_{2}} \equiv\int\frac{d^{d}l\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\frac{1}{( -l^{2}+m^{2})^{a_{1}}[-(p+l)^{2}+m^{2}]^{a_{2}}}\] \[=\frac{\Gamma(a_{1}+a_{2}-d/2)e^{\gamma_{E}\epsilon}}{\Gamma(a_{1 })\Gamma(a_{2})}\int_{0}^{\infty}dx_{1}\int_{0}^{\infty}dx_{2}\,\delta(1-x_{1} -x_{2})\] \[\quad\times x_{1}^{a_{1}-1}x_{2}^{a_{2}-1}\frac{\mathcal{U}(x_{1},x_{2})^{a_{1}+a_{2}-d}}{\mathcal{F}(x_{1},x_{2})^{a_{1}+a_{2}-d/2}} \tag{55}\] \[=\frac{\Gamma(a_{1}+a_{2}-d/2)e^{\gamma_{E}\epsilon}}{\Gamma(a_{1 })\Gamma(a_{2})}\int_{0}^{1}dx\,x^{a_{1}-1}(1-x)^{a_{2}-1}\frac{1}{\mathcal{F }(x)^{a_{1}+a_{2}-d/2}}\,, \tag{56}\] where the graph polynomials were already given in Eq. (57) for the Schwinger parametrization, printed again here: \[\mathcal{U}(x_{1},x_{2})=x_{1}+x_{2},\quad\mathcal{F}(x_{1},x_{2})=m^{2}(x_{1 }+x_{2})^{2}-p^{2}x_{1}x_{2}-i0^{+}\,. \tag{57}\] In the last line of Eq. (56), we integrated \(x_{2}\) over the delta function in Eq. (55) to arrive at Eq. (56) with \[\mathcal{F}(x)\equiv\mathcal{F}(x,1-x)=m^{2}-p^{2}x(1-x)-i0^{+}\,, \tag{58}\] and \[\mathcal{U}(x)\equiv\mathcal{U}(x,1-x)\equiv 1\,, \tag{59}\] which appears as a unit numerator in Eq. (56). _Aside:_ we note that Eq. (55) has the property that when ignoring the Dirac delta function \(\delta(1-\sum_{i}x_{i})\), the rest of the expression (with the integration measure taken into account) is invariant under the rescaling \[x_{i}\to\lambda x_{i}, \tag{60}\] where \(\lambda\) is an arbitrary nonzero real number. This is called _projective invariance_ and holds for the Feynman parameter form of arbitrary Feynman integrals written down in Eq. (80). The deeper reason is that Feynman parameter integrals can generally be written as integrals in real projective space \(\mathbb{RP}^{N-1}\) (see e.g. Section 2.5.3 of Ref. [31]). \(\mathbb{RP}^{N-1}\) is the space of \(N\) real coordinates \(x_{i}\), excluding the origin, where any ray, i.e. a set of points related to each other by a rescaling Eq. (60), is identified as the same point. Abusing the language of gauge theory, Eq. (60) is a gauge symmetry and \(\delta(1-\sum_{i}x_{i})\) in Eq. (55) is a gauge-fixing term that restricts the integration to one of the infinitely many gauge-equivalent slices. The Fadeev-Popov Jacobian associated with this gauge-fixing term is unity and therefore does not appear explicitly. Projective invariance is not a prerequisite for following the rest of the paper, though it helps motivate some of the developments. Now we specialize to the following kinematic region for bubble integrals, \[0<p^{2}<4m^{2}, \tag{61}\] i.e. with the value of \(p^{2}\) below the Cutkosky cut threshold but cannot be trivially Wick-rotated into Euclidean spacetime. We have \[m^{2}-p^{2}x(1-x)\geq m^{2}-p^{2}/4>0, \tag{62}\] so the \(-i0^{+}\) prescription in Eq. (58) is negligible and can be dropped, and the integral is real. For the rest of the paper, we will adopt the common terminology of the Euclidean region to be the kinematic region in which all graph polynomials are non-negative and the Feynman integral is real-valued due to the lack of Cutkosky cuts. Generally such integrals cannot be embedded into Euclidean spacetime. This is e.g. the working definition when the literature refers to the Euclidean region of Feynman integrals with massless external legs, since nonzero massless momenta cannot be literally embedded into Euclidean spacetime. If we set \(a_{2}=1\), we can invert Eq. 
(56) to obtain \[\int_{0}^{1}dx\,x^{a_{1}-1}\frac{1}{\mathcal{F}(x)^{1+a_{1}-d/2}}\] \[=\int_{0}^{1}dx\,x^{a_{1}-1}\frac{1}{\left[m^{2}-p^{2}x(1-x) \right]^{1+a_{1}-d/2}}\] \[=\frac{\Gamma(a_{1})}{e^{\gamma_{E}\epsilon}\,\Gamma(a_{1}+1-d/2 )}I^{d}_{a_{1},1}\,, \tag{63}\] It will be more useful to have a version of the above equation with a fixed exponent for \(\mathcal{F}(x)\) on the LHS, even when the value of \(a_{1}\) changes. Below is a version with a fixed exponent \(d/2-3\) for \(\mathcal{F}(x)\), obtained by replacing \(d\to d+2(a_{1}-2)\) in Eq. (63) and multiplying by a constant prefactor, \[2(m^{2})^{3-d/2}\int_{0}^{1}dx\,x^{a_{1}-1}\frac{1}{\mathcal{F}( x)^{3-d/2}}\] \[=2\int_{0}^{1}dx\,x^{a_{1}-1}\frac{1}{\left[1-p^{2}x(1-x)/m^{2} \right]^{3-d/2}}\] \[=\frac{2\Gamma(a_{1})(m^{2})^{1+\epsilon}}{e^{\gamma_{E}\epsilon }\,\Gamma(3-d/2)}I^{d+2a_{1}-4}_{a_{1},1}\] \[=I^{d+2a_{1}-4}_{a_{1},1}/I^{d}_{3,0}\,, \tag{64}\] where the last line used the explicit result for \(I^{d}_{3,0}\) in Eq. (A.6). We define \[\hat{F}(x)=\mathcal{F}(x)/m^{2}=1-p^{2}x(1-x)/m^{2}\,, \tag{65}\] and rewrite Eq. (64) as \[2\int_{0}^{1}dx\,x^{a_{1}-1}\frac{1}{\hat{F}(x)^{3-d/2}}=I^{d+2a_{1}-4}_{a_{1 },1}/I^{d}_{3,0}\,. \tag{66}\] The RHS of Eq. (66) can be simplified further, as IBP identities and dimension-shifting identities can be applied to reduce \(I^{d+2a_{1}-4}_{a_{1},1}\) to a linear combination of the two master integrals, \(I^{d}_{2,1}\) and \(I^{d}_{3,0}\). We will only use Eq. (66) in the case \(a_{1}\geq 2\), when the RHS involves a bubble integral in spacetime dimension greater than or equal to \(d=4-2\epsilon\). As an example, consider the case \(a_{1}=3\), and Eq. (66) becomes \[2\int_{0}^{1}dx\,x^{2}\frac{1}{\hat{F}(x)^{3-d/2}}=I_{3,1}^{d+2}/I_{3,0}^{d}\,. \tag{67}\] Then we simplify the expression using the IBP reduction result Eq. (11) with \(d\) replaced by \(d+2\), obtaining \[2\int_{0}^{1}dx\,x^{2}\frac{1}{\hat{F}(x)^{3-d/2}}=\frac{1}{2p^{2}(4m^{2}-p^{2 })}\Big{(}[(2-d)p^{2}+4m^{2}]I_{2,1}^{d+2}+2(p^{2}-2m^{2})I_{3,0}^{d+2}\Big{)}/I _{3,0}^{d}\,. \tag{68}\] Finally, applying dimension-shifting identities Eqs. (22) and (23), the above equation becomes \[2\int_{0}^{1}dx\,x^{2}\frac{1}{\hat{F}(x)^{3-d/2}} =\left(\frac{(d-2)p^{2}-4m^{2}}{4(d-3)p^{2}}I_{2,1}^{d}+\frac{m^{ 2}}{(d-3)p^{2}}I_{3,0}^{d}\right)/I_{3,0}^{d}\] \[=\frac{(d-2)p^{2}-4m^{2}}{4(d-3)p^{2}}\hat{I}_{2,1}^{d}+\frac{m^{ 2}}{(d-3)p^{2}}\,. \tag{69}\] This concludes our example for simplifying the RHS of Eq. (66) in the case of \(a_{1}=3\). We now formulate a first version of positivity constraints for any \(d<6\), to be improved upon later, as, \[0\leq 2\int_{0}^{1}dx\,xP(x)^{2}\frac{1}{\hat{F}(x)^{3-d/2}}\,, \tag{70}\] where \(P(x)\) is an arbitrary polynomial in \(x\), which is analogous to the arbitrary polynomial \(P(1/\rho_{1},1/\rho_{2})\) in Eq. (52) used to construct positive integrals in momentum space. In the special case \(P(x)=1\), the RHS of Eq. (70) is simply proportional to \(I_{2,1}^{d}\) in unshifted spacetime dimension \(d\), according to Eq. (66). Since \(x\) is non-negative in the range of integration \(0\leq x\leq 1\), \(xP(x)^{2}\) is non-negative. We have chosen to use \(xP(x)^{2}\) instead of just \(P(x)^{2}\) to ensure that each monomial in the expanded expression contains at least one power of \(x\) and is related to an ultraviolet convergent integral in Eq. (66) as discussed above. Another valid choice is \((1-x)P(x)^{2}\), but this will not give more constraints for the bubble integral, because in Eq. 
(56), \(\mathcal{F}(x)=p^{2}-m^{2}x(1-x)\) is invariant under the exchange \(x\leftrightarrow 1-x\), owing to a reflection symmetry of the bubble diagram. It is possible to slightly refine the inequality Eq. (70) to make the constraint stronger. As we assume \(p^{2}>0\), we have \[\mathcal{F}(x)=m^{2}-p^{2}x(1-x)<m^{2},\quad\hat{F}(x)=\mathcal{F}(x)/m^{2}<1\,. \tag{71}\] So for any \(d<6\), we can modify Eq. (70) with an extra term, \[0\leq 2\int_{0}^{1}dx\,xP(x)^{2}\left(\frac{1}{\hat{F}(x)^{3-d/2}}-1\right)\,. \tag{72}\] Eq. (72) can also be written in a form that manifests the projective invariance discussed around Eq. (60), \[0\leq 2\int_{0}^{\infty}dx_{1}\int_{0}^{\infty}dx_{2}\,\delta(1-x_{1 }-x_{2})\] \[\quad\times\frac{x_{1}}{U(x_{1},x_{2})}P\left(\frac{x_{1}}{ \mathcal{U}(x_{1},x_{2})}\right)^{2}\left(\frac{\mathcal{U}(x_{1},x_{2})^{3-d }}{\hat{F}(x)^{3-d/2}}-\frac{1}{\mathcal{U}(x_{1},x_{2})^{3}}\right)\,, \tag{73}\] but we will use the form Eq. (72) below. To be more concrete, in Eq. (72), we use \[P(x)=\alpha_{1}+\alpha_{2}x^{2}+\cdots+\alpha_{N}x^{N}\,, \tag{74}\] where \(N\) is the cutoff degree of the polynomial and \(\alpha_{i}\) with \(1\leq i\leq N\) are free parameters, and Eq. (72) must hold for any values of the \(\alpha_{i}\) parameters. For each monomial from expanding \(xP(x)^{2}\), the first term in the curly bracket of Eq. (72) gives a bubble integral in a shifted dimension, normalized against the tadpole integral \(I_{3,0}^{d}\), according to the formula Eq. (66), while the second term in the curly bracket of Eq. (72) contributes to an integral of a monomial in \(x\) over \(0\leq x\leq 1\) which can be evaluated trivially. Using both dimension shifting identities and IBP identities, the bubble integrals produced above are rewritten as linear combinations of finite master integrals Eq. (9). Therefore Eq. (72) is turned into the form \[\vec{\alpha}^{T}\,\mathbb{M}\,\vec{\alpha}\geq 0\,, \tag{75}\] similar to the momentum-space version Eq. (36), with \[\mathbb{M}=\mathbb{M}_{1}+\hat{I}_{2,1}^{d}\mathbb{M}_{2}\,, \tag{76}\] where we use the definition \(\hat{I}_{2,1}^{d}=I_{2,1}^{d}/I_{3,0}^{d}\) as before, and the "inhomogeneous" term \(\mathbb{M}_{1}\) receives contribution from both constant terms in Eq. (72) and tadpole integrals coming from dimension-shifting and IBP identities. Following analogous developments in Sections 2.2.2 and 2.2.3, we solve the constraint \[\mathbb{M}_{1}+\hat{I}_{2,1}^{d}\mathbb{M}_{2}\succcurlyeq 0 \tag{77}\] while minimizing or maximizing \(\hat{I}_{2,1}^{d}\) to find rigorous bounds for \(\hat{I}_{2,1}^{d}\), or alternatively maximizing the smallest eigenvalue of \(\mathbb{M}_{1}+\hat{I}_{2,1}^{d}\mathbb{M}_{2}\) to find a central value for \(\hat{I}_{2,1}^{d}\) using the same prescription as described before. As an example, we pick numerical values \[p^{2}=2,m=1\,, \tag{78}\] for bubble integrals in \(d=4\), and compare our numerical results against the exact result. As \(p^{2}\) is positive, though below the Cutkosky cut threshold \(4m^{2}\), the Euclidean momentum space treatment in Section 2.2 is not applicable. SDPA-QD working at quadruple-double precision is used to compute central values and SDPA-GMP working at 8 times the double precision is used to compute rigorous bounds. In Fig. 9, we plot the relative error of the central value for \(\hat{I}_{2,1}^{d}\) as well as the relative error of the rigorous bounds for \(\hat{I}_{2,1}^{d}\). 
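Before reading off the plot, note that the exact reference value entering this comparison is easy to reproduce independently: at \(d=4\) the normalized bubble collapses to a one-dimensional parametric integral (Eq. (66) with \(a_{1}=2\), i.e. \(\hat{I}_{2,1}^{d=4}=2\int_{0}^{1}dx\,x/\hat{F}(x)\)). A minimal cross-check, assuming SciPy is available:

```python
# Sketch: the exact d = 4 value of Ihat_{2,1} at the point of Eq. (78), p^2 = 2, m^2 = 1.
from scipy.integrate import quad

p2, m2 = 2.0, 1.0
Fhat = lambda x: 1.0 - p2 * x * (1.0 - x) / m2      # Eq. (65), positive for p^2 < 4 m^2
val, err = quad(lambda x: 2.0 * x / Fhat(x), 0.0, 1.0)
print(val)   # 1.5707963267948966 = pi/2, the reference value the SDP output is compared to
```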
We can see on the log-scale plot that the numerical results again converge exponentially to the exact result as the cutoff degree is increased. In particular, with cutoff degree \(N=14\), the numerical result is \[\hat{I}_{2,1}^{d=4}\Big{|}_{p^{2}=2,\,m=1}\approx 1.57079632679413\,, \tag{79}\] which is slightly smaller than the exact result, with a relative error of \(4.9\times 10^{-13}\). ### Constraints for expansions in dimensional regularization parameter The methods described in Sections 2.2 and 2.3 are applicable to fixed spacetime dimensions, i.e. \(d=4-2\epsilon\) with fixed values of \(\epsilon\), which can be \(0\) if we target the \(4\)-dimensional case, or any value larger than \((-1)\) which will preserve the ultraviolet convergence properties of the integrals involved in the text above. However, for practical applications, Feynman integrals typically need to be evaluated as a Laurent expansion in \(\epsilon\). In the examples given in this paper, we can choose master integrals which are finite as \(\epsilon\to 0\), so the task is to calculate their Taylor expansions in \(\epsilon\). Any divergent integral can be reduced to rational-linear combinations of the master integrals, with all divergences absorbed into \(\epsilon\) poles of the coefficients.1 Footnote 1: In fact, it is believed that in general, one can choose “quasi-finite” master integrals [55] which are convergent as \(\epsilon\to 0\) except for a possible \(1/\epsilon\) pole that appears as an overall prefactor in the Feynman parameter representation. We now present two strategies for calculating the \(\epsilon\) expansion, using the one-loop bubble integral example. The first strategy, presented below, directly formulates positivity constraints for the \(\epsilon\) expansion terms; the second is numerical differentiation of the results with respect to \(\epsilon\) around \(\epsilon=0\). Figure 9: Relative errors in numerical results for \(\hat{I}_{2,1}^{d=4}\) at the kinematic point Eq. (78) from solving positivity constraints Eq. (72) in Feynman-parameter space using semidefinite programming. The horizontal axis is the cutoff degree of the polynomial \(P\). The relative errors are defined by Eq. (54) for the rigorous bounds and Eq. (53) for the central values. #### 2.4.1 Generic constraints We will take a break from the bubble integrals and write down the general form of the Feynman parametrization for an \(L\)-loop integral with \(n\) propagators, \[I^{d}_{a_{1},a_{2},\ldots,a_{n}}\equiv\left(\prod_{i=1}^{L}\int\frac{d^{d}l_{i}\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\right)\frac{1}{\rho_{1}^{a_{1}}\rho_{2}^{a_{2}}\ldots\rho_{n}^{a_{n}}}\] \[=\frac{\Gamma(a-Ld/2)e^{\gamma_{E}\epsilon}}{\Gamma(a_{1})\Gamma(a_{2})\ldots\Gamma(a_{n})}\int_{x_{i}\geq 0}d^{n}x_{i}\,\delta\left(1-\sum x_{i}\right)\,\left(\prod_{i}x_{i}^{a_{i}-1}\right)\frac{\mathcal{U}(x_{i})^{a-(L+1)d/2}}{\mathcal{F}(x_{i})^{a-Ld/2}}\,, \tag{2.80}\] where \(a\equiv\sum a_{i}\), and \(\mathcal{U}\) and \(\mathcal{F}\) are graph polynomials. In the Euclidean region, i.e. when external kinematics do not allow any Cutkosky cuts, \(\mathcal{U}\) and \(\mathcal{F}\) are non-negative in the range of integration. To make it easier to formulate positivity constraints, we adjust constant prefactors and slightly rewrite Eq.
(2.80) as \[\tilde{I}^{d}_{a_{1},a_{2},\ldots,a_{n}}\equiv\frac{\Gamma(a_{1}) \Gamma(a_{2})\ldots\Gamma(a_{n})}{\Gamma(a-Ld/2)e^{\gamma_{E}\epsilon}}I^{d}_{ a_{1},a_{2},\ldots,a_{n}}\] \[=\frac{\Gamma(a_{1})\Gamma(a_{2})\ldots\Gamma(a_{n})}{\Gamma(a-Ld /2)e^{\gamma_{E}\epsilon}}\left(\prod_{i=1}^{L}\int\frac{d^{d}l_{i}\,e^{ \gamma_{E}\epsilon}}{i\pi^{d/2}}\right)\frac{1}{\rho_{1}^{a_{1}}\rho_{2}^{a_ {2}}\ldots\rho_{n}^{a_{n}}}\] \[=\int_{x_{i}\geq 0}d^{n}x_{i}\,\delta\left(1-\sum x_{i}\right) \,\left(\prod_{i}x_{i}^{a_{i}-1}\right)\frac{\mathcal{U}(x_{i})^{a-(L+1)d/2}} {\mathcal{F}(x_{i})^{a-Ld/2}}\,. \tag{2.81}\] We will restrict our attentions to values of \(d\) and \(a_{i}\) under which the RHS of Eq. (2.4.1) is convergent. Since there are otherwise no restrictions on \(a_{i}\), generally the integrals are not master integrals which are usually chosen to have small values of \(a_{i}\). We set \[d=d_{0}-2\epsilon\,, \tag{2.82}\] where \(d_{0}\) is usually an integer spacetime dimension such as 4. The Taylor expansion of the LHS of Eq. (2.4.1) is written as \[\tilde{I}_{a_{1},a_{2},\ldots,a_{n}}=\tilde{I}_{a_{1},a_{2},\ldots,a_{n}}\big{|} _{\epsilon^{0}}+\epsilon\cdot\tilde{I}_{a_{1},a_{2},\ldots,a_{n}}\big{|}_{ \epsilon^{1}}+\epsilon^{2}\cdot\tilde{I}_{a_{1},a_{2},\ldots,a_{n}}\big{|}_{ \epsilon^{2}}\,\ldots \tag{2.83}\] The only \(\epsilon\) dependence of the RHS of Eq. (2.4.1) is in the exponents on the graph polynomials, so the \(\mathcal{O}(\epsilon^{k})\) term in the Taylor expansion is \[\tilde{I}_{a_{1},a_{2},\ldots,a_{n}}\Big{|}_{\epsilon^{k}}=\int_{x_{i}\geq 0}d^ {n}x_{i}\left(1-\sum x_{i}\right)\,\left(\prod_{i}x_{i}^{a_{i}-1}\right)\frac{ \mathcal{U}(x_{i})^{a-(L+1)d_{0}/2}}{\mathcal{F}(x_{i})^{a-Ld_{0}/2}}\frac{1} {k!}\log^{k}\frac{\mathcal{U}^{L+1}}{\mathcal{F}^{L}}\,. \tag{2.84}\] Note that \(\mathcal{U}^{L+1}/\mathcal{F}^{l}\) in the equation above is a quantity that is invariant under the rescaling symmetry Eq. (2.4.1), since \(\mathcal{U}\) is a homogeneous polynomial of degree \(l\) and \(\mathcal{F}\) is a homogeneous polynomial of degree \(l+1\). For the Euclidean region, as \(\mathcal{U}\) and \(\mathcal{F}\) are positive in the range of integration, no branch-cut singularities from the logarithm are encountered. Now we write down a positivity constraint using our usual trick of constructing non-negative integrands from squares of polynomials, \[0\leq\int_{x_{i}\geq 0}d^{n}x_{i}\left(1-\sum x_{i}\right)\,\left(\prod_{i}x_{i }^{a_{i}-1}\right)\frac{\mathcal{U}(x_{i})^{a-(L+1)d_{0}/2}}{\mathcal{F}(x_{i}) ^{a-Ld_{0}/2}}P^{2}\left(\log\frac{\mathcal{U}^{L+1}}{\mathcal{F}^{L}}\right)\,, \tag{2.85}\] where \(P\) is an arbitrary polynomial (of the argument in the bracket) under a cutoff degree \(N\), as a sum of monomials each multiplied by a free parameter. After expanding the square of the polynomial, each monomial term is identified with a term in the Taylor expansion over \(\epsilon\) using Eq. (84). Using the same manipulations as in Sections 2.2 and 2.3, Eq. (85) implies that a certain symmetric matrix is positive semidefinite. We first define an auxiliary notation \[H_{k}\equiv(k!)\tilde{I}^{d}_{a_{1},a_{2},\ldots,a_{n}}\Big{|}_{\epsilon^{k}}\,. 
\tag{86}\] Then we have \[\begin{pmatrix}H_{0}&H_{1}&H_{2}&\ldots&H_{N}\\ H_{1}&H_{2}&H_{3}&\ldots&H_{N+1}\\ H_{2}&H_{3}&H_{4}&\ldots&H_{N+2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ H_{N}&H_{N+1}&H_{N+2}&\ldots&H_{2N}\end{pmatrix}\succcurlyeq 0\,, \tag{87}\] where we again used the notation \(\succcurlyeq 0\) to indicate that a matrix is positive semidefinite. The matrix is in a special form called a Hankel matrix, where all matrix entries are defined through a sequence \(H_{0}\), \(H_{1}\),..., \(H_{N}\). Hankel matrices have also appeared in the context of EFT positivity bounds, e.g. in Ref. [39]. Eq. (87) is extremely general and applies to any convergent Feynman integral (or quasi-finite Feynman integral [55] after dropping an overall divergent prefactor) in the Euclidean region with arbitrary powers of propagators and no numerators. This tells us that the \(\epsilon\) expansion of such Feynman integrals are not arbitrary but are constrained by positivity constraints which, to our best knowledge, have not been previously revealed in the literature. #### 2.4.2 Taylored constraints for specific Feynman integrals From Eq. (84), it is not hard to anticipate that more specialized positivity constraints exist if we focus on a particular family of Feynman integrals if \(\log(\mathcal{U}^{L+1}/\mathcal{F}^{l})\) has either an upper bound or lower bound, or both, in the range of integration. For example, for one-loop bubble integrals, the \(\mathcal{U}\) polynomial is equal to \(x_{1}+x_{2}\) and is set to \(1\) by the Dirac delta function in Eq. (80). So we recover the Feynman parametrization for the one-loop bubble integral, Eq. (56), with \(x_{1}=x\), \(x_{2}=1-x\), \(\mathcal{U}(x)=1\), \(\mathcal{F}(x)=m^{2}-p^{2}x(1-x)\). We have, for \(0<p^{2}<4m^{2}\) under consideration, in the integration range \(0\leq x\leq 1\), \[\log\frac{\mathcal{U}^{L+1}}{\mathcal{F}^{L}}=\log\frac{1}{\mathcal{F}}=\log \frac{1}{m^{2}-p^{2}x(1-x)}\leq\log\frac{1}{m^{2}-p^{2}/4}\equiv\log\max\frac{ \mathcal{U}^{L+1}}{\mathcal{F}^{L}}\,. \tag{88}\] Therefore \[\log\max\frac{\mathcal{U}^{L+1}}{\mathcal{F}^{L}}-\log\frac{\mathcal{U}^{L+1}} {\mathcal{F}^{L}} \tag{89}\] is a positive quantity. Similarly, if a minimum of \(\log\mathcal{U}^{L+1}/\mathcal{F}^{L}\) exists over the range of integration, then \[\log\frac{\mathcal{U}^{L+1}}{\mathcal{F}^{L}}-\log\min\frac{\mathcal{U}^{L+1}} {\mathcal{F}^{L}} \tag{90}\] is a positive quantity. In the bubble integral example, again under \(0<p^{2}<4m^{2}\) and \(0\leq x\leq 1\), \[\log\frac{\mathcal{U}^{L+1}}{\mathcal{F}^{L}}=\log\frac{1}{\mathcal{F}}=\log \frac{1}{m^{2}-p^{2}x(1-x)}\geq\log\frac{1}{m^{2}}\equiv\log\min\frac{\mathcal{ U}^{L+1}}{\mathcal{F}^{L}}\,. \tag{91}\] Now we show an example of using Eq. (88) to contrain the \(O(\epsilon)\) term in the expansion of the finite bubble integral \(I_{2,1}\), in the style of an ad hoc constraint as was done for the \(\mathcal{O}(\epsilon^{0})\) part in Section 2.2.1.2 Using the definition \(\hat{F}(x)=\mathcal{F}(x)/m^{2}\) in Eq. (65), Eq. (88) is rewritten as Footnote 2: It is also possible to use Eq. (91) instead, or in combination. \[\log\frac{1}{\hat{F}(x)}\leq\log\max\frac{1}{\mathcal{F}}=\log\frac{1}{1-p^{2 }/(4m^{2})}\,, \tag{92}\] i.e., \[\log\frac{1}{1-p^{2}/(4m^{2})}-\log\frac{1}{\hat{F}(x)}\geq 0\,, \tag{93}\] We expand both the LHS and RHS of Eq. (69), with \(d=4-2\epsilon\), as a Taylor series in \(\epsilon\). 
Equating the \(\epsilon^{0}\) terms gives \[2\int_{0}^{1}dx\,x^{2}\frac{1}{\hat{F}(x)}=\frac{p^{2}-2m^{2}}{2p^{2}}\left( \hat{I}_{2,1}\big{|}_{\epsilon^{0}}\right)+\frac{m^{2}}{p^{2}}\,, \tag{94}\] while equating the \(\epsilon^{1}\) terms gives \[2\int_{0}^{1}dx\,x^{2}\frac{1}{\hat{F}(x)}\cdot\log\left(\frac{1}{\hat{F}(x)} \right)=\frac{p^{2}-4m^{2}}{2p^{2}}\left(\hat{I}_{2,1}\big{|}_{\epsilon^{0}} \right)+\frac{p^{2}-2m^{2}}{2p^{2}}\left(\hat{I}_{2,1}\big{|}_{\epsilon^{1}} \right)+\frac{2m^{2}}{p^{2}}\,. \tag{95}\] Now we use Eq. (93) to write down the positivity constraint \[2\int_{0}^{1}dx\,x^{2}\frac{1}{\hat{F}(x)}\left(\log\frac{1}{1-p^{2}/(4m^{2}) }-\log\frac{1}{\hat{F}(x)}\right)\geq 0\,. \tag{96}\] Then applying Eqs. (94) and (95) leads to a constraint on \(\left.\hat{I}_{2,1}\right|_{\epsilon^{1}}\), \[0 \leq\left(\frac{p^{2}-2}{2p^{2}}\log\frac{1}{1-p^{2}/(4m^{2})}+ \frac{4m^{2}-p^{2}}{2p^{2}}\right)\left(\hat{I}_{2,1}\big{|}_{\epsilon^{0}} \right)+\frac{2m^{2}-p^{2}}{2p^{2}}\left(\hat{I}_{2,1}\big{|}_{\epsilon^{1}}\right)\] \[+\left(\log\frac{1}{1-p^{2}/(4m^{2})}-2\right)\frac{m^{2}}{p^{2}}\,. \tag{97}\] We plot the RHS of Eq. (97) versus \(p^{2}/m^{2}\) in Fig. 10. We can see that it is indeed non-negative in the range \(0<p^{2}/m^{2}<4\). #### 2.4.3 Bubble integral up to second order in \(\epsilon\) We proceed to combine positivity constraints for \(\epsilon\) expansion coefficients covered in Sections 2.4.1 and 2.4.2 with the semidefinite programming technique already used in Sections 2.2 and 2.3, in order to obtain high-precision results for the \(\epsilon\) expansion of the bubble integral. Recall that we evaluated the \(\mathcal{O}(\epsilon^{0})\) part, i.e. the \(d=4\) result, for the bubble integral starting from Eq. (2.70) and the refined version Eq. (2.72). For simplicity, we will build upon the first version, Eq. (2.70), and use the additional positive building block Eq. (2.93) to write down the following positivity constraint at \(d=d_{0}-2\epsilon=4-2\epsilon\), \[0 \leq\int_{0}^{1}dx\,xP(x)^{2}\frac{1}{\hat{F}(x)^{3-d_{0}/2}} \left(\log\max\frac{1}{\hat{F}}-\log\frac{1}{\hat{F}(x)}\right) \tag{2.98}\] \[=\int_{0}^{1}dx\,xP(x)^{2}\frac{1}{[1-x(1-x)p^{2}/m^{2}]}\left( \log\frac{1}{1-p^{2}/(4m^{2})}-\log\frac{1}{1-x(1-x)p^{2}/m^{2}}\right)\,, \tag{2.99}\] where \(P(x)\) is again an arbitrary polynomial in \(x\), and we are free to choose the maximum degree of monomials that are included, depending on the accuracy we would like to attain. After expanding \(P(x)^{2}\) into a sum of monomials, the contribution from each monomial can be evaluated following the same procedure as used in the example in Section 2.4.2. In particular, the result will be a sum of terms proportional to \(\hat{I}_{2,1}\big{|}_{\epsilon^{0}}\), terms proportional to \(\hat{I}_{2,1}\big{|}_{\epsilon^{1}}\), and constant terms. The remaining calculation steps are very similar to those of Section 2.3. Re-using the parametrization Eq. (2.74) for the polynomial \(P\) with a cutoff degree \(N=14\), we again arrive at \[\vec{\alpha}^{T}\,\mathbb{M}\,\vec{\alpha}\geq 0\,, \tag{2.100}\] stating that an appropriate matrix \(\mathbb{M}\) is positive semidefinite, i.e. \[\mathbb{M}\succcurlyeq 0\,. \tag{2.101}\] In this case, \(\mathbb{M}\) is a sum of three terms, \[\mathbb{M}=\mathbb{M}_{1}+I_{2,1}\big{|}_{\epsilon^{0}}\cdot\mathbb{M}_{2}+I_ {2,1}\big{|}_{\epsilon^{1}}\cdot\mathbb{M}_{3}\,. 
\tag{2.102}\] where the three matrices \(\mathbb{M}_{1,2,3}\), with rational dependence on \(p^{2}\) and \(m\), are obtained from dimension shifting, IBP, and finally \(\epsilon\)-expansion as in the example of Section 2.4.2. We approximate \(I_{2,1}\big{|}_{\epsilon^{0}}\) to be the central value Eq. (2.79) obtained in the previous \(d=4\) calculation with the same cutoff degree \(N=14\). At this point, \(I_{2,1}\big{|}_{\epsilon^{1}}\) remains the only unknown parameter on the RHS of Eq. (2.102), and its allowed range as well as the Figure 10: The RHS of Eq. (2.97) versus \(p^{2}/m^{2}\) in the range of validity \(0<p^{2}<4\). central value can be determined by semidefinite programming solvers as covered in previous sections. We again use SDPA-QD to produce the "central value" for the numerical result, defined by the same prescription as before, obtaining \[I_{2,1}\big{|}_{\epsilon^{1}}\approx 0.74313814320586\,,\text{ at }p^{2}=2,m=1\,, \tag{103}\] which is slightly larger than the exact result with a relative error of \(4.3\times 10^{-12}\). We continue to present how the \(\mathcal{O}(\epsilon^{2})\) term of the bubble integral is calculated. We use the positivity constraint, \[0\leq\int_{0}^{1}dx\,xP\left(x,\log\frac{1}{\hat{F}}\right)^{2}\big{[}m^{2}-p^ {2}x(1-x)\big{]}^{d_{0}/2-3}\, \tag{104}\] where \(P\) is a polynomial in \(x\) and \(\log 1/\hat{F}(x)\) with maximum degrees \(N_{1}\) and \(N_{2}\) in the two variables, parametrized as \[P\left(x,\log\frac{1}{\hat{F}(x)}\right)=\sum_{0\leq i_{1}\leq N_{1}}\sum_{0 \leq i_{2}\leq N_{2}}\alpha_{i_{1},i_{2}}\,x^{i_{1}}\left(\log\frac{1}{\hat{F} (x)}\right)^{i_{2}}\,. \tag{105}\] We use \(N_{1}=14\) and \(N_{2}=1\), so that there are at most 14 power of \(x\) in \(P(x)\) and at most one power of \(\log[1/\hat{F}(x)]\). After expanding the \(P^{2}\), there are at most two powers of the aforementioned logarithm, therefore there are \(\epsilon\) expansions coefficients at orders \(\epsilon^{0}\), \(\epsilon^{1}\), and \(\epsilon^{2}\). We obtain an expression of the form \[\vec{\alpha}^{T}\,\mathbb{M}\,\vec{\alpha}\geq 0\,, \tag{106}\] where the column vector \(\vec{\alpha}\) groups together all the \(\alpha_{i_{1},i_{2}}\) parameters. The matrix \(\mathbb{M}\) is the sum of a constant term, a term proportional to \(I_{2,1}\big{|}_{\epsilon^{0}}\), a term proportional to \(I_{2,1}\big{|}_{\epsilon^{1}}\), and finally a term proportional to \(I_{2,1}\big{|}_{\epsilon^{2}}\). We use previous numerical results for \(I_{2,1}\big{|}_{\epsilon^{0}}\) and \(I_{2,1}\big{|}_{\epsilon^{1}}\), and solve a semidefinite programming problem to find the central value of \(I_{2,1}\big{|}_{\epsilon^{2}}\) to be \[I_{2,1}\big{|}_{\epsilon^{2}}\approx 0.208108744452\,,\text{ at }p^{2}=2,m=1\,, \tag{107}\] which is slightly larger than the exact result with a relative error of \(1.9\times 10^{-11}\). There is no obstruction to obtaining results at even higher orders in the \(\epsilon\) expansion for the bubble integral example. Generally, we calculate iteratively to higher and higher orders in \(\epsilon\), at each step taking previous numerical results as known input. The positivity constraint is Eq. (104) with an appropriate cutoff degree for \(\log[1/\hat{F}(x)]\) depending on the desired order in the \(\epsilon\) expansion. The RHS of Eq. (104) can be optionally multiplied by either Eq. (89) or Eq. (90), with \(\mathcal{U}=1\) and \(L=1\) in the case of bubble integrals, to give more constraints. 
This will produce constraints for bubble integrals to any desired order in the \(\epsilon\) expansion. ### \(\epsilon\) expansion from numerical differentiation w.r.t. spacetime dimension Here we present an alternative method for numerically evaluating the \(\epsilon\)-expansion of the normalized bubble integral \(\hat{I}_{2,1}^{d}\) defined in Eq. (13). Instead of formulating constraints for the \(\epsilon\) expansion coefficients, we calculate the \(\hat{I}_{2,1}^{d}\) at numerical values of the spacetime dimension near 4, and use finite-difference approximations to obtain derivatives w.r.t. \(\epsilon\). The derivatives are related to the terms in the \(\epsilon\) expansion via \[\left.\hat{I}_{2,1}\right|_{\epsilon^{k}}=\frac{1}{k!}\frac{d^{k}}{d\epsilon^{k }}\hat{I}_{2,1}^{d=4-2\epsilon}\Big{|}_{\epsilon=0}\,. \tag{108}\] For an arbitrary function \(f(\epsilon)\), we use 4th-order finite-difference approximations, \[\frac{d}{d\epsilon}f(\epsilon)\bigg{|}_{\epsilon=\epsilon^{0}} \approx\frac{1}{\Delta\epsilon}\bigg{[}\frac{1}{12}f(\epsilon_{0}-2 \Delta\epsilon)-\frac{2}{3}f(\epsilon_{0}-\Delta\epsilon)\] \[\qquad+\frac{2}{3}f(\epsilon_{0}+\Delta\epsilon)-\frac{1}{12}f( \epsilon_{0}+2\Delta\epsilon)\bigg{]}\,, \tag{109}\] \[\frac{d^{2}}{d\epsilon^{2}}f(\epsilon)\bigg{|}_{\epsilon=0} \approx\frac{1}{\Delta\epsilon^{2}}\bigg{[}-\frac{1}{12}f( \epsilon_{0}-2\Delta\epsilon)+\frac{4}{3}f(\epsilon_{0}-\Delta\epsilon)- \frac{5}{2}f(\epsilon_{0})\] \[\qquad+\frac{4}{3}f(\epsilon_{0}+\Delta\epsilon)-\frac{1}{12}f( \epsilon_{0}+2\Delta\epsilon)\bigg{]}\,, \tag{110}\] where \(\Delta\epsilon\) is the step size. As the name suggests, these formulas are exact when \(f\) is a polynomial with a degree up to 4. Note that the method of Section 2.3 can be used to evaluate the normalized bubble integral \(\hat{I}_{2,1}^{d}\) in any spacetime dimension \(d<6\), i.e. \(\epsilon>-1\), so Eqs. (109) and (110) can be readily used with \(\epsilon_{0}=0\) and a small \(\Delta\epsilon\), chosen to be \[\Delta\epsilon=10^{-3}\,. \tag{111}\] We again choose kinematic parameter values \(p^{2}=2\) and \(m=1\). The final results from numerical differentiation, up to the second order in \(\epsilon\), are \[\left.\hat{I}_{2,1}\right|_{\epsilon^{1}}\approx 0.7431381432049\,, \tag{112}\] which is slightly larger than the exact result with a relative error of \(3.2\times 10^{-12}\), and \[\left.\hat{I}_{2,1}\right|_{\epsilon^{2}}\approx 0.20810874450\,, \tag{113}\] which is slightly larger than the exact result with a relative error of \(2.8\times 10^{-10}\). ## 3 Three-loop banana integrals with unequal masses ### Definitions and conventions Here we present a three-loop example, the so called banana integrals. Banana integrals are various loop orders have received intense interest from an analytic perspective due to connections with Calabi-Yau manifold [68; 69; 70; 71; 72; 73; 74]. We apply our numerical method developed in the previous section 2 on one-loop bubble integrals, with some minor adaptations, to evaluate 11 nontrivial master integrals for the banana diagram. We assume the readers to be familiar with the previous section as many shared techniques will not be introduced again. The diagram for the integrals is shown in Fig. 11. Due to dimension-shifting identities reviewed in Section 2.1, the \(\epsilon\) expansions of integrals in \(d=4-2\epsilon\) and \(d=2-2\epsilon\) can be related to each other. 
We will always use \[d=2-2\epsilon\,, \tag{3.1}\] for three-loop banana integrals, which is convenient as the scalar integral has no ultraviolet divergence in this spacetime dimension. The banana family of integrals is defined as \[I_{a_{1},a_{2},a_{3},a_{4}} \equiv\left(\prod_{i=1}^{3}\int\frac{d^{d}l_{i}\,e^{\gamma z \epsilon}}{i\pi^{d/2}}\right)\frac{1}{(-l_{1}^{2}+m_{1}^{2})^{a_{1}}}\frac{1}{ (-l_{2}^{2}+m_{2}^{2})^{a_{2}}}\frac{1}{(-l_{3}^{2}+m_{3}^{2})^{a_{3}}}\] \[\qquad\times\frac{1}{[-(p+l_{1}+l_{2}+l_{3})^{2}+m_{4}^{2}]^{a_{4 }}}\,,\qquad\text{with }d=2-2\epsilon\,. \tag{3.2}\] We have suppressed the \(d\) dependence on the LHS of the above equation, unlike the one-loop case Eq. (2.1), since we will not make use of dimension-shifting in the treatment of three-loop banana integrals and will exclusively work with \(d=2-2\epsilon\). If any one of the four indices \(a_{i}\) is non-positive in Eq. (3.2), the remaining propagators have the structure of three one-loop massive tadpole integrals. For example, if \(a_{4}=0\), Eq. (3.2) clearly factorizes into the product of three scalar tadpole integrals. For facilitating the discussion of positivity constraints, it will be convenient to define a variant of Eq. (3.2) with slightly adjusted constant factors, \[\hat{I}_{a_{1},a_{2},a_{3},a_{4}} \equiv\left(\prod_{i=1}^{3}\int\frac{d^{d}l_{i}}{i\pi^{d/2}} \right)\frac{1}{\Gamma(4-3d/2)}\frac{1}{(-l_{1}^{2}+m_{1}^{2})^{a_{1}}}\frac{ 1}{(-l_{2}^{2}+m_{2}^{2})^{a_{2}}}\frac{1}{(-l_{3}^{2}+m_{3}^{2})^{a_{3}}}\] \[\qquad\times\frac{1}{[-(p+l_{1}+l_{2}+l_{3})^{2}+m_{4}^{2}]^{a_{4 }}}\,. \tag{3.3}\] By IBP reduction, all integrals of the banana family, with integer values of \(a_{i}\) in Eq. (3.3), can be expressed as linear sums of 15 master integrals. Publicly available software, such as those presented Refs. [58; 59; 60; 61; 62; 63; 64], can be used to give a list of master integrals as well as performing the actual IBP reduction of integrals. There are 11 nontrivial "top-level" master integrals that are not products of tadpole integrals, shown in three groups below according to the total number of the four indices, \[\hat{I}_{1,1,2,2},\,\hat{I}_{1,2,1,2},\,\hat{I}_{1,2,2,1},\, \hat{I}_{2,1,1,2},\,\hat{I}_{2,1,2,1},\,\hat{I}_{2,2,1,1},\] \[\hat{I}_{1,1,1,2},\,\hat{I}_{1,1,2,1},\,\hat{I}_{1,2,1,1},\,\hat{ I}_{2,1,1,1}, \tag{3.4}\] \[\hat{I}_{1,1,1,1}\,.\] Figure 11: Three-loop banana family of integrals, with internal and external squared masses labeled. In addition, there are 4 master integrals that are trivial products of tadpole integrals. These master integrals are chosen as \[\hat{I}_{0,2,2,2},\,\hat{I}_{2,0,2,2},\,\hat{I}_{2,2,0,2},\,\hat{I}_{2,2,2,0}\,, \tag{3.5}\] where we raised every propagator to a 2nd power to make the integral UV finite in \(d=2-2\epsilon\). Each of these four master integrals is a product of three one-loop tadpole integrals given in Eq. (A.8) with \(n=2\), with some adjustment of the overall factor according to Eq. (3.3), \[\hat{I}_{0,2,2,2} =\frac{\Gamma^{3}(1+\epsilon)}{\Gamma(1+3\epsilon)}\left(\frac{1} {m_{2}^{2}m_{3}^{2}m_{4}^{2}}\right)^{1+\epsilon},\quad\hat{I}_{2,0,2,2}= \frac{\Gamma^{3}(1+\epsilon)}{\Gamma(1+3\epsilon)}\left(\frac{1}{m_{1}^{2}m_{3 }^{2}m_{4}^{2}}\right)^{1+\epsilon},\] \[\hat{I}_{2,2,0,2} =\frac{\Gamma^{3}(1+\epsilon)}{\Gamma(1+3\epsilon)}\left(\frac{1} {m_{1}^{2}m_{2}^{2}m_{4}^{2}}\right)^{1+\epsilon},\quad\hat{I}_{2,2,2,0}= \frac{\Gamma^{3}(1+\epsilon)}{\Gamma(1+3\epsilon)}\left(\frac{1}{m_{1}^{2}m_{ 2}^{2}m_{3}^{2}}\right)^{1+\epsilon}\,. 
\tag{3.6}\] The values of the 11 remaining master integrals in Eq. (3.4) will be calculated numerically from positivity constraints. We will work with kinematic variables in the range \[p^{2}<(m_{1}+m_{2}+m_{3}+m_{4})^{2}\,, \tag{3.7}\] i.e. below the particle production threshold, which ensures that the integrals are real-valued. Similar to the case of one-loop bubble integrals, when the chosen value of \(p^{2}\) is non-negative, we cannot embed the integrals into Euclidean momentum space and need to use Feynman-parameter space to formulate positivity constraints. The Feynman parametrization follows from the general formula Eq. (2.80) with adjustment of constant factors according to Eq. (3.3), \[\hat{I}_{a_{1},a_{2},a_{3},a_{4}} =\frac{\Gamma(a-3d/2)/\Gamma(4-3d/2)}{\Gamma(a_{1})\Gamma(a_{2}) \Gamma(a_{3})\Gamma(a_{4})}\int_{x_{i}\geq 0}dx_{1}dx_{2}dx_{3}dx_{4}\, \delta(1-x_{1}-x_{2}-x_{3}-x_{4})\] \[\quad\times\left(\prod_{i=1}^{4}x_{i}^{a_{i}-1}\right)\frac{ \mathcal{U}(x_{i})^{a-2d}}{\mathcal{F}(x_{i})^{a-3d/2}}\,, \tag{3.8}\] where we used the definition \(a\equiv a_{1}+a_{2}+a_{3}+a_{4}\). The two graph polynomials \(\mathcal{U}\) and \(\mathcal{F}\) are, for the banana family of integrals, \[\mathcal{U}(x_{1},x_{2},x_{3},x_{4}) =x_{2}x_{3}x_{4}+x_{1}x_{3}x_{4}+x_{1}x_{2}x_{4}+x_{1}x_{2}x_{3},\] \[\mathcal{F}(x_{1},x_{2},x_{3},x_{4}) =p^{2}x_{1}x_{2}x_{3}x_{4}+(m_{1}^{2}x_{1}+m_{2}^{2}x_{2}+m_{3}^{ 2}x_{3}+m_{4}^{2}x_{4})\mathcal{U}(x_{1},x_{2},x_{3},x_{4})\,. \tag{3.9}\] Note that we have \[\mathcal{U}(x_{1},x_{2},x_{3},x_{4})\geq 0,\quad\mathcal{F}(x_{1},x_{2},x_{3},x _{4})\geq 0\,, \tag{3.10}\] in the integration region of Eq. (3.8), i.e. \(x_{i}\geq 0\), \(\sum_{i}x_{i}=1\). This will help us formulate positivity constraints. As in the one-loop bubble case, except for the "gauge fixing" Dirac delta function, the rest of Eq. (3.8) has the projective invariance Eq. (2.60), as the \(\mathcal{U}\) and \(\mathcal{F}\) polynomials are homogeneously of degree 3 and 4, respectively. ### Positivity constraints We define \(\tilde{x}_{i}\) variables \[\tilde{x}_{i}=\frac{\mathcal{U}(x_{1},x_{2},x_{3},x_{4})}{\mathcal{F}(x_{1},x_{2},x_{3},x_{4})}x_{i},\quad i=1,2,3,4\,, \tag{3.11}\] which are invariant under the scaling Eq. (2.60). We rewrite Eq. (3.8) as \[\frac{\Gamma(4-3d/2)}{\Gamma(a-3d/2)}\Gamma(a_{1})\Gamma(a_{2}) \Gamma(a_{3})\Gamma(a_{4})\,\hat{I}_{a_{1},a_{2},a_{3},a_{4}}\] \[=\int_{x_{i}\geq 0}dx_{1}dx_{2}dx_{3}dx_{4}\,\delta(1-x_{1}-x_{2} -x_{3}-x_{4})\left(\prod_{i=1}^{4}\tilde{x}_{i}^{a_{i}-1}\right)\frac{\mathcal{ U}(x_{i})^{4-2d}}{\mathcal{F}(x_{i})^{4-3d/2}}\,, \tag{3.12}\] again with \(a\equiv a_{1}+a_{2}+a_{3}+a_{4}\). Note that if \(a_{i}\geq 1\), \(a\geq 4\), \[\frac{\Gamma(4-3d/2)}{\Gamma(a-3d/2)}=\frac{1}{(4-3d/2)(5-3d/2)\ldots(a-1-3d/2)} \tag{3.13}\] is a rational function in \(d\). With any non-negative polynomial \(Q(\tilde{x}_{i})\), we formulate a positivity constraint, \[0\leq\int_{x_{i}\geq 0}dx_{1}dx_{2}dx_{3}dx_{4}\,\delta(1-x_{1}-x_{2}-x_{3}-x _{4})Q(\tilde{x}_{i})\frac{\mathcal{U}(x_{i})^{4-2d}}{\mathcal{F}(x_{i})^{4-3 d/2}}\,, \tag{3.14}\] which is compatible with the projective invariance Eq. (2.60). After expanding the polynomial \(Q(\tilde{x}_{i})\) into a sum of monomials, the contribution of each monomial \(\prod_{i}\tilde{x}_{i}^{a_{i}-1}\) can be written as some \(\hat{I}_{a_{1},a_{2},a_{3},a_{4}}\) multiplied by a prefactor that is rational in \(d\), according to Eq. (3.12). 
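To make these definitions concrete, the following minimal numpy sketch (illustrative only, not part of the original computation) evaluates the graph polynomials of Eq. (3.9) at random points of the simplex, checking the non-negativity Eq. (3.10) as well as the invariance of the rescaled variables \(\tilde{x}_{i}\) of Eq. (3.11) and of \(\mathcal{U}^{4}/\mathcal{F}^{3}\) under the projective rescaling; the kinematic values used are the ones fixed later in Eq. (3.30).

```python
import numpy as np

rng = np.random.default_rng(0)
p2 = 2.0
m2 = np.array([2.0, 3.0 / 2.0, 4.0 / 3.0, 1.0])   # squared masses of Eq. (3.30)

def U(x):
    x1, x2, x3, x4 = x
    return x2 * x3 * x4 + x1 * x3 * x4 + x1 * x2 * x4 + x1 * x2 * x3

def F(x):
    return p2 * np.prod(x) + np.dot(m2, x) * U(x)

for _ in range(1000):
    x = rng.random(4)
    x /= x.sum()                                   # a random point on the simplex
    assert U(x) >= 0.0 and F(x) >= 0.0             # non-negativity, Eq. (3.10)

    lam = rng.uniform(0.1, 10.0)                   # projective rescaling x_i -> lam * x_i
    xt = U(x) / F(x) * x                           # rescaled variables of Eq. (3.11)
    assert np.allclose(xt, U(lam * x) / F(lam * x) * (lam * x))
    # U has degree 3 and F degree 4, so U^4 / F^3 is also scale invariant
    assert np.isclose(U(x) ** 4 / F(x) ** 3, U(lam * x) ** 4 / F(lam * x) ** 3)

print("non-negativity and projective invariance verified on 1000 random points")
```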
All such integrals are UV convergent by power counting and also IR convergent due to internal masses. No change of spacetime dimensions is involved, unlike the treatment of one-loop bubble integrals in Section 2.3. We consider the following choices of \(Q(\tilde{x}_{i})\), with the help of an arbitrary polynomial \(P(\tilde{x}_{i})\) under a chosen maximum degree, \[\text{choice 1:}\quad Q(\tilde{x}_{i})=P(\tilde{x}_{i})^{2}, \tag{3.15}\] \[\text{choice 2:}\quad Q(\tilde{x}_{i})=\tilde{x}_{1}P(\tilde{x}_{i} )^{2},\] (3.16) \[\text{choice 3:}\quad Q(\tilde{x}_{i})=\tilde{x}_{2}P(\tilde{x}_{i} )^{2},\] (3.17) \[\text{choice 4:}\quad Q(\tilde{x}_{i})=\tilde{x}_{3}P(\tilde{x}_{i} )^{2},\] (3.18) \[\text{choice 5:}\quad Q(\tilde{x}_{i})=\tilde{x}_{4}P(\tilde{x}_{i} )^{2}\,. \tag{3.19}\] With any of the above five choices for \(Q(\tilde{x}_{i})\) and with any choice of \(P(\tilde{x}_{i})\), the inequality Eq. (3.14) must hold. The general form of \(P(\tilde{x}_{i})\) is a sum of all monomials under a chosen cutoff degree, each multiplied by an arbitrary coefficient. For example, if the cutoff degree is 1, then \(P(\tilde{x}_{i})\) is parametrized as \[\text{cutoff degree 1:}\quad P(\tilde{x}_{i})=\alpha_{0,0,0,0}+\alpha_{1,0,0,0} \tilde{x}_{1}+\alpha_{0,1,0,0}\tilde{x}_{2}+\alpha_{0,0,1,0}\tilde{x}_{3}+ \alpha_{0,0,0,1}\tilde{x}_{4}\,. \tag{3.20}\] With cutoff degree \(N\), the parametrization is \[\text{cutoff degree }N\text{:}\quad P(\tilde{x}_{i})=\sum_{i_{1},i_{2},i_{3},i_{4 }\geq 0}^{i_{1}+i_{2}+i_{3}+i_{4}\leq N}\alpha_{i_{1},i_{2},i_{3},i_{4}}\tilde{x}_ {1}^{i_{1}}\tilde{x}_{2}^{i_{2}}\tilde{x}_{3}^{i_{3}}\tilde{x}_{4}^{i_{4}}\,, \tag{3.21}\] where the number of free parameters \(\alpha_{i_{1},i_{2},i_{3},i_{4}}\) is equal to \(\binom{N+4}{4}\) by combinatorics. Since we have already discussed how to set up semidefinite optimization programs in the context of one-loop bubble integrals, we will be brief in covering the analogous steps here. Grouping the \(\alpha_{i_{1},i_{2},i_{3},i_{4}}\) parameters into a column vector \(\vec{\alpha}\) of length \(\binom{N+4}{4}\), the five choices of \(Q\), Eqs. (3.15) to (3.19), lead to \[(\vec{\alpha})^{T}\mathbb{M}^{(1)}\vec{\alpha}\geq 0,\quad( \vec{\alpha})^{T}\mathbb{M}^{(2)}\vec{\alpha}\geq 0,\quad(\vec{\alpha})^{T} \mathbb{M}^{(3)}\vec{\alpha}\geq 0,\] \[(\vec{\alpha})^{T}\mathbb{M}^{(4)}\vec{\alpha}\geq 0,\quad( \vec{\alpha})^{T}\mathbb{M}^{(5)}\vec{\alpha}\geq 0, \tag{3.22}\] respectively, for any values of the vector \(\alpha\). For reasons we do not fully understand, the first constraint \((\vec{\alpha})^{T}\mathbb{M}^{(1)}\vec{\alpha}\geq 0\) leads to poor numerical convergence and is discarded. The remaining four constraints are rewritten as requiring the matrices to be positive semidefinite using the notation Eq.(2.39), \[\mathbb{M}^{(2)}\succcurlyeq 0,\quad\mathbb{M}^{(3)}\succcurlyeq 0,\quad \mathbb{M}^{(4)}\succcurlyeq 0,\quad\mathbb{M}^{(5)}\succcurlyeq 0\,. \tag{3.23}\] For convenience, this can be rephrased as the positive semidefiniteness of a single matrix which contains the above four matrices as diagonal blocks, \[\mathbb{M}=\begin{pmatrix}\mathbb{M}_{2}&0&0&0\\ 0&\mathbb{M}_{3}&0&0\\ 0&0&\mathbb{M}_{4}&0\\ 0&0&0&\mathbb{M}_{5}\end{pmatrix}\succcurlyeq 0\,. \tag{3.24}\] Analogous to the case of one-loop bubble integrals, IBP reduction expresses \(\mathbb{M}\) as a linear combination of the 15 master integrals in Eq. 
(3.4) and (3.5), each multiplied by a matrix of rational functions in \(p^{2},m_{1}^{2},m_{2}^{2},m_{3}^{2},m_{4}^{2}\). It is necessary to perform IBP reduction for banana integrals with up to 13 "dots", i.e. additional powers of propagators beyond the standard first power, since the positive polynomial \(Q\) in Eqs. (3.16)-(3.19) bring 13 powers of \(x_{i}\) when \(P\) has degree 6. The four master integrals in Eq. (3.5) are known analytically in Eq. (3.6), and the values of the remaining 11 master integrals are unknown parameters to be constrained by Eq. (3.24). Before presenting numerical results, we also formulate positivity constraints for the \(\epsilon\) expansion of banana integrals. Recall that banana integrals in \(d=2-2\epsilon\), normalized according to Eq. (3.3), has a Feynman-parameter representation Eq. (3.12) using redefined Feynman parameters in Eq. (3.11). For any integers \(a_{i}\geq 1\), Taylor-expanding both sides of Eq. (3.12) and equating the coefficients of the \(\epsilon^{k}\) term for any integer \(k\), we have \[\left[\frac{\Gamma(4-3d/2)}{\Gamma(a-3d/2)}\Gamma(a_{1})\Gamma(a_{2} )\Gamma(a_{3})\Gamma(a_{4})\,\hat{I}_{a_{1},a_{2},a_{3},a_{4}}\right]\Bigg{|}_{ \epsilon^{k}}\] \[=\int_{x_{i}\geq 0}dx_{1}dx_{2}dx_{3}dx_{4}\,\delta(1-x_{1}-x_{2}-x _{3}-x_{4})\left(\prod_{i=1}^{4}\tilde{x}_{i}^{a_{i}-1}\right)\frac{1}{{\cal F }(x_{i})} \tag{3.25}\] \[\quad\times\frac{1}{k!}\log^{k}\frac{{\cal U}^{4}(x_{i})}{{\cal F }^{3}(x_{i})}\,.\] By integration-by-parts reduction, the LHS of the above equation can be written as linear combinations of the \(\epsilon\) expansions of the 15 master integrals up to the \(\epsilon^{k}\) order, assuming that the coefficients of the master integrals (from IBP reduction) are finite as \(\epsilon\to 0\), which is the case here. Now we are ready to write down positivity constraints for the \(\epsilon\) expansion by extending Eq. (3.14), \[0\leq\int_{x_{i}\geq 0}dx_{1}dx_{2}dx_{3}dx_{4}\,\delta(1-x_{1}-x_{2}-x_{3}- x_{4})Q\left(\tilde{x}_{i},\log\frac{{\cal U}^{4}(x_{i})}{{\cal F}^{3}(x_{i})} \right)\frac{{\cal U}(x_{i})^{4-2d}}{{\cal F}(x_{i})^{4-3d/2}}\,, \tag{3.26}\] where \(Q\) is now a positive polynomial in its two arguments above. To build the most general form of \(Q\), we follow Section 2.4.2 and use the building block \[\log\max\frac{{\cal U}^{4}}{{\cal F}^{3}}-\log\frac{{\cal U}^{4}(x_{i})}{{\cal F }^{3}(x_{i})}\geq 0\,. \tag{3.27}\] The value of \(\max({\cal U}^{4}/{\cal F}^{3})\) will be found numerically once \(p^{2}\) and \(m_{i}^{2}\) parameters are specified in Eq. (3.30), in the next subsection on numerical results. However, we find no minimum of \(\log({\cal U}^{4}/{\cal F}^{3})\) at the same parameter values, as \({\cal U}^{4}/{\cal F}^{3}\) can become arbitrarily close to zero (from above) in the range of integration. 
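The maximum entering Eq. (3.27) can be located with standard numerical optimization. The sketch below (ours, with the kinematic values of Eq. (3.30)) uses a softmax parametrization of the simplex and multi-start BFGS; the value found this way would then be rounded up slightly, as described in the next subsection, so that the inequality remains a valid bound.

```python
import numpy as np
from scipy.optimize import minimize

p2 = 2.0
m2 = np.array([2.0, 3.0 / 2.0, 4.0 / 3.0, 1.0])   # kinematic point of Eq. (3.30)

def U(x):
    x1, x2, x3, x4 = x
    return x2 * x3 * x4 + x1 * x3 * x4 + x1 * x2 * x4 + x1 * x2 * x3

def F(x):
    return p2 * np.prod(x) + np.dot(m2, x) * U(x)

def neg_log_ratio(y):
    # softmax keeps x strictly inside the simplex; since U^4 / F^3 is invariant
    # under the projective rescaling, the overall normalization is immaterial
    x = np.exp(y - y.max())
    x /= x.sum()
    return -(4.0 * np.log(U(x)) - 3.0 * np.log(F(x)))

best = min((minimize(neg_log_ratio, np.random.default_rng(s).normal(size=4))
            for s in range(20)), key=lambda res: res.fun)
x_star = np.exp(best.x - best.x.max())
x_star /= x_star.sum()
print("maximizer on the simplex:", x_star)
print("max of U^4 / F^3        :", np.exp(-best.fun))
```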
We will use the following choices of \(Q\), \[Q\left(\tilde{x}_{i},\log\frac{{\cal U}^{4}(x_{i})}{{\cal F}^{3} (x_{i})}\right) =\tilde{x}_{k}P^{2}\left(\tilde{x}_{i},\log\frac{{\cal U}^{4}(x_{ i})}{{\cal F}^{3}(x_{i})}\right), \tag{3.28}\] \[\text{or}\ \ Q\left(\tilde{x}_{i},\log\frac{{\cal U}^{4}(x_{i})}{{\cal F }^{3}(x_{i})}\right) =\tilde{x}_{k}\left(\log\max\frac{{\cal U}^{4}}{{\cal F}^{3}}- \log\frac{{\cal U}^{4}(x_{i})}{{\cal F}^{3}(x_{i})}\right)P^{2}\left(\tilde{x }_{i},\log\frac{{\cal U}^{4}(x_{i})}{{\cal F}^{3}(x_{i})}\right)\,, \tag{3.29}\] where \(k\) can be 1, 2, 3, or 4, and \(P\) is an arbitrary polynomial with a maximum total degree \(N_{1}\) for the four \(\tilde{x}_{i}\) variables and maximum degree \(N_{2}\) for \(\log({\cal U}^{4}/{\cal F}^{3})\). Similar to the bubble integral case in Section 2.4, to constrain the \({\cal O}(\epsilon)\) part of the master integrals, we will use Eq. (3.29) with \(N_{2}=0\), and to constrain the \({\cal O}(\epsilon^{2})\) part, we will use Eq. (3.28) \(N_{2}=1\). For both \({\cal O}(\epsilon^{1})\) and \({\cal O}(\epsilon^{2})\) parts, \(N_{1}\) will be chosen to be the same as the cutoff degree used for the \({\cal O}(\epsilon^{0})\) calculation. ### Numerical results We present numerical results for the 11 nontrivial master integrals of the banana family in Eq. (3.4) at the following numerical values for kinematic variables, \[p^{2}=2,\quad m_{1}^{2}=2,\quad m_{2}^{2}=3/2,\quad m_{3}^{2}=4/3,\quad m_{4}^ {2}=1\,. \tag{3.30}\] We remind readers that the spacetime dimension is set to \[d=2-2\epsilon\,. \tag{3.31}\] As this paper is aimed at illustrating a new method, we have chosen example integrals that are known to high precision in the existing literature. For three-loop banana integrals, high precision results from series solutions to differential equations are available from the DiffExp package [9]. In fact, we have chosen the same values for the masses in Eq. (3.30) as the example in the aforementioned paper, though we chose a different value of \(p^{2}\) as we restrict to the Euclidean region Eq. (3.7). DiffExp is used to compute the master integrals to a precision of about \(10^{-54}\), which can be taken as exact values for the purpose of validating our numerical results. For the values of the master integrals at \(\mathcal{O}(\epsilon^{0})\), i.e. in exactly \(d=2\), we will use cutoff degrees of up to 6 in Eq. (3.21). Unlike the one-loop case in Section 2, we will not fully characterize the allowed parameter region which is a sub-region of an 11-dimensional parameter space and cannot be described by just a lower bound and an upper bound. Instead, we will only compute the central values for the 11 undetermined master integrals. Following the prescription laid out in Section 2, the central values are defined to maximize the lowest eigenvalue of the matrix \(\mathbb{M}\) in Eq. (3.24). With the largest cutoff degree 6, there are \(\binom{6+4}{4}=210\) free parameters in \(\bar{\alpha}\). Therefore, each of the four diagonal blocks in Eq. (3.24) has size \(210\times 210\), and the full matrix \(\mathbb{M}\) has size \(840\times 840\). We use SDPA-QD as the semidefinite programming solver working at quadruple-double precision. The solver is able to take advantage of the block diagonal structure of the matrix \(\mathbb{M}\) to improve efficiency. In Fig. 12, we plot the relative errors of the central values of three representative master integrals against the cutoff degree, for the \(\mathcal{O}(\epsilon^{0})\) order only. 
The actual results are given later in Eq. (3.34) together with further terms in the \(\epsilon\) expansion. The vertical axis of the plot is on a logarithmic scale, and we can see that the results converge rapidly, in an apparently exponential fashion, as the cutoff degree is raised. With the largest cutoff degree 6, each of the 11 master integrals is evaluated to an accuracy of at least \(10^{-9}\). For the values of master integrals at \(\mathcal{O}(\epsilon^{1})\), we will need the result for the parameter values Eq. (3.30), \[\max\left(\mathcal{U}^{4}/\mathcal{F}^{3}\right)\approx 5000/229059\,, \tag{3.32}\] which we found by numerical maximization in Mathematica. To be conservative, we have slightly rounded up the numerical result to a larger nearby rational number to guarantee that the inequality Eq. (3.27) is true when using Eq. (3.32). This maximum value occurs at \[x_{1}\approx 0.12222,\quad x_{2}\approx 0.22592,\quad x_{3}\approx 0.26701, \quad x_{4}\approx 0.38485\,. \tag{3.33}\] While Eq. (3.32) will be used directly in calculations, Eq. (3.33) is only included for completeness. The normalization in Eq. (3.33) does not matter since \(\mathcal{U}^{4}/\mathcal{F}^{3}\) is invariant under the scaling transformation Eq. (2.60). Then the calculation is similar to the calculation of one-loop bubble integrals to \(\mathcal{O}(\epsilon^{1})\) in Section 2.4.3. We use the positivity constraint Eq. (3.26) with Eq. (3.29) for the positive polynomial \(Q\), taking the values \(k=1,2,3,4\) and combining the constraints from the four different choices. For the \(P\) polynomial in Eq. (3.29), we use a maximum total degree \(N_{1}=6\) for the \(\tilde{x}_{i}\) variables and a maximum degree of \(0\) for the logarithm, i.e. dropping any terms involving the logarithm. The logarithm still appears in the bracket preceding \(P^{2}\) in Eq. (3.29) and contributes to \(\mathcal{O}(\epsilon)\) parts of integrals by Eq. (3.25). Using the \(\mathcal{O}(\epsilon^{0})\) results as known inputs, we again solve a semidefinite programming problem involving an \(840\times 840\) matrix with four diagonal blocks, each of size \(210\times 210\), to obtain values for the \(\mathcal{O}(\epsilon^{1})\) terms of the master integrals. For the values of master integrals at \(\mathcal{O}(\epsilon^{2})\), we use the positivity constraint Eq. (3.26) with Eq. (3.28) for the positive polynomial \(Q\). We again use a maximum total degree \(N_{1}=6\) for the \(\tilde{x}_{i}\) variables but now use a maximum degree of \(1\) for the logarithm. Since the logarithm can appear in a monomial in \(P\) with either power \(0\) or power \(1\), the size of the matrix in the semidefinite programming problem is doubled to \(1680\times 1680\), with four diagonal blocks each of size \(420\times 420\). Taking both \(\mathcal{O}(\epsilon^{0})\) and \(\mathcal{O}(\epsilon^{1})\) results as known inputs, we run SDPA-QD to find the central values for the \(\mathcal{O}(\epsilon^{2})\) results. For brevity, we show results for \(3\) representative master integrals out of the \(11\) top-level master integrals, with kinematic variables taking values of Eq.
(3.30), \[\begin{split}&\hat{I}_{1122}\approx 0.31328353052153-0.12137516161264\epsilon-1.5577062442336\epsilon^{2},\\ &\hat{I}_{1112}\approx 1.3758733318476-3.5451169250640\epsilon+0.61363537070259\epsilon^{2},\\ &\hat{I}_{1111}\approx 5.9437542439912-33.914772364319\epsilon+106.87640125797\epsilon^{2}\,.\end{split} \tag{3.34}\] For documenting the computational outputs, we have kept each number to \(14\) significant figures, even though their actual accuracies are lower, as shown in plots in this section. Figure 12: Relative errors of the central values of three representative master integrals of the banana family, versus the cutoff degree in the calculation. We have also calculated both \(\mathcal{O}(\epsilon^{1})\) and \(\mathcal{O}(\epsilon^{2})\) results using numerical differentiation of integrals evaluated at fixed values of dimensions, following the same strategy of Section 2.5 for one-loop bubble integrals. The calculations are identical to the \(d=2\), i.e. \(\mathcal{O}(\epsilon^{0})\), case and are based on Eq. (3.14) without any Taylor expansion in \(\epsilon\), with the only change being that \(\epsilon\) is set to small numerical values different from \(0\), i.e. \(d\) is set to numerical values that slightly deviate from \(2\). The 4th-order numerical differentiation formulas, Eqs. (2.109) and (2.110), are applied with \(\epsilon_{0}=0\) and \(\Delta\epsilon=10^{-3}\), with the spacetime dimension \(d=2-2\epsilon\). Example results from this alternative method are, again keeping each number to \(14\) significant figures, \[\begin{split}&\hat{I}_{1122}\approx 0.31328353052153-0.12137519105424\epsilon-1.5576503067221\epsilon^{2},\\ &\hat{I}_{1112}\approx 1.3758733318476-3.5451170400199\epsilon+0.61369255775305\epsilon^{2},\\ &\hat{I}_{1111}\approx 5.9437542439912-33.914771261794\epsilon+106.87318272740\epsilon^{2}\,.\end{split} \tag{3.35}\] Note that the \(\mathcal{O}(\epsilon^{0})\) results are copied from Eq. (3.34) as they are not re-calculated. These numerical results for the \(\epsilon\) expansion of master integrals Eq. (3.4) are obtained with the normalization of Eq. (3.3). The reference results from DiffExp have the normalization of Eq. (3.2) and additional factors for individual master integrals. The reference results have been converted to use our normalizations for comparison. Figure 13: Log-scale plot of relative errors of numerical results for 11 nontrivial master integrals of the three-loop banana family, Eq. (3.4), with the normalization Eq. (3.3), up to second order in the dimensional regularization parameter \(\epsilon\). The four numbers under each bar indicate the subscript indices of \(\hat{I}_{a_{1},a_{2},a_{3},a_{4}}\), with commas omitted as all four indices are single-digit numbers (either \(1\) or \(2\)). For each master integral, the five vertical bars, from left to right, show the relative errors for the \(\epsilon^{0}\) term, the \(\epsilon^{1}\) term calculated from direct positivity constraints (abbreviated as cons in the legend), the \(\epsilon^{1}\) term calculated from numerical differentiation (abbreviated as diff in the legend), the \(\epsilon^{2}\) term calculated from direct positivity constraints, and the \(\epsilon^{2}\) term calculated from numerical differentiation. DiffExp results for
the three sample integrals, truncated to 14 significant digits, are \[\begin{split}\hat{I}_{1122}&\approx 0.31328353056677-0.121375191032390\epsilon-1.5577067713048\epsilon^{2},\\ \hat{I}_{1112}&\approx 1.37587333189510-3.5451170391547\epsilon+0.61363351857945\epsilon^{2},\\ \hat{I}_{1111}&\approx 5.9437542414259-33.914771263107\epsilon+106.876390717227\epsilon^{2}\,.\end{split} \tag{3.36}\] The final numerical accuracy for the 11 master integrals, with values of kinematic parameters chosen in Eq. (3.30), is shown in Fig. 13. The \(\epsilon\) expansion results from "direct positivity constraints" are labeled cons and results from numerical differentiation are labeled diff in the plot legend. We can see that numerical differentiation gives very good accuracy for \(\mathcal{O}(\epsilon^{1})\) terms, comparable with the accuracy of \(\mathcal{O}(\epsilon^{0})\) terms, while for the \(\mathcal{O}(\epsilon^{2})\) terms, direct positivity constraints yield more accurate results. In any case, both methods for the \(\epsilon\) expansion have demonstrated their potential in this initial investigation, as all results for \(\mathcal{O}(\epsilon^{1})\) terms have relative errors below \(10^{-6}\) and all results for \(\mathcal{O}(\epsilon^{2})\) terms have relative errors below \(10^{-3}\). We briefly comment on computational resources used. IBP reduction takes a few CPU-hours with FIRE6 [61] with numerical kinematics Eq. (3.30). The IBP reduction results are obtained with analytic dependence on \(d\) and can be subsequently expanded in \(\epsilon\), so no extra IBP reduction is needed for obtaining the \(\epsilon\) expansions of master integrals beyond the zeroth order. Running the semidefinite programming solver SDPA-QD takes a few CPU-hours for every run, including one run for solving positivity constraints for the \(\mathcal{O}(\epsilon^{i})\) part for each \(i=0,1,2\), and, for the alternative method based on numerical differentiation, several runs at different numerical values of \(\epsilon\) to generate the data needed to feed into finite-difference approximations. ## 4 Discussions We have demonstrated a new method for evaluating Feynman integrals which, to our best knowledge, is the first method based on inequality constraints, while previously exploited consistency conditions for Feynman integrals are based on equality constraints (such as the vanishing of the coefficient of a certain spurious singularity). Our calculation strategy is to write down an infinite class of convergent integrals with non-negative integrands and reduce them to linear sums of a set of master integrals. This constrains an infinite number of linear sums of master integrals to be non-negative. A truncated set of the constraints can be solved as a semidefinite programming problem in mathematical optimization. Surprisingly, the constraints appear strong enough to determine the integrals to any desired precision, since the bounds appear to converge exponentially as the truncation cutoff is increased. Like the method of differential equations [75; 76; 77; 78; 79], our method relies on integration-by-parts (IBP) identities, but instead of using differential equations to transport the values of the integrals across kinematic space, we only use IBP information at a single point in kinematic space. Though our study is preliminary, the numerical results are promising.
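As a minimal, self-contained illustration of this strategy (a sketch, not the implementation used for the results above), the snippet below re-derives two-sided bounds on the finite one-loop bubble integral \(\hat{I}_{2,1}\) at \(d=4\), \(p^{2}=2\), \(m=1\) (exact value \(\pi/2\)) from the single constraint family \(\int_{0}^{1}dx\,x\,P(x)^{2}/\hat{F}(x)\geq 0\). Polynomial division rewrites every moment of \(1/\hat{F}\) as an affine function of the one unknown, and the resulting one-parameter positive-semidefiniteness condition is handled by maximizing the smallest eigenvalue (the central-value prescription) and bisecting to the boundary, in plain double precision with a modest cutoff degree instead of an extended-precision SDP solver.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Bubble kinematics p^2 = 2, m = 1, so Fhat(x) = 1 - 2 x (1 - x) = 2 x^2 - 2 x + 1.
c = 2.0                           # p^2 / m^2
Fhat = np.array([c, -c, 1.0])     # polynomial coefficients, highest power first
N = 6                             # cutoff degree of P(x); larger N needs extended precision

# Moments m_n(t) = int_0^1 x^n / Fhat dx written as const[n] + slope[n] * t, where
# t = int_0^1 dx / Fhat equals Ihat_{2,1} at eps = 0 (by the x <-> 1-x symmetry).
nmom = 2 * N + 2
const, slope = np.zeros(nmom), np.zeros(nmom)
for n in range(nmom):
    xn = np.zeros(n + 1)
    xn[0] = 1.0                                   # coefficients of x^n
    q, r = np.polydiv(xn, Fhat)                   # x^n = q(x) Fhat(x) + alpha x + beta
    alpha = r[0] if r.size == 2 else 0.0
    beta = r[-1]
    const[n] = np.polyval(np.polyint(q), 1.0)     # int_0^1 q(x) dx
    slope[n] = alpha / 2.0 + beta                 # int x/Fhat = t/2 and int 1/Fhat = t

idx = np.add.outer(np.arange(N + 1), np.arange(N + 1)) + 1

def lam_min(t):
    # smallest eigenvalue of the moment matrix of int_0^1 dx x P(x)^2 / Fhat(x)
    return np.linalg.eigvalsh(const[idx] + t * slope[idx]).min()

# Central value: maximize the smallest eigenvalue (lam_min is concave in t).
t_central = minimize_scalar(lambda t: -lam_min(t), bounds=(1.0, 2.5),
                            method="bounded", options={"xatol": 1e-12}).x
assert lam_min(t_central) > 0.0

def boundary(t_in, t_out):
    # bisection between a feasible point and an infeasible point
    for _ in range(100):
        mid = 0.5 * (t_in + t_out)
        t_in, t_out = (mid, t_out) if lam_min(mid) >= 0.0 else (t_in, mid)
    return t_in

lower, upper = boundary(t_central, 1.0), boundary(t_central, 2.5)
print(f"bounds [{lower:.10f}, {upper:.10f}], central {t_central:.10f}, exact {np.pi/2:.10f}")
```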
We have demonstrated the applicability of our methods to a nontrivial example, namely three-loop banana integrals with four unequal internal masses in \(d=2-2\epsilon\) dimensions. With modest computational resources, we evaluated the \(\mathcal{O}(\epsilon^{0})\) part of all the 11 nontrivial master integrals to a relative accuracy of at least \(10^{-9}\). The accuracies for \(\mathcal{O}(\epsilon^{1})\) and \(\mathcal{O}(\epsilon^{2})\) terms are lower, though only slightly so for \(\mathcal{O}(\epsilon^{1})\) terms when the numerical differentiation method is used. For all but the smallest problems, extended-precision floating point arithmetic is needed to ensure numerical stability in the semidefinite programming solver, similar to what was encountered in the conformal bootstrap [67] and the quantum mechanics bootstrap [48]. We note that extended precision is also generally need in the evaluation of Feynman integrals by series solutions of differential equations as observed in e.g. Refs. [8; 9]. We have also revealed hidden consistency relations that link different terms in the \(\epsilon\) expansions of Feynman integrals. As explained in Section 2.4.1, for any (quasi-) finite Feynman integral without numerators, the \(\epsilon\) expansion terms (appropriately normalized) must give rise to a positive-semidefinite Hankel matrix. This is an extremely general statement which can be checked against a huge number of Feynman integral computations in the literature, because many Feynman integrals have Euclidean regions and it is believed that a quasi-finite basis exist for any family of integrals [55]. This result is an elementary consequence of our analysis but has not been previously exposed in the literature. Such constraints have been solved numerically in our paper to predict the \(\epsilon\) expansion terms to high accuracy. We have also formulated an alternative method to obtain the \(\epsilon\) expansion terms by numerical differentiation of semidefinite programming solutions with respect to the spacetime dimension. The above two methods for calculating \(\epsilon\) expansion terms are complementary and we have found cases in which either of them outperforms the other in accuracy. Our new method for calculating Feynman integrals is analogous to recent developments in bootstrapping quantum mechanics systems and lattice models [45; 46; 47; 48; 49; 50; 51]. For example, the role of IBP and dimensional-shifting identities in our work is analogous to the role of moment recursion relations in the quantum mechanics bootstrap. Analogous identities also appear in EFT bounds as "null constraints" from crossing symmetry [37].3 While our work has imported techniques developed in non-perturbative contexts to perturbative physics, in the reverse direction, the differential equation method of perturbative calculations has been applied to non-perturbative lattice correlation functions in Refs. [80; 81; 82], also exploiting identities similar to those from IBP. Therefore, we expect a fruitful exchange of techniques between perturbative and non-perturbative calculations. Footnote 3: We thank Francesco Riva for pointing out this connection. Finally, we speculate on possible future work. 
Except for the generic constraints on the \(\epsilon\) expansion, this paper has mainly treated massive Feynman integrals, and for integral families involving massless internal lines, it would be necessary to identify non-negative integrals free of not only ultraviolet but also infrared divergences to generate the positivity constraints. To extend our method to integrals outside the Euclidean region, it remains to be seen how positivity constraints can be formulated, possibly for real and imaginary parts separately after a suitable deformation of the integration contour. Connections with other notions of positivity relevant for Feynman integrals [83; 84] remain to be explored. Another interesting question is whether positivity constraints can be used to understand complete scattering amplitudes (rather than individual Feynman integrals) at a fixed order in perturbation theory, in light of numerical observations in \(\mathcal{N}=4\) super-Yang-Mills theory amplitudes in Ref. [85]. ## Acknowledgements M.Z.'s work is supported in part by the U.K. Royal Society through Grant URF\(\backslash\)R1\(\backslash\)20109. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission. ## Appendix A Analytic results for bubble integrals The material here is well known in the literature but included for completeness. We work in spacetime dimension \(d=4-2\epsilon\). Using Feynman parametrization, Eq. (1) is written as, for \(a_{1}\geq 1,a_{2}\geq 1\), \[I^{d}_{a_{1},a_{2}}=\frac{\Gamma(a_{1}+a_{2}-d/2)e^{\gamma_{E}\epsilon}}{\Gamma(a_{1})\Gamma(a_{2})}\int_{0}^{1}dx\,x^{a_{1}-1}(1-x)^{a_{2}-1}\left[m^{2}-p^{2}x(1-x)-i0^{+}\right]^{d/2-a_{1}-a_{2}}\,. \tag{A.1}\] This allows us to evaluate the first of the master integrals in Eq. (9) as \[I^{d=4-2\epsilon}_{2,1}=\Gamma(1+\epsilon)(m^{2})^{-1-\epsilon}e^{\gamma_{E}\epsilon}\int_{0}^{1}dx\,\frac{x}{\left[1-x(1-x)p^{2}/m^{2}-i0^{+}\right]^{1+\epsilon}}\,. \tag{A.2}\] Since the denominator in the above integrand is symmetric under \(x\leftrightarrow 1-x\), we can symmetrize the numerator as \(x\rightarrow[x+(1-x)]/2=1/2\), obtaining \[I^{d=4-2\epsilon}_{2,1}=\Gamma(1+\epsilon)(m^{2})^{-1-\epsilon}e^{\gamma_{E}\epsilon}\int_{0}^{1}dx\,\frac{1}{2\left[1-x(1-x)p^{2}/m^{2}-i0^{+}\right]^{1+\epsilon}}\,. \tag{A.3}\] For \(p^{2}<0\), this evaluates to \[I^{d=4-2\epsilon}_{2,1}=\Gamma(1+\epsilon)(-p^{2}+i0^{+})^{-1-\epsilon}e^{\gamma_{E}\epsilon}\left[\frac{1}{\beta}\log\frac{\beta+1}{\beta-1}+\mathcal{O}(\epsilon)\right]\,, \tag{A.4}\] where we defined \[\beta=\sqrt{1-\frac{4m^{2}}{p^{2}}-i0^{+}}\,. \tag{A.5}\] The omitted \(\mathcal{O}(\epsilon)\) term in Eq. (A.4) is easy to evaluate and we do not include the explicit result here. The second master integral in Eq. (9) is a trivial tadpole integral, \[I^{d=4-2\epsilon}_{3,0}=\frac{1}{2}\Gamma(3-d/2)e^{\gamma_{E}(4-d)/2}(m^{2})^{-3+d/2}=\frac{1}{2}\Gamma(1+\epsilon)e^{\gamma_{E}\epsilon}(m^{2})^{-1-\epsilon}\,. \tag{A.6}\] So the ratio defined in Eq. (13) is equal to, when \(d=4-2\epsilon\), \[\hat{I}^{d=4-2\epsilon}_{2,1} =I^{d=4-2\epsilon}_{2,1}/I^{d=4-2\epsilon}_{3,0}=2\int_{0}^{1}dx\,\frac{x}{\left[1-x(1-x)p^{2}/m^{2}-i0^{+}\right]^{1+\epsilon}}\] \[=-\frac{2m^{2}}{p^{2}}\frac{1}{\beta}\log\frac{\beta+1}{\beta-1}+\mathcal{O}(\epsilon)\,. \tag{A.7}\] This result is real and positive for any \(p^{2}<4m^{2}\), after taking into account the cancellation of imaginary parts implied by the definition of \(\beta\) in Eq. (A.5).
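As a quick numerical cross-check of the closed form above (a sketch, not part of the original computation), one can compare direct quadrature of the \(\epsilon\to 0\) Feynman-parameter integral with the \(\beta\) expression at the point \(p^{2}=2\), \(m=1\) used in the main text, where the exact value is \(\pi/2\); the \(-i0^{+}\) prescription is mimicked by a small negative imaginary part, and the imaginary parts indeed cancel to machine precision below threshold.

```python
import numpy as np
from scipy.integrate import quad

p2, m2 = 2.0, 1.0   # the kinematic point used in the main text

# eps -> 0 limit of the Feynman-parameter representation: Ihat_{2,1} = 2 int_0^1 x dx / Fhat(x)
Ihat_quad, _ = quad(lambda x: 2.0 * x / (1.0 - x * (1.0 - x) * p2 / m2), 0.0, 1.0)

# Closed form with beta = sqrt(1 - 4 m^2 / p^2 - i0^+)
beta = np.sqrt(complex(1.0 - 4.0 * m2 / p2, -1e-30))
Ihat_closed = -(2.0 * m2 / p2) / beta * np.log((beta + 1.0) / (beta - 1.0))

print(Ihat_quad, Ihat_closed.real, Ihat_closed.imag, np.pi / 2.0)
```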
The corrections at higher order in the dimensional regularization parameter \(\epsilon\) are also readily obtained but we do not include the results here. The tadpole integral with an arbitrary power for the propagator is well known, \[\int\frac{d^{d}l\,e^{\gamma_{E}\epsilon}}{i\pi^{d/2}}\frac{1}{(-l^{2}+m^{2})^{n}}=e^{\gamma_{E}(4-d)/2}\frac{\Gamma(n-d/2)}{\Gamma(n)}\frac{1}{(m^{2})^{n-d/2}}\,, \tag{A.8}\] where the special case \(n=3\) reproduces Eq. (A.6) above.
2309.00514
A Machine Vision Method for Correction of Eccentric Error: Based on Adaptive Enhancement Algorithm
In the procedure of surface defects detection for large-aperture aspherical optical elements, it is of vital significance to adjust the optical axis of the element to be coaxial with the mechanical spin axis accurately. Therefore, a machine vision method for eccentric error correction is proposed in this paper. Focusing on the severe defocus blur of reference crosshair image caused by the imaging characteristic of the aspherical optical element, which may lead to the failure of correction, an Adaptive Enhancement Algorithm (AEA) is proposed to strengthen the crosshair image. AEA is consisted of existed Guided Filter Dark Channel Dehazing Algorithm (GFA) and proposed lightweight Multi-scale Densely Connected Network (MDC-Net). The enhancement effect of GFA is excellent but time-consuming, and the enhancement effect of MDC-Net is slightly inferior but strongly real-time. As AEA will be executed dozens of times during each correction procedure, its real-time performance is very important. Therefore, by setting the empirical threshold of definition evaluation function SMD2, GFA and MDC-Net are respectively applied to highly and slightly blurred crosshair images so as to ensure the enhancement effect while saving as much time as possible. AEA has certain robustness in time-consuming performance, which takes an average time of 0.2721s and 0.0963s to execute GFA and MDC-Net separately on ten 200pixels 200pixels Region of Interest (ROI) images with different degrees of blur. And the eccentricity error can be reduced to within 10um by our method.
Fanyi Wang, Pin Cao, Yihui Zhang, Haotian Hu, Yongying Yang
2023-09-01T15:06:39Z
http://arxiv.org/abs/2309.00514v1
# A Machine Vision Method for Correction of Eccentric Error: Based on Adaptive Enhancement Algorithm ###### Abstract In the procedure of surface defects detection for large-aperture aspherical optical elements, it is of vital significance to adjust the optical axis of the element to be coaxial with the mechanical spin axis accurately. Therefore, a machine vision method for eccentric error correction is proposed in this paper. Focusing on the severe defocus blur of reference crosshair image caused by the imaging characteristic of the aspherical optical element, which may lead to the failure of correction, an Adaptive Enhancement Algorithm (AEA) is proposed to strengthen the crosshair image. AEA is consisted of existed Guided Filter Dark Channel Dehazing Algorithm (GFA) and proposed lightweight Multi-scale Densely Connected Network (MDC-Net). The enhancement effect of GFA is excellent but time-consuming, and the enhancement effect of MDC-Net is slightly inferior but strongly real-time. As AEA will be executed dozens of times during each correction procedure, its real-time performance is very important. Therefore, by setting the empirical threshold of definition evaluation function SMD2, GFA and MDC-Net are respectively applied to highly and slightly blurred crosshair images so as to ensure the enhancement effect while saving as much time as possible. AEA has certain robustness in time-consuming performance, which takes an average time of 0.2721s and 0.0963s to execute GFA and MDC-Net separately on ten 200pixels x 200pixels Region of Interest (ROI) images with different degrees of blur. And the eccentricity error can be reduced to within 10um by our method. Machine vision, Eccentricity error correction, Adaptive enhancement, Large-aperture aspherical optical element ## I Introduction Aspherical optical elements are widely used in large-aperture space telescopes, inertial confinement fusion systems, and high-energy laser systems [1]. Due to some uncontrollable factors in the manufacturing process, some elements' surface will inevitably exist defects, which will not only affect the imaging quality of the system, but also bring great potential risks in industrial application. Therefore, it is necessary to carry out precise detection for surface quality. For the micron-level defects detection requirement of large-aperture aspherical optical elements, a single image can not satisfy both the field of view and the precision of inspection. It is essential to scan the partial images in order. To be specific, to scan the surface of the element through spinning and swinging the mechanical axes, and collect local sub-aperture images, then according to the sub-aperture scanning equation, the acquired sub-aperture images are sequentially stitched [2] to reconstruct the entire picture of the inspected component. The premise of high-precision reconstruction is to adjust as much as possible the coincidence of the optical axis of the element and the mechanical spin axis of the detection system [2]. Low-precision stitching will cause defects to break on the stitched image, misclassification of defect grades and other serious consequences. In view of these problems, this paper proposed an automatic correction method based on machine vision [3, 4] to correct the eccentric error. In this method, when the depth of field of the machine vision system is smaller than the aberration, defocus [5, 6] blur comes into being during imaging. Consequently, defocus restoration for the reference crosshair image is necessary. 
Traditional defocus restoration algorithms such as the Wiener filter [7]-[9], blind deconvolution [10, 11], least squares filter, and Lucy-Richardson method [12]-[14] all need to know the Point Spread Function (PSF) of the defocus process. Nevertheless, the PSFs of different aspheric optical elements differ, and they are difficult and time-consuming to obtain. Therefore, the traditional defocus restoration algorithms are not applicable here. Given that the defocused crosshair images collected in experiments are similar to foggy blurred images, we compared the generation processes of defocus blur and fog, and found that the mathematical principles of their formation are analogous. Consequently, we propose to utilize a dehazing algorithm to enhance the grayscale defocused image. Currently, the existing dehazing methods [15]-[19] based on traditional image processing ideas mainly originate from the dark channel dehazing algorithm [20] proposed in 2009. This algorithm starts from the mathematical principle of the generation of fog, and has excellent performance but high algorithmic complexity. The GFA in this paper utilizes a guided filter [15] to optimize the soft matting [20] so as to reduce the time complexity, but it still cannot achieve real-time performance. Owing to the rise of convolutional neural networks, many researchers have devoted themselves to designing lightweight enhancement networks [21]-[25] to take the place of traditional algorithms, reducing processing time while retaining effectiveness. Considering that for slightly defocused images it is feasible to sacrifice some enhancement quality to ensure real-time performance, the lightweight MDC-Net is designed as a supplement to GFA, whose real-time performance is poor. AEA is constituted by GFA and MDC-Net; the real-time performance mentioned in this article refers to the average time taken to perform AEA once, and the real-time performance of the two constituent algorithms will be discussed in the experimental part. Aimed at solving the above two problems of "how to correct the eccentricity error quickly and precisely" and "how to strengthen the defocused crosshair image" during surface defects detection for large-aperture aspherical optical elements, combined with a deep learning method, an automatic eccentricity error correction method based on AEA is proposed in this paper. First of all, the necessity of eccentricity error correction is presented in Chapter II, and the principle of the correction method is explained as well. Then, the eccentricity error correction method is introduced in detail in Chapter III, in which AEA is mainly introduced, followed by the experiment operation and the analysis of experiment results in Chapter IV. The center of the trajectory circle is theoretically the mechanical spin axis; the movement system is then controlled to move the center \(O_{c}\) of the crosshair image to the location of the mechanical spin axis \(O_{a}\). The key of this method is to accurately extract the pixel coordinate of the crosshair center, but due to the aspheric imaging characteristic, the crosshair image will inevitably show defocus. The simulation result in Fig. 4(a) illustrates that due to the aberration, except for the emergent light at the paraxial axis, the others all deviate from the incident light.
The incident light rays \(I_{1}\) and \(I_{2}\) are imaged at points \(C_{1}\) and \(C_{2}\) separately, and their distances to the vertex ball center \(C_{0}\) are their normal aberrations. As can be seen, due to the existence of the normal aberration, the light energy diverges. As is illustrated in Fig. 4(b), the crosshair image formed at the vertex ball center is blurry, with a bright center and a dark edge, and the grayscale of the background is high, so the crosshair center coordinate cannot be directly extracted. In view of this problem, our article further proposes AEA to enhance the clarity of the crosshair image. Fig. 4(c) is the ROI of the crosshair image and Fig. 4(d) is the enhancement result of our AEA. ## III Correction Method Based On AEA As is shown in Fig. 5, the automatic eccentricity error correction method can be divided into three steps: System Initialization, AEA and Eccentricity Correction. Fig. 3: Optical principle of eccentricity error correction, (a) is the optical path of imaging and (b) is the images of crosshair on the condition of different relative positions of the optical axis and the spin axis. Fig. 4: Aspheric optics vertex ball center imaging simulation, (a) is the ray tracing image, (b) is the defocused crosshair image, (c) is the ROI of crosshair image and (d) is the enhancement result of AEA. Fig. 5: Flow chart of automatic eccentricity error correction method. The core of our method is AEA, which is composed of GFA and MDC-Net; both are highlighted in Fig. 5. The entire eccentricity correction method is optimized and accelerated in both software and hardware, as marked in Fig. 5: the purple dotted area uses dual threads, and the red chain-dotted areas use parallel processing, which greatly improves the efficiency of our method. The general process of our method is illustrated in Fig. 5. First, implement the System Initialization step to make the optical system accurately focus on the vertex ball center of the aspherical optical element, and obtain the pixel coordinate of the ROI where the center of the crosshair is approximately located. Then enter the AEA step: the spin axis rotates through one revolution within the set time, a focused image is collected at each rotation step of \(30^{\circ}\), and the SMD2 value of the ROI of each image is evaluated. If the SMD2 value is greater than the threshold 0.5, binarization and morphology are directly applied to the ROI to extract the center coordinate of the crosshair image. Otherwise, determine whether the SMD2 value is greater than the threshold value 0.1; if so, use the faster MDC-Net for clarity enhancement, and if not, sacrifice time efficiency and use GFA for clarity enhancement, after which the enhanced image is subjected to the same center-extraction operation as described before. Finally, in the Eccentricity Correction step, the least-squares circle fitting method is performed on the extracted center coordinates of the 12 crosshair images. The center of the fitted circle is the position of the theoretical mechanical spin axis, and the crosshair center is then moved to this position to correct the eccentricity error. To ensure the accuracy of the error correction, we set the constraint condition that the eccentricity error should be less than \(10\,\mu m\), and loop the AEA and Eccentricity Correction steps until the accuracy requirement is satisfied.
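To make the routing and the Eccentricity Correction step concrete, the following minimal sketch (function names, the SMD2 normalization constant and the enhancer placeholders are illustrative, not the authors' code) scores the ROI with a normalized SMD2 value as defined in Eq. (3) below, applies the empirical thresholds 0.5 and 0.1 to choose between direct extraction, MDC-Net and GFA, and fits the 12 extracted crosshair centers with a linear least-squares (Kasa) circle fit whose center estimates the mechanical spin axis.

```python
import numpy as np

def smd2(roi: np.ndarray) -> float:
    """Normalized SMD2 sharpness score of a grayscale ROI; the exact
    normalization constant is illustrative, only its consistency matters."""
    f = roi.astype(np.float64)
    dx = np.abs(f[1:, 1:] - f[:-1, 1:])      # |f(x,y) - f(x-1,y)|
    dy = np.abs(f[1:, 1:] - f[1:, :-1])      # |f(x,y) - f(x,y-1)|
    h, w = f.shape
    return float((dx * dy).sum() / (255.0 * h * w))

def route_enhancement(roi, enhance_gfa, enhance_mdcnet):
    """Empirical-threshold routing applied before center extraction."""
    s = smd2(roi)
    if s > 0.5:                     # sharp enough: extract directly
        return roi
    if s > 0.1:                     # slightly blurred: fast MDC-Net
        return enhance_mdcnet(roi)
    return enhance_gfa(roi)         # heavily blurred: slower but stronger GFA

def fit_circle_center(points: np.ndarray) -> np.ndarray:
    """Least-squares circle fit (Kasa form) to the extracted crosshair centers.
    Solving  x^2 + y^2 = a x + b y + c  in the least-squares sense gives the
    center (a/2, b/2), taken as the estimate of the mechanical spin axis."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    a_, b_, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([a_ / 2.0, b_ / 2.0])

# Example: 12 centers extracted at 30-degree steps around the spin axis
theta = np.deg2rad(np.arange(0, 360, 30))
centers = np.column_stack([100 + 40 * np.cos(theta), 120 + 40 * np.sin(theta)])
print(fit_circle_center(centers))   # ~ (100, 120): move the crosshair center here
```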
### _System Initialization_ In the System Initialization step, the crosshair is roughly focused on the vertex ball center, and an image of the crosshair is collected first. Two threads are then started to execute the Automatic ROI Acquisition Algorithm (ARAA) and the Auto Focus Algorithm (AFA), respectively. #### A.1 ARAA It can be drawn from the imaging principle that the central area of the crosshair image is the clearest and can be located by utilizing a definition evaluation function. The definition evaluation function utilized in this article is the normalized SMD2 (still called SMD2 for convenience in this paper) in consideration of its excellent sensitivity. The expression of SMD2 is as follows: \[\mathit{SMD2}=\sum_{x=2}^{i}\sum_{y=2}^{j}\frac{|f(x,y)-f(x-1,y)|\cdot|f(x,y)-f(x,y-1)|}{255\,i\,j} \tag{3}\] where \(i\) and \(j\) are the pixel width and height of the image, and \(f(x,y)\) is the gray value of the image at pixel coordinate \((x,y)\). Based on the SMD2 definition evaluation function, we utilize the Block Definition Measurement Algorithm (BDMA) to search the entire image for the clearest ROI, which contains the crosshair center. Next, the system records the top left corner of the ROI so as to locate the relative position of the crosshair center on the entire image. Follow-up processing is only performed on the ROI, which greatly reduces the calculation amount of the whole method. The BDMA is introduced as follows: First of all, set the width \(W\) and height \(H\) of the sub-area and the step \(T\), which defines the search step to the right or down each time on the input image, and calculate the SMD2 values of all sub-areas of the specified width and height in parallel on the input image according to the search step. Then, store the SMD2 values in a mapping table data structure in order. Ultimately, arrange the mapping table according to the key values in descending order; the sub-region ranked at the head of the mapping table is the ROI with the highest definition score on the image, that is, the region where the crosshair center is located, and the coordinate of the upper left corner of this ROI region is returned. #### A.2 AFA The image acquired at the beginning is only roughly focused, and a comparison of the SMD2 values of images at adjacent locations is required to obtain the best focus position. The vertical resolution is set to \(10\,\mu m\), and 10 images are collected upward and downward respectively in steps of the vertical resolution. These 20 images, plus the coarsely focused one originally acquired, give a total of 21 images. After receiving the end signal from ARAA and the coordinate of the upper left corner of the ROI, the SMD2 values of the ROIs of the 21 images are calculated, and the one with the largest SMD2 value corresponds to the best focus position. The key of AFA is to choose an appropriate definition evaluation function, which is selected from nine commonly used ones: Tenengrad, Laplacian, PVA Grad, SMD, SMD2, Energy, Vollath, Entropy and FFT [26]. Fig. 6: Curves of 9 types of definition evaluation functions (Tenengrad, Laplacian, PVA Grad, SMD, SMD2, Energy, Vollath, Entropy and FFT). The decimals after the labels of the first 6 definition evaluation functions are the differential gradient values at the best focus position. Taking the twenty-one crosshair images acquired in one experiment as inputs, the above 9 types of definition evaluation functions are calculated and then 
normalized; their broken lines are shown in Fig. 6. The larger the value, the higher the definition of the ROI corresponding to the abscissa position, and the position with the highest definition can be regarded as the best focus position. In Fig. 6, except for the vertices of the three broken lines of Vollath, Entropy and FFT, the vertices of the other six broken lines correspond to the same abscissa and are located at the coarse focus position. Therefore, the conclusion can be drawn that the coarse focus position is the best focus position in this experiment. Then, by calculating the differential gradient values of the six broken lines at the best focus position, the definition evaluation function with the highest sensitivity at the best focus position can be selected. The formula to calculate the differential gradient value at \(n\) is as follows: \[\delta=\left|2f(n)-f(n-1)-f(n+1)\right|/2 \tag{4}\] From the differential gradient values at the best focus position shown in Fig. 6, it can be concluded that SMD2 has the best sensitivity at the best focus position. Therefore, we choose the SMD2 definition evaluation function as the evaluation index. ### _AEA_ After the System Initialization step and with the function of the self-centering holder, the eccentricity error between the optical axis of the component and the mechanical spin axis is generally at the sub-millimeter level. In order to reduce the amount of calculation, only the ROI area is processed. Therefore, the algorithm first uses BDMA to obtain the crosshair center area, and then determines whether its SMD2 clarity index is greater than the prior threshold 0.5; this threshold is an empirical value obtained by performing a large number of center extraction experiments on images of different clarity. For an ROI whose SMD2 value is greater than the threshold, the binarization and morphological methods are directly used to extract the crosshair center, and the steps are as follows: Firstly, adaptive threshold binarization is performed on the ROI to obtain a binary image, and the mathematical principle of adaptive threshold binarization is as follows: Determine the size of a single binarization region module as _ksize_, which in our method is 17, and then calculate the Gaussian weight value \(T(x,y)\) for each pixel in the module. \[T(x,y)=\alpha\cdot\exp[-(i(x,y)-(ksize-1)\ /\ 2)^{2}\ /\ (2\cdot\theta)^{2}] \tag{5}\] where \(i(x,y)=\sqrt{x^{2}+y^{2}}\), \((x,y)\) is the pixel coordinate with the center of a single _ksize_\(\times\)_ksize_ area module as the origin, \(\theta=0.3\cdot[(ksize-1)\cdot 0.5-1]+0.8\), and \(\alpha\) satisfies \(\sum T(x,y)=1\). The rule of binarization is as follows: \[dst(x,y)=\left\{\begin{array}{l}0,\;src(x,y)>T(x,y)\\ 255,\;src(x,y)\leq T(x,y)\end{array}\right. \tag{6}\] where \(dst(x,y)\) is the target binary image and \(src(x,y)\) is the original ROI image. Secondly, because noise in the image background produces stray connected domains on the binarized image, a small structuring element is used to perform a morphological erosion operation to erase them. The erosion result is shown in Fig. 7(a). Based on the characteristics of the bright center and dim surroundings of the crosshair image, we draw the conclusion that the inscribed circle in the center area of the crosshair should be the largest. 
Hence, the problem becomes searching for the largest inscribed circle in the connected domain and obtaining the coordinate of its center, which is equivalent to the crosshair center. The extraction result is shown in Fig. 7(b), and the position of the crosshair center on the global image can be obtained by combining it with the coordinate of the upper left corner of the ROI obtained by BDMA. For the case where the SMD2 value is less than or equal to the threshold 0.5, it is necessary to use AEA before extracting the center coordinate of the crosshair image. AEA is composed of GFA and MDC-Net; GFA is introduced first. ### _B.1 GFA_ Previously, dehazing algorithms were mainly applied to RGB color images. In this paper, the single-channel grayscale defocused image is "dehazed". The flow chart of GFA is shown in Fig. 8. Fig. 7: Images in the process of crosshair center extraction, (a) is the result after the morphological erosion operation and (b) is the extracted center of the crosshair. Fig. 8: Flowchart of GFA. In computer vision and computer graphics, the generation process of fog can be described by formula (7): \[I(x,y)=J(x,y)\cdot t(x,y)+\mathcal{A}(1-t(x,y)) \tag{7}\] where \(I(x,y)\) represents a haze image and in this paper stands for a defocused image, \(J(x,y)\) is the clear image which needs to be solved, \(\mathcal{A}\) is the global atmospheric light condition, which can be analogized to the lighting situation of the light source, and \(t(x,y)\) is the transmittance function, which can be analogized to the optical transfer function here. The acquired image \(I(x,y)\) is used as the dark channel image, and the average of the top 1% of gray values in the dark channel image is taken as \(\mathcal{A}\) in this paper, so (7) can be simplified to formula (8): \[\frac{I(x,y)}{\mathcal{A}}=\frac{J(x,y)}{\mathcal{A}}\cdot t(x,y)+1-t(x,y) \tag{8}\] Currently, we only know \(I(x,y)\), so some prior conditions are required for the solution of \(J(x,y)\). According to the dark channel prior theory, we have: \[J_{min}(x,y)=0 \tag{9}\] Assume that the transmittance function \(t(x,y)\) is locally constant, and take the minimum value on both sides of (8) to get the following formula: \[\frac{I_{min}(x,y)}{A}=\frac{J_{min}(x,y)}{A}\cdot t(x,y)+1-t(x,y) \tag{10}\] From the prior condition that \(J_{min}(x,y)=0\), a rough optical transfer function \(t(x,y)\) can be obtained: \[t(x,y)=1-\frac{I_{min}(x,y)}{A} \tag{11}\] The boundary of the optical transfer function obtained by the above formula is rough and cannot yield fine enhancement results. Therefore, this paper utilizes the guided filter algorithm, which has better performance but is slightly slower than the fast guided filter for the \(200\times 200\) input image size, to refine the optical transfer function. The time complexity of the guided filter is \(O(N)\) compared to \(O(N\,/\,s^{2})\) for the fast guided filter, where \(s\) is the scaling ratio of the image (2 in the algorithm); however, the processing time is influenced by many other factors. The mathematical expression of the guided filter is shown in (12)-(15): \[\left\{\begin{array}{l}mean_{I}=f_{mean}(I)\\ mean_{t}=f_{mean}(t)\\ corr_{I}=f_{mean}(I\,.*\,I)\\ corr_{It}=f_{mean}(I\,.*\,t)\end{array}\right. \tag{12}\] where \(mean_{I}\) and \(mean_{t}\) are the results of the mean filter applied to \(I\) and \(t\) respectively, \(corr_{I}\) is the result of the self-correlation of \(I\), and \(corr_{It}\) is the result of the cross-correlation between \(I\) and \(t\). 
\[\left\{\begin{array}{l}\mathrm{var}_{I}=corr_{I}-mean_{I}\,.*\,mean_{I}\\ \mathrm{cov}_{It}=corr_{It}-mean_{I}\,.*\,mean_{t}\end{array}\right. \tag{13}\] \(\mathrm{var}_{I}\) is the variance of \(I\), and \(\mathrm{cov}_{It}\) is the covariance between \(I\) and \(t\). \[a=\mathrm{cov}_{It}\,./\,(\mathrm{var}_{I}+\delta),\qquad b=mean_{t}-a\,.*\,mean_{I} \tag{14}\] \(\delta\) is the regularization parameter. \[\left\{\begin{array}{l}mean_{a}=f_{mean}(a,r)\\ mean_{b}=f_{mean}(b,r)\\ q=mean_{a}\,.*\,I+mean_{b}\end{array}\right. \tag{15}\] where \(r\) is the window radius of the mean filter and \(q\) is the optical transfer function \(t(x,y)\) after refinement by the guided filter. Substituting the refined optical transfer function \(t(x,y)\) into equation (7), the enhanced image \(J(x,y)\) can be obtained: \[J(x,y)=\frac{(I(x,y)-A)}{t(x,y)}+A \tag{16}\] ### _B.2 MDC-Net_ Even after optimization, GFA still struggles to achieve real-time performance in the actual implementation, and for some crosshair images with relatively good definition, as shown in Fig. 9(b), a certain amount of enhancement effect can be sacrificed in exchange for time efficiency. Based on the mathematical principle of the haze image generation process reflected by (7), we designed a lightweight dehazing network called MDC-Net. Formula (17) can be deduced from (7): \[J(x,y)=\frac{I(x,y)}{t(x,y)}-\frac{A}{t(x,y)}+A \tag{17}\] In order to integrate the two unknown parameters, namely the lighting situation of the light source \(A\) and the optical transfer function \(t(x,y)\), together with the known parameter \(I(x,y)\) into one variable, formula (17) is simplified to (18): \[J(x,y)=F(x,y)I(x,y)-F(x,y)+b \tag{18}\] where \(F(x,y)\) can be regarded as a function of the input \(I(x,y)\) with \(A\) and \(t(x,y)\) as its variables, as shown in formula (19): \[F(x,y)=\frac{\frac{I(x,y)-A}{t(x,y)}+A-b}{I(x,y)-1} \tag{19}\] where \(b\) is a constant, and the relationship between \(F(x,y)\) and \(I(x,y)\) can be learned. Therefore, the lightweight network MDC-Net is designed to learn this relationship. The structure of MDC-Net is shown in Fig. 9. The whole network is densely connected and contains five convolutional layers. Since a multi-scale feature extractor is beneficial to the dehazing operation [22], four different scales of convolution kernels are used for feature extraction, and the convolution results are connected in depth. The input of the network is a \(200\times 200\) single-channel grayscale image. After each convolution, the ReLU activation function is applied to increase the nonlinearity of the network. The network structure is simple but effective, as illustrated in Fig. 9: _conv1_ is a \(1\times 1\) convolution with 1 input channel and 3 output channels, which is utilized to increase the number of channels and the nonlinearity of the network, and to reduce the amount of calculation as well. _conv2_ is a \(3\times 3\) convolution with 3 input and 3 output channels. _concat1_ is the depth-wise connection of the first two layers, after which a feature map with a depth of 3 is output by a \(5\times 5\) convolution. _concat2_ is the depth-wise connection of the first three layers, after which a feature map with a depth of 3 is output by a \(7\times 7\) convolution. The last layer, _concat3_, is the depth-wise connection of all the previous layers, after which a \(3\times 3\) convolution with a depth of 1 is used to predict \(F(x,y)\). Fig. 9: Structure of MDC-Net, (a) is the enhanced crosshair image and (b) is the original defocused crosshair image. 
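For concreteness, the following is a minimal PyTorch sketch of a densely connected five-layer network with the kernel sizes and channel widths listed above; the padding choices and the exact composition of each concatenation are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MDCNetSketch(nn.Module):
    """Illustrative densely connected 5-layer dehazing network; layer widths
    and kernel sizes follow the description above, other details assumed."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 3, kernel_size=1)              # 1x1, 1 -> 3
        self.conv2 = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # 3x3, 3 -> 3
        self.conv3 = nn.Conv2d(6, 3, kernel_size=5, padding=2)   # 5x5 on concat1
        self.conv4 = nn.Conv2d(9, 3, kernel_size=7, padding=3)   # 7x7 on concat2
        self.conv5 = nn.Conv2d(12, 1, kernel_size=3, padding=1)  # 3x3 -> F(x,y)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, i):                    # i: (B, 1, 200, 200) defocused ROI
        x1 = self.relu(self.conv1(i))
        x2 = self.relu(self.conv2(x1))
        x3 = self.relu(self.conv3(torch.cat([x1, x2], dim=1)))         # concat1
        x4 = self.relu(self.conv4(torch.cat([x1, x2, x3], dim=1)))     # concat2
        return self.relu(self.conv5(torch.cat([x1, x2, x3, x4], dim=1)))  # concat3 -> F

net = MDCNetSketch()
i = torch.rand(1, 1, 200, 200)   # defocused grayscale ROI scaled to [0, 1]
f = net(i)                       # predicted F(x, y)
j = f * i - f + 1.0              # Eq. (18) with the constant b set to 1.0
```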
Ultimately, substitute \(F(x,y)\) into (18) to obtain the solution of \(J(x,y)\). The network structure is designed in a relatively light form for the purpose of being real-time. The parameter amount of MDC-Net is only 1.98 KB, and the calculation amount is merely 79.12 MFLOPs, which is very portable. Although the large-scale convolutions occupy a large amount of the calculation, it is found through experiments that the multi-scale convolution operation is indeed helpful for the enhancement effect. ### _Eccentricity Correction_ In the previous two steps, the BDMA is executed every time the spin axis \(\alpha\) rotates \(30^{\circ}\); after rotating \(360^{\circ}\), a total of 12 ROIs are collected and 12 center coordinates of crosshair images are extracted. The theoretical trajectory of the 12 coordinates is a circle, the center of which is the mechanical spin axis. In this paper, the least square circle fitting algorithm is used to locate the fitted circle and obtain the coordinate of the circle center \(D_{i}(X,Y)\) and the radius of the circle \(R\), in which \(D_{i}(X,Y)\) represents the mechanical spin axis and \(R\) is equal to the eccentricity error. Next, the center of the crosshair image is adjusted to \(D_{i}(X,Y)\). Due to the existence of mechanical errors, electrical errors, crosshair center coordinate extraction errors, least square circle fitting errors, etc., it is difficult to satisfy the accuracy requirement by performing eccentricity correction only once. Therefore, a terminal condition of \(R<10\,\mu m\) is set, and iterating the System Initialization and AEA steps is recommended. ## IV Experiments And Analysis The experimental test bed is illustrated in Fig. 10; the whole mechanical structure is complicated, and the spin axis \(\alpha\) and swing axis \(\beta\) involved in this article are marked out. ### _ARAA Experiment_ ARAA is applied to extract the centers of 420 crosshair images. The step \(T\) is set to 50 pixels, and the specified width \(W\) and height \(H\) are both 200 pixels. The parameters of the 7 samples used in our experiment are shown in TABLE I. Twelve extraction results of ARAA are shown in Fig. 11. Obviously, the centers of the crosshairs are all within the extracted ROIs. ### _AEA Experiment_ The pixel size of the CMOS used in the vision system is \(5.5\,\mu m\times 5.5\,\mu m\), the resolution is \(3296\times 2472\), the magnification \(K\) of the machine vision system is 4, and the actual field size is \(4.532\,mm\times 3.399\,mm\), which can be calculated from geometric optics. A parabolic optical element with a vertex ball radius of \(18.28\,mm\) and a conic constant of -1 was selected as the experimental sample. Based on geometrical optics and the known system parameters, the depth of field of the system can be calculated as \(0.0275\,mm\), as illustrated by the solid green line in Fig. 12, and the normal aberration curve is drawn by formula (2), as shown by the red dashed line in Fig. 12. From the intersection point of the normal aberration and the depth of field, we can conclude that, for this experimental sample, the imaging area with normal aberrations smaller than the system depth of field is only a small-radius area centered on the optical axis; that is, the theoretical non-defocused imaging area is only a circular area with a radius of \(1\,mm\). Meanwhile, due to the surface reflection, internal refraction, and transmission of the optical element, the returned light inevitably loses a considerable amount of energy, which further leads to blur. 
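As a quick arithmetic check of the field-size figures quoted above (assuming the elementary relation field size = sensor size / magnification), the numbers can be reproduced as follows:

```python
# Quick check of the quoted field of view, assuming
# field size = (pixel count * pixel pitch) / magnification.
pixel_pitch_um = 5.5           # CMOS pixel size
cols, rows = 3296, 2472        # sensor resolution
magnification = 4

field_w_mm = cols * pixel_pitch_um / magnification / 1000.0
field_h_mm = rows * pixel_pitch_um / magnification / 1000.0
print(f"{field_w_mm:.3f} mm x {field_h_mm:.3f} mm")   # 4.532 mm x 3.399 mm
```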
### _B.1 GFA Experiment_ After AFA, a crosshair image is acquired. Fig. 13 shows the images of the intermediate steps of GFA. Fig. 13(a) is the ROI of the original defocused crosshair image. Fig. 13(b) is the distribution image of the optical transfer function obtained by formula (11), which is relatively rough and has obvious graininess. Fig. 13(c) is the optical transfer function smoothed by the guided filter. Fig. 13(d) is the enhancement result of GFA. Because the background of the original image contains electronic noise from the acquisition process, the enhanced background is grainy, but this does not affect the extraction of the center coordinate of the ROI. Four images with different blur levels are selected, the area where the crosshair center is located is intercepted, and GFA is used to perform the enhancement experiment. The original images and enhancement results are shown in Fig. 14. As can be seen, GFA largely suppresses the background fogging caused by defocusing, and the crosshairs are thin and bright, which facilitates the subsequent steps of our method. What's more, the success of the GFA experiment confirms our inference and hypothesis. ### _MDC-Net Experiment_ In the MDC-Net enhancement experiment, since absolutely clear images cannot be obtained, and in view of the significant enhancement effect of GFA, the defocused images and the images enhanced by GFA are used to generate input-output pairs for training. The concrete method is as follows: \(200\times 200\) ROIs are randomly extracted from the center area of the crosshair images, and it is guaranteed that each extracted ROI contains the crosshair center by setting constraints and revision; then, GFA is applied to those extracted ROIs to make 800 input-output pairs for training. Examples of input-output pairs are shown in Fig. 15. The hardware and software configurations are listed in TABLE II. For training, the gradient optimization method is SGD with momentum set to 0.99 for 200 epochs; the initial learning rate is set to 0.0004 and is adjusted by the cosine annealing method, that is, the learning rate is reduced from 0.0004 to 0.00001 following a cosine curve. The batch size is set to 20, and a Mean Squared Error (MSE) loss function with "mean" reduction is used. The constant term \(b\) is set to 1.0. The training process uses a GTX 1080Ti GPU. In order to save the time of uploading and downloading images between memory and video memory, inference and the subsequent experiments are all performed on the CPU. The inputs and prediction results of the trained MDC-Net are shown in Fig. 16. Fig. 11: The extraction results of ARAA. Fig. 12: The relationship between the normal aberration and the depth of field of our machine vision system as the distance from the optical axis grows. Fig. 13: The images in the process of GFA, (a) is the original ROI image, (b) is the original transmittance image, (c) is the transmittance image smoothed by the guided filter and (d) is the enhancement result of GFA. Fig. 14: (a)-(d) are the \(500\times 500\) ROIs of original defocused crosshair images, and (e)-(h) are the corresponding enhancement results utilizing GFA. Fig. 15: Training data input-output pairs of MDC-Net. Fig. 16: (a)-(d) are the \(200\times 200\) ROIs of original defocused crosshair images, and (e)-(h) are the corresponding enhancement results utilizing MDC-Net. The processing time of GFA is longer than that of MDC-Net. 
GFA and MDC-Net both run on the CPU, which means there is no uploading and downloading of images between memory and video memory, so the time performance of AEA is undoubtedly better than that of GFA alone. If only MDC-Net is used for enhancement, ROIs with severe blur will lead to the overall failure of the method. Under comprehensive consideration, AEA is better than GFA or MDC-Net alone. ### _B.3 Enhancement and Real-time Performance Experiment_ As can be seen, the defocus blur of Fig. 16(d) is very serious. However, after the enhancement of MDC-Net, the gray gradients of the center and the edge of the crosshair image are separated. For the purpose of proving the enhancement effect and real-time performance of GFA and MDC-Net, we use Max-Min, Multi-Scale Retinex (MSR) [27, 28], GFA, Fast Guided Filter, Dehaze-Net [21], AOD-Net [22] and MDC-Net to enhance Fig. 16(d), take SMD2 as the definition evaluation index, and compare the time efficiency of those seven methods; the results are shown in TABLE III, and the results of GFA and MDC-Net are bolded. The enhancement results of Fig. 16(d) using the four enhancement algorithms and three networks are shown in Fig. 17. Max-Min and MSR increase the grayscale of both the crosshair and the background to such a degree that they cannot be clearly distinguished. GFA and Fast Guided Filter effectively widen the grayscale gap between the crosshair and the background; their processing times are similar, but the SMD2 value of GFA is better than that of Fast Guided Filter, which is obviously more favorable. Because there is no clear image without defocus when training the networks, the enhanced result of GFA is used as the focused image. Therefore, the enhancement effect of the networks is theoretically not as good as that of GFA. Dehaze-Net, AOD-Net and MDC-Net all pull apart the grayscale of the crosshair and the background to a certain extent. Among them, the SMD2 value of 0.7217 achieved by MDC-Net is the highest and is a great improvement compared with the 0.0024 of the original image, which is superior to Max-Min and MSR as well. What's more, it reduces the enhancement time from 0.2732 s to 0.0951 s compared with GFA. In order to verify that GFA and MDC-Net both have a certain robustness in time-consuming performance, the above 7 enhancement methods were used for experiments on ten \(200\times 200\) ROI images with different degrees of blur, and the average time was calculated and recorded in the penultimate column of TABLE III. The experimental results show that the average time is close to the time to process a single image (the third-to-last column), that is, GFA and MDC-Net are robust in time consumption for ROI images with different degrees of blur. In order to explore the time-consuming performance of GFA and MDC-Net on crosshair images with different scales, ten defocused crosshair images were selected, and \(200\times 200\), \(300\times 300\), \(500\times 500\) ROIs containing the crosshair center were intercepted respectively. Then, the average processing times of the two algorithms were calculated, and the results are shown in TABLE IV. ### _Eccentricity Correction Experiment_ The following is an eccentricity error correction experiment. Based on the 12 extracted crosshair center coordinates, the position of the mechanical spin center is obtained by the least square circle fitting method. 
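For illustration, a minimal NumPy sketch of an algebraic least-squares circle fit over the extracted centers is shown below; the Kåsa formulation is assumed here, since the exact fitting routine is not specified.

```python
import numpy as np

def fit_circle_least_squares(points):
    """Algebraic (Kasa) least-squares circle fit.

    points: (N, 2) array of extracted crosshair centers (pixel coordinates).
    Returns ((X, Y), R): fitted circle center (the estimated mechanical spin
    axis) and radius (the eccentricity error, in pixels).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve  x^2 + y^2 + D*x + E*y + F = 0  for D, E, F in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    R = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), R
```

The fitted radius is in pixel units and can be converted to micrometres with the known pixel pitch and system magnification before being compared with the \(10\,\mu m\) terminal condition.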
The algorithm sets an iteration terminal condition that the eccentricity error must be less than \(10\,\mu m\). TABLE V shows the results of three iterations during an eccentricity correction process. The eccentricity error is corrected from the initial \(284.996\,\mu m\) to the final \(1.682\,\mu m\). Fig. 18 vividly displays the process of the three iterations. Due to uncontrollable factors such as mechanical errors in the system, the trajectory of the crosshair center coordinates extracted during the eccentricity correction process is not strictly circular. Fig. 17: Comparison of the results of four image enhancement algorithms and three networks, (a) is the \(200\times 200\) ROI of the original defocused crosshair image, (b) is the result of Max-Min, (c) is the result of MSR, (d) is the result of GFA, (e) is the result of Fast Guided Filter, (f) is the result of Dehaze-Net, (g) is the result of AOD-Net and (h) is the result of MDC-Net. Fig. 18: The process of three iterations of eccentricity error correction. In the first correction, the trajectory of the crosshair centers could be approximately fitted as a circle, which is marked in red. The second time, due to the error, an ellipse is formed, and the trajectory is outlined in green. The third adjustment is close to the mechanical accuracy limitation, and at this time the trajectory of the crosshair centers is related to the mechanical error. As illustrated in Fig. 18, the radii of the three fitted circles decrease gradually, finally reaching \(1.682\,\mu m\), which satisfies the terminal condition of less than \(10\,\mu m\). ### _Repetitive Experiment_ After executing our automatic eccentricity error correction method 10 times, the final corrected eccentricity errors are plotted in Fig. 19. The entire eccentricity correction algorithm generally requires 2 to 3 iterations. The red inverted triangles in Fig. 19 represent 2 iterations, and the blue circles represent 3 iterations. The origin of the polar coordinate system is the mechanical spin center \(O_{a}\). The center of each inverted triangle or circle is equivalent to the optical axis position of the aspherical optical element. Therefore, the distance between a scatter point and the origin represents the magnitude of the eccentricity error. The positions of the scatter points in Fig. 19 vividly reflect the relative position of the optical axis and the mechanical spin center of the component after correction. The corrected eccentricity errors of all 10 experiments are within \(10\,\mu m\), which proves that our correction method is repeatable. ## V Conclusion From the experimental results, it can be concluded that ARAA has excellent robustness, and all 420 repetitive ROI extraction experiments were successful. The SMD2 value of the defocused image enhanced by AEA is improved by three orders of magnitude compared with the original one. It takes 0.2732 s and 0.0951 s respectively to perform GFA and MDC-Net on a \(200\times 200\) defocused ROI image, and the SMD2 values are 0.9675 and 0.7217 respectively. The average times to perform GFA and MDC-Net on ten \(200\times 200\) ROI images with different degrees of blur are 0.2721 s and 0.0963 s respectively, which proves that GFA and MDC-Net are robust in time consumption for ROI images with different degrees of blur. 
After three iterations, the eccentricity error between the optical axis of the aspherical optical element and the spin axis of the mechanical device can be automatically corrected to within \(10\,\mu m\). This paper solves the scientific research problem of "how to correct the eccentricity error quickly and precisely" in large-aperture aspheric optical element surface defect detection. Aiming at the enhancement of defocused images, this article proposes AEA, which consists of GFA and MDC-Net. GFA is applied to enhance the defocused grayscale image based on the analogy between the defocus blur generation model and the dark channel dehazing model. To compensate for the limited time efficiency of GFA, MDC-Net is designed based on the mathematical principle of the dark channel dehazing model and ingeniously takes advantage of GFA to build the training data set. The machine vision method proposed in this article can adaptively, quickly and precisely realize the eccentricity error correction of aspherical optical elements, which ensures the accuracy of sub-aperture stitching of large-aperture aspherical optical elements and provides a more reliable foundation for aspherical surface defect detection. ## Acknowledgment The authors would like to thank the Associate Editor and the Reviewers for their constructive comments.
2310.02260
TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation
Scene understanding plays an essential role in enabling autonomous driving and maintaining high standards of performance and safety. To address this task, cameras and laser scanners (LiDARs) have been the most commonly used sensors, with radars being less popular. Despite that, radars remain low-cost, information-dense, and fast-sensing techniques that are resistant to adverse weather conditions. While multiple works have been previously presented for radar-based scene semantic segmentation, the nature of the radar data still poses a challenge due to the inherent noise and sparsity, as well as the disproportionate foreground and background. In this work, we propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data through a novel architecture and loss functions that are tailored to tackle the drawbacks of radar perception. Our novel architecture includes an efficient attention block that adaptively captures important feature information. Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets while having smaller model sizes. https://github.com/YahiDar/TransRadar
Yahia Dalbah, Jean Lahoud, Hisham Cholakkal
2023-10-03T17:59:05Z
http://arxiv.org/abs/2310.02260v1
# TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation ###### Abstract Scene understanding plays an essential role in enabling autonomous driving and maintaining high standards of performance and safety. To address this task, cameras and laser scanners (LiDARs) have been the most commonly used sensors, with radars being less popular. Despite that, radars remain low-cost, information-dense, and fast-sensing techniques that are resistant to adverse weather conditions. While multiple works have been previously presented for radar-based scene semantic segmentation, the nature of the radar data still poses a challenge due to the inherent noise and sparsity, as well as the disproportionate foreground and background. In this work, we propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data through a novel architecture and loss functions that are tailored to tackle the drawbacks of radar perception. Our novel architecture includes an efficient attention block that adaptively captures important feature information. Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA [26] and RADIai [28] datasets while having smaller model sizes. [https://github.com/YahilDar/TransRadar](https://github.com/YahilDar/TransRadar) ## 1 Introduction Automotive systems rely on radar sensing for most of the tasks that require deterministic distance measurements, such as collision avoidance, blind spot detection, and adaptive cruise control. The prevalence of radar sensors in these tasks has been attributed to their relatively low cost, low processing time, and ability to measure the velocity of objects. On the other hand, LiDAR sensors have risen in popularity as the main automotive perception tool for autonomous driving due to their relatively higher resolution and ability to generate detailed point-cloud data. This popularity is noticeable in recent literature, where LiDAR sensors are dominantly used in object detection and semantic segmentation tasks. However, LiDAR sensors suffer from few drawbacks originating from the shorter wavelength of their signals. LiDAR sensors are highly prone to errors, weather fluctuations, and occlusion with raindrops and/or dust [7]. Moreover, LiDAR signals' higher frequencies result in a rapid attenuation of their strength with respect to distance traveled, which results in a maximum range of operation of 100 to 200m. Unlike LiDARs, frequency-modulated continuous wave radars operate in the millimeter wave band in which signals do not get significantly attenuated when faced with occlusions, allowing operation ranges of up to 3,000m. Radars function in adverse weather conditions more robustly than other commonly used sensing methods like cameras and LiDARs. Radar signals are also rich in information as they contain Doppler information that includes the velocity of the objects. These radar features have motivated its usage not only in deterministic instrumentation but also for computer vision tasks [33, 39]. The radar signals can be processed to be used in an image-like pipeline in the form of Range-Angle (RA), Range-Doppler (RD), and Angle-Doppler (AD) maps. These maps are sliced views of the total 3D Range-Angle-Doppler (RAD) cube, and obtaining any two combinations allows for the calculation of the third. 
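For readers unfamiliar with these representations, the following minimal NumPy sketch illustrates how the three 2D views relate to the RAD cube; the aggregation over the collapsed axis (a sum here) is an illustrative assumption and not the processing pipeline of any particular dataset.

```python
import numpy as np

# Schematic sketch: obtaining the three 2D views from a Range-Angle-Doppler
# (RAD) power cube by aggregating over the remaining axis. The sum is an
# illustrative choice, not a specific dataset's processing pipeline.
rad = np.random.rand(256, 256, 64)   # toy cube: (range, angle, doppler)

ra = rad.sum(axis=2)                 # Range-Angle view,   shape (256, 256)
rd = rad.sum(axis=1)                 # Range-Doppler view, shape (256, 64)
ad = rad.sum(axis=0)                 # Angle-Doppler view, shape (256, 64)
```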
The task of semantic segmentation using raw/processed radar data has been a growing task in the radar perception community and has shown promising development in recent years [8, 14, 22, 23, 26, 27, 30, 33, 39]. Nonetheless, segmenting radar images still poses a challenge due to the noisy and sparse nature of the data, as well as the high imbal Figure 1: mIoU scores vs No. of Parameters (millions) of state-of-the-art models in semantic segmentation on the CARRADA dataset. Our method, TransRadar, outperforms previous state-of-the-art methods in the semantic segmentation task with an mIoU of 63.9% for RD maps and 47.5% for RA maps. ance between the foreground and background. Also, despite the information-rich nature of radar data and the ability to obtain multiple views from a single sensing instance, most works do not utilize these benefits and tend to limit their approaches to Convolutional Neural Network (CNN) models on a single view, resulting in models that do not adequately capture global information from these maps. To circumvent that, we propose a novel attention-based approach for semantic segmentation using radar data signals in radar learning. Our technique extends the definition of attention models to apply attention to adaptively sampled variations of our input feature maps, tackling the sparse nature of radar data. The adaptability nature of our attention block allows it to attend to multiple views of the Range-Angle-Doppler (RAD) cube in an efficient way. We also combine our model with a loss function tailored to sparse and highly imbalanced data of our task. We propose a combination of class-agnostic, multi-class, and multi-view consistency losses. **Contribution:** In this work, we propose an automotive radar sensing method that outperforms previous state-of-the-art works and sets new top scores in the reported metrics (Figure 1). Our main contributions are: * We introduce a novel adaptive-directional attention block that efficiently captures information from a sparse receptive field and simultaneously tackles the multi-input multi-output nature of our task. * We propose a novel loss function for the radar semantic segmentation task tailored to address the inherent main drawbacks of radar data. These drawbacks include the noisy and sparse nature of radar signals and the disproportional level of background/foreground objects. * Our proposed approach results in state-of-the-art performance in radar semantic segmentation of two recent datasets for radar perception, CARRADA [26] and RADIal [28], and achieves state-of-the-art results in the object detection task of the RADIal dataset. ## 2 Related Work Low-cost frequency modulated continuous wave radars have been historically used in multiple applications involving machine learning and pattern recognition such as human activity and hand gesture recognition [43, 44, 41]. In the context of automotive driving and autonomous vehicles, LiDAR sensors are more popular with a common data output in the form of a point cloud. While multiple works have explored point-cloud fusion of radars and LiDARs [1, 7], radar signals processing usually yields different physical representation than the LiDAR. The low resolution and high sparsity of radar data make the point-cloud format and associated architectures unsuitable. While some datasets provide point-cloud radar data [2, 30], recent approaches to radar processing use the full/split processed RAD tensors in the shape of 3D/2D image-like data. 
Common radar datasets provide either a single view of the data (either RA or RD) [28, 33], the original raw and unprocessed radar signals [28], or the full RAD tensors [25, 39]. RAD tensors provide cohesive information of the radar data; however, it is often undesirable to use 3D data due to the increased complexity of models when associated with the density of radar data, especially when taking multiple frames from the temporal domain. In this work, we focus our efforts on getting an automated radar perception model through sliced radar RAD tensors and comparing our method to similar works. With the recent emergence of radar datasets [26, 28], few methods have been proposed for semantic segmentation and object detection. While common methods for image semantic segmentation can be employed, such as UNet [29]and DeepLabv3+ [4], these methods are not tailored to the noisy and sparse nature of radar images. We highlight the most recent and relevant works that process radar data. TMVA-Net [25] is a multi-view method that is composed of an encoding block, a latent-space processing, and a decoding block. It fully consists of convolutional layers and presents a strong baseline for predictions in RD and RA maps on the CARRADA dataset. RAMP-CNN [9] is a CNN-based model that was mainly designed for processing 3D RAD tensors but was re-purposed for this dataset. T-RODNet [14] is a recent model utilizing Swin Transformers [20] but does not produce RD predictions and operates only on RA inputs. While T-RODNet shows improved RA scores, we focus on simultaneous prediction of the RD and RA semantic segmentation maps. PeakConv [42] applies the convolution operation with a receptive field consisting of the peaks of the signal. While this approach achieves improved segmentation performance compared to TMVA-Net, it also increases the number of parameters. Sparse variants of attention have been proposed in the literature. ReLA [35] replaces the softmax activation with ReLu to achieve sparsity in attention and uses layer normalization to improve translation tasks. The sparsity can range from switching off attention to applying attention to all the input. On the other hand, our method learns the offsets to which the attention is applied and targets consistent efficiency for the radar segmentation task. Other sparse attention methods, such as NPA [36] and SCAN [40] address point clouds, which are sparse in nature. Our method aims at learning to select important locations in the radar map dense grid. ## 3 Baseline TMVA-Net starts by encoding the RA, RD, and AD input maps to reduce the input size to one-fourth of its original resolution. Each output is then passed into an Atrous Spatial Pyramid Pooling (ASPP) block [3], and is also concatenated into a single feature maps holder. Both the ASPP output and the concatenation are then passed into a two-branches (RA and RD) decoding space that produces prediction maps. TMVA-Net uses a combination of three loss functions: a weighted Cross-Entropy loss, where the weights correspond to the frequency of classes in the dataset, a weighted Soft Dice loss, and a coherence loss. The coherence loss is a mean-square error of the RD and RA outputs to ensure coherence of predictions from different views. ### Limitations The mentioned models yield state-of-the-art results in radar semantic segmentation on the CARRADA dataset. Nonetheless, these models have limitations pertaining to the nature of the implementation and the task. 
First, the models are limited to convolution layers that learn local spatial information of the multi-input data. While increasing the number of feature maps at every layer would slightly improve the accuracy of these models, it imposes a large computation burden. This impedes the model from further improving without increasing the number of parameters, with the majority of parameters being employed in the convolutional layers. The second limitation is the ability of these models to learn and retain information from other maps. T-RODNet processes RA maps only, while TMVA-Net concatenates all feature maps in the bottleneck along with the ASPP outputs. For the rest of the model, all combined feature maps are treated as a single set of feature maps coming from one source that gets split into two prediction heads. Another important aspect to be considered in these methods is the number of parameters. TMVA-Net produces multi-view results with \(50\times\) fewer parameters than T-RODNet. Lastly, all reported models were trained using a combination of losses that is not optimally designed for the task of radar semantic segmentation. Therefore, we propose an alternative approach in Section 4.4. ## 4 The Proposed Method ### Motivation Our proposed method is designed to address the limitations we observed in the state-of-the-art models discussed previously. We aim to create a compact model that improves upon previous methods by addressing the issues observed in model learning through a proposed novel architecture and loss functions. Our method overcomes the hurdle of introducing attention in deep learning models by minimizing the number of tokens to keep the model fast and small. We also take into consideration the sparse nature of the radar data while implementing our method. We propose a loss function tailored specifically for the task of radar learning by taking the acquisition structure into consideration. We extend our approach to addressing the issue of class imbalance in a more refined way compared to weighted cross-entropy, and we tackle the poor localization ability of the proposed models in our loss functions. Lastly, we propose a new multi-view range matching loss that addresses the drawbacks of fused multi-view inputs. ### Overall Architecture We propose a lightweight attention-based neural network architecture, shown in Figure 2, which addresses the limitations of the previous works. The model starts by using a similar encoding module as the one used in TMVA-Net [25], with \(x_{i}\in\mathbb{R}^{1\times T\times H\times W}\), where \(x_{i}\) is an RA, RD, or AD feature map, \(T\) is the number of past frames taken from the range \([t_{0}-T,t_{0}]\), and \(H\) and \(W\) are the height and width of the radar frequency map, respectively. Figure 2: Overview of our proposed method for radar semantic segmentation. The model starts by encoding multiple frames of the Angle-Doppler (AD), Range-Doppler (RD), and Range-Angle (RA) maps. The encoded features are concatenated into a single block of feature maps that is then passed into our adaptive-directional attention block. The adaptive-directional attention blocks sample rows and columns following Eqs. 1 & 3 and apply self attention following Eq. 2 after each sampling instance. The outputs are then split into two decoders generating RD and RA masks that are passed into our three loss functions described in Section 4.4. 
The feature maps generated from the encoders are expressed as \(x_{en}\in\mathbb{R}^{C\times H_{d}\times W_{d}}\), where \(x_{en}\) is an encoded feature map, \(C\) is the number of feature maps, and \(H_{d}\) and \(W_{d}\) are the downsampled heights and widths, respectively. The produced feature maps are then channel-wise concatenated into a single latent space that constitutes the input to our adaptive-directional attention block. In convolution-based competing methods, we noticed that reducing the feature maps below \(128\) channels in the latent bottleneck greatly reduces the mIoU, so we adopt an attention-based approach that achieves similar scores with smaller feature maps. Contrary to other attention-based approaches in radar perception [14], we do not need to use convolutional layers or heavy positional embeddings. Instead, we shed light on the way the dataset is constructed, where the multi-view input has implicit information that can be shared across axes and channels. Figure 2 illustrates the operation mechanism of our adaptive-directional attention block after the concatenation of the inputs' encoding. ### Adaptive-Directional Attention In our model architecture, we propose a novel adaptive-directional attention block that constitutes the backbone of our model. Similar concepts of sampling straight-vector axes were previously proposed in the literature [11, 12, 32]. However, our adaptive-directional attention tackles the sparse nature of radar data by utilizing attention that can extend further than single-column/row attention. In this way, it ensures a comprehensive outlook of the information space while being computationally efficient. For a 2D input image of shape \(C\times H_{d}\times W_{d}\), we obtain two attention variations, one of the shape \(H_{d}\times W_{d}\times C\) and another of the shape \(W_{d}\times H_{d}\times C\). For example, for a width \(W_{d}\), we have \(W_{d}\) sampled vectors of size \(H_{d}\times C\). The rationale behind incorporating the channels in our sampling traces back to the rich information provided by the radar data's feature maps. We sample our axes by employing vertical and horizontal iteration limits of sizes \(k_{h}\) and \(k_{w}\), respectively. We also define the horizontal and vertical shifts, \(\Delta h\) and \(\Delta w\), that constitute the offset limits of sampling. Lastly, we define learnable parameters \(\theta_{h}\) and \(\theta_{w}\) that perform a modulating operation to limit the effect of noise seen in data, allowing the model to learn to suppress insignificant regions. Using these definitions, we then write the sampling operation that occurs before the attention on the columns as: \[x_{i,j}=\sum_{k=1}^{w}{(\theta_{h})_{k}\cdot X_{H,C}^{(i,j+\Delta h_{k})}} \tag{1}\] where \(x_{i,j}\) is the value of the column with indices \(i,j\) belonging to the axes as \(i\in[0,H]\) and \(j\in[0,C]\). Parameter \(w\) refers to the horizontal iterations limit (i.e. how many pixels we iterate over), belonging to the previously defined parameter \(k_{w}\). \((\theta_{h})_{w}\) is the corresponding modulation weight for the associated shift, and \(\Delta h_{w}\) covers how far we sample from the axis center (i.e. the starting column). After the sampling operation, we obtain \(W_{d}\) vectors of size \(H_{d}\times C\). 
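As an illustration of the column-sampling step in Eq. (1), the following is a minimal PyTorch sketch; the integer offsets, border clamping, and tensor layout are assumptions made for readability rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ColumnSampler(nn.Module):
    """Sketch of the column sampling in Eq. (1): each output column aggregates
    k_w horizontally shifted copies of the input, weighted by the learnable
    modulation parameters theta_h. Offsets and clamping are assumptions."""
    def __init__(self, k_w: int, delta_h: int):
        super().__init__()
        self.offsets = torch.linspace(-delta_h, delta_h, k_w).round().long()
        self.theta_h = nn.Parameter(torch.ones(k_w) / k_w)  # modulation weights

    def forward(self, x):                        # x: (C, H_d, W_d)
        C, H, W = x.shape
        out = torch.zeros_like(x)
        for k, off in enumerate(self.offsets):
            cols = torch.clamp(torch.arange(W) + off, 0, W - 1)
            out = out + self.theta_h[k] * x[:, :, cols]
        # W_d column tokens of size (H_d x C), ready for multi-head attention
        return out.permute(2, 1, 0)              # (W_d, H_d, C)
```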
The query, key, and values (**q**, **k**, **v**) are then obtained through multi-layer perceptron layers, where the multi-headed self-attention (MSA) is then calculated as: \[SA(q,k,v)=Softmax(\frac{qk^{T}}{\sqrt{d_{k}}})v \tag{2}\] \[MSA=[SA_{1};SA_{2};...;SA_{s}]\] for \(s\) heads obtained from the input, following the formulation in vision transformers [6]. We note that we first sample by columns (i.e. produce \(W_{d}\) vectors of size \(H_{d}\times C\)) and apply MSA, then sample by rows (i.e. produce \(H_{d}\) vectors of size \(W_{d}\times C\)) and apply the second MSA. The formulation for the MSA applied to the rows is similar to that of the columns, with the following row sampling: \[x_{i,j}=\sum_{k=1}^{h}{(\theta_{w})_{k}\cdot X_{W,C}^{(i+\Delta w_{k},j)}} \tag{3}\] Unlike convolution-based transformers or other types of attention modules, the nature of our adaptive-directional attention allows us to alleviate the need for convolutional channel mixing or expansions. The adaptive sampling reduces the model complexity significantly by incorporating a convolution-like operation before applying attention. ### Proposed Loss Function Model learning in both semantic segmentation and object detection can prove difficult due to the large ratio of background to foreground pixels. This disparity was historically studied in multiple works that addressed the issue either through employing multi-stage detectors [18, 31] in object detection, or targeting the way models learn through innovative loss functions that handle class imbalance in semantic segmentation [19, 38]. Radar-based datasets have a larger proportion of background pixels when compared to actual objects (foreground). This discrepancy is notably present in the datasets we operate on, where the background class consists of more than 99% of the total dataset pixels [26, 28]. In addition to the class imbalance between background and foreground pixels, the annotated objects are relatively small in pixel size. Lastly, RD, RA, and AD maps' noisy nature is a learning hurdle for the models. To tackle these issues, we propose an Object Centric-Focal loss (OC) and a Class-Agnostic Object Localization Loss (CL). We add both of them in a single term, the Class-Agnostic Object Loss (CA), and propose a new multi-view range matching loss (MV) that suits our multi-output architecture. #### 4.4.1 Class-Agnostic Object loss **Object Centric-Focal Loss**: The main highlight of this loss is the weighing of the binary cross-entropy between the background and foreground of the predictions, with higher weight being given to the foreground. This is defined as: \[\mathcal{L}_{OC}=(1-y_{pred})(\delta\mathcal{L}_{BCE_{FG}}+(1-\delta)\mathcal{ L}_{BCE_{BG}}) \tag{4}\] where \(\delta\) is a weighing factor (set to 0.6) and \(\mathcal{L}_{BCE}\) is the binary cross entropy, calculated with the two classes 'background' and 'foreground'. While our semantic segmentation objective includes multi-class labels, we aim to use this loss to penalize the model on hard background prediction, keeping it only to a binary background/foreground calculation. While other loss functions [19] propose a power factor on the \((1-y_{pred})\) term, we instead remove it and use one-hot prediction masks. Both operations come in favor of having a balanced approach between ground truth probabilities and loss value, and heavily penalizing misclassification between the background and foreground. 
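A minimal PyTorch sketch of the object-centric weighting in Eq. (4) is given below; the thresholding used to form the one-hot prediction mask and the final mean reduction are assumptions about conventions the text leaves implicit.

```python
import torch
import torch.nn.functional as F

def object_centric_loss(probs, target_fg, delta=0.6):
    """Sketch of Eq. (4). probs: predicted foreground probabilities in [0, 1],
    shape (B, H, W); target_fg: binary ground-truth foreground mask of the
    same shape. The hard (one-hot) prediction mask and the mean reduction
    are illustrative assumptions."""
    y_pred = (probs > 0.5).float()                 # one-hot prediction mask
    bce = F.binary_cross_entropy(probs, target_fg, reduction="none")
    fg, bg = target_fg, 1.0 - target_fg            # foreground / background pixels
    loss = (1.0 - y_pred) * (delta * bce * fg + (1.0 - delta) * bce * bg)
    return loss.mean()
```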
**Class-Agnostic Object Localization Loss**: To illustrate the rationale of proposing this localization loss, we show RA and RD input maps with their output predictions, along with the corresponding RGB image in Figure 3. Any other object signature seen in the RA input image can be attributed to speckle noise, Doppler-induced noise, or any other sort of undesired noise that is unaccounted for. Due to this noisy nature of radar data, producing a significantly larger amount of false positives was a noticeable pattern across tested models. We also noticed similar behavior in the opposite way, where the model learns the noise as part of the background and confuses objects with similar signatures as the noise for being part of the background, resulting in many false negatives. Therefore, we propose an intersection-based loss that penalizes the model on false background/foreground predictions. This builds on the previous object-centric loss by creating an IoU-based loss that penalizes mislocalization of objects, defined as: \[\mathcal{L}_{CL}=1-\frac{TP}{TP+FN+FP}, \tag{5}\] where \(TP\) refers to true positives, \(FN\) to false negatives, and \(FP\) to false positives. Similar to \(\mathcal{L}_{OC}\), we extend our implementation to focus on the one-hot predictions instead of the probability maps, which imposes a larger penalty for making a false background prediction. Adding \(\mathcal{L}_{OC}\) and \(\mathcal{L}_{CL}\) terms yields our class-agnostic object loss: \(\mathcal{L}_{CA}=\mathcal{L}_{OC}+\mathcal{L}_{CL}\). #### 4.4.2 Multi-Class Segmentation Loss To include the multi-class nature of our dataset and localization of different class predictions, we use a similar Soft Dice loss (SD) term to the one used in [25], described as: \[\mathcal{L}_{SD}=\frac{1}{K}\sum_{k=1}^{K}[1-\frac{2\sum\textbf{y}\textbf{p}}{ \sum\textbf{y}^{2}+\textbf{p}^{2}}] \tag{6}\] where **y** and **p** refer to the ground truth and probability map output of the model. Unlike the previous terms, we do not use a one-hot binary map prediction and instead use the original continuous probability map. We also do not limit \(\mathcal{L}_{SD}\) to background/foreground classes since we use it for multi-class predictions. #### 4.4.3 Range Consistency Loss In addition to the class-agnostic object loss and multi-class segmentation loss, we define a Multi-View range matching loss (MV) as: \[\mathcal{L}_{MV}=\begin{cases}\frac{1}{2}(RD_{m}-RA_{m})^{2}&|RD_{m}-RA_{m}|< 1\\ |RD_{m}-RA_{m}|-\frac{1}{2}&otherwise\end{cases} \tag{7}\] where \(RD_{m}\) and \(RA_{m}\) are the max-pooled RA and RD probability maps, leaving only the \(R\) direction. The analytical term of this loss is a special case of the Huber loss [13] and was proven to be more robust than mean-square error when dealing with outliers. **Overall Loss:** Our total loss is then defined as the weighted Figure 3: Radar (b) RA, (c) RD, and (d) AD maps with (a) synchronized RGB image. Red and blue annotation boxes correspond to the person and car, respectively, shown in the RGB image. We highlight a sample random noise appearing on the RA map with a yellow box. (e) shows the ground truth mask for the RA and RD maps (left to right) of this scene, and (f) shows a false segmentation with the noise seen as an object. The noise shown in the RA map does not appear as frequently in RD maps. The contrast of the maps was edited for illustration purposes. 
sum of all proposed losses with weights \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) as: \[\mathcal{L}_{total}=\alpha_{1}\mathcal{L}_{CA}+\alpha_{2}\mathcal{L}_{SD}+\alpha _{3}\mathcal{L}_{MV} \tag{8}\] ## 5 Experiments ### Datasets To test the effectiveness of our proposed approach, we use the CARRADA [26] dataset as the main multi-class radar semantic segmentation dataset. We also test our proposed method on RADIal [28] dataset and compare to previous state-of-the-art methods in radar semantic segmentation and object detection. **CARRADA:** The CARRADA [26] dataset consists of synchronized camera-radar recordings of various driving scenarios containing 12,666 frames. The annotations of the data were done semi-automatically and provided for the RD and RA views [26]. The dataset contains four object categories: pedestrian, cyclist, car, and background. The input are the RA, RD, and AD maps decomposed from the 3D RAD tensor. RA maps have a size of \(1\times 256\times 256\) while RD and AD have a different resolution of \(1\times 256\times 64\). We use the 2D decomposition of the RAD tensor to reduce the model complexity, which is an important factor in radar perception in automotive driving. **RADIal:** The RADIal [28] dataset is a new high-resolution dataset consisting of 8,252 labeled frames. RADIal varies from CARRADA in that it does not provide a multi-view input and depends only on RD input. The outputs are also produced and compared to projected annotated RGB images, unlike the CARRADA dataset that compares annotation directly in the RD/RA planes. RADIal also provides a high-definition input, where the input size is \(32\times 512\times 256\). RADIal provides annotations for two classes only: free-driving-space and vehicle annotations (i.e. free or occupied). ### Evaluation Metrics We follow the same evaluation metrics used in previous works, which are the common intersection over union (IoU), the Dice score (F1 score), and the mean of each across the classes. The mIoU is also used to evaluate the semantic segmentation task on the RADIal dataset. The combination of the mIoU and the Dice score creates a fair and comprehensive assessment of the results. For the object detection task in RADIal, we use the same metrics as [28] with Average Precision (AP), Average Recall (AR), and regression errors. ### Implementation Details We implement and train TransRadar using the PyTorch library on a single NVIDIA A100 GPU. All reported models on the CARRADA dataset were trained with a batch size of 6 and using 5 past frames. We use Adam optimizer [17], initial learning rate of \(1\times 10^{-4}\), and an exponential scheduler (step = 10). For our final TransRadar model, we use \(8\times\) cascaded blocks of our adaptive-directional attention block. For the testing, we use a batch size of 1 and a similar number of past frames. For the RADIal dataset training, we replace FFTRadNet [28] backbone with our proposed model. We employ a single-view encoding/decoding paradigm similar to the one shown in Figure 2. We use the same segmentation and detection heads from the FFTRadNet model, and the same optimizer and scheduling as CARRADA dataset training. ### State-of-the-art Comparisons **Semantic Segmentation on the CARRADA:** Table 1 shows the quantitative comparisons of the proposed approach with existing state-of-the-art frameworks for radar semantic segmentation. The results listed in the table show that TransRadar outperforms state-of-the-art methods in both the mIoU and mDice metrics. 
A large part of this is attributed to the introduction of the CA loss, which will be discussed in detail in the ablation studies in Section 5.5. Our model achieves new state-of-the-art performance with an RD mIoU score of 63.9%, which outperforms the closest baseline by 3.2%, and has a mDice score of 75.6%. For the RA map predictions, our method yields a mIoU of 47.5%, outperforming the state-of-the-art score by 4.0%, with a mDice of 59.3%. We also point out that our model significantly outperforms other models in the Cyclist class, where we note a large gap of 12.0% between our model and the second-best model in the RA map, and 13.1% in the RD map. This can be attributed to the consistency with RD as well as the ability to predict harder examples. Across the board, our model sets new state-of-the-art scores except for the car class IoU and Dice in the RA maps, where T-RODNet has a slightly higher score. Figure 4 shows two qualitative results on a hard scene and a normal scene from the test split of CARRADA. The first scene shows a good segmentation with instances of mislocalization in all tested methods, with TransRadar and UNet giving the best prediction results. We then present a well-segmented RD and RA predictions in the second scene relative to the mask from our method when compared to other models. We also notice a coherent translation of the RD to RA views in the range dimension in both scenes. **Semantic Segmentation on RADIal:** We further look at the semantic segmentation results of the RADIal dataset shown in Table 2. Our method outperforms all previously reported models in the semantic segmentation task with a mIoU of 81.1% and less than half the model size of the most recently reported state-of-the-art method, C-M DNN [15]. These results showcase the ability of our proposed method, which is tailored to radar data, to tackle various datasets. **Object detection on RADIal:** Object detection results on the RADIal dataset are shown in Table 3. Our method outperforms all previously reported models in this task as well with significantly higher AR and lower angular prediction error. Despite our method not being designed for the task of object detection, the model still sets a new record for this task. All taken into account, our model sets a new standard for state-of-the-art predictions in these two datasets. ### Discussion & Ablation Study **Different Backbone Architectures:** To evaluate the effect of using our loss function, we compare different \begin{table} \begin{tabular}{l l c c c c c c c c c c} \hline \hline \multirow{2}{*}{View} & \multirow{2}{*}{Method} & \multirow{2}{*}{Params (M)} & \multicolumn{6}{c}{IoU (\%)} & \multicolumn{6}{c}{Dice (\%)} \\ \cline{3-13} & & & Bkg. & Ped. & Cycl. & Car & mIoU & Bkg. & Ped. & Cycl. 
& Car & mDice \\ \hline \multirow{6}{*}{RD} & FCN-8s [21] & 134.3 & 99.7 & 47.7 & 18.7 & 52.9 & 54.7 & 99.8 & 24.8 & 16.5 & 26.9 & 66.3 \\ & U-Net [29] & 17.3 & 99.7 & 51.1 & 33.4 & 37.7 & 55.4 & 99.8 & 67.5 & 50.0 & 54.7 & 68.0 \\ & DeepLabv3+ [4] & 59.3 & 99.7 & 43.2 & 11.2 & 49.2 & 50.8 & 99.9 & 60.3 & 20.2 & 66.0 & 61.6 \\ & RSS-Net [16] & 10.1 & 99.3 & 0.1 & 4.1 & 25.0 & 32.1 & 99.7 & 0.2 & 7.9 & 40.0 & 36.9 \\ & RAMP-CNN [9] & 106.4 & 99.7 & 48.8 & 23.2 & 54.7 & 56.6 & 99.9 & 65.6 & 37.7 & 70.8 & 68.5 \\ & MVNet [25] & 2.4 & 98.0 & 0.0 & 3.8 & 14.1 & 29.0 & 99.0 & 0.0 & 7.3 & 24.8 & 32.8 \\ & TMVA-Net [25] & 5.6 & 99.7 & 52.6 & 29.0 & 53.4 & 58.7 & 99.8 & 68.9 & 45.0 & 69.6 & 70.9 \\ & PeakConv [42] & 6.3 & - & - & - & - & 60.7 & - & - & - & 72.5 \\ & **TransRadar** & 4.8 & **99.9** & **57.7** & **36.1** & **61.9** & **63.9** & **99.9** & **73.2** & **53.1** & **76.5** & **75.6** \\ \hline \multirow{6}{*}{RA} & FCN-8s [21] & 134.3 & 99.8 & 14.8 & 0.0 & 23.3 & 34.5 & 99.9 & 25.8 & 0.0 & 37.8 & 40.9 \\ & U-Net [29] & 17.3 & 99.8 & 22.4 & 8.8 & 0.0 & 32.8 & 99.9 & 25.8 & 0.0 & 37.8 & 40.9 \\ & DeepLabv3+ [4] & 59.3 & 99.9 & 3.4 & 5.9 & 21.8 & 32.7 & 99.9 & 6.5 & 11.1 & 35.7 & 38.3 \\ & RSS-Net [16] & 10.1 & 99.5 & 7.3 & 5.6 & 15.8 & 32.1 & 99.8 & 13.7 & 10.5 & 27.4 & 37.8 \\ & RAMP-CNN [9] & 106.4 & 99.8 & 1.7 & 2.6 & 7.2 & 27.9 & 99.9 & 3.4 & 5.1 & 13.5 & 30.5 \\ & MVNet [25] & 2.4 & 98.8 & 0.1 & 1.1 & 6.2 & 26.8 & 99.0 & 0.0 & 7.3 & 24.8 & 28.5 \\ & TMVA-Net [25] & 5.6 & 99.8 & 26.0 & 8.6 & 30.7 & 41.3 & 99.9 & 41.3 & 15.9 & 47.0 & 51.0 \\ & T-RODNet [14] & 162.0 & 99.9 & 25.4 & 9.5 & **39.4** & 43.5 & 99.9 & 40.5 & 17.4 & **56.6** & 53.6 \\ & PeakConv [42] & 6.3 & - & - & - & - & 42.9 & - & - & - & - & 53.3 \\ \cline{2-11} & **TransRadar** & 4.8 & **99.9** & **30.3** & **21.5** & 38.2 & **47.5** & **99.9** & **46.6** & **35.3** & 55.3 & **59.3** \\ \hline \hline \end{tabular} \end{table} Table 1: Semantic segmentation performance on the test split of the CARRADA dataset, shown for the RD (Range-Doppler) and RA (Range-Angle) views. Columns from left to right are the view (RD/RA), the name of the model, the number of parameters in millions, the intersection-over-union (IoU) score of the four different classes with their mean, and the Dice score for the same classes. \begin{table} \begin{tabular}{l c c c} \hline \hline Backbone & \% AP \(\uparrow\) & \% AR \(\uparrow\) & R(m) \(\downarrow\) & A(\({}^{\circ}\))\(\downarrow\) \\ \hline Pixor [37] & 96.6 & 81.7 & 0.10 & 0.20 \\ FFTRadNet [28] & 96.8 & 82.2 & **0.11** & 0.17 \\ C-M DNN [15] & 96.9 & 83.5 & - & - \\ **TransRadar** & **97.3** & **98.4** & **0.11** & **0.10** \\ \hline \hline \end{tabular} \end{table} Table 3: Object detection results on the RADIal dataset. Our method yields an increase in the average recall and a significant decrease in the angle regression error. The best scores per column are in bold. ’-’ is an unreported value with no replicable results. Figure 4: Qualitative results on two test scenes from the CARRADA test split showing the RGB camera view with results of semantic segmentation from different methods. For every image, (top) depicts the RD and (bottom) depicts RA. (a) RD/RA inputs, (b) ground-truth, (c) TransRadar, (d) TMVA-Net [25], (e) MVNet [25], and (f) UNet [29]. All RD outputs were rotated for visual coherency. Different colors correspond to different classes. Blue: Car, Green: Cyclist, Red: Pedestrian. Black: background. other backbones using the same configuration on the CARRADA dataset. 
Tested backbones include available state-of-the-art methods and other transformer architectures such as ViT [6], UNETR [10], ConViT [34], and CSWin Transformer [5]. This allows us to evaluate both the loss function with other state-of-the-art models and our adaptive-directional attention with other attention-based techniques. Table 4 lists the quantitative comparison between them. Other than TMVA-Net, the models were implemented with the same encoding and decoding as our adaptive-directional attention block. We notice that our loss improves TMVA-Net's performance significantly in both the RD and RA mIoU scores. TransRadar still outperforms all other attention models, which shows that the sparse nature of the adaptive-directional attention yields the best results in radar perception. To evaluate the effect of the adaptive sampling, we implement our model by applying attention to unshifted and unmodulated axes. Adding adaptive-directional sampling yields an increase of \(1.40\%\) in the RD mIoU and a \(4.04\%\) increase in the RA mIoU, while using fewer parameters than previous state-of-the-art methods.

**Ablation for the adaptive-directional attention:** We also perform ablation experiments on the adaptive-directional attention head. We show the semantic segmentation performance on the test split of the CARRADA dataset in Table 5. Notably, the attention contributes to the gains in RD map performance, while the directional sampling contributes to the RA mIoU.

**Evaluation of Loss Functions:** We further test the effect of the loss functions on the learning of our method, where we test our model under different combinations of the functions defined in Section 4.4. Removing \(\mathcal{L}_{SD}\) yields poor prediction scores, which showcases its necessity in this task. Using our model without RA-RD coherence yields a poor RA score, while using a coherence loss boosts the RA score by at least 3.5%. We also report the effects of \(\mathcal{L}_{OC}\) and \(\mathcal{L}_{CL}\), separately and combined (\(\mathcal{L}_{CA}\)). Removing \(\mathcal{L}_{OC}\) from the \(\mathcal{L}_{CA}\) term reduces the RD score heavily, while removing \(\mathcal{L}_{CL}\) from \(\mathcal{L}_{CA}\) reduces the RA score. Localization is a harder task in RA maps than in RD maps due to their larger resolution, which results in a more pronounced effect from \(\mathcal{L}_{CL}\). Lastly, we compare the effect of introducing our \(\mathcal{L}_{MV}\) loss instead of the baseline coherence loss. Following our discussion in Section 4.4, \(\mathcal{L}_{MV}\) remedies the problem of RA reducing RD's accuracy, where we notice an increase in the accuracy of RA without compromising the RD scores.

## 6 Conclusion

We introduce a novel attention-based architecture for the task of semantic segmentation on radar frequency images, named TransRadar. Our method uses an adaptive-directional attention block and a novel loss function tailored to the needs of radar perception. Our model achieves state-of-the-art performance on two radar semantic segmentation datasets, CARRADA [26] and RADIal [28], using a smaller model size. Our proposed method also achieves improved performance for the task of object detection in radar images. Directions for future work include implementing approaches that fuse radar input with RGB images to produce more robust predictions. The ability to fuse both data sources is promising in creating a new standard for automotive driving.
2308.07152
IQP Sampling and Verifiable Quantum Advantage: Stabilizer Scheme and Classical Security
Sampling problems demonstrating beyond classical computing power with noisy intermediate-scale quantum (NISQ) devices have been experimentally realized. In those realizations, however, our trust that the quantum devices faithfully solve the claimed sampling problems is usually limited to simulations of smaller-scale instances and is, therefore, indirect. The problem of verifiable quantum advantage aims to resolve this critical issue and provides us with greater confidence in a claimed advantage. Instantaneous quantum polynomial-time (IQP) sampling has been proposed to achieve beyond classical capabilities with a verifiable scheme based on quadratic-residue codes (QRC). Unfortunately, this verification scheme was recently broken by an attack proposed by Kahanamoku-Meyer. In this work, we revive IQP-based verifiable quantum advantage by making two major contributions. Firstly, we introduce a family of IQP sampling protocols called the \emph{stabilizer scheme}, which builds on results linking IQP circuits, the stabilizer formalism, coding theory, and an efficient characterization of IQP circuit correlation functions. This construction extends the scope of existing IQP-based schemes while maintaining their simplicity and verifiability. Secondly, we introduce the \emph{Hidden Structured Code} (HSC) problem as a well-defined mathematical challenge that underlies the stabilizer scheme. To assess classical security, we explore a class of attacks based on secret extraction, including the Kahanamoku-Meyer's attack as a special case. We provide evidence of the security of the stabilizer scheme, assuming the hardness of the HSC problem. We also point out that the vulnerability observed in the original QRC scheme is primarily attributed to inappropriate parameter choices, which can be naturally rectified with proper parameter settings.
Michael J. Bremner, Bin Cheng, Zhengfeng Ji
2023-08-14T14:03:33Z
http://arxiv.org/abs/2308.07152v1
# IQP Sampling and Verifiable Quantum Advantage: ###### Abstract Sampling problems demonstrating beyond classical computing power with noisy intermediate scale quantum (NISQ) devices have been experimentally realized. In those realizations, however, our trust that the quantum devices faithfully solve the claimed sampling problems is usually limited to simulations of smaller-scale instances and is, therefore, indirect. The problem of verifiable quantum advantage aims to resolve this critical issue and provides us with greater confidence in a claimed advantage. Instantaneous quantum polynomial-time (IQP) sampling has been proposed to achieve beyond classical capabilities with a verifiable scheme based on quadratic-residue codes (QRC). Unfortunately, this verification scheme was recently broken by an attack proposed by Kahanamoku-Meyer. In this work, we revive IQP-based verifiable quantum advantage by making two major contributions. Firstly, we introduce a family of IQP sampling protocols called the _stabilizer scheme_, which builds on results linking IQP circuits, the stabilizer formalism, coding theory, and an efficient characterization of IQP circuit correlation functions. This construction extends the scope of existing IQP-based schemes while maintaining their simplicity and verifiability. Secondly, we introduce the _Hidden Structured Code_ (HSC) problem as a well-defined mathematical challenge that underlies the stabilizer scheme. To assess classical security, we explore a class of attacks based on secret extraction, including the Kahanamoku-Meyer's attack as a special case. We provide evidence of the security of the stabilizer scheme, assuming the hardness of the HSC problem. We also point out that the vulnerability observed in the original QRC scheme is primarily attributed to inappropriate parameter choices, which can be naturally rectified with proper parameter settings. ## 1 Introduction Quantum computing represents a fundamental paradigm change in the theory of computation, and promises to achieve quantum speedup in many problems, such as integer factorization [1] and database search [2]. However, many quantum algorithms are designed to be implemented in the fault-tolerant regime, which are too challenging for our current noisy intermediate-scale quantum (NISQ) era [3]. Experimentally, we can perform random-circuit sampling [4; 5; 6; 7] and boson sampling [8; 9] at a scale that is arguably beyond the capability of classical simulation. But when it comes to verifiability, although these experiments can use some benchmarking techniques such as cross-entropy benchmarking [5] to certify the quantum devices, they cannot be efficiently verified in an adversarial setting without modification of the underlying computational task. Classical verification of quantum computation is a long-standing question, which was first asked by Gottesman [10]. In the context of verifying arbitrary quantum computation, there have been a plethora of important results [11; 12; 13; 14; 15; 16; 17; 18; 19]. The more relevant context of this work is generating a test of quantumness. The goal is to create a computational task that is beyond the capabilities of classical computing, that uses minimal quantum and classical computing to generate and verify. A motivating example is given by Shor's algorithm for integer factorization [1], which is appealing in that hard instances can be easily generated and verified classically yet finding the solution is beyond the capabilities of classical computers. 
However, this also has the drawback that the quantum solution also seems to be beyond the capabilities of NISQ devices. Recently, there have been tests of quantumness that combine the power of both interactive proofs and cryptographic assumptions [20; 21; 22]. This class of cryptographic verification protocols usually uses a primitive called trapdoor claw-free (TCF) functions, which has the following properties. First, it is a 2-to-1 function that is hard to invert, meaning that given \(y=f(x)=f(x^{\prime})\), it is hard for an efficient classical computer to find the preimage pair \((x,x^{\prime})\). Second, given a trapdoor to the function \(f(x)\), the preimage pair can be efficiently found on a classical computer. We will refer to this class of verification protocols as the TCF-based protocols. The TCF-based protocols require the quantum prover to prepare the state of the form \(\sum_{x}\ket{x}\ket{f(x)}\). Although a recent experiment implemented a small-scale TCF-based protocol on a trapped-ion platform [23], implementing this class of protocols is still very challenging for the current technology. Another class of verification protocols is based on instantaneous quantum polynomial-time (IQP) circuits initiated by Shepherd and Bremner [24]. IQP circuits are a family of quantum circuits that employ only commuting gates, typically diagonal in the Pauli-\(X\) basis. In IQP-based verification protocols, the verifier generates a pair consisting of an IQP circuit \(U_{\text{IQP}}\) and a secret key \(\mathbf{s}\in\{0,1\}^{n}\). After transmitting the classical description of the IQP circuit to the prover, the verifier requests measurement outcomes in the computational basis. Then, the verifier uses the secret to determine whether the measurement outcomes are from a real quantum computer. Such a challenge seems hard for classical computers, as random IQP circuits are believed to be computationally difficult to simulate classically with minimal physical resources, assuming some plausible complexity-theoretic assumptions such as the non-collapse of polynomial hierarchy [25; 26; 27]. The use of random IQP circuits for the verification protocol is problematic due to the anti-concentration property [28; 26]. To address this issue, the Shepherd-Bremner scheme employs an obfuscated quadratic-residue code (QRC) to construct the pair \((U_{\text{IQP}},\mathbf{s})\)[29]. While the Shepherd-Bremner scheme was experimentally attractive, it suffered from a drawback as its cryptographic assumptions were non-standard and lacked sufficient study compared to TCF-based protocols. This was especially apparent when in 2019 Kahanamoku-Meyer discovered a loophole in the Shepherd-Bremner scheme, enabling a classical prover to efficiently find the secret, which subsequently allows the prover to generate data to spoof the test [30]. Given the potential of IQP-based protocols to achieve verifiability beyond classical computing using fewer resources than, say, Shor's algorithm, it is crucial to investigate the possibility of extending and rectifying the Shepherd-Bremner construction. In this work, we propose a new IQP-based protocol, which we refer to as the _stabilizer scheme_. Our construction allows the verifier to efficiently generate an IQP circuit, \(U_{\text{IQP}}=e^{i\pi H/8}\), and a secret, \(\mathbf{s}\), so that the correlation function relative to the secret has a magnitude equal to \(2^{-g/2}\), where \(g\) is a tunable integer. 
The stabilizer scheme is based on the interplay between IQP circuits, stabilizer formalism and coding theory, and it significantly strengthens previous constructions based on quadratic-residue codes [24] or random small IQP circuits [28]. Our characterization on IQP circuits builds upon and integrates several previous results [31, 32], which tackle this problem from the perspective of binary matroids and Tutte polynomials. In order to explore the classical security, we formulate the _Hidden Structured Code_ problem, which captures the hardness of classical attacks based on secret extraction. Then, we investigate a general class of such classical attacks, which includes Kahanamoku-Meyer's attack as an instance. We give positive evidence that this class of classical attacks takes exponential time to generate the data with correct correlation relative to the secret. Specifically, we show that a generalization of Kahanamoku-Meyer's attack, named Linearity Attack, fails to break the stabilizer scheme if the parameters are chosen appropriately. Additionally, we have designed a new obfuscation technique called _column redundancy_, which can even be used to fix the recently-found weakness in the Shepherd-Bremner construction [33]. Specifically, Claim 3.1 in Ref. [30] states that the attack algorithm for the Shepherd-Bremner construction succeeds in \(O(n^{3})\) time on average, which turns out to be true only under certain parameter choices. This can be naturally rectified with proper parameter settings enabled by our column redundancy technique. Our results provide positive evidence for the security of the IQP-based verification protocols. This paper is organized as follows. In the rest of the Introduction, we first give the general framework of IQP-based verification protocols. Then, we state our main results in more detail, followed by discussing the related works. In Section 2, we give the preliminaries, including stabilizer formalism, necessary results from coding theory and the Shepherd-Bremner construction. In Section 3, we give the characterization of the state generated by IQP circuits with \(\theta=\pi/4\) and the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\) with \(\theta=\pi/8\). Then, in Section 4, we present the stabilizer construction for the IQP-based protocols. In Section 5, we analyze the classical security of the stabilizer scheme and explore the classical attacks based on secret extraction. Finally, we conclude and give open problems in Section 6. ### IQP-based verification protocol Here, we focus on a specific family of IQP circuits, the \(X\) program [24], where all local gates are diagonal in the Pauli-\(X\) basis. One can represent this family of IQP circuits by a time evolution of the Hamiltonian \(H\), which consists of only products of Pauli \(X\)'s. For example, for \(H=X_{1}X_{2}X_{4}+X_{3}X_{4}+X_{1}X_{3}\), the corresponding IQP circuit is given by \(U_{\text{IQP}}=e^{i\theta H}=e^{i\theta X_{1}X_{2}X_{4}}e^{i\theta X_{3}X_{4} }e^{i\theta X_{1}X_{3}}\). In the general case, the evolution time for each term in \(H\) can be different, but we focus on the case where \(\theta=\pi/8\) for all terms in this work. One can also use an \(m\)-by-\(n\) binary matrix to represent the IQP Hamiltonian, where \(m\) is the number of local terms and \(n\) is the number of qubits. Each row of the matrix represents one local term and the locations of \(1\)'s indicate the qubits that it acts on. 
The matrix representation for \(H\) in the previous example is given by \[\mathbf{H}=\begin{pmatrix}1&1&0&1\\ 0&0&1&1\\ 1&0&1&0\end{pmatrix}. \tag{1.1}\]

General framework. The general framework for the IQP-based verification protocol is shown in Fig. 1. Here, the verifier first generates the pair of an IQP Hamiltonian \(H\) and a secret \(\mathbf{s}\in\{0,1\}^{n}\). She computes the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle:=\langle 0^{n}|U^{\dagger}_{\mathrm{IQP}}\mathcal{Z}_{\mathbf{s}}U_{\mathrm{IQP}}|0^{n}\rangle\) relative to the secret, which can be done classically efficiently [28, 31]. Then, the classical description of the Hamiltonian \(H\) is sent to the prover, while the secret is kept on the verifier's side. The verifier also specifies to the prover the evolution time for each term of the Hamiltonian. After that, the prover repeatedly prepares the state \(e^{i\theta H}\left|0^{n}\right\rangle\), measures all qubits in the computational basis, and obtains a set of samples \(\mathbf{x}_{1},\ldots,\mathbf{x}_{T}\in\{0,1\}^{n}\), which will be sent back to the verifier. From the prover's measurement samples, the verifier estimates the correlation function relative to \(\mathbf{s}\) by \[\langle\widetilde{\mathcal{Z}_{\mathbf{s}}}\rangle:=\frac{1}{T}\sum_{i=1}^{T}\left(-1\right)^{\mathbf{x}_{i}\cdot\mathbf{s}}. \tag{1.2}\] If the value of \(\langle\widetilde{\mathcal{Z}_{\mathbf{s}}}\rangle\) is within an allowed error of the ideal value \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\), then the verifier accepts the result and the prover passes the verification. In order to ensure the effectiveness of the verification process, two key challenges must be addressed. The first is to evaluate the ideal correlation function, so that the verifier can compare it with the value obtained from the prover's measurement outcomes. The second is to design a suitable pair \((H,\mathbf{s})\), so that the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\) is sufficiently far from zero. Otherwise, the verifier may need to request a super-polynomial number of samples from the prover to make the statistical error small enough, making the protocol inefficient.

Figure 1: Schematic of the IQP-based verification protocol in the case \(\theta=\pi/8\).

Evaluating the correlation function. To evaluate the correlation function, first note that the Hamiltonian can be divided into two parts, \(H=H_{\mathbf{s}}+R_{\mathbf{s}}\), based on the secret \(\mathbf{s}\). Here, \(H_{\mathbf{s}}\) anti-commutes with \(\mathcal{Z}_{\mathbf{s}}\), i.e., \(\{\mathcal{Z}_{\mathbf{s}},H_{\mathbf{s}}\}=0\), and the redundant part \(R_{\mathbf{s}}\) commutes with \(\mathcal{Z}_{\mathbf{s}}\), i.e., \([R_{\mathbf{s}},\mathcal{Z}_{\mathbf{s}}]=0\). Correspondingly, the matrix representations satisfy \(\mathbf{H}_{\mathbf{s}}\;\mathbf{s}=\mathbf{1}\) and \(\mathbf{R}_{\mathbf{s}}\;\mathbf{s}=\mathbf{0}\). Due to these commutation relations, the value of the correlation function only depends on \(H_{\mathbf{s}}\), i.e. [28, 31], \[\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle=\left\langle 0^{n}|e^{i2\theta H_{\mathbf{s}}}|0^{n}\right\rangle. \tag{1.3}\] Then, one can observe an intriguing point from this expression. When \(\theta=\pi/8\), the IQP circuit is non-Clifford and there is complexity-theoretic evidence that IQP circuits in this setting are hard to simulate classically [26].
However, \(e^{i2\theta H_{\mathbf{s}}}\) becomes a Clifford circuit, which means that the correlation function can be computed classically efficiently! Indeed, \(\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle=\;\left\langle 0^{n}|e^{i(\pi/4) H_{\mathbf{s}}}|0^{n}\right\rangle\) actually corresponds to an amplitude of the Clifford circuit \(e^{i(\pi/4) H_{\mathbf{s}}}\). In this way, the verifier can evaluate the correlation function efficiently using the Gottesman-Knill algorithm [34]. ### Main results In this subsection, we briefly overview the main results of the paper in the following and refer the reader to later sections for the detailed analysis. The main objective of this work is to devise a new scheme of the IQP-based verification protocol that strengthens its classical security and invalidates the known attacks. To achieve this, we start by studying the properties of the state \(e^{i\pi H/4}\left|0^{n}\right\rangle\). Given a binary matrix \(\mathbf{H}=(\mathbf{c}_{1},\ldots,\mathbf{c}_{n})\), we first transform it into an IQP Hamiltonian \(H\). By Theorem 3.1, the stabilizer tableau of \(\left|\psi\right\rangle=e^{i\pi H/4}\left|0^{n}\right\rangle\) is given by \((\mathbf{G},\mathbf{I}_{n},\mathbf{r})\), where the \(X\) part is a Gram matrix \(\mathbf{G}=\mathbf{H}^{T}\mathbf{H}\), the \(Z\) part is an identity matrix \(\mathbf{I}_{n}\), and the phase column \(\mathbf{r}\) depends on the Hamming weight of columns in \(\mathbf{H}\). Next, we compute the correlation function \(|\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle|\) and connect it to a property of the code \(\mathcal{C}_{\mathbf{s}}\) generated by columns of \(\mathbf{H}_{\mathbf{s}}\). Let \(\mathcal{C}_{\mathbf{s}}^{\perp}\) be the dual code of \(\mathcal{C}_{\mathbf{s}}\), \(\mathcal{D}_{\mathbf{s}}:=\mathcal{C}_{\mathbf{s}}\bigcap\mathcal{C}_{\mathbf{s }}^{\perp}\) be the self-dual intersection and consider \(g:=\dim(\mathcal{C}_{\mathbf{s}})-\dim(\mathcal{D}_{\mathbf{s}})\). We then prove in Theorem 3.2 that the magnitude of the correlation function \(|\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle|\) is \(2^{-g/2}\) if the self-dual intersection \(\mathcal{D}_{\mathbf{s}}\) is a doubly-even code, and \(0\) if it is an unbiased even code. Moreover, it can be proved that \(\mathcal{D}_{\mathbf{s}}\) must be in one of the two cases, and thus the above gives a complete characterization of the magnitude of the correlation function. Interestingly, the \(g\) number happens to be the rank of the Gram matrix \(\mathbf{G}_{\mathbf{s}}=\mathbf{H}_{\mathbf{s}}^{T}\mathbf{H}_{\mathbf{s}}\) associated with \(\mathbf{H}_{\mathbf{s}}\) (Proposition 2.5), which also characterizes the overlap between \(|0^{n}\rangle\) and \(e^{i\pi H_{\mathbf{s}}/4}\left|0^{n}\right\rangle\) from a group-theoretic perspective (Proposition 2.1). Theorem 3.2 is an effective merging of a number of results that were first discussed by Shepherd in Ref. [31], with a particular focus on coding theory. Originally, Shepherd studied IQP circuits from the perspective of binary matroids, codes, and Tutte polynomials. With these results established, the construction of \((\mathbf{H},\mathbf{s})\) for the verification protocol can be formulated as follows. 
Let \(\mathcal{H}_{n,m,g}=\{(\mathbf{H},\mathbf{s})\}\) be a family of pairs of an IQP matrix \(\mathbf{H}\in\mathbb{F}_{2}^{m\times n}\) and a secret \(\mathbf{s}\in\mathbb{F}_{2}^{n}\) so that the corresponding correlation function satisfies \(|\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle|=2^{-g/2}\); the precise definition is presented in Definition 4.1. Here, the parameters \(n\) and \(m\) correspond to the size of the IQP circuits, and \(g\) corresponds to the value of the correlation function relative to the secret. Other than these three parameters, no other structure is imposed on the IQP circuits in this family. We give an efficient algorithm to sample random instances from \(\mathcal{H}_{n,m,g}\), which we call the stabilizer construction (Meta-Algorithm 1). Essentially, the stabilizer construction is to randomly generate an obfuscated code and a secret, so that the corresponding correlation function is sufficiently away from zero, to enable efficient verification. Specifically, we reduce this problem to sampling two matrices \(\mathbf{D}\) and \(\mathbf{F}\), so that \(\mathbf{D}\) is a generator matrix of a random doubly-even code, and \(\mathbf{F}\) consists of \(g\) random columns satisfying the constraints \(\mathbf{D}^{T}\mathbf{F}=\mathbf{0}\) and \(\operatorname{rank}(\mathbf{F}^{T}\mathbf{F})=g\). Jointly, columns in \(\mathbf{D}\) and \(\mathbf{F}\) span a linear subspace that contains the all-ones vector, which must be a codeword because \(\mathbf{H}_{\mathbf{s}}\ \mathbf{s}=\mathbf{1}\). We give an efficient algorithm to sample such matrices \(\mathbf{D}\) and \(\mathbf{F}\). To explore the classical security, we consider a general class of classical attacks based on secret extraction. Given \((\mathbf{H},\mathbf{s})\in\mathcal{H}_{n,m,g}\), extracting the secret \(\mathbf{s}\) from \(\mathbf{H}\) leads to finding the hidden code \(\mathcal{C}_{\mathbf{s}}\) from a larger obfuscated code. Such a hidden substructure problem seems hard for a classical computer, and we formulate the following conjecture. **Conjecture 1.1** (Hidden Structured Code (HSC) Problem).: _For certain appropriate choices of \(n,m,g\), there exists an efficiently samplable distribution over instances \((\mathbf{H},\mathbf{s})\) from the family \(\mathcal{H}_{n,m,g}\), so that no polynomial-time classical algorithm can find the secret \(\mathbf{s}\) given \(n,m\) and \(\mathbf{H}\) as input, with high probability over the distribution on \(\mathcal{H}_{n,m,g}\)._ To support this conjecture, we extend Kahanamoku-Meyer's attack to target general IQP circuits with \(\theta=\pi/8\), and we call this attack the Linearity Attack. This generalized attack uses linear algebraic techniques to search for a candidate set of secrets, and performs classical sampling according to this candidate set. By choosing appropriate parameters, random instances drawn by our stabilizer scheme turns out to invalidate the Linearity Attack, since the search for the candidate set takes exponential time. As a result, the stabilizer scheme is secure against the Linearity Attack. Moreover, our analysis suggests that choosing a different set of parameters for the QRC-based construction can fix the recent loophole in the original Shepherd-Bremner scheme. This refutes the Claim 3.1 in Ref. [30], which states that the QRC-based construction can be efficiently broken classically in general. 
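To make these quantities concrete, the sketch below takes a candidate pair \((\mathbf{H},\mathbf{s})\), forms \(\mathbf{H}_{\mathbf{s}}\) from the rows that are not orthogonal to \(\mathbf{s}\), computes the rank \(g\) of the Gram matrix \(\mathbf{H}_{\mathbf{s}}^{T}\mathbf{H}_{\mathbf{s}}\) over \(\mathbb{F}_{2}\), and then carries out the verifier's comparison of the estimator in Eq. (1.2) against the target magnitude \(2^{-g/2}\). It is an illustrative sketch only: it assumes the doubly-even condition of the construction holds (so that the magnitude is exactly \(2^{-g/2}\)) and treats the sign of \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\) as known; the function and parameter names are ours.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for c in range(A.shape[1]):
        hits = np.nonzero(A[rank:, c])[0]
        if hits.size == 0:
            continue
        p = rank + hits[0]
        A[[rank, p]] = A[[p, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
        if rank == A.shape[0]:
            break
    return rank

def target_magnitude(H, s):
    """|<Z_s>| = 2^{-g/2}, assuming the doubly-even condition of Theorem 3.2 holds."""
    H = np.array(H) % 2
    s = np.array(s) % 2
    H_s = H[(H @ s) % 2 == 1]          # rows of H that anti-commute with Z_s
    g = gf2_rank((H_s.T @ H_s) % 2)    # rank of the Gram matrix over F_2
    return 2.0 ** (-g / 2.0)

def verifier_accepts(samples, s, ideal, tol=0.05):
    """Eq. (1.2): empirical correlation from the prover's bit strings, then a tolerance test."""
    samples = np.array(samples) % 2
    empirical = np.mean(1.0 - 2.0 * ((samples @ np.array(s)) % 2))
    return abs(empirical - ideal) <= tol

# Hypothetical usage: ideal = +target_magnitude(H, s), if the sign is known to be positive.
```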
### Related works

The first explicit construction recipe of \((\mathbf{H},\mathbf{s})\) for the case \(\theta=\pi/8\) was given by Shepherd and Bremner [24]. In their construction, \(\mathbf{H}_{\mathbf{s}}\) is constructed from a specific error-correcting code, the quadratic-residue code (QRC) [29], which guarantees that the correlation function is always \(1/\sqrt{2}\), a value sufficiently far from zero, as desired. Formally, let \(\mathcal{H}_{n,m,q}^{\text{QRC}}=\{(\mathbf{H},\mathbf{s})\}\) be a family of pairs of an IQP matrix \(\mathbf{H}\in\mathbb{F}_{2}^{m\times n}\) and a secret \(\mathbf{s}\) so that \(\mathbf{H}_{\mathbf{s}}\) generates a QRC of length \(q\) (up to row permutations) and \(\mathbf{H}\) is of full column rank. What the Shepherd-Bremner construction achieves is to randomly sample instances from \(\mathcal{H}_{n,m,q}^{\text{QRC}}\), where \(n=(q+3)/2\). However, it turns out that this set of parameters can only give easy instances. In Ref. [30], Kahanamoku-Meyer gave a secret-extraction attack (KM attack) against the Shepherd-Bremner construction. With his attack, a classical prover can find the secret \(\mathbf{s}\) efficiently with high probability. Once the secret is found, the prover can easily pass the test by generating appropriately biased data in the direction of the secret, without actually simulating the IQP circuits. In Ref. [28], Yung and Cheng proposed to circumvent the attack by starting with a small randomized IQP circuit and using the obfuscation technique of the Shepherd-Bremner scheme to hide that small IQP circuit [24]. The verifier cannot directly use a fully randomized IQP circuit because the correlation function will be close to zero for most choices of secrets in that case, due to the anti-concentration property of IQP circuits [26]. Small correlation functions make it difficult for the verifier to distinguish between an honest quantum prover and a cheating classical prover outputting random bit strings. This poses a challenge: balancing the security given by randomized constructions against the scale of the correlation functions that enables easy verification. This challenge is not fully resolved by the heuristic construction in Ref. [28]. In addition, Shepherd studied IQP circuits with tools from binary matroids and Tutte polynomials, and derived some results related to this work [31]. Specifically, the amplitude of the IQP circuit \(\,\langle 0^{n}|e^{i\theta H}|0^{n}\rangle\) is expressed in terms of the normalized Tutte polynomial, and its computational complexity is studied in various cases. When \(\theta=\pi/4\), the magnitude of the related Tutte polynomial can be efficiently evaluated using Vertigan's algorithm [35], which is similar to the Gottesman-Knill algorithm [34]. This idea was further explored by Mann [32], who related computing the amplitude to the bicycle dimension and Brown's invariant using results of Ref. [36]. But when \(\theta=\pi/8\) (and any other value that is not a multiple of \(\pi/4\)), computing the amplitude is \(\#P\)-hard in the worst case. Moreover, Ref. [31] also derived a relation similar to Eq. (1.3), in the language of the normalized Tutte polynomial. Therefore, it was proved there that the correlation function is efficiently classically computable when \(\theta=\pi/8\), and it was suggested that this could be used to perform a hypothesis test for access to quantum computers, although no new construction was proposed in Ref. [31].
## 2 Preliminaries

### Notations

We mainly work over the field \(\mathbb{F}_{2}\). We use bold capital letters such as \(\mathbf{H}\) to denote a matrix and bold lower-case letters such as \(\mathbf{s}\) to denote a vector. If not stated otherwise, a vector refers to a column vector, and a row vector carries the transpose symbol, as in \(\mathbf{p}^{T}\). The (Hamming) weight of a vector \(\mathbf{x}\) is denoted as \(|\mathbf{x}|\). The inner product between two vectors \(\mathbf{x}\) and \(\mathbf{s}\) is denoted as \(\mathbf{x}\cdot\mathbf{s}\); sometimes we will also use \(\mathbf{H}\cdot\mathbf{s}\) to denote the matrix multiplication. We use \(\operatorname{col}(\mathbf{H})\) and \(\operatorname{row}(\mathbf{H})\) to denote the columns and rows of a matrix \(\mathbf{H}\), respectively. We use \(c(\mathbf{H})\) and \(r(\mathbf{H})\) to denote the number of columns and the number of rows of a matrix \(\mathbf{H}\), respectively. The rank of a matrix \(\mathbf{H}\) is denoted as \(\operatorname{rank}(\mathbf{H})\). We use \(\ker(\mathbf{H})\) to denote the kernel space of \(\mathbf{H}\), i.e., the space of vectors \(\mathbf{v}\) such that \(\mathbf{H}\mathbf{v}=\mathbf{0}\). We call two square matrices \(\mathbf{A}\) and \(\mathbf{B}\) congruent if there exists an invertible matrix \(\mathbf{Q}\) satisfying \(\mathbf{A}=\mathbf{Q}^{T}\mathbf{B}\mathbf{Q}\), denoted as \(\mathbf{A}\sim_{c}\mathbf{B}\). We call such a transformation a _congruent transformation_. The all-ones vector will be denoted as \(\mathbf{1}\), with its dimension inferred from the context; a similar rule applies to the all-zeros vector (or matrix) \(\mathbf{0}\). The \(n\times n\) identity matrix is denoted as \(\mathbf{I}_{n}\). For a vector \(\mathbf{x}\), we define its support as \(\{j:x_{j}=1\}\). We define \([n]:=\{1,2,\ldots,n\}\). If not stated otherwise, a full-rank matrix refers to a matrix with full column rank. We denote the linear subspace spanned by a set of vectors \(\{\mathbf{c}_{1},\ldots,\mathbf{c}_{k}\}\) as \(\langle\mathbf{c}_{1},\ldots,\mathbf{c}_{k}\rangle\). Given linear subspaces \(V=\langle\mathbf{c}_{1},\ldots,\mathbf{c}_{l}\rangle\) and \(U=\langle\mathbf{c}_{1},\ldots,\mathbf{c}_{k}\rangle\) with \(k<l\), we denote the complement subspace of \(U\) in \(V\) with respect to the basis \(\{\mathbf{c}_{1},\ldots,\mathbf{c}_{l}\}\) by \((V/U)_{\mathbf{c}_{1},\ldots,\mathbf{c}_{l}}\); namely, \((V/U)_{\mathbf{c}_{1},\ldots,\mathbf{c}_{l}}:=\langle\mathbf{c}_{k+1},\ldots,\mathbf{c}_{l}\rangle\). Usually, we are not interested in a specific basis, so we use \(V/U\) to denote a random complement subspace of \(U\) in \(V\), i.e., \(V/U\leftarrow_{\mathcal{R}}\{\langle\mathbf{c}_{k+1},\ldots,\mathbf{c}_{l}\rangle:V=\langle\mathbf{c}_{1},\ldots,\mathbf{c}_{l}\rangle,U=\langle\mathbf{c}_{1},\ldots,\mathbf{c}_{k}\rangle\}\), where \(\leftarrow_{\mathcal{R}}\) denotes a random instance from a set. We let \(V\backslash U:=\{\mathbf{v}:\mathbf{v}\in V,\mathbf{v}\not\in U\}\) be the ordinary complement of two sets.

### Stabilizer formalism

Overlap of two stabilizer states. Given two stabilizer states \(|\psi\rangle\) and \(|\phi\rangle\), let \(\operatorname{Stab}(|\psi\rangle)\) and \(\operatorname{Stab}(|\phi\rangle)\) be their stabilizer groups, respectively, which are subgroups of the \(n\)-qubit Pauli group. Let \(\{P_{1},\ldots,P_{n}\}\) be the generators of \(\operatorname{Stab}(|\psi\rangle)\) and \(\{Q_{1},\ldots,Q_{n}\}\) be those of \(\operatorname{Stab}(|\phi\rangle)\).
Note that the set of generators is not unique. Then, the overlap \(|\,\langle\psi|\phi\rangle\,|\) is determined by their stabilizer groups [37].

**Proposition 2.1** ([37]).: _Let \(|\psi\rangle\) and \(|\phi\rangle\) be two stabilizer states. Then, \(\langle\psi|\phi\rangle=0\) if their stabilizer groups contain the same Pauli operator with opposite signs. Otherwise, \(|\langle\psi|\phi\rangle|=2^{-g/2}\), where \(g\) is the minimum number of different generators over all possible choices._

For completeness, we provide an alternative proof in Appendix A. In particular, this implies that \(\langle\mathcal{Z}_{\mathbf{s}}\rangle=\langle 0^{n}|e^{i\pi H_{\mathbf{s}}/4}|0^{n}\rangle\) has magnitude either \(0\) or \(2^{-g/2}\), where \(n-g\) is the maximum number of independent Pauli-\(Z\) products in the stabilizer group of \(e^{i\pi H_{\mathbf{s}}/4}\,|0^{n}\rangle\).

Tableau representation. A stabilizer state or circuit can be represented by a stabilizer tableau, which is an \(n\)-by-\((2n+1)\) binary matrix. The idea is to use \(2n+1\) bits to represent each generator of the stabilizer group. First, a single-qubit Pauli operator can be represented by \((x,z)\); \((0,0)\) corresponds to \(I\), \((1,0)\) corresponds to \(X\), \((0,1)\) corresponds to \(Z\) and \((1,1)\) corresponds to \(Y\). For stabilizer generators, the phase can only be \(\pm 1\) since the stabilizer group does not contain \(-I\). So, one can use an extra bit \(r\) to represent the phase; \(r=0\) stands for \(+1\) while \(r=1\) stands for \(-1\). Then, an \(n\)-qubit stabilizer generator can be represented by \(2n+1\) bits, \[(x_{1},\ldots,x_{n},z_{1},\ldots,z_{n},r). \tag{2.1}\] For example, the vector for \(-X_{1}Z_{2}\) is \((1,0,0,1,1)\). Any stabilizer state can be specified by \(n\) stabilizer generators, which commute with each other. Therefore, the state is associated with the following tableau, \[\begin{pmatrix}x_{11}&\cdots&x_{1n}&z_{11}&\cdots&z_{1n}&r_{1}\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots&\vdots\\ x_{n1}&\cdots&x_{nn}&z_{n1}&\cdots&z_{nn}&r_{n}\end{pmatrix}\, \tag{2.2}\] whose rows define the stabilizer generators. The first \(n\) columns are called the \(X\) part, the \((n+1)\)-th to \(2n\)-th columns are called the \(Z\) part, and the last column is called the phase column of the stabilizer tableau. As an example, the \(|0^{n}\rangle\) state is stabilized by \(\langle Z_{1},\ldots,Z_{n}\rangle\), and its stabilizer tableau is given by, \[\begin{pmatrix}0&\cdots&0&1&\cdots&0&0\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&\cdots&0&0&\cdots&1&0\end{pmatrix}. \tag{2.3}\] We will call it the standard stabilizer tableau of \(|0^{n}\rangle\).

### Coding theory

We present some results regarding coding theory here, with the proofs presented in Appendix B. We only consider linear codes over \(\mathbb{F}_{2}\) in this paper. A linear code, or simply a code, \(\mathcal{C}\) of length \(m\) is a linear subspace of \(\mathbb{F}_{2}^{m}\). One can use a generator matrix \(\mathbf{H}\) to represent a code, with its columns spanning the codespace \(\mathcal{C}\). The dual code is defined as \(\mathcal{C}^{\perp}:=\{\mathbf{v}\in\mathbb{F}_{2}^{m}:\mathbf{v}\cdot\mathbf{w}=0\text{ for all }\mathbf{w}\in\mathcal{C}\}\). The dual code of a linear code is also a linear code. It is not hard to see that \(\mathcal{C}^{\perp}=\ker(\mathbf{H}^{T})\), which implies \(\dim(\mathcal{C})+\dim(\mathcal{C}^{\perp})=m\).
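For intuition, these definitions can be checked by brute force on small examples. The sketch below enumerates the code generated by the columns of a toy generator matrix, computes its dual by exhaustive search, and confirms \(\dim(\mathcal{C})+\dim(\mathcal{C}^{\perp})=m\) and \(\mathcal{C}^{\perp}=\ker(\mathbf{H}^{T})\). The generator matrix here is a hypothetical example and the enumeration is exponential, so this is purely illustrative.

```python
import itertools
import numpy as np

def codewords(G):
    """All codewords of the code generated by the *columns* of G (over F_2)."""
    G = np.array(G) % 2
    k = G.shape[1]
    return {tuple((G @ np.array(b)) % 2) for b in itertools.product([0, 1], repeat=k)}

def dual_code(C, m):
    """Brute-force dual code: all length-m vectors orthogonal to every codeword in C."""
    return {v for v in itertools.product([0, 1], repeat=m)
            if all(np.dot(v, c) % 2 == 0 for c in C)}

# Hypothetical toy generator matrix with m = 6 and two columns:
G = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1], [1, 1]]
C = codewords(G)
C_perp = dual_code(C, m=6)
assert len(C) * len(C_perp) == 2 ** 6                                  # dim(C) + dim(C_perp) = m
assert all(np.all((np.array(G).T @ np.array(v)) % 2 == 0) for v in C_perp)   # C_perp = ker(G^T)
```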
A code \(\mathcal{C}\) is weakly self-dual if \(\mathcal{C}\subseteq\mathcal{C}^{\perp}\) and (strictly) self-dual if \(\mathcal{C}=\mathcal{C}^{\perp}\), in which case \(\dim(\mathcal{C})=m/2\). A code \(\mathcal{C}\) is an even code if all codewords have even Hamming weight and a doubly-even code if all codewords have Hamming weight a multiple of \(4\). It is not hard to show that a doubly-even code is a weakly self-dual code. Moreover, we have the following proposition. **Proposition 2.2**.: _The all-ones vector is a codeword of \(\mathcal{C}\) if and only if its dual code \(\mathcal{C}^{\perp}\) is an even code._ We define the notion of (un)biased even codes, which will be useful in the stabilizer characterization of IQP circuits (Section 3). **Definition 2.3**.: A code \(\mathcal{C}\) is called a _biased even code_ if it is an even code where the number of codewords with Hamming weight \(0\) and \(2\) modulo \(4\) are not equal. It is called an _unbiased even code_ otherwise. Let the (maximum) self-dual subspace of \(\mathcal{C}\) be \(\mathcal{D}:=\mathcal{C}\bigcap\mathcal{C}^{\perp}\), which is itself a weakly self-dual code. Note that \(\mathcal{D}\) must be an even code, since all codewords are orthogonal to themselves and hence have even Hamming weight. We have the following lemma. **Lemma 2.4**.: _A weakly self-dual even code is either a doubly-even code or an unbiased even code. For the former case, all columns of its generator matrix have weight 0 modulo 4 and are orthogonal to each other. For the latter case, there is at least one column in the generator matrix with weight 2 modulo 4._ One can apply a basis change to the generator matrix \(\mathbf{H}\), resulting in \(\mathbf{H}\mathbf{Q}\), where \(\mathbf{Q}\) is an invertible matrix. This will not change the code \(\mathcal{C}\). Define the Gram matrix of the generator matrix by \(\mathbf{G}:=\mathbf{H}^{T}\mathbf{H}\). A basis change on \(\mathbf{H}\) transforms \(\mathbf{G}\) into \(\mathbf{Q}^{T}\mathbf{G}\mathbf{Q}\), which is a congruent transformation. The rank of Gram matrix is also an invariant under basis change. It can be related to the code \(\mathcal{C}\) in the following way. **Proposition 2.5**.: _Given a generator matrix \(\mathbf{H}\), let its Gram matrix be \(\mathbf{G}=\mathbf{H}^{T}\mathbf{H}\) and the generated code be \(\mathcal{C}\). Let \(\mathcal{D}=\mathcal{C}\bigcap\mathcal{C}^{\perp}\), where \(\mathcal{C}^{\perp}\) is the dual code of \(\mathcal{C}\). Then, \(\operatorname{rank}(\mathbf{G})=\dim(\mathcal{C})-\dim(\mathcal{D})\)._ ### Shepherd-Bremner construction In the Shepherd-Bremner construction, the part \(\mathbf{H_{s}}\) is constructed from the quadratic-residue code. The quadratic residue code is a cyclic code. Its cyclic generator has \(1\) in the \(j\)-th position if \(j\) is a non-zero quadratic residue modulo \(q\). The size parameter \(q\) of the QRC is a prime number and \(q+1\) is required to be a multiple of eight [24]. For \(q=7\), the cyclic generator reads \((1,1,0,1,0,0,0)^{T}\), because \(j=1,2,4\) are quadratic residues modulo \(7\). The basis for the codespace of QRC is generated by rotating the cyclic generator, which is the last 4 columns of the following matrix, \[\mathbf{H}_{\mathbf{s}}^{\text{QRC}}=\begin{pmatrix}1&1&0&0&0\\ 1&1&1&0&0\\ 1&0&1&1&0\\ 1&1&0&1&1\\ 1&0&1&0&1\\ 1&0&0&1&0\\ 1&0&0&0&1\end{pmatrix}. \tag{2.4}\] The first column is added so that the secret is easy to find, i.e., \(\mathbf{s}=\left(1,0,0,0,0\right)^{T}\). 
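As a concrete illustration of Eq. (2.4), the following sketch (our own toy reconstruction, not code from Refs. [24, 29]) builds the cyclic generator of the quadratic-residue code, assembles \(\mathbf{H}_{\mathbf{s}}^{\text{QRC}}\) for \(q=7\), and checks that \(\mathbf{H}_{\mathbf{s}}^{\text{QRC}}\,\mathbf{s}=\mathbf{1}\) for the secret \(\mathbf{s}=(1,0,0,0,0)^{T}\).

```python
import numpy as np

def qrc_generator_matrix(q):
    """H_s of the Shepherd-Bremner construction: an all-ones column followed by
    (q+1)/2 cyclic shifts (including the unshifted one) of the QRC cyclic generator."""
    residues = {(j * j) % q for j in range(1, q)}            # non-zero quadratic residues mod q
    gen = np.array([1 if (j + 1) in residues else 0 for j in range(q)])
    k = (q + 1) // 2                                          # dimension of the QRC
    cols = [np.ones(q, dtype=int)] + [np.roll(gen, shift) for shift in range(k)]
    return np.stack(cols, axis=1) % 2

H_s = qrc_generator_matrix(7)        # reproduces Eq. (2.4): 7 rows, n = (q+3)/2 = 5 columns
s = np.array([1, 0, 0, 0, 0])
assert np.all((H_s @ s) % 2 == 1)    # the secret picks out the all-ones codeword
print(H_s)
```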
After obtaining the initial \(\mathbf{H}_{\mathbf{s}}^{\text{QRC}}\), the verifier needs to hide the secret and make the IQP circuit look random, while leaving the value of the correlation function unchanged. In the Shepherd-Bremner construction, the verifier will first add redundant rows \(\mathbf{R}_{\mathbf{s}}\), which are rows that are orthogonal to \(\mathbf{s}\), to obtain the full IQP matrix \[\mathbf{H}=\begin{pmatrix}\mathbf{H}_{\mathbf{s}}^{\text{QRC}}\\ \mathbf{R}_{\mathbf{s}}\end{pmatrix}. \tag{2.5}\] Its corresponding Hamiltonian \(R_{\mathbf{s}}\) commutes with \(\mathcal{Z}_{\mathbf{s}}\) and hence will not affect the correlation function. After initializing \(\mathbf{H}\) and \(\mathbf{s}\), the verifier needs to apply obfuscation to hide the secret. The obfuscation is achieved by randomly permuting rows in \(\mathbf{H}\) and performing column operations to \(\mathbf{H}\) and changing \(\mathbf{s}\) accordingly. **Definition 2.6** (Obfuscation).: Given an instance \((\mathbf{H},\mathbf{s})\), the obfuscation is defined as the transformation \[\mathbf{H}\leftarrow\mathbf{P}\mathbf{H}\mathbf{Q} \mathbf{s}\leftarrow\mathbf{Q}^{-1}\mathbf{s}\, \tag{2.6}\] where \(\mathbf{P}\) is a random row-permutation matrix and \(\mathbf{Q}\) is a random invertible matrix. Note that row permutations will not change the value of the correlation function, since the gates in IQP circuits commute with each other. As for the column operations, it can be shown that if the secret \(\mathbf{s}\) is transformed accordingly, to maintain the inner-product relation with the rows in \(\mathbf{H}\), then the value of the correlation function remains unchanged [24, 28]. In the Shepherd-Bremner scheme [24], the measure of success is given by the probability bias \(\mathcal{P}_{\mathbf{s}\perp}:=\sum_{\mathbf{x}\cdot\mathbf{s}=0}p(\mathbf{x})\), the probability of receiving bit strings that are orthogonal to the secret \(\mathbf{s}\), where \(p(\mathbf{x})\) is the output probability of the IQP circuit. This measure is equivalent to the correlation function, since \(\mathcal{P}_{\mathbf{s}\perp}=\frac{1}{2}(\langle\mathcal{Z}_{\mathbf{s}} \rangle+1)\)[31, 38]. Due to the properties of QRC, \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\) always equals \(1/\sqrt{2}\) (in terms of probability bias, 0.854). ## 3 Stabilizer characterization of IQP circuits In this section, we establish the connection between IQP circuits, stabilizer formalism and coding theory, which turns out to be useful in constructing the IQP circuits for the verification protocol. For \(\theta=\pi/8\), we show that the stabilizer tableau of the Clifford operation \(e^{i2\theta H_{\mathbf{s}}}\) has a nice structure that allows us to determine the value of \(\langle\mathcal{Z}_{\mathbf{s}}\rangle=\ \langle 0^{n}|e^{i2\theta H_{ \mathbf{s}}}|0^{n}\rangle\) efficiently. As an application, we analyze the Shepherd-Bremner construction with this framework. We first give the form of the stabilizer tableau of \(e^{i\pi H/4}\,|0^{n}\rangle\). 
**Theorem 3.1**.: _Given a binary matrix \(\mathbf{H}=(\mathbf{c}_{1},\ldots,\mathbf{c}_{n})\) and transforming it into an IQP Hamiltonian \(H\), the stabilizer tableau of the state \(\ket{\psi}=e^{i\pi H/4}\ket{0^{n}}\) can be expressed as_ \[\left(\begin{array}{ccc|ccc|c}\mathbf{c}_{1}\cdot\mathbf{c}_{1}&\cdots&\mathbf{c}_{1}\cdot\mathbf{c}_{n}&1&\cdots&0&r_{1}\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \mathbf{c}_{n}\cdot\mathbf{c}_{1}&\cdots&\mathbf{c}_{n}\cdot\mathbf{c}_{n}&0&\cdots&1&r_{n}\end{array}\right). \tag{3.1}\] _Here, if one uses \(00,01,10,11\) to represent \(|\mathbf{c}_{j}|=0,1,2,3\pmod{4}\), then \(r_{j}\) is equal to the first bit._

This theorem can be proved by starting from the standard tableau of \(\ket{0^{n}}\) and keeping track of the stabilizer tableau after applying each term of \(e^{i\pi H/4}\) (i.e., each row of \(\mathbf{H}\)). The complete proof is deferred to Appendix C. We will call Eq. (3.1) the IQP (stabilizer) tableau; it is of the form \((\mathbf{G},\mathbf{I}_{n},\mathbf{r})\). We apply the above theorem to \(\mathbf{H_{s}}\), in which case the \(X\) part is \(\mathbf{G_{s}}=\mathbf{H_{s}^{T}}\mathbf{H_{s}}\). Next, we relate the correlation function to the code generated by \(\mathbf{H_{s}}\), denoted as \(\mathcal{C}_{\mathbf{s}}\). Note that \(\mathbf{H_{s}}\,\mathbf{s}=\mathbf{1}\) means that the all-ones vector is a codeword of \(\mathcal{C}_{\mathbf{s}}\). From Proposition 2.2, this means that the dual code \(\mathcal{C}_{\mathbf{s}}^{\perp}\) is an even code and the intersection \(\mathcal{D}_{\mathbf{s}}:=\mathcal{C}_{\mathbf{s}}\bigcap\mathcal{C}_{\mathbf{s}}^{\perp}\) is a weakly self-dual even code. Then, \(\mathcal{D}_{\mathbf{s}}\) will be either a doubly-even code or an unbiased even code, according to Lemma 2.4.

**Theorem 3.2**.: _Given an IQP matrix \(\mathbf{H_{s}}\) and a vector \(\mathbf{s}\) such that \(\mathbf{H_{s}}\,\mathbf{s}=\mathbf{1}\), denote the code generated by the columns of \(\mathbf{H_{s}}\) by \(\mathcal{C}_{\mathbf{s}}\) and its dual code by \(\mathcal{C}_{\mathbf{s}}^{\perp}\). Let \(\mathcal{D}_{\mathbf{s}}:=\mathcal{C}_{\mathbf{s}}\bigcap\mathcal{C}_{\mathbf{s}}^{\perp}\). Then, transforming \(\mathbf{H_{s}}\) into an IQP Hamiltonian \(H_{\mathbf{s}}\), the magnitude of the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle=\langle 0^{n}|e^{i\pi H_{\mathbf{s}}/4}|0^{n}\rangle\) is \(2^{-g/2}\) if \(\mathcal{D}_{\mathbf{s}}\) is a doubly-even code and zero if \(\mathcal{D}_{\mathbf{s}}\) is an unbiased even code. Here, \(g:=\dim(\mathcal{C}_{\mathbf{s}})-\dim(\mathcal{D}_{\mathbf{s}})\) is also the rank of the Gram matrix \(\mathbf{G_{s}}=\mathbf{H_{s}^{T}}\mathbf{H_{s}}\)._

We leave the proof to Appendix C. Interestingly, from a group-theoretic perspective, the rank \(g\) of the Gram matrix is also the minimum number of different generators over all possible choices of the stabilizer groups of \(\ket{0^{n}}\) and \(e^{i\pi H_{\mathbf{s}}/4}\ket{0^{n}}\) (Proposition 2.1). Furthermore, we note that this result integrates several results in Ref. [31] concisely, with a particular focus on coding theory, so that it aligns better with our objective of constructing IQP circuits for the verification protocol. Ref. [31] studies IQP circuits with \(\theta=\pi/4\) via a reworking of Vertigan's algorithm for evaluating the magnitude of the Tutte polynomial of a binary matroid at the point \((-i,i)\) [35].
There, the amplitude \(\langle\mathbf{x}|e^{i\theta H}|0^{n}\rangle\) is considered for \(\theta=\pi/4\) and any IQP Hamiltonian \(H\), where the all-ones vector may not be a codeword of the code generated by the binary matrix \(\mathbf{H}\). Such an amplitude has been further studied in Ref. [32], which gives the expression of the phase of the amplitude by applying results of Ref. [36]. In the language of binary matroids, the dual intersection \(\mathcal{D}_{\mathbf{s}}\) is the bicycle space of the matroid represented by \(\mathbf{H_{s}}\) and its dimension \(\dim(\mathcal{D}_{\mathbf{s}})\) is also known as the bicycle dimension [35, 32]. Finally, we note that although computing the magnitude suffices for our later construction, the sign of the correlation function can also be computed efficiently, as shown in Ref. [32]. In addition, when \(g=O(\log n)\), the correlation function has an inverse polynomial scaling. In this case, one can use the random sampling algorithm in Ref. [28] to determine the sign efficiently. To show the usefulness of the stabilizer characterization, we apply these two theorems to analyze the Shepherd-Bremner construction. Combined with the properties of QRC, we have the following corollary (with proof presented in Appendix C). **Corollary 3.3**.: _Let \(q\) be a prime such that 8 divides \(q+1\). Let \(\mathbf{H_{s}^{\mathrm{QRC}}}\) be a matrix whose first column is \(\mathbf{1}\) (of length \(q\)), and whose remaining columns are the basis of the quadratic-residue code of length \(q\), formed by the cyclic generator (i.e., in the form of Eq. (2.4)). Then, translating \(\mathbf{H}_{\mathbf{s}}^{\mathrm{QRC}}\) into an IQP Hamiltonian \(H_{\mathbf{s}}\), the stabilizer tableau of \(|\psi_{\mathbf{s}}\rangle=e^{i\pi H_{\mathbf{s}}/4}\,|0^{n}\rangle\) can be expressed as the following form,_ \[\left(\begin{array}{cccc|c|c}1&\cdots&1&1&\cdots&0&1\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ 1&\cdots&1&1&\cdots&1&1\end{array}\right). \tag{3.2}\] _As a result, the corresponding stabilizer group is given by,_ \[\langle-Y_{1}X_{2}\cdot\cdots X_{n},-X_{1}Y_{2}X_{3}\cdot\cdots X_{n},\ldots, -X_{1}X_{2}\cdots X_{n-1}Y_{n}\rangle\, \tag{3.3}\] _where \(n=(q+3)/2\). Moreover, the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle=\langle 0^{n}|\psi_{\mathbf{s}}\rangle\) has a magnitude \(1/\sqrt{2}\)._ ## 4 Stabilizer construction In this section, we present the stabilizer construction, which is a systematic way to construct IQP circuits with \(\theta=\pi/8\) for verification. In fact, the goal is to generate a pair \((\mathbf{H},\mathbf{s})\), such that they satisfy certain conditions, which stem from Theorem 3.2. We first define the family of pairs that we would like to sample from. **Definition 4.1**.: Let \(\mathcal{H}_{n,m,g}=\{(\mathbf{H},\mathbf{s})\}\) be a family of pairs of an IQP matrix \(\mathbf{H}\in\mathbb{F}_{2}^{m\times n}\) and a secret \(\mathbf{s}\in\mathbb{F}_{2}^{n}\) satisfying the following conditions. (1) \(\mathcal{D}_{\mathbf{s}}=\mathcal{C}_{\mathbf{s}}\bigcap\mathcal{C}_{\mathbf{s }}^{\perp}\) is a doubly-even code, where \(\mathcal{C}_{\mathbf{s}}\) is the code generated by columns of \(\mathbf{H}_{\mathbf{s}}\) and \(\mathcal{C}_{\mathbf{s}}^{\perp}\) is its dual code; (2) \(\mathrm{rank}(\mathbf{H}_{\mathbf{s}}^{T}\mathbf{H}_{\mathbf{s}})=g\); (3) \(\mathrm{rank}(\mathbf{H})=n\). In this definition, the size of the IQP circuits are determined by \(n\) and \(m\), which correspond to the number of qubits and gates, respectively. 
Additionally, condition (1) is to guarantee that the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\) corresponding to instances of \(\mathcal{H}_{n,m,g}\) is nonzero, and condition (2) states that its magnitude is given by \(2^{-g/2}\). Therefore, the family \(\mathcal{H}_{n,m,g}\) includes all instances of IQP circuits of a certain size that have correlation function \(\pm 2^{-g/2}\) with respect to some secret \(\mathbf{s}\). Note that the rank of the Gram matrix \(\mathbf{H}_{\mathbf{s}}^{T}\mathbf{H}_{\mathbf{s}}\) should be \(g=O(\log n)\) for the protocol to be practical. The reason for considering IQP matrices \(\mathbf{H}\) with full column rank will be made clear when we discuss the classical security of the IQP-based verification protocol (Section 5.1.2). Moreover, we give an efficient classical sampling algorithm to sample instances from \(\mathcal{H}_{n,m,g}\), which is the stabilizer construction (Meta-Algorithm 1). **Theorem 4.2**.: _There exists an efficient classical sampling algorithm that sample from \(\mathcal{H}_{n,m,g}\), given the parameters \(n,m\) and \(g\)._ For the algorithmic purpose, we set two additional parameters, \(m_{1}\) and \(d\), which are the number of rows in \(\mathbf{H}_{\mathbf{s}}\) and the dimension of \(\mathcal{D}_{\mathbf{s}}\), respectively. These are random integers satisfying certain natural constraints (see Appendix D). The rank of \(\mathbf{H}_{\mathbf{s}}\) is then equal to \(r=g+d\). The stabilizer construction works by sampling \(\mathbf{H}_{\mathbf{s}}\) and \(\mathbf{R}_{\mathbf{s}}\) in certain'standard forms', up to row permutations and column operations. Note that the'standard forms' of \(\mathbf{H}_{\mathbf{s}}\) and \(\mathbf{R}_{\mathbf{s}}\) are not necessarily unique. We first discuss \(\mathbf{R}_{\mathbf{s}}\). To ensure that \(\mathrm{rank}(\mathbf{H})=n\), observe that in any \(\mathbf{H}\) of full column rank, the redundant rows \(\mathbf{R}_{\mathbf{s}}\) can always be transformed by row permutations into a form, where the first \(n-r\) rows form a basis of \(\mathbb{F}_{2}^{n}\) together with the rows in \(\mathbf{H_{s}}\). Therefore, up to row permutations, the first \(n-r\) rows of \(\mathbf{R_{s}}\) are sampled to be random independent rows that are orthogonal to \(\mathbf{s}\) and lie outside the row space of \(\mathbf{H_{s}}\). The remaining rows in \(\mathbf{R_{s}}\) are random rows orthogonal to \(\mathbf{s}\). Next, we discuss sampling \((\mathbf{H_{s}},\mathbf{s})\), which is the core of the stabilizer construction. Essentially, we want to randomly generate a (possibly redundant) generator matrix \(\mathbf{H_{s}}\) of a code \(\mathcal{C_{s}}\), so that its dimension is \(r\), its intersection \(\mathcal{D_{s}}\) with the dual code is a doubly-even code with dimension \(d=r-g\) and the all-ones vector is a codeword. The last condition guarantees that a secret \(\mathbf{s}\) can always be found. Note that, we allow \(\operatorname{rank}(\mathbf{H_{s}})<n\). That is, we allow \(\mathbf{H_{s}}\) to be a "redundant" generator matrix of \(\mathcal{C_{s}}\), instead of a full-rank one. This is called adding column redundancy to the full-rank generator matrix of \(\mathcal{C_{s}}\), because after the obfuscation process, there will be redundant linear combinations in the columns of \(\mathbf{H_{s}}\). We give a more formal discussion of column redundancy in Appendix E. 
For such a generator matrix \(\mathbf{H_{s}}\), there is an invertible matrix \(\mathbf{Q}\) to perform a basis change so that \[\mathbf{H_{s}}\mathbf{Q}=(\mathbf{F},\mathbf{D},\mathbf{0}_{m_{1}\times(n-r)} )\, \tag{4.1}\] where \(\mathbf{D}\in\mathbb{F}_{2}^{m_{1}\times d}\) is a generator matrix of the doubly-even code \(\mathcal{D_{s}}\), and columns in \(\mathbf{F}\in\mathbb{F}_{2}^{m_{1}\times g}\) span \(\mathcal{C_{s}}/\mathcal{D_{s}}\). In addition, it can be shown that \(\operatorname{rank}(\mathbf{F}^{T}\mathbf{F})=\operatorname{rank}(\mathbf{Q }^{T}\mathbf{H_{s}^{T}}\mathbf{H_{s}}\mathbf{Q})=\operatorname{rank}(\mathbf{ H_{s}^{T}}\mathbf{H_{s}})=g\). Moreover, although there might be no unique standard form of \(\mathbf{H_{s}}\), the Gram matrix has a unique standard form. First note that row permutations have no effect on the Gram matrix, since \(\mathbf{P}^{T}\mathbf{P}=\mathbf{I}\) for a permutation matrix \(\mathbf{P}\). So we focus on column operations. As shown in Ref. [39], there exists an invertible matrix \(\mathbf{Q}\), so that \(\mathbf{Q}^{T}\mathbf{H_{s}^{T}}\mathbf{H_{s}}\mathbf{Q}=\operatorname{diag} \left(\mathbf{I_{g}},\mathbf{0}\right)\) or \(\operatorname{diag}\left(\bigoplus\limits_{i=1}^{g/2}\mathbf{J},\mathbf{0} \right)\), depending on whether at least one diagonal element of \(\mathbf{H_{s}^{T}}\mathbf{H_{s}}\) is \(1\) or not, where \(\mathbf{J}:=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\). However, for the construction purpose, we need to ensure that the all-ones vector is a codeword of \(\mathcal{C_{s}}\). Therefore, in Appendix D, we give a slightly different standard form of \(\mathbf{H_{s}^{T}}\mathbf{H_{s}}\), which can be achieved by \(\mathbf{H_{s}}\) in the form of \((\mathbf{F},\mathbf{D},\mathbf{0})\). In summary, sampling \((\mathbf{H_{s}},\mathbf{s})\) is reduced to generating an \(\mathbf{H_{s}}=(\mathbf{F},\mathbf{D},\mathbf{0})\) so that the Gram matrix \(\mathbf{H_{s}^{T}}\mathbf{H_{s}}\) is in the standard form presented in Appendix D. Then, a secret \(\mathbf{s}\) is sampled from the solutions of \(\mathbf{H_{s}}\ \mathbf{s}=\mathbf{1}\). Sampling such an \(\mathbf{H_{s}}\) is further reduced to sampling \(\mathbf{D}\) and \(\mathbf{F}\), so that \(\mathbf{D}\) is a generator matrix for a random doubly-even code and \(\mathbf{F}\) is a random matrix satisfying \(\mathbf{D}^{T}\mathbf{F}=\mathbf{0}\), \(\text{rank}(\mathbf{F}^{T}\mathbf{F})=g\) and that \(\mathbf{1}\) is in the column space of \((\mathbf{F},\mathbf{D})\). We claim that sampling such \(\mathbf{D}\) and \(\mathbf{F}\) can be done efficiently, with details deferred to Appendix D. ## 5 Classical attacks and security In this section, we examine the classical security of our protocol, i.e., the possibility that an efficient classical prover can pass the test. A straightforward classical attack is to simulate the IQP circuit sent by the verifier. We do not expect this to be efficient, since there is generally no structure to be exploited by a classical simulation algorithm. For example, due to the obfuscation as in Eq. (2.6), the geometry of the IQP circuit can be arbitrary, which implies that the treewidth in a tensor network algorithm cannot be easily reduced [40]. Here, we focus on another class of classical attacks based on extracting secrets. Given an IQP matrix \(\mathbf{H}\), once the hidden secret \(\mathbf{s}\) is found, a classical prover can first calculate the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\) efficiently. 
Then, he generates a sample \(\mathbf{x}\) which is orthogonal to \(\mathbf{s}\) with probability \((1+\langle\mathcal{Z}_{\mathbf{s}}\rangle)/2\) and not orthogonal to \(\mathbf{s}\) with probability \((1-\langle\mathcal{Z}_{\mathbf{s}}\rangle)/2\). The generated samples will have the correct correlation with the secret \(\mathbf{s}\) and hence pass the test. Kahanamoku-Meyer's attack algorithm for the Shepherd-Bremner construction is an instance of this class [30]. But generally, this attack may not be efficient. From a code perspective, the stabilizer construction is to sample a random code satisfying certain constraints, and hide it by adding redundancy and performing obfuscation. Finding the secret allows one to find the hidden subcode, which should be a hard problem in general. In particular, we formulate the following conjecture. **Conjecture 5.1** (Hidden Structured Code (HSC) Problem, Restatement of Conjecture 1.1).: _For certain appropriate choices of \(n,m,g\), there exists an efficiently samplable distribution over instances \((\mathbf{H},\mathbf{s})\) from the family \(\mathcal{H}_{n,m,g}\), so that no polynomial-time classical algorithm can find the secret \(\mathbf{s}\) given \(n,m\) and \(\mathbf{H}\) as input, with high probability over the distribution on \(\mathcal{H}_{n,m,g}\)._ Naturally, sampling instances with uniform distribution from \(\mathcal{H}_{n,m,g}\) is more favorable, since it does not put any bias on specific instances. For the underlying distribution induced by the stabilizer construction (Meta-Algorithm 1), it seems that it is uniform or close to uniform, as the output instances are random instances satisfying certain natural constraints imposed by the structure of the family \(\mathcal{H}_{n,m,g}\). Though, we do not have a rigorous proof for this claim. Moreover, a similar conjecture was given in Ref. [24] for the family \(\mathcal{H}_{n,m,q}^{\text{QRC}}\), where the problem is to decide whether a given \(\mathbf{H}\) is from the family \(\mathcal{H}_{n,m,q}^{\text{QRC}}\) or not. They conjectured that such a problem is NP-complete. Here, to better align with the classical attack, we consider the problem of finding the secret \(\mathbf{s}\) instead. To support Conjecture 5.1, we first generalize Kahanamoku-Meyer's attack algorithm to target any IQP-based verification protocols with \(\theta=\pi/8\). We show that this generalized attack, named the Linearity Attack, fails to break our construction. Furthermore, our analysis reveals that the loophole of the original Shepherd-Bremner construction stems from an improper choice of parameters. The Shepherd-Bremner construction can be improved by the column redundancy technique, which enables random sampling from the family \(\mathcal{H}_{n,m,q}^{\text{QRC}}\) with any possible parameters and thereby fixes the loophole. ``` 1:procedureExtractSecret(\(\mathbf{H}\)) 2: Initialize \(S\leftarrow\emptyset\). \(\triangleright\) candidate set 3:repeat 4: Uniformly randomly pick \(\mathbf{d}\in\mathbb{F}_{2}^{n}\). 5: Construct \(\mathbf{H_{d}}\) and \(\mathbf{G_{d}}=\mathbf{H_{d}^{T}H_{d}}\) 6:for each vector \(\mathbf{s}_{i}\in\ker(\mathbf{G_{d}})\)do 7:if\(\mathbf{s}_{i}\) passes certain property check then\(\triangleright\) To be specified 8: Add \(\mathbf{s}_{i}\) to \(S\). 9:endif 10:endfor 11:until some stopping criterion is met. 12:return\(S\) 13:endprocedure ``` Meta-Algorithm 2: The ExtractSecret(\(\mathbf{H}\)) procedure of Linearity Attack. 
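For concreteness, a compact Python rendering of Meta-Algorithm 2 might look as follows. It is a sketch rather than the authors' implementation: the property check is supplied by the caller (for example, the rank-threshold and doubly-even test discussed in Section 5.1.1), and the stopping criteria are simplified to a cap on restarts and on the total number of checks.

```python
import numpy as np
from itertools import product

def gf2_kernel(M):
    """Basis (as rows) of {x : Mx = 0 over GF(2)}, by row-reducing [M^T | I]."""
    M = (np.array(M) % 2).astype(np.uint8)
    rows, cols = M.shape
    A = np.concatenate([M.T, np.eye(cols, dtype=np.uint8)], axis=1)
    r = 0
    for c in range(rows):
        piv = next((i for i in range(r, cols) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(cols):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    return A[r:, rows:]

def extract_secret(H, property_check, max_restarts=100, max_checks=2**15, rng=None):
    """Sketch of Meta-Algorithm 2: collect kernel vectors of G_d passing a property check."""
    rng = rng or np.random.default_rng()
    H = np.array(H) % 2
    n = H.shape[1]
    candidates, checks = [], 0
    for _ in range(max_restarts):
        d = rng.integers(0, 2, n)                    # pick a random d
        Hd = H[(H @ d) % 2 == 1]                     # rows not orthogonal to d
        Gd = (Hd.T @ Hd) % 2                         # Gram matrix G_d
        basis = gf2_kernel(Gd)
        # walk through ker(G_d): all GF(2) combinations of the basis vectors
        for coeffs in product([0, 1], repeat=len(basis)):
            if checks >= max_checks:
                return candidates                    # stopping criterion
            s_i = (np.array(coeffs) @ basis) % 2
            checks += 1
            if s_i.any() and property_check(H, s_i):
                candidates.append(s_i)
        if candidates:                               # stop once something passed
            return candidates
    return candidates
```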
### Linearity Attack Classical attacks based on secret extraction aim to mimic the quantum behavior on certain candidate set \(S\). Observe that given an IQP circuit represented by the binary matrix \(\mathbf{H}\), a quantum prover can output a sample \(\mathbf{x}\), which has the correlation function \(\langle\mathcal{Z}_{\mathbf{s}}\rangle\) in the direction of \(\mathbf{s}\) for every \(\mathbf{s}\), even if it is not the secret of the verifier. If a classical prover can also generate samples that have the correct correlation with every \(\mathbf{s}\), then he has the power to classically sample from an IQP circuit, which is implausible [25, 26]. However, he has the knowledge that the verifier will only check one secret. Therefore, a general attack strategy for him is to first reduce the set of candidate secrets from \(\{0,1\}^{n}\) to a (polynomial-sized) subset \(S\), and then generate samples that have the correct correlation with every vector in the candidate set. Here, we discuss Linearity Attack, which is an instance of classical attacks based on secret extraction and generalizes the attack algorithm in Ref. [30]. It consists of two steps. First, it uses linear algebraic techniques to construct a candidate set \(S\). Then, the prover calculates the correlation function for every vector in \(S\), and outputs samples that have the correct correlation with those vectors. #### 5.1.1 Secret extraction Overview.The secret extraction procedure in the Linearity Attack is presented in Meta-Algorithm 2, which is a generalized version of the procedure described in Ref. [30]. The algorithm begins by randomly selecting a vector \(\mathbf{d}\) and eliminating rows in \(\mathbf{H}\) that are orthogonal to \(\mathbf{d}\), resulting in \(\mathbf{H_{d}}\). Subsequently, the algorithm searches for vectors that satisfy certain property check in \(\ker(\mathbf{G_{d}})\), where \(\mathbf{G_{d}}=\mathbf{H_{d}^{T}H_{d}}\) represents the Gram matrix associated with \(\mathbf{d}\). In what follows, we discuss some technical details and defer the analysis to Section 5.2. Secret extraction in Kahanamoku-Meyer's attack.Meta-Algorithm 2 differs slightly from the approach described in Ref. [30]. In the original algorithm, the classical prover begins by constructing a matrix \(\mathbf{M}\in\mathbb{F}_{2}^{l\times n}\) through linear combinations of rows in \(\mathbf{H}\). Specifically, after sampling the vector \(\mathbf{d}\), the classical prover proceeds to sample \(l\) random vectors \(\mathbf{e}_{1},\ldots,\mathbf{e}_{l}\). Then, the \(j\)-th row of \(\mathbf{M}\) is defined by, \[\mathbf{m}_{j}^{T}:=\sum_{\begin{subarray}{c}\mathbf{p}^{T}\in\mathrm{row}( \mathbf{H})\\ \mathbf{p}\cdot\mathbf{d}=\mathbf{p}\cdot\mathbf{e}_{j}=1\end{subarray}}\mathbf{ p}^{T}\;. \tag{5.1}\] After that, the original algorithm searches for the vectors that can pass certain property check in \(\ker(\mathbf{M})\) instead. Our secret extraction algorithm is a generalization and simplification to the original approach. In Appendix G.1, we show that rows in \(\mathbf{M}\) belong to the row space of \(\mathbf{G}_{\mathbf{d}}\). Therefore, to minimize the size of \(\ker(\mathbf{M})\), one can simply set \(\mathbf{M}=\mathbf{G}_{\mathbf{d}}\), eliminating the need to sample the vectors \(\mathbf{e}_{1},\ldots,\mathbf{e}_{l}\). Property check.Next, we discuss the property checks designed to determine whether a vector in \(\ker(\mathbf{G}_{\mathbf{d}})\) can serve as a potential secret or not. 
In the context of the Shepherd-Bremner construction targeted in Ref. [30], the property check is to check whether \(\mathbf{s}_{i}\) in \(\ker(\mathbf{M})\) corresponds to a quadratic-residue code or not. To accomplish this, the prover constructs \(\mathbf{H}_{\mathbf{s}_{i}}\) for the vector \(\mathbf{s}_{i}\) and performs what we refer to as the QRC check, examining whether \(\mathbf{H}_{\mathbf{s}_{i}}\) generates a quadratic-residue code (with possible row reordering). However, determining whether a generator matrix generates a quadratic-residue code is a nontrivial task. Consequently, the algorithm in Ref. [30] attempts to achieve this by assessing the weight of the codewords in the code generated by \(\mathbf{H}_{\mathbf{s}_{i}}\). In a quadratic-residue code, the weight of the codewords will be either \(0\) or \(3\) (\(\mathrm{mod}\)\(4\)). But still, there will be exponentially many codewords, and checking the weights of the basis vectors is not sufficient to ensure that all codewords have weight either \(0\) or \(3\) (\(\mathrm{mod}\)\(4\)). So in practice, the prover can only check a small number of the codewords. For instances derived from the stabilizer construction, the prover will have less information about the code \(\mathcal{C}_{\mathbf{s}}\); he only has the knowledge that this code has a large doubly-even subcode, as quantified by the rank of \(\mathbf{G}_{\mathbf{s}}\). Therefore, the property check for Meta-Algorithm 2 involves checking whether the rank of \(\mathbf{H}_{\mathbf{s}_{i}}^{T}\mathbf{H}_{\mathbf{s}_{i}}\) falls below certain threshold and whether self-dual intersection \(\mathcal{D}_{\mathbf{s}_{i}}\) is doubly-even. However, determining an appropriate threshold presents a challenge for the classical prover, who can generally only make guesses. If the chosen threshold is smaller than the rank of \(\mathbf{G}_{\mathbf{s}}\), then the secret extraction algorithm will miss the real secret, even if it lies within \(\ker(\mathbf{G}_{\mathbf{d}})\). Stopping criteria.Lastly, various stopping criteria can be employed in the secret extraction procedure. One approach is to halt the procedure once a vector successfully passes the property check, as adopted in Ref. [30]. Alternatively, the procedure can be stopped after a specific number of repetitions or checks. In our implementation, we utilize a combination of these two criteria. If no vectors are able to pass the property check before the stopping criterion is reached, an empty candidate set \(S\) is returned, indicating a failed attack. Conversely, if the candidate set \(S\) is non-empty, the attack proceeds to the classical sampling step to generate classical samples. #### 5.1.2 Classical sampling Classical sampling based on multiple candidate secrets is nontrivial. Mathematically, the problem is formulated as follows. **Problem 5.2**.: _Given an IQP circuit \(C\) and a candidate set \(S=\{\mathbf{s}_{1},\ldots,\mathbf{s}_{t}\}\), outputs a sample \(\mathbf{x}\) so that_ \[\mathbb{E}[(-1)^{\mathbf{x}\cdot\mathbf{s}_{i}}]=\left\langle\mathcal{Z}_{ \mathbf{s}_{i}}\right\rangle\, \tag{5.2}\] _for \(i=1,\ldots,t\), where \(\mathbb{E}[\cdot]\) is over the randomness of the algorithm._ Note that \(\mathbb{E}[(-1)^{\mathbf{x}\cdot\mathbf{s}_{i}}]\) is the expectation value of Eq. (1.2). We may allow a polynomially-bounded additive error in the problem formulation, considering the inevitable shot noise due to finite samples. The complexity of this problem depends on various situations. 
To the best of our knowledge, we are not aware of an efficient classical algorithm that solves this problem in general. In Appendix G.2, we present two sampling algorithms that will work in some special cases. A sufficient condition for these two sampling algorithms to work is that the candidate set is an independent subset of \(\{0,1\}^{n}\). Naive sampling algorithm.In this work, we mainly focus on the case \(|S|=1\), in which case the problem is easy to solve, yet remains worth discussing. A naive sampling algorithm is as follows. To generate samples with the correct correlation on \(\mathbf{s}\), one just needs to output samples that are orthogonal to the candidate vector \(\mathbf{s}^{\prime}\) with probability \(\beta_{\mathbf{s}^{\prime}}=(\left\langle\mathcal{Z}_{\mathbf{s}^{\prime}} \right\rangle+1)/2\) and otherwise with probability \(1-\beta_{\mathbf{s}^{\prime}}\). One can prove that if the candidate secret from the ExtractSecret procedure is the real secret \(\mathbf{s}\), then the generated samples using this strategy will have the correlation function approximately \(\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle\) with the real secret. Otherwise, the correlation function with the real secret will be zero. We have the following lemma (see Appendix F for the proof). **Lemma 5.3**.: _Given a matrix \(\mathbf{H}\) and two vectors \(\mathbf{s}\neq\mathbf{s}^{\prime}\), let \(\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle\) and \(\left\langle\mathcal{Z}_{\mathbf{s}^{\prime}}\right\rangle\) be their corresponding correlation functions, as defined in Eq. (1.3). If a sample \(\mathbf{x}\) is generated to be a vector orthogonal to \(\mathbf{s}^{\prime}\) with probability \(\beta_{\mathbf{s}^{\prime}}=(\left\langle\mathcal{Z}_{\mathbf{s}^{\prime}} \right\rangle+1)/2\) and otherwise with probability \(1-\beta_{\mathbf{s}^{\prime}}\), then \(\mathbb{E}[(-1)^{\mathbf{x}\cdot\mathbf{s}}]=0\)._ The above lemma holds even if \(\mathbf{Hs}=\mathbf{Hs}^{\prime}\), in which case \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\) are said to be _equivalent secrets_. Equivalent secrets have the same non-orthogonal and redundant part, and the correlation functions \(\left\langle\mathcal{Z}_{\mathbf{s}}\right\rangle\) and \(\left\langle\mathcal{Z}_{\mathbf{s}^{\prime}}\right\rangle\) are the same. It is clear that the number of equivalent secrets is given by \(2^{n-\mathrm{rank}(\mathbf{H})}\), which will be \(1\) if \(\mathbf{H}\) is of full column rank. When there are multiple equivalent secrets, it could be the case that the vector \(\mathbf{s}^{\prime}\) is returned by the secret extraction procedure, because it can also pass the property check, even if it is not the real secret itself. In this case, our previous classical sampling algorithm can only give samples with zero correlation function on the real secret \(\mathbf{s}\), according to Lemma 5.3. Sampling according to \(\mathbf{H}\).To address this issue, we propose a second classical sampling algorithm. Observe that linear combination of rows in \(\mathbf{R}_{\mathbf{s}}\) gives vectors that are orthogonal to \(\mathbf{s}\) and summation of an odd number of rows in \(\mathbf{H}_{\mathbf{s}}\) gives vectors that are not orthogonal to \(\mathbf{s}\). We denote the former set of vectors \(\mathsf{S}_{0}(\mathbf{s})\) and the latter \(\mathsf{S}_{1}(\mathbf{s})\). The identification of these sets relies on determining the submatrices \(\mathbf{H}_{\mathbf{s}}\) and \(\mathbf{R}_{\mathbf{s}}\). 
To achieve this, it suffices to find a vector \(\mathbf{s}^{\prime}\) that is equivalent to the real secret \(\mathbf{s}\). Therefore, upon receiving the candidate secret \(\mathbf{s}^{\prime}\) from the secret extraction procedure, the classical prover proceeds by computing \(\left\langle\mathcal{Z}_{\mathbf{s}^{\prime}}\right\rangle\) and \(\beta_{\mathbf{s}^{\prime}}\), followed by identifying \(\mathsf{S}_{0}(\mathbf{s}^{\prime})\) and \(\mathsf{S}_{1}(\mathbf{s}^{\prime})\). A sample \(\mathbf{x}\) is drawn from \(\mathsf{S}_{0}(\mathbf{s}^{\prime})\) with probability \(\beta_{\mathbf{s}^{\prime}}\) and from \(\mathsf{S}_{1}(\mathbf{s}^{\prime})\) with probability \(1-\beta_{\mathbf{s}^{\prime}}\). If the vector \(\mathbf{s}^{\prime}\) is equivalent to \(\mathbf{s}\), then this sampling algorithm will generate samples with the correct correlation function with respect to the real secret \(\mathbf{s}\), as opposed to the naive sampling algorithm. This also explains why we consider IQP matrices of full column rank in the stabilizer construction. If the classical prover is given an IQP matrix \(\mathbf{H}\) that is not full-rank, he can always apply an invertible matrix \(\mathbf{Q}\) so that \(\mathbf{H}\mathbf{Q}=(\mathbf{H}^{\prime},\mathbf{0})\), where \(\mathbf{H}^{\prime}\) is of full column rank. Then, he runs the secret extraction algorithm on \(\mathbf{H}^{\prime}\). Once a candidate secret is found, he can use it to identify the corresponding \(\mathsf{S}_{0}\) and \(\mathsf{S}_{1}\) from the original matrix \(\mathbf{H}\), as well as to compute the correlation function. Finally, if the identification matches that of the real secret, then using the second classical sampling algorithm will allow him to pass the test. ### Analysis Here, we present an analysis of the secret extraction step of the Linearity Attack. Probability of sampling a good \(\mathbf{d}\). First, we have the following proposition. **Proposition 5.4**.: _Given an IQP matrix \(\mathbf{H}\) and two vectors \(\mathbf{d}\) and \(\mathbf{s}\), we have \(\mathbf{G}_{\mathbf{s}}\mathbf{d}=\mathbf{G}_{\mathbf{d}}\,\mathbf{s}\), where \(\mathbf{G}_{\mathbf{s}}=\mathbf{H}_{\mathbf{s}}^{T}\mathbf{H}_{\mathbf{s}}\) and \(\mathbf{G}_{\mathbf{d}}=\mathbf{H}_{\mathbf{d}}^{T}\mathbf{H}_{\mathbf{d}}\). Therefore, \(\mathbf{s}\) lies in \(\ker(\mathbf{G}_{\mathbf{d}})\) if and only if \(\mathbf{G}_{\mathbf{s}}\mathbf{d}=\mathbf{0}\), which happens with probability \(2^{-g}\) over all choices of \(\mathbf{d}\), where \(g=\operatorname{rank}(\mathbf{G}_{\mathbf{s}})\) is the rank of \(\mathbf{G}_{\mathbf{s}}\)._ The proof is given in Appendix G.3. This proposition tells us that if the random \(\mathbf{d}\) does not satisfy \(\mathbf{G}_{\mathbf{s}}\mathbf{d}=\mathbf{0}\), then the verifier's secret \(\mathbf{s}\) will not lie in \(\ker(\mathbf{G}_{\mathbf{d}})\). In this case, Meta-Algorithm 2 will not be able to find the correct secret from the kernel of \(\mathbf{G}_{\mathbf{d}}\), and it has to be started over with a new \(\mathbf{d}\). If the correlation function with respect to the real secret has inverse polynomial scaling, i.e., \(2^{-g/2}=\Omega(1/\operatorname{poly}(n))\), then the probability of sampling a good \(\mathbf{d}\) is also large, namely \(2^{-g}=\Omega(1/\operatorname{poly}(n))\). This might appear advantageous for the attacker. But note that a classical attacker cannot determine whether the sampled \(\mathbf{d}\) is good or not before finding the real secret.
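Proposition 5.4 is a purely linear-algebraic identity over GF(2), so it can be spot-checked directly. The following sketch does this for random small matrices; it is an illustration, not a proof, and the matrices involved are arbitrary rather than instances of the construction.

```python
import numpy as np

def gram_restricted(H, v):
    """G_v = H_v^T H_v over GF(2), where H_v keeps the rows of H not orthogonal to v."""
    Hv = H[(H @ v) % 2 == 1]
    return (Hv.T @ Hv) % 2

rng = np.random.default_rng(7)
for _ in range(1000):                          # random spot-check of Proposition 5.4
    m, n = rng.integers(2, 12, size=2)
    H = rng.integers(0, 2, size=(m, n))
    s, d = rng.integers(0, 2, n), rng.integers(0, 2, n)
    lhs = (gram_restricted(H, s) @ d) % 2      # G_s d
    rhs = (gram_restricted(H, d) @ s) % 2      # G_d s
    assert np.array_equal(lhs, rhs)
print("G_s d = G_d s held on all random trials")
```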
In fact, he cannot even _definitively_ determine whether a vector \(\mathbf{s}_{i}\) in \(\ker(\mathbf{G}_{\mathbf{d}})\) that passes the property check is the real secret or not. Size of \(\ker(\mathbf{G}_{\mathbf{d}})\). The next question is how large \(\ker(\mathbf{G}_{\mathbf{d}})\) is. This is important because the steps before the property check take \(O(n^{3})\) time, which comes from the Gaussian elimination used to solve the linear system to find the kernel of \(\mathbf{G}_{\mathbf{d}}\). However, for the property check, the prover will potentially need to check every vector in \(\ker(\mathbf{G}_{\mathbf{d}})\), which takes time proportional to its size. It is important to note that checking the basis vectors of \(\ker(\mathbf{G}_{\mathbf{d}})\) is not sufficient to find the real secret \(\mathbf{s}\), because the linearity structure is not preserved under taking the Gram matrix. Even if \(\mathbf{s}\in\ker(\mathbf{G}_{\mathbf{d}})\), the basis vectors of the kernel space can all have high ranks for their associated Gram matrices. Below, we give an expected lower bound for the size of \(\ker(\mathbf{G}_{\mathbf{d}})\), with the proof presented in Appendix G.4. **Theorem 5.5**.: _Given \((\mathbf{H},\mathbf{s})\in\mathcal{H}_{n,m,g}\), randomly sample a vector \(\mathbf{d}\). Then, the size of \(\ker(\mathbf{G}_{\mathbf{d}})\) is greater than \(2^{n-m/2}\) in expectation over the choice of \(\mathbf{d}\)._ Therefore, the size of \(\ker(\mathbf{G}_{\mathbf{d}})\) is increased exponentially by increasing \(n\). The increase of \(n\) can be achieved by adding column redundancy, i.e., adding more all-zeros columns in Eq. (4.1). But in the stabilizer construction, the column redundancy cannot be arbitrarily large. Recall that to make the IQP matrix \(\mathbf{H}\) full rank, one needs to add at least \(n-r\) redundant rows, where \(r=\operatorname{rank}(\mathbf{H}_{\mathbf{s}})\). If \(\mathbf{H}\) is not full rank, then as we discussed in Section 5.1.2, the classical prover can always perform column operations to effectively reduce the number of columns \(n\), and hence reduce the dimension of \(\ker(\mathbf{G}_{\mathbf{d}})\). Suggested parameter regime. Based on the above analysis, it is important to choose a good parameter regime to invalidate the Linearity Attack. Suppose the expected security parameter is \(\lambda\), meaning that the expected time complexity of a classical prover is \(\Omega(2^{\lambda})\). Then, generally we require \(n-m/2\geq\lambda\) for \(\ker(\mathbf{G_{d}})\) to be sufficiently large, and the number of redundant rows \(m-m_{1}\geq n-r\) for \(\mathbf{H}\) to be full-rank, where \(m_{1}\) is the number of rows in \(\mathbf{H_{s}}\). Specifically, for the stabilizer construction, given \(n\) and \(g\), we randomly choose the parameter \(r\geq g\). Then, we require that the numbers of rows in \(\mathbf{H_{s}}\) and \(\mathbf{H}\) satisfy \[m_{1}\leq n-2\lambda+r,\qquad m_{1}+n-r\leq m\leq 2(n-\lambda)\, \tag{5.3}\] respectively. In addition, since \(m\) is the number of gates in the IQP circuit, we will require sufficiently large \(n\) and \(m=\Omega(n)\) to invalidate classical simulation. Numerical simulation. In Fig. 2 (a), we plot the dimension of \(\ker(\mathbf{G_{d}})\) for \(g=1,3,5\) and \(m=200\). For each number of columns \(n\), we sample \(100\) instances from \(\mathcal{H}_{n,m,g}\) with the stabilizer construction (Meta-Algorithm 1). Then, a random \(\mathbf{d}\) is sampled and we calculate the dimension of \(\ker(\mathbf{G_{d}})\).
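A scaled-down version of this computation is easy to reproduce. The sketch below estimates \(\dim\ker(\mathbf{G_{d}})\) for random choices of \(\mathbf{d}\) and compares it with the \(n-m/2\) line of Fig. 2(a); note that for simplicity it uses a uniformly random \(\mathbf{H}\) rather than an instance produced by the stabilizer construction, so it only illustrates the order of magnitude.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2)."""
    M = (np.array(M) % 2).astype(np.uint8).copy()
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def kernel_dim_of_Gd(H, rng):
    n = H.shape[1]
    d = rng.integers(0, 2, n)
    Hd = H[(H @ d) % 2 == 1]
    Gd = (Hd.T @ Hd) % 2
    return n - gf2_rank(Gd)                    # dim ker(G_d)

rng = np.random.default_rng(1)
n, m = 130, 200                                # roughly the regime of Fig. 2(a)
H = rng.integers(0, 2, size=(m, n))            # random H, purely illustrative
dims = [kernel_dim_of_Gd(H, rng) for _ in range(50)]
print(np.mean(dims), "vs. the n - m/2 line:", n - m / 2)
```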
The asterisks are the expected lower bound \(n-m/2\), as shown in Theorem 5.5. The numerical experiment demonstrates good agreement with the theoretical prediction. In Fig. 2 (b), we present the numerical results for the success probability of the attack. Although the maximum number of property checks should be \(2^{50}\) or larger to invalidate the attack, we set it to \(2^{15}\) as a proof of principle in the numerical experiment. For each number of columns \(n\), we sample \(100\) random instances from \(\mathcal{H}_{n,m,g}\), where \(m=200\). Then, the Linearity Attack is applied to each instance and the success probability is defined as the fraction of successfully attacked instances, i.e., instances for which the attacker can classically generate samples that spoof the test. As one can see, the success probability decreases to zero as \(n\) exceeds \(m/2+15=115\), as expected.

Figure 2: **(a)** The dimension of \(\ker(\mathbf{G_{d}})\) for \(g=1,3,5\) and the number of rows \(m=200\). The asterisks indicate the expected lower bound \(n-m/2\). **(b)** The success probability of the attack. Here, we set the threshold for the rank in the property check to be the same as \(g\).

Challenge. In addition, we have posted a challenge problem as well as the source code for generation and verification on GitHub, to motivate further study. The challenge problem is given by the \(\mathbf{H}\) matrix of a random instance from \(\mathcal{H}_{n,m,g}\) with \(n=300\) and \(m=360\); the \(g\) parameter is hidden because in practice, the prover can only guess a value. One needs to generate samples with the correct correlation function in the direction of the hidden secret to win the challenge. ### A fix of the Shepherd-Bremner construction Finally, we would like to remark why the attack in Ref. [30] can break the Shepherd-Bremner construction and how we can fix it by adding column redundancy. Let \(\mathcal{H}_{n,m,q}^{\text{QRC}}=\{(\mathbf{H},\mathbf{s})\}\) be a family of pairs of an IQP matrix \(\mathbf{H}\in\mathbb{F}_{2}^{m\times n}\) and a secret \(\mathbf{s}\) so that \(\mathbf{H}_{\mathbf{s}}\) generates a QRC of length \(q\) (up to row permutations) and \(\mathbf{H}\) is of full column rank. What the construction recipe of Ref. [24] does is to randomly sample instances from \(\mathcal{H}_{n,m,q}^{\text{QRC}}\), where \(n=(q+3)/2\) and \(m\geq q\), leaving a loophole for the recent classical attack [30]. To see why the parameter regime is as above, we first note that the length of the QRC is \(q\), implying that the number of rows in \(\mathbf{H}_{\mathbf{s}}\) is \(q\) and hence \(m\geq q\). Moreover, the dimension of a length-\(q\) QRC is \((q+1)/2\), which implies that the rank of \(\mathbf{H}_{\mathbf{s}}\) is \((q+1)/2\). But an all-ones column was added in the construction (see Eq. (2.4)), which is a codeword of the QRC, leading to \(n=(q+3)/2\). In the Shepherd-Bremner construction, the rank of the Gram matrix \(\mathbf{G}_{\mathbf{s}}\) associated with the real secret \(\mathbf{s}\) is \(1\) according to Corollary 3.3. Therefore, the probability of choosing a good \(\mathbf{d}\) is \(1/2\) (as also shown in Theorem 3.1 of Ref. [30]). However, since the number of columns and the number of rows in \(\mathbf{H}\) are \(n=(q+3)/2\) and \(m\geq q\), respectively, the size of \(\ker(\mathbf{G}_{\mathbf{d}})\) is generally small.
As a result, the prover can efficiently explore the entire \(\ker(\mathbf{G}_{\mathbf{d}})\), and if no vector passes the property check, the prover can simply regenerate \(\mathbf{d}\) and repeat the secret extraction procedure. The numerical results in Ref. [30] indicated that the size of \(\ker(\mathbf{G}_{\mathbf{d}})\) is indeed constant when applied to the Shepherd-Bremner construction, which suggests that an efficient classical prover can pass the test and hence break the original construction. Specifically, for the challenge instance posted in Ref. [24], \(m\) is taken to be \(2q\). Then, according to Theorem 5.5, the dimension of \(\ker(\mathbf{G}_{\mathbf{d}})\) is expected to be constant, making it susceptible to the attack. To address this issue, the original Shepherd-Bremner construction can be enhanced by introducing additional column redundancy to extend the number of columns \(n\), which can achieve random sampling from families \(\mathcal{H}_{n,m,q}^{\text{QRC}}\) with any \(n\geq(q+1)/2\) (Appendix E). This hides the dimension information of the hidden QRC. Combined with other obfuscation techniques in the Shepherd-Bremner construction, this achieves random sampling from \(\mathcal{H}_{n,m,q}^{\text{QRC}}\) with any possible parameters. Below, we propose a parameter regime that can invalidate the attack in Ref. [30]. Given the length \(q\) of the QRC, we have \(r=(q+1)/2\) and \(m_{1}=q\)[29]. So, the first formula in Eq. (5.3) gives \(n\geq(q-1)/2+2\lambda\) and the second formula gives the range of the number of redundant rows \(n-(q+1)/2\leq m_{2}\leq 2n-2\lambda-q\). In this way, the size of \(\ker(\mathbf{G}_{\mathbf{d}})\) will be larger than \(2^{\lambda}\) in general, offering a viable solution to fortify the Shepherd-Bremner construction against the attack. Note that the column redundancy technique was used in Ref. [28] to scramble a small random IQP circuit into a large one, to maintain the value of the correlation function, although its connection to the classical security was not explored. Moreover, a multi-secret version was explored in Ref. [41], which was shown to be more vulnerable to the classical attack instead. We perform numerical experiment to support our previous analysis. When \(m=2q\), \(n\) can be as large as \(r+q\) and the expected kernel dimension of \(\mathbf{G}_{\mathbf{d}}\) is \(r\). In Fig. 3 (a), we plot the kernel dimensions under the setting \(n=r+q\) and \(m=2q\), with \(q=103,127,151\) and \(167\). For each parameter set, 100 instances are sampled from \(\mathcal{H}_{n,m,q}^{\text{QRC}}\), and then a random \(\mathbf{d}\) is sampled for each instance and we evaluate the dimension of \(\ker(\mathbf{G_{d}})\). We also plot the expected lower bound \(n-m/2\) for a comparison. In Fig. 3 (b), we plot the success probability versus the number of columns (qubits) \(n\). Here, \(m\) is set to be \(2q\) and \(n\) is increased from \(r=(q+1)/2\) to \(r+q\). For each value of \(n\), 100 random instances from \(\mathcal{H}_{n,m,q}^{\text{QRC}}\) are sampled, and the success probability is the fraction of successful attacks among them. We set the security parameter to be 15 for a proof of principle, meaning that the maximum number of QRC checks is set to be \(2^{15}\). The success probabilities drop down to zero when \(n>q+15\), as expected. Our analysis and numerical results demonstrate that Claim 3.1 in Ref. 
[30], which originally states that the QRC-based construction can be broken efficiently by the KM attack, turns out to be false under appropriate choices of parameters.

Figure 3: **(a)** The dimension of \(\ker(\mathbf{G_{d}})\) for \(q=103,127,151,167\). Here, the number of rows and columns are \(m=2q\) and \(n=r+q\), where \(r=(q+1)/2\) is the dimension of the QRC. **(b)** The success probability of the attack. The asterisks denote the points \((q+15,0)\).

## 6 Discussion In this work, we give the stabilizer scheme for IQP-based protocols for verifiable quantum advantage, which focuses on the case \(\theta=\pi/8\) in the IQP circuits. With the connection between IQP circuits, the stabilizer formalism and coding theory, we study the properties of correlation functions and IQP circuits. Based on these properties, we give an efficient procedure to sample generator matrices of random codes satisfying certain conditions, which lies at the core of our stabilizer scheme. Then, one needs to hide and obfuscate this generator matrix into a larger matrix. We propose a new obfuscation method called column redundancy, which uses the redundant generator matrix to hide the information of the dimension of the hidden code. To explore the classical security of our protocol, we consider a family of attacks based on extracting secrets. We conjecture that such attacks cannot be efficient classically for random instances generated by our stabilizer scheme. To support this conjecture, we extend the recent attack algorithm on the QRC-based construction to the general case for \(\theta=\pi/8\), which we call the Linearity Attack. Our analysis shows that this attack fails to find the secret in polynomial time when instances are chosen from a good parameter regime. Notably, our column redundancy technique also fixes the loophole in the original Shepherd-Bremner construction. Our work paves the way for cryptographic verification of quantum computational advantage in the NISQ era. There are several open problems for future research. The most important one is to rigorously prove the security of the IQP-based verification protocols. In Conjecture 5.1, we state that classical attacks based on secret extraction are hard on average. It would be favorable to prove the random self-reducibility of the problem, so that the hardness conjecture can be relaxed to the worst-case scenario. For example, a worst-to-average-case reduction was recently found for computing the probabilities of IQP circuits, and it would be interesting to see if the techniques of Ref. [42] could be leveraged to gain insight into the validity of Conjecture 5.1. Before one can rigorously prove the hardness of classical attacks, one might gain intuition by considering other possible classical attacks. In terms of implementing the protocol in practice, generating instances according to a given architecture and noise analysis are also important open problems. We believe that the mathematical structure of the stabilizer scheme provides a promising avenue for the use of certain cryptographic techniques to improve the security of IQP-based protocols, and to construct instances that can be readily implemented with current technology. Acknowledgement. We thank Ryan Snoyman for sharing his honors thesis, where he also considered the same problem and made some insightful observations. We also thank Earl Campbell, Ryan Mann, Mauro Morales and Man-Hong Yung for helpful discussions. BC acknowledges the support from the Sydney Quantum Academy.
MJB acknowledges the support of Google. MJB acknowledges support by the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), project number CE170100012. ZJ acknowledges the support of a startup funding from Tsinghua University.
2307.09922
Information Structures in AC/DC Grids
The converters in an AC/DC grid form actuated boundaries between the AC and DC subgrids. We show how in both simple linear and balanced dq-frame models, the states on either side of these boundaries are coupled only by control inputs. This topological property imparts all AC/DC grids with poset-causal information structures. A practical benefit is that certain decentralized control problems that are hard in general are tractable for poset-causal systems. We also show that special cases like multi-terminal DC grids can have coordinated and leader-follower information structures.
Josh A. Taylor
2023-07-19T11:50:06Z
http://arxiv.org/abs/2307.09922v1
# Information Structures in AC/DC Grids ###### Abstract The converters in an AC/DC grid form actuated boundaries between the AC and DC subgrids. We show how in both simple linear and balanced dq-frame models, the states on either side of these boundaries are coupled only by control inputs. This topological property imparts all AC/DC grids with poset-causal information structures. A practical benefit is that certain decentralized control problems that are hard in general are tractable for poset-causal systems. We also show that special cases like multi-terminal DC grids can have coordinated and leader-follower information structures. AC/DC grid; information structure; poset causality; decentralized control; multi-terminal direct current grid ## I Introduction AC/DC grids consist of several AC and DC subgrids inter-faced by power electronic converters. For example, in a point-to-point DC link, the DC subgrid is a single line, which is interfaced with one or two AC subgrids through a pair of converters. Multi-terminal DC (MTDC) grids interconnect several AC subgrids through voltage-sourced converters (VSCs) [1]. In any AC/DC grid, the converters comprise actuated boundaries between the AC and DC subgrids. In this paper, we study how this topological property determines the information structure of an AC/DC grid. The information structure of a dynamical system encodes which states influence which other states. It is natural to think in terms of subsystems--the information structure of an AC/DC grid specifies how each AC or DC subgrid influences the others. The main result of this paper is that AC/DC grids have poset-causal information structures [2]. This is because the structure of the coupling between the subgrids can be chosen to be a directed acyclic graph (DAG). One reason this is useful is that poset-causal systems admit tractable, optimal decentralized controllers [2]. Coordinated systems [3] and leader-follower systems [4] are successive special cases that admit even simpler decentralized controllers. An interesting question is whether these information structures make AC/DC grids amenable to other specialized tools, e.g., via controllability or observability [5, 6]. To date, few studies have made explicit use of the network structure of general AC/DC grids. There are a handful of papers that design distributed controllers for MTDC grids [7, 8, 9]. Reference [10] relates the controllability of systems with point-to-point DC links to the effective reactance of the AC grid. Reference [11] designs local controllers for general AC/DC grids; whereas we focus on information structure, they focus on specific control objectives and stability. The most closely related paper to the present is the author's prior work [12], which showed that DC-segmented power systems are poset-causal. Our original results are as follows. The poset-causality of an AC/DC grid depends on how each VSC partitions the states of the AC and DC subgrids on either side; we define this precisely in Section III-B. We see that this partitioning occurs in a simple linear model in Section IV. In Section V, we show how different physical approximations each lead to partitioning in a standard dq-frame model. In Section VI, we show that in both the linear and dq-frame models, the partitions can be chosen so as to yield poset-causality; this is due to the fact that an undirected graph always has an acyclic orientation [13]. 
We also show that a single DC subgrid connected to multiple AC subgrids, i.e., an MTDC grid, is a coordinated system; and a single AC subgrid connected to a single DC subgrid is a leader-follower system. In Section VII-B, we use concepts from [3] to design a decentralized controller for an MTDC grid with an additional point-to-point DC link between two of the AC subgrids. ## II Poset-causality A poset \(\Psi\) is made up of a set P and a binary relation \(\preceq\)[14]. The following properties hold for all \(a,b,c\in\) P. * Reflexivity: \(a\preceq a\); * Antisymmetry: \(a\preceq b\) and \(b\preceq a\Rightarrow a=b\); * Transitivity: \(a\preceq b\) and \(b\preceq c\Rightarrow a\preceq c\). The following two results relate graphs and posets. **Result 1**: _Every DAG specifies a unique poset [15]._ **Result 2**: _Given a simple undirected graph, we can choose the directions of its edges so that the resulting directed graph is acyclic [13]. This is called an acyclic orientation._ Given \(a\in\) P, we denote the set of upstream elements \(\uparrow a=\{b\in\) P \(|\)\(b\preceq a\}\). The relations between elements of a poset can be encoded in a function \(\sigma:\text{P}\times\text{P}\rightarrow\mathbb{R}\) such that \(\sigma(a,b)=0\) when \(a\not\preceq b\). Its incidence algebra, \(\Psi\), is the set of all such functions. If P has a finite number of elements, then for any \(\sigma\in I(\Psi)\), there is a matrix \(M\) for which \(M(j,i)=\sigma(g(i),g(j))\), where \(g:\mathbb{N}\rightarrow\) P maps row and column indices to elements of P. With a slight abuse of notation we write \(M\in I(\Psi)\). ### _Nonlinear systems_ Consider the system \[\dot{x}=f(x,u),\quad z=h(x,u), \tag{1}\] where \(x\in\mathbb{R}^{n}\), \(u\in\mathbb{R}^{m}\), and \(z\) are states, inputs, and outputs. Suppose the system consist of \(p\) subsystems. We partition \(x\) into \([x_{1};x_{2};\ldots;x_{p}]\), where \(x_{i}\in\mathbb{R}^{n_{i}}\) and \(\sum_{i}n_{i}=n\). \(x_{i}\) are the states of subsystem \(i\). We similarly partition the inputs \(u\in\mathbb{R}^{m}\) into \([u_{1};u_{2};\ldots;u_{p}]\), where \(u_{i}\in\mathbb{R}^{m_{i}}\) and \(\sum_{i}m_{i}=m\). Suppose that we have a poset, \(\Psi=(\mathsf{P},\preceq)\), and that its elements are the subsystems, \(\mathsf{P}=\{1,...,p\}\). System (1) is poset-causal if we can write it as \[\dot{x}_{i}=f_{i}\left((x_{j},u_{j})_{j\in\uparrow i}\right),\quad z_{i}=h_{i} \left((x_{j},u_{j})_{j\in\uparrow i}\right),\quad i\in\mathsf{P}.\] Intuitively, each subsystem's state and output depend only on upstream subsystems. ### _LTI systems_ Consider the LTI system \[\dot{x}=Ax+Bu+Fw,\quad z=Cx+Du, \tag{2}\] where \(w\) is a disturbance. We assume \(C^{\top}D=0\), \(C^{\top}C\) is positive semidefinite, \(D^{\top}D\) is positive definite, and \(F\) is block diagonal. We write the matrix \(A\) as \([A_{ij}]_{i,j\in\{1,\cdots,p\}}\), where \(A_{ij}\) is the block indexed by the \(i^{\text{th}}\) and \(j^{\text{th}}\) partitions of \(x\). Matrices \(B,F,C\) and \(D\) can be similarly organized into blocks. Consider a poset \(\Psi\), and suppose \(\sigma\in I(\Psi)\). The matrix \(A\) belongs to the block incidence algebra \(I_{A}(\Psi)\) if \(A_{ji}=\mathbf{0}\) whenever \(\sigma(g(i),g(j))=0\), where \(\mathbf{0}\) is the appropriately sized matrix of zeroes. A similar definition holds for \(I_{B}(\Psi)\). 
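These block-sparsity conditions are straightforward to test numerically. Below is a minimal sketch, under an assumed three-subsystem DAG and assumed block sizes, that checks whether a matrix \(A\) lies in the block incidence algebra \(I_{A}(\Psi)\) of the poset generated by that DAG; the same routine applies to \(B\). All names and the example matrix are illustrative.

```python
import numpy as np

def upstream_sets(p, edges):
    """up[j] = {i : i ⪯ j} for the poset generated by DAG edges (i -> j means i ⪯ j)."""
    up = [{j} for j in range(p)]
    changed = True
    while changed:                               # transitive closure by fixed point
        changed = False
        for i, j in edges:
            new = up[i] - up[j]
            if new:
                up[j] |= new
                changed = True
    return up

def in_block_incidence_algebra(A, sizes, edges):
    """True if block A_{ji} == 0 whenever i is not upstream of j (i.e., i ⋠ j)."""
    p = len(sizes)
    up = upstream_sets(p, edges)
    offsets = np.cumsum([0] + list(sizes))
    for j in range(p):
        for i in range(p):
            if i in up[j]:
                continue                          # coupling allowed when i ⪯ j
            block = A[offsets[j]:offsets[j + 1], offsets[i]:offsets[i + 1]]
            if np.any(block != 0):
                return False
    return True

# Illustrative example: subsystem 0 is upstream of subsystems 1 and 2.
sizes = [2, 2, 1]
edges = [(0, 1), (0, 2)]
A = np.zeros((5, 5))
A[0:2, 0:2] = A[2:4, 2:4] = np.eye(2)    # internal dynamics
A[2:4, 0:2] = 1.0                        # subsystem 0 influencing subsystem 1: allowed
print(in_block_incidence_algebra(A, sizes, edges))   # True
A[0:2, 2:4] = 1.0                        # back-coupling from 1 to 0 breaks poset-causality
print(in_block_incidence_algebra(A, sizes, edges))   # False
```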
Let \(\mathcal{A}=(sI-A)^{-1}\), and define the transfer matrices \[P_{11}=C\mathcal{A}F,\;P_{12}=C\mathcal{A}B+D,\;P_{21}=\mathcal{A}F,\;P_{22}=\mathcal{A}B.\] We can express LTI system (2) as \[z=P_{11}w+P_{12}u,\quad x=P_{21}w+P_{22}u.\] System (2) is poset-causal if \(P_{22}\in I_{P_{22}}(\Psi)\) for some poset \(\Psi\). The following result from [2] directly links poset causality to the system matrices. **Result 3**: _If \(A\in I_{A}(\Psi)\) and \(B\in I_{B}(\Psi)\), then \(P_{22}\in I_{P_{22}}(\Psi)\)._ In other words, (2) is poset-causal if \(A\) and \(B\) are in the block incidence algebra of a poset. Given a controller \(u=Kx\), the transfer matrix from the disturbance, \(w\), to the output, \(z\), is \[T_{zw}=P_{11}+P_{12}K(I-P_{22}K)^{-1}P_{21}. \tag{3}\] Reference [2] solves the problem of minimizing the \(\mathcal{H}_{2}\) norm of \(T_{zw}\) over \(K\in I_{K}(\Psi)\), i.e., controllers that are decentralized because they are in the same poset as (2). We describe this further in Section VII-A. ### _Special cases_ The following are special cases of poset-causal systems. We refer the reader to [3] for a more detailed summary. #### II-C1 Hierarchical systems A system is hierarchical if its graph is a single, directed tree [16]. #### II-C2 Coordinated systems Coordinated systems are hierarchical systems of depth one [3]. They consist of a coordinator, \(c\in\mathsf{P}\), and subsystems, \(i\in\downarrow c\setminus c\), for which \(\downarrow i=i\). #### II-C3 Leader-follower systems Leader-follower systems are coordinated systems with a single subsystem [4, 17]. ## III Network modeling ### _Graph structure_ Let \(\mathcal{N}^{\mathsf{A}}\) and \(\mathcal{N}^{\mathsf{D}}\) be the buses in the AC and DC parts of the network. Let \(\mathcal{E}^{\mathsf{A}}\) be the set of AC lines. If \(ij\in\mathcal{E}^{\mathsf{A}}\), then \(i\) and \(j\in\mathcal{N}^{\mathsf{A}}\). Similarly, let \(\mathcal{E}^{\mathsf{D}}\) be the set of DC lines. Let \(\mathcal{C}\) be the set of converters. If \(ij\in\mathcal{C}\), then either \(i\in\mathcal{N}^{\mathsf{A}}\) and \(j\in\mathcal{N}^{\mathsf{D}}\) or vice versa. The graphs \(\left(\mathcal{N}^{\mathsf{A}},\mathcal{E}^{\mathsf{A}}\right)\) and \(\left(\mathcal{N}^{\mathsf{D}},\mathcal{E}^{\mathsf{D}}\right)\) are undirected. The graph \(\left(\mathcal{N}^{\mathsf{A}}\cup\mathcal{N}^{\mathsf{D}},\mathcal{C}\right)\) is directed, and if \(ij\in\mathcal{C}\), then \(ji\notin\mathcal{C}\). Suppose that there are \(m^{\mathsf{A}}\) and \(m^{\mathsf{D}}\) connected AC and DC subgraphs in \(\left(\mathcal{N}^{\mathsf{A}},\mathcal{E}^{\mathsf{A}}\right)\) and \(\left(\mathcal{N}^{\mathsf{D}},\mathcal{E}^{\mathsf{D}}\right)\). Let \(\left(\mathcal{N}^{\mathsf{A}}_{k},\mathcal{E}^{\mathsf{A}}_{k}\right)\) and \(\left(\mathcal{N}^{\mathsf{D}}_{l},\mathcal{E}^{\mathsf{D}}_{l}\right)\) be the \(k^{\text{th}}\) and \(l^{\text{th}}\) such subgraphs for \(k=1,...,m^{\mathsf{A}}\) and \(l=1,...,m^{\mathsf{D}}\). Define the mapping \(\mathcal{M}^{\mathsf{A}}\) such that if \(i\in\mathcal{N}^{\mathsf{A}}_{k}\), \(\mathcal{M}^{\mathsf{A}}(i)=k\); \(\mathcal{M}^{\mathsf{A}}\) identifies which AC subgraph each bus belongs to. Similarly, define \(\mathcal{M}^{\mathsf{D}}\) such that if \(i\in\mathcal{N}^{\mathsf{D}}_{k}\), \(\mathcal{M}^{\mathsf{D}}(i)=k\).
Let \((\mathcal{P},\mathcal{G})\) be a simple directed graph such that if \(k\) and \(l\in\mathcal{P}\) and \(kl\in\mathcal{G}\), then there exists \(i\in\mathcal{N}^{\mathsf{A}}_{k}\) and \(j\in\mathcal{N}^{\mathsf{D}}_{l}\) (or \(i\in\mathcal{N}^{\mathsf{A}}_{l}\) and \(j\in\mathcal{N}^{\mathsf{D}}_{k}\)) such that \(ij\in\mathcal{C}\). Observe that \((\mathcal{P},\mathcal{G})\) is bipartite. This is because an AC subgrid can only be converter-connected to DC subgrids, and vice versa. The directions of the edges in \(\mathcal{G}\) are determined by those of \(\mathcal{C}\). We choose the directions of the edges in \(\mathcal{C}\) such that \((\mathcal{P},\mathcal{G})\) is a DAG; we can always do so because, as stated in Result 2, every undirected graph has at least one acyclic orientation. \((\mathcal{P},\mathcal{G})\) specifies a unique poset, which we denote \(\Phi=(\mathcal{P},\preceq)\); note that the nodes in \(\mathcal{P}\) are now also the elements of the poset. Observe that if \(\mathcal{M}(i)\preceq\mathcal{M}(j)\), either \(\mathcal{M}(i)=\mathcal{M}(j)\) or there is a path from \(i\) to \(j\) through \(\mathcal{E}\). ### _State partitions_ In the models we present later in Sections IV and V, a converter either partially or fully decouples the AC and DC states on either side. In this section, we specify how this decoupling determines the direction of each converter in the set \(\mathcal{C}\). Each state is associated with a node or an edge in the system's graph. Suppose \(x\) is the vector of AC states and \(x_{i}\) the subvector associated with AC bus \(i\in\mathcal{N}^{\mathsf{A}}\). Let \(y\) similarly be the vector of DC states and \(y_{j}\) the subvector for \(j\in\mathcal{N}^{\mathsf{D}}\). **Definition 1** (One-way partition): _Let \(i\in\mathcal{N}^{\mathsf{A}}\) and \(j\in\mathcal{N}^{\mathsf{D}}\) be connected by a converter. The converter partitions the state one way if one of the following are true._ * _The evolution of_ \(x_{i}\) _does not depend on_ \(y_{j}\)_. In this case_ \(ij\in\mathcal{C}\)_._ * _The evolution of_ \(y_{j}\) _does not depend on_ \(x_{i}\)_. In this case_ \(ji\in\mathcal{C}\)_._ In this manner, the direction of a converter in \(\mathcal{C}\) encodes the direction that physical information flows through it. **Definition 2** (Full partition): _Let \(i\in\mathcal{N}^{\mathsf{A}}\), and \(j\in\mathcal{N}^{\mathsf{D}}\) be connected by a converter. The converter fully partitions the state if the evolution of \(x_{i}\) does not depend on \(y_{j}\), and vice versa. In this case we may choose whether \(ij\) or \(ji\in\mathcal{C}\)._ A pair of one-way partitions in both directions forms a full partition. In the next two sections, we present several models in which the converter partitions the state. A converter's direction affects its control structure. Consider a converter between buses \(i\in\mathcal{N}^{\mathsf{A}}\) and \(j\in\mathcal{N}^{\mathsf{D}}\). Assume \(ij\in\mathcal{C}\), either due to a one-way partition or because we have chosen this direction, and let \(z_{ij}\) be the vector of control variables. We will regard the converter as a part of AC subgrid \(\mathcal{M}^{\mathsf{A}}(i)\), and not DC subgrid \(\mathcal{M}^{\mathsf{D}}(j)\). We are in effect associating \(z_{ij}\) with bus \(i\), so that it influences bus \(j\) and not vice versa; this implies \(\mathcal{M}^{\mathsf{A}}(i)\prec\mathcal{M}^{\mathsf{D}}(j)\). 
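Before turning to explicit models, the following sketch illustrates the constructions of this section: it groups buses into connected AC and DC subgrids (the maps \(\mathcal{M}^{\mathsf{A}}\) and \(\mathcal{M}^{\mathsf{D}}\)), and directs every converter from the lower-indexed to the higher-indexed subgrid node, which always yields a DAG \((\mathcal{P},\mathcal{G})\) because every edge increases the node index (one acyclic orientation among possibly many, cf. Result 2). The four-bus layout in the example is an assumption chosen to match Example 1 below; the orientation it returns, \(\mathcal{C}=\{12,43\}\), is one of that example's two poset-causal choices.

```python
def connected_components(nodes, edges):
    """Map each node to a component index (simple union-find)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for i, j in edges:
        parent[find(i)] = find(j)
    labels, comp = {}, {}
    for v in nodes:
        r = find(v)
        if r not in labels:
            labels[r] = len(labels)
        comp[v] = labels[r]
    return comp

def orient_converters(ac_nodes, ac_edges, dc_nodes, dc_edges, converters):
    """Direct every converter from the lower- to the higher-indexed subgrid node,
    so the subgrid graph (P, G) has no directed cycle (an acyclic orientation)."""
    M_A = connected_components(ac_nodes, ac_edges)      # AC subgrid index per bus
    M_D = connected_components(dc_nodes, dc_edges)      # DC subgrid index per bus
    n_ac = len(set(M_A.values()))
    node = lambda b: M_A[b] if b in M_A else n_ac + M_D[b]   # subgrid-node index in P
    C, G = [], []
    for i, j in converters:                             # undirected converter pairs
        if node(i) <= node(j):
            C.append((i, j)); G.append((node(i), node(j)))
        else:
            C.append((j, i)); G.append((node(j), node(i)))
    return C, G

# Toy system: AC buses {1, 4} joined by an AC line, DC buses {2, 3} joined by a DC line,
# converters between (1, 2) and (3, 4) -- the layout assumed in Example 1.
C, G = orient_converters([1, 4], [(1, 4)], [2, 3], [(2, 3)], [(1, 2), (3, 4)])
print(C, G)   # both converters point from the single AC subgrid to the single DC subgrid
```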
If poset causality is to be preserved, then \(z_{ij}\) can only receive feedback based on states in subgrid \(\mathcal{M}^{\mathsf{A}}(i)\) (or further upstream in the poset). If \(z_{ij}\) were to receive feedback from a state in subgrid \(\mathcal{M}^{\mathsf{D}}(j)\), information would flow from \(j\) to \(i\), violating the partition. ## IV Linear model In this simple linear model, the converters are represented by controllable current transfers. We use the standard linear power flow approximation to model the AC part of the system, which contains most of the generation and load, and model the DC part of the system as a linear circuit. The converters fully partition the state in this model, and hence can have either direction. A given converter \(ij\in\mathcal{C}\), \(i\in\mathcal{N}^{\mathsf{A}}\), and \(j\in\mathcal{N}^{\mathsf{D}}\), has control input \(\zeta_{ij}\), the current it injects on the DC side. Let \(\hat{v}_{j}\) be the constant nominal voltage on the DC side and \(p_{ij}\) the power on the AC side. Then, assuming a lossless converter as in [9], conservation of power gives \(p_{ij}=\hat{v}_{j}\zeta_{ij}\). The AC states are the voltage angles, \(\theta_{i}\), and frequencies, \(\omega_{i}\), for \(i\in\mathcal{N}^{\mathsf{A}}\). The dynamics at bus \(i\in\mathcal{N}^{\mathsf{A}}\) are \[\dot{\theta}_{i} =\omega_{i} \tag{4a}\] \[J_{i}\dot{\omega}_{i} =P_{i}-D_{i}\omega_{i}-\sum_{j:ij\in\mathcal{E}^{\mathsf{A}}}B_{ ij}(\theta_{i}-\theta_{j})\] \[\quad+\left\{\begin{array}{ll}-\hat{v}_{j}\zeta_{ij}&\text{ if }ij\in\mathcal{C}\text{ for some }j\\ \hat{v}_{j}\zeta_{ji}&\text{ if }ji\in\mathcal{C}\text{ for some }j\\ 0&\text{ otherwise},\end{array}\right. \tag{4b}\] where \(P_{i}\) is the generation or load, \(J_{i}\) and \(D_{i}\) are the rotor inertia and damping, and \(B_{ij}\) is the line susceptance. The DC states are the voltages, \(v_{i}\) for \(i\in\mathcal{N}^{\mathsf{D}}\), and currents, \(i_{ij}\) for \(ij\in\mathcal{E}^{\mathsf{D}}\). The dynamics at bus \(i\in\mathcal{N}^{\mathsf{D}}\) are \[C_{i}\dot{v}_{i}=\sum_{j:ij\in\mathcal{E}^{\mathsf{D}}}i_{ij}+ \left\{\begin{array}{ll}-\zeta_{ij}&\text{ if }ij\in\mathcal{C}\text{ for some }j\\ \zeta_{ji}&\text{ if }ji\in\mathcal{C}\text{ for some }j\\ 0&\text{ otherwise},\end{array}\right. \tag{4c}\] where \(C_{i}\) is the bus's capacitance. The dynamics of line \(ij\in\mathcal{E}^{\mathsf{D}}\) are \[L_{ij}\dot{i}_{ij}=v_{i}-v_{j}-R_{ij}i_{ij}, \tag{4d}\] where \(L_{ij}\) and \(R_{ij}\) are the line's inductance and resistance. Given a converter between \(i\in\mathcal{N}^{\mathsf{A}}\) and \(j\in\mathcal{N}^{\mathsf{D}}\), the control variable \(\zeta_{ij}\) fully partitions the states on either side--the evolutions of \(\theta_{i}\) and \(\omega_{i}\) do not depend on \(v_{j}\) or \(i_{jk}\), \(jk\in\mathcal{E}^{\mathsf{D}}\). We may thus choose whether \(ij\) or \(ji\in\mathcal{C}\). If we choose, say, \(ij\in\mathcal{C}\), then we are associating \(\zeta_{ij}\) with bus \(i\), so that it influences bus \(j\) and not vice versa. ## V Nonlinear dq-frame model We now describe the standard balanced dq-frame model, for which we follow the presentation of Chapter 17 in [18]. We do not explicitly model the AC and DC parts of the grid, which could be simple as in Section IV or as complicated as desired. Consider a VSC between buses \(i\in\mathcal{N}^{\mathsf{A}}\) and \(j\in\mathcal{N}^{\mathsf{D}}\). 
The control inputs are \(m^{\mathsf{d}}_{ij}\) and \(m^{\mathsf{q}}_{ij}\), the d and q components of the averaged switching signal. The AC-side converter states are the currents, \(i^{\mathsf{d}}_{ij}\) and \(i^{\mathsf{q}}_{ij}\). The component voltages at AC bus \(i\) are \(v^{\mathsf{d}}_{i}\) and \(v^{\mathsf{q}}_{i}\). Let \(L_{ij}\) and \(R_{ij}\) be the combined inductance and resistance of the converter's transformer and filter. The dynamics of the AC-side converter currents are \[L_{ij}\dot{i}^{\mathsf{d}}_{ij}=v^{\mathsf{d}}_{i}+\omega L_{ij}i^{\mathsf{q}}_{ij}-R_{ij}i^{\mathsf{d}}_{ij}-\frac{1}{2}v^{\mathsf{D}}_{j}m^{\mathsf{d}}_{ij} \tag{5a}\] \[L_{ij}\dot{i}^{\mathsf{q}}_{ij}=v^{\mathsf{q}}_{i}+\omega L_{ij}i^{\mathsf{d}}_{ij}-R_{ij}i^{\mathsf{q}}_{ij}-\frac{1}{2}v^{\mathsf{D}}_{j}m^{\mathsf{q}}_{ij}, \tag{5b}\] where \(v^{\mathsf{D}}_{j}\) is the DC-side capacitor voltage. Let \(C_{ij}\) be the DC-side capacitance. Let \(\zeta_{ij}\) be the DC-side converter current, and \(i_{ij}\) be the current from the converter to the DC node \(j\). The dynamics are \[C_{ij}\dot{v}^{\mathsf{D}}_{j}=\zeta_{ij}-i_{ij}, \tag{6a}\] where \[\zeta_{ij}=\frac{3}{4}\left(i^{\mathsf{d}}_{ij}m^{\mathsf{d}}_{ij}+i^{\mathsf{q}}_{ij}m^{\mathsf{q}}_{ij}\right). \tag{6b}\] The DC-side power is given by \[P^{\mathsf{D}}_{ij}=v^{\mathsf{D}}_{j}\zeta_{ij}.\] The real and reactive powers on the AC side are given by \[P^{\mathsf{A}}_{ij}=\frac{3}{4}\left(v^{\mathsf{d}}_{i}i^{\mathsf{d}}_{ij}+v^{\mathsf{q}}_{i}i^{\mathsf{q}}_{ij}\right),\quad Q^{\mathsf{A}}_{ij}=\frac{3}{4}\left(-v^{\mathsf{d}}_{i}i^{\mathsf{q}}_{ij}+v^{\mathsf{q}}_{i}i^{\mathsf{d}}_{ij}\right).\] Conservation of power implies \(P^{\mathsf{D}}_{ij}=P^{\mathsf{A}}_{ij}\) (on average). As written, the states are not partitioned because the DC voltage, \(v^{\mathsf{D}}_{j}\), influences the AC currents in (5), and the AC currents, \(i^{\mathsf{d}}_{ij}\) and \(i^{\mathsf{q}}_{ij}\), influence \(v^{\mathsf{D}}_{j}\) through (6). In the following subsections, we obtain partitions in several ways. * In Section V-A, redefining the control inputs leads to one-way partitions. * In Section V-B, holding voltage constant leads to one-way partitions. * In Section V-C, we combine one-way partitions to obtain full partitions. * In Section V-D, we assume an inner current controller is fast enough that the converter currents are in steady state, which is standard practice today. The currents are always equal to their setpoints, and thus become control inputs. This fully partitions the AC and DC states. ### _Partitioning via substitution_ We can obtain one-way partitions simply by redefining the control inputs. The first below is standard (see Section 17.7.3 in [18]), and the latter is new. #### V-A1 A standard substitution To decouple the d and q currents, define the new control inputs \[\beta_{ij}^{\text{d}}=2\frac{m_{ij}^{\text{d}}-\omega L_{ij}i_{ij}^{\text{q}}}{v_{j}^{\text{D}}},\quad\beta_{ij}^{\text{q}}=2\frac{m_{ij}^{\text{q}}-\omega L_{ij}i_{ij}^{\text{d}}}{v_{j}^{\text{D}}}. \tag{7}\] Substituting, (5) becomes \[L_{ij}\dot{i}_{ij}^{\text{d}}=v_{i}^{\text{d}}-R_{ij}i_{ij}^{\text{d}}-\beta_{ij}^{\text{d}} \tag{8a}\] \[L_{ij}\dot{i}_{ij}^{\text{q}}=v_{i}^{\text{q}}-R_{ij}i_{ij}^{\text{q}}-\beta_{ij}^{\text{q}}. \tag{8b}\] The AC-side converter currents now depend on no DC-side states. The DC voltage still depends on \(i_{ij}^{\text{d}}\) and \(i_{ij}^{\text{q}}\).
With only this substitution, the direction of the corresponding edge is from \(i\) to \(j\), i.e., \(ij\in\mathcal{C}\). #### V-A2 Another substitution Set \[\rho_{ij}^{\text{d}}=i_{ij}^{\text{d}}m_{ij}^{\text{d}},\quad\rho_{ij}^{\text{q}}=i_{ij}^{\text{q}}m_{ij}^{\text{q}}. \tag{9}\] Substituting, (5) becomes \[L_{ij}\dot{i}_{ij}^{\text{d}}=v_{i}^{\text{d}}+\omega L_{ij}i_{ij}^{\text{q}}-R_{ij}i_{ij}^{\text{d}}-\frac{1}{2i_{ij}^{\text{d}}}v_{j}^{\text{D}}\rho_{ij}^{\text{d}} \tag{10a}\] \[L_{ij}\dot{i}_{ij}^{\text{q}}=v_{i}^{\text{q}}+\omega L_{ij}i_{ij}^{\text{d}}-R_{ij}i_{ij}^{\text{q}}-\frac{1}{2i_{ij}^{\text{q}}}v_{j}^{\text{D}}\rho_{ij}^{\text{q}}, \tag{10b}\] and (6b) becomes \[\zeta_{ij}=\frac{3}{4}\left(\rho_{ij}^{\text{d}}+\rho_{ij}^{\text{q}}\right). \tag{11}\] The DC voltage, \(v_{j}^{\text{D}}\), still influences the AC-side states through (10). Because \(\zeta_{ij}\) now only depends on the control inputs, the AC-side states do not influence the DC voltage. The direction of the edge is therefore from \(j\) to \(i\), i.e., \(ji\in\mathcal{C}\). We remark that unlike in Section V-A1, this substitution serves no purpose beyond creating a one-way partition. ### _Partitioning via constant voltage_ We now show how approximating either the AC or DC voltage as constant leads to a one-way partition. #### V-B1 Tightly regulated DC voltage The converter's DC voltage is usually tightly regulated (see Section 17.7.2 in [18]). We may thus set \(\dot{v}_{j}^{\text{D}}=0\), so that \(v_{j}^{\text{D}}\) is constant and \(i_{ij}=\zeta_{ij}\). This partitions the state one way in that the AC-side currents are directly coupled to the DC bus states, but no DC-side states affect the AC-side currents. Under this approximation, the direction of the corresponding edge is from \(i\) to \(j\), i.e., \(ij\in\mathcal{C}\). #### V-B2 Tightly regulated AC voltages The converter's AC voltages are also usually tightly regulated (see Section 17.7.1 in [18]). We represent this by setting \(v_{i}^{\text{q}}=0\) and assuming \(v_{i}^{\text{d}}\) is constant, which means the AC-side voltage is on the d axis. Under this approximation, the AC-side currents, \(i_{ij}^{\text{d}}\) and \(i_{ij}^{\text{q}}\), have no dependence on the AC bus voltages. \(i_{ij}^{\text{d}}\) and \(i_{ij}^{\text{q}}\) do still depend on the DC-side voltage, \(v_{j}^{\text{D}}\). Therefore, the direction of the corresponding edge is from \(j\) to \(i\), i.e., \(ji\in\mathcal{C}\). ### _Combining one-way partitions_ We can obtain a full partition by combining one-way partitions in several different ways. * Assume tightly regulated AC and DC voltages. The remaining converter states, \(i_{ij}^{\text{d}}\) and \(i_{ij}^{\text{q}}\), depend only on the converter controls, \(m_{ij}^{\text{d}}\) and \(m_{ij}^{\text{q}}\), and not on any AC or DC grid states. * Make the substitution in Section V-A1 and assume tightly regulated AC voltages. Examining (8), \(i_{ij}^{\text{d}}\) and \(i_{ij}^{\text{q}}\) depend only on \(\beta_{ij}^{\text{d}}\) and \(\beta_{ij}^{\text{q}}\), and not on any AC or DC grid states. * Make the substitution in Section V-A2 and assume tightly regulated DC voltages. Now (10) depends on \(\rho_{ij}^{\text{d}}\) and \(\rho_{ij}^{\text{q}}\), but not on any DC-side states. The only input from the converter to the DC side is \(\zeta_{ij}\), which depends only on the control variables through (11).
In the first two cases, the full partition is through \(i_{ij}^{\text{d}}\) and \(i_{ij}^{\text{q}}\), which depend only on the converter control inputs. In the third, the full partition is through \(\zeta_{ij}\) in that the states on either side are uncoupled. In all cases, we may choose the converter's direction, i.e., whether \(ij\) or \(ji\in\mathcal{C}\). ### _Partitioning via timescale separation_ The converter currents are usually controlled locally on a faster timescale (see Section 17.8 in [18]). We may assume they are in steady state, and therefore always at their setpoints. The control inputs are thus \(i_{ij}^{\text{d}}\) and \(i_{ij}^{\text{q}}\). Because there is no other coupling across the converter, the converter fully partitions the states under this assumption. Today, it is common to have a local PID loop use \(i_{ij}^{\text{q}}\) to regulate reactive power, and in turn the AC voltage magnitude. \(i_{ij}^{\text{d}}\) is then used to control either the converter's power transfer or DC-side voltage. Each case entails a feedback loop with either AC- or DC-side states. As a result, the converter no longer fully partitions the states, but only one way. A converter's local control loops, if it has any, thus dictate its direction in \(\mathcal{C}\). Let \(i\in\mathcal{N}^{\text{A}}\), \(j\in\mathcal{N}^{\text{D}}\), and assume the converter fully partitions the state. The above control loops specify the converter's direction as follows. * If reactive power is regulated by \(i_{ij}^{\text{q}}\), then \(ij\in\mathcal{C}\). This is because information about reactive power is on the AC-side (see Section 17.8.1 in [18]). * If the power transfer is regulated by \(i_{ij}^{\text{d}}\), then \(ji\in\mathcal{C}\). This is because information about power transfer is taken on the DC side (see Section 17.8.2 in [18]). In principle, this information could be ontained on the AC side, e.g., via real power, in which case \(ij\in\mathcal{C}\). * If DC voltage is regulated by \(i_{ij}^{\text{d}}\), then \(ji\in\mathcal{C}\). This is because information about the DC voltage is on the DC side (see Section 17.8.3 in [18]). ### _Discussion_ There are a number of ways to obtain one-way and full partitions in the dq-frame model. Figure 1 illustrates how each modification disables couplings between the states, and the new couplings induced by the local controllers in Section V-D. Our analysis aligns with the intuition that converters dynamically decouple the systems on either side, especially on slower timescales. In this paper, we seek to orient the converters so that \((\mathcal{P},\mathcal{G})\) is a DAG, and therefore the system model is post-causal. The following example looks at how the control loops in Section V-D restrict the number of acyclic orientations. **Example 1** (Point-to-point DC link): _Consider a single DC line with converters and both ends. A common configuration is for one converter to regulate the DC voltage, and the other the power transfer [19]._ There are two DC and two AC buses. Assume also that the two AC buses are connected by an AC line. We have \(\mathcal{N}^{\mathsf{A}}=\{1,4\}\), \(\mathcal{N}^{\mathsf{D}}=\{2,3\}\), \(\mathcal{E}^{\mathsf{A}}=\{14\}\), and \(\mathcal{E}^{\mathsf{D}}=\{23\}\). We must choose directions for the converters between buses 1 and 2 and between buses 3 and 4. If \(\mathcal{C}=\{12,34\}\) or \(\mathcal{C}=\{21,43\}\), the system is not poset-causal. 
The system is poset-causal if \(\mathcal{C}=\{12,43\}\) or \(\mathcal{C}=\{21,34\}\); we assume the latter, as in Figure 2. Because there are only two subgrids, this is also a leader-follower system, as described in Section 2. Here the DC subgrid is the leader and the AC subgrid the follower. Assume timescale separation as in Section V-D, and that the converter current setpoints depend on AC- or DC-side states via local control loops. The following pair of controllers (and vice versa) is compatible with the converter directions we've chosen. * Converter 21 regulates the DC voltage at bus 2. * Converter 34 regulates the power transfer by feeding back the power at bus 3. Unfortunately, if either converter regulates its AC-side reactive power (or voltage), it must also feed back AC-side states, eliminating a one-way partition and hence the model's poset-causality. As reactive power mostly affects local voltages, it is reasonable to omit this control loop when focusing on system-level questions. \(\triangle\) There are many choices leading to poset-causality. If interested in moving energy through large grids, then modeling the converter as a controllable current/power transfer as in Section IV may be adequate. The dq-frame model in Section V is appropriate if reactive power or AC voltage dynamics are important. In this case, the timescale approximation of Section V-D is relatively simple and aligned with current practice. These models are representative but not comprehensive; e.g., a dq-frame model would be appropriate for unbalanced AC grids, and a model with positive and negative sequences can capture other converter control strategies. Analyzing partitioning and information structures in such settings is a topic of future work. ## VI Information structures We now discuss the poset-causality of the systems in Sections IV and V. The results are essentially more general Fig. 1: The nodes in the above graph are states in the dq-frame model of converter \(ij\in\mathcal{C}\), (5)-(6), and at the adjacent buses. Solid lines represent physical couplings between states. The text and its location indicates the coupling direction disabled by each modification; e.g., fixing \(v_{i}^{a}\) eliminates the coupling from \(v_{i}^{d}\) to \(v_{ij}^{d}\). The dashed lines represent the one-way couplings induced by local control loops; e.g., regulating DC voltage makes \(i_{ij}^{a}\) depend on \(v_{j}^{\mathsf{D}}\). Fig. 2: A point-to-point DC link. The converter directions make the system poset-causal. versions of Lemma 1 in [12], which established the poset-causality of a linearized model with only point-to-point DC links. **Lemma 1**: _System (4) is poset-causal if \((\mathcal{P},\mathcal{G})\) is a DAG._ Assume \((\mathcal{P},\mathcal{G})\) is a DAG. By Result 1, it specifies a poset, \(\Psi\). The \(A\) matrix is block diagonal, with each block corresponding to either an AC or DC subgrid. This implies \(A\in\mathcal{I}_{A}(\Psi)\). We can also see by inspection that \(B\in\mathcal{I}_{B}(\Psi)\). By Result 3, system (4) is poset-causal. Result 2 says that we can always choose the directions of the converters in \(\mathcal{C}\) so that \((\mathcal{P},\mathcal{G})\) is acyclic. This means that any AC/DC grid has at least one poset-causal representation. The total number of acyclic orientations is \(|\mathcal{X}(-1)|\), where \(\mathcal{X}\) is the chromatic polynomial of the underlying undirected graph [20]. Figure 3 shows an AC/DC grid with converter directions chosen to yield poset-causality. 
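The acyclicity condition in Lemma 1 is straightforward to verify programmatically. The sketch below (a toy script; the node and edge names are ours) enumerates the four converter orientations of Example 1 and checks whether the resulting partition graph is a DAG; for small grids, brute-force enumeration of this kind also reproduces the acyclic-orientation count \(|\mathcal{X}(-1)|\).

```python
from itertools import product
import networkx as nx

# Example 1: one AC subgrid (buses 1 and 4 joined by the AC line) and one DC
# subgrid (buses 2 and 3 joined by the DC line), connected by two converters.
subgrids = ["AC", "DC"]
converters = ["conv_12", "conv_34"]

def partition_graph(ac_to_dc):
    """ac_to_dc[k] = True orients converter k from the AC subgrid to the DC subgrid."""
    g = nx.MultiDiGraph()
    g.add_nodes_from(subgrids)
    for towards_dc in ac_to_dc:
        if towards_dc:
            g.add_edge("AC", "DC")
        else:
            g.add_edge("DC", "AC")
    return g

for orientation in product([True, False], repeat=len(converters)):
    g = partition_graph(orientation)
    ok = nx.is_directed_acyclic_graph(g)
    print(dict(zip(converters, orientation)), "poset-causal" if ok else "not poset-causal")

# Only the two orientations with both converters pointing the same way are acyclic,
# matching C = {12, 43} and C = {21, 34} in Example 1.
```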
As is, the dq-frame model in Section V is not poset-causal. We can make it poset-causal with the modifications from Sections V-A to V-D, which, by partitioning the states, determine the possible directions of the converters in \((\mathcal{P},\mathcal{G})\). For example, if we assume the currents are in steady state as in Section V-D, then the converters fully partition the states, and we can pick the direction of each one. If we choose converter directions so that \((\mathcal{P},\mathcal{G})\) is a DAG, then the dq-frame model is poset-causal. We eschew a formal proof because the argument is similar to that of Lemma 1, but more tedious. **Corollary 1**: _System (4) is a coordinated system if it consists of either_ * _one AC and multiple DC subgrids, or_ * _one DC and multiple AC subgrids._ _The latter case might be referred to as an MTDC grid. Observe that if the converters are pointed out of the single AC (DC) grid, then it is the coordinator, and the DC (AC) subgrids are the subsystems. If the converters are pointed into the single AC (DC) grid, then it is the subsystem, and the DC (AC) subgrids are together the coordinator. As we'll see in the example in Section VII-B, the latter setup is of more practical interest because the DC (AC) subgrids can each use local control, and only the single AC (DC) grid must use feedback over the whole system._ **Corollary 2**: _System (4) is a leader-follower system if it consists of one AC and one DC subgrid._ These results similarly extend to the dq-frame model as well. Note that other choices of subsystems can lead to different information structures. For example, even in a very complicated AC/DC grid, one can obtain a leader-follower system by only partitioning along a single boundary of converters. ## VII Example In this example, we illustrate how an AC/DC grid's information structure can guide the design of a decentralized controller. ### _Decentralized control_ Decentralized control is intractable for general LTI systems [21, 22]. If the system is also poset-causal, optimal decentralized control is only slightly more complicated than the centralized regulator. Consider the following problem: minimize the \(\mathcal{H}_{2}\) norm of \(T_{zw}\), as defined in (3), subject to the communication constraint \(K\in I_{K}(\Psi)\). Reference [2] constructed the optimal solution to this problem in terms of a nested family of Riccati equations. The controller in [2] is dynamic in that new states are introduced, which serve as estimates of downstream states. This might be seen as an undesirable complication. Reference [3] constructs suboptimal static controllers for coordinated LTI systems. In short, one first constructs a local controller for the coordinator system, and then controllers for the subsystems that use feedback from the local and coordinator subsystems. We apply this perspective in this example. ### _Test system_ We construct a decentralized controller for a stylized test system based on the MTDC grid in [9]. We use the linear model of Section IV. The system consists of the following parts. * An MTDC grid with ten lines and six VSCs, with parameters from [9]. Each line is represented by a resistance and inductance, and each VSC an identical capacitance. * An AC subgrid is attached to each of the six VSCs of the MTDC grid. Each AC subgrid is approximated as a single inertia with normalized moment \(10\) MWs/MVA and damping coefficient \(0.1\) pu. The AC subgrids have no generator inputs or other controls. 
* A point-to-point DC link between AC subgrids 1 and 6. It consists of two VSCs, each with the same capacitance as those in the MTDC grid, and a DC line with parameters from row two of Table I in [9]. The states are the six AC subgrid frequencies, the inductor currents of the ten DC lines in the MTDC grid and the DC line between AC 1 and AC 6, and the capacitor voltages of the eight VSCs' DC-side terminals. The only control inputs are the current/power transfers through the eight VSCs. The real power on each VSC's AC side is equal to the DC current times the nominal voltage, which is 1 pu. The system is shown in Figure 4. Our goal is to design a controller in which VSCs 7 and 8 use local feedback and the six VSCs of the MTDC grid system-wide feedback. We hence view this as a leader-follower system, a special case of poset causality described in Section II-C3. The leader consists of AC 1, AC 6, VSC 7, VSC 8, and the DC line between. Fig. 3: The nodes are AC and DC subgrids. Each edge is a VSC in \(\mathcal{C}\), the directions of which make \((\mathcal{P},\mathcal{G})\) acyclic. The total number of such acyclic orientations for this system is \(|\mathcal{X}(-1)|=392\). The follower consists of the other four AC subgrids, the MTDC grid, and VSCs 2 through 5. The boundary between the two systems consists of VSCs 1 and 6, which are oriented toward the MTDC grid. The other six VSCs do not need directions because they are within either the leader or follower subsystem. The optimal decentralized controller for general leader-follow systems is derived in [17]. It is a special case of that for poset-causal systems [2], and similarly introduces new estimator states. We instead take a simpler approach motivated by [3]. * We first solve for the optimal regulator for the leader system and close the loop. * We then solve for the optimal regulator for the full system, which consists of the follower system and the locally controlled leader system. The resulting controller uses only feedback from the leader system for VSCs 7 and 8, and feedback from the full system for VSCs 1 through 6. We remark that we could have used a different information structure to design a different decentralized controller for this system. For example, we could have swapped leader and follower roles, or treated each VSC plus local AC subgrid as a separate subsystem and implemented the poset-causal controller of [2]. In the latter case, there would be multiple posets to choose from, each corresponding to a different acyclic orientation of the graph of converters and AC and DC subgrids. ### _Simulations_ All computations were performed in Python using NumPy [23] and SciPy [24]. The figure was made with Matplotlib [25]. The controllers seek to drive the system to zero. The cost coefficients for all state and control variables were one. The initial frequency deviations of AC 1, AC 4, and AC 6 were \(\omega_{1}^{0}=-0.01\), \(\omega_{4}^{0}=-0.01\), and \(\omega_{6}^{0}=0.02\) pu. All other states began at zero. We chose these initial conditions because the DC lines primarily move energy between AC subgrids. Had the initial frequencies not summed to zero, the system would have taken considerably longer to settle because the dampings and resistances dissipate little energy. In a more realistic setup, generator controls would provide power balancing and damping. Also note that the system does not oscillate because each AC subgrid was modeled as a single inertia. 
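The two-step design just described is easy to express with standard Riccati and Lyapunov solvers. The sketch below is a minimal Python illustration on a small stand-in system; the actual test-system matrices follow [9] and are not reproduced here, so the numerical values are placeholders that only mimic the block-triangular (leader-follower) structure.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: returns K such that u = -K x is optimal."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def h2_norm(A_cl, B_w, C_z):
    """||T_zw||_2 via the controllability Gramian X: A X + X A' + B_w B_w' = 0."""
    X = solve_continuous_lyapunov(A_cl, -B_w @ B_w.T)
    return float(np.sqrt(np.trace(C_z @ X @ C_z.T)))

# Toy stand-in for the test system: states x = [leader (2); follower (2)],
# inputs u = [leader input; follower input]. Values are placeholders.
A = np.array([[-0.1,  1.0,  0.0,  0.0],
              [-1.0, -0.1,  0.0,  0.0],
              [ 0.5,  0.0, -0.2,  1.0],
              [ 0.0,  0.0, -1.0, -0.2]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
lead = slice(0, 2)  # leader states and leader input index 0

# Step 1: local regulator for the leader subsystem only, then close its loop.
K_lead = lqr_gain(A[lead, lead], B[lead, [0]], np.eye(2), np.eye(1))
K_lead_full = np.hstack([K_lead, np.zeros((1, 2))])   # leader input ignores follower states
A_step1 = A - B[:, [0]] @ K_lead_full

# Step 2: regulator for the full (leader-closed plus follower) system with the remaining input.
K_foll = lqr_gain(A_step1, B[:, [1]], np.eye(4), np.eye(1))
A_cl = A_step1 - B[:, [1]] @ K_foll

print("closed-loop H2 norm:", h2_norm(A_cl, np.eye(4), np.eye(4)))
```

The first gain uses only leader states while the second Riccati solve returns a gain over the full state, mirroring the information structure of the decentralized controller described above.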
The performances of the centralized and decentralized controllers were similar, with respective closed loop \(\mathcal{H}_{2}\) norms 9.166 and 9.172. We only plot frequencies under the latter because the two look roughly the same. The top plot of Figure 5 shows how much more quickly the controlled system returns to the origin. This is because the converters seek to balance deviations rather than letting them damp out within each subgrid. Subgrids without initial frequency deviations remain at zero without control. With control, energy passes through their inertias, resulting in frequency deviations roughly an order of magnitude smaller than those shown. The middle and bottom plots of Figure 5 show the voltage of VSC 7 and the current from VSC 7 to VSC 8 under the decentralized and centralized controllers. Note that both undergo transients in first 0.002 seconds that take them from zero to the starting value seen on the left side of each plot. Their deviations from zero are larger under the decentralized controller. This is because the leader controller, unaware of the rest of the system, expends more control effort in VSC 7 Fig. 4: The test system. AC 1 through AC 6 are subgrids modeled as single inertias. The top and bottom boxes are the follower and leader systems, respectively, and the two intermediary VSCs form their boundary. Fig. 5: The top plot shows the frequencies in AC 1 and AC 6 without control (\(O\)) and under the leader-follower controller (\(D\)). The middle plot shows the voltages at the terminal of VSC 7 under the leader follower (\(D\)) and centralized (\(C\)) controllers. The bottom plot shows the current through the DC line between VSCs 7 and 8 under the leader follower (\(D\)) and centralized (\(C\)) controllers. and VSC 8 than is optimal, resulting in a slightly higher \(\mathcal{H}_{2}\) norm. ## VIII Conclusion Converters form actuated boundaries between AC and DC subgrids. We have shown that for both simple linear and dq-frame models, this topological property imparts all AC/DC grids with poset-causal information structures, and that special cases can have coordinated and leader-follower information structures. In a stylized example, we saw how this structure can inform the design of decentralized controllers. The topological structure of AC/DC grids is obvious, and it is natural to ask how else it might be useful. Do poset-causality and other information structures enable other tools or analyses, e.g., via controllability or observability [5, 6], and do AC/DC grids possess other useful structural properties? If so, graph theoretic notions like treewidth and bipartiteness might aid in analysis, e.g., in characterizing the optimal acyclic orientation. More concrete directions include conducting similar analyses for other power electronic interfaces like cycloinverters between AC subgrids; understanding the interplay of information structure with stability and control objectives; and making use of poset-causality without only considering controllers with the same information structure, e.g., allowing bidirectional communication between neighboring subgrids.
2306.06938
High-precision interpolation of stellar atmospheres with a deep neural network using a 1D convolutional auto encoder for feature extraction
Given the widespread availability of grids of models for stellar atmospheres, it is necessary to recover intermediate atmospheric models by means of accurate techniques that go beyond simple linear interpolation and capture the intricacies of the data. Our goal is to establish a reliable, precise, lightweight, and fast method for recovering stellar model atmospheres, that is to say the stratification of mass column, temperature, gas pressure, and electronic density with optical depth given any combination of the defining atmospheric specific parameters: metallicity, effective temperature, and surface gravity, as well as the abundances of other key chemical elements. We employed a fully connected deep neural network which in turn uses a 1D convolutional auto-encoder to extract the nonlinearities of a grid using the ATLAS9 and MARCS model atmospheres. This new method we call iNNterpol effectively takes into account the nonlinearities in the relationships of the data as opposed to traditional machine-learning methods, such as the light gradient boosting method (LightGBM), that are repeatedly used for their speed in well-known competitions with reduced datasets. We show a higher precision with a convolutional auto-encoder than using principal component analysis as a feature extractor.We believe it constitutes a useful tool for generating fast and precise stellar model atmospheres, mitigating convergence issues, as well as a framework for future developments. The code and data for both training and direct interpolation are available online at https://github.com/cwestend/iNNterpol for full reproducibility and to serve as a practical starting point for other continuous 1D data in the field and elsewhere.
C. Westendorp Plaza, A. Asensio Ramos, C. Allende Prieto
2023-06-12T08:16:26Z
http://arxiv.org/abs/2306.06938v1
# iNNterpol+ ###### Abstract Context:Given the widespread availability of grids of models for stellar atmospheres, it is necessary to recover intermediate atmospheric models by means of accurate techniques that go beyond simple linear interpolation and capture the intricacies of the data. Aims:Our goal is to establish a reliable, precise, lightweight, and fast method for recovering stellar model atmospheres, that is to say the stratification of mass column, temperature, gas pressure, and electronic density with optical depth given any combination of the defining atmospheric specific parameters: metallicity, effective temperature, and surface gravity, as well as the abundances of other key chemical elements. Methods:We employed a fully connected deep neural network which in turn uses a 1D convolutional auto-encoder to extract the nonlinearities of a grid using the ATLAS9 and MARCS model atmospheres. Results:This new method we call iNNterpol effectively takes into account the nonlinearities in the relationships of the data as opposed to traditional machine-learning methods, such as the light gradient boosting method (LightGBM), that are repeatedly used for their speed in well-known competitions with reduced datasets. We show a higher precision with a convolutional auto-encoder than using principal component analysis as a feature extractor. We believe it constitutes a useful tool for generating fast and precise stellar model atmospheres, mitigating convergence issues, as well as a framework for future developments. The code and data for both training and direct interpolation are available online for full reproducibility and to serve as a practical starting point for other continuous 1D data in the field and elsewhere. Conclusions: ## 1 Introduction In order to study stellar spectra from observed data, a widespread approach is to resort to theoretical model atmospheres. They represent tabulated thermodynamical quantities such as density, temperature, pressure, electron number density, and opacity as a function of optical depth for a wide range of stellar atmospheric parameters, such as effective temperature, surface gravity, and chemical composition. Commonly, a grid of such models are painstakingly calculated for a large set of parameters (Kirby, 2011). Intermediate values of these parameters are obtained by interpolating in this grid. Since these grids span a wide range of stellar atmospheres, where different physical processes have to be taken into account, the question arises of whether we can go beyond a simple linear interpolation among these models. The purpose of this work is to investigate a way to recover all the possible atmospheres within a specific grid in a fast way and also with great precision, going beyond a straightforward linear interpolation. The method is presented here, where the nonlinear relations within the data were recovered by means of a neural network (NN). We applied these techniques to two well-known families of models that effectively cover most of the parameter space of effective temperatures, surface gravities, and metallicity ratios such as the ATLAS9 and MARCS collections of model atmospheres. The ATLAS9 family consists of over 853,000 models which are obtained by the well-known code of Kurucz (1979) enhanced by new opacities and covering a wider range of stellar compositions. The MARCS code (Gustafsson et al., 2008) with an augmented metallicity subgrid provides over 380,000 models. Both grids of models are described in Meszaros et al. (2012). 
These series of models serve as a starting point for our technique, which by no means is restricted to them. We firmly believe that this type of NN interpolator can be very useful for any other grid of models when the parameter space is sufficiently covered, as we discuss below. To provide a more comprehensive set of models and access the latest updates, we extended our technique to include the PHOENIX model atmospheres (Husser et al., 2013). However, compared to the other two models, PHOENIX has a much smaller parameter space as it does not include the carbon abundance as a specific parameter. This leads to a significantly smaller number of available models, which is roughly ten times less than in the other two families. Neural networks, which have shown great success in various fields from image recognition and generation to natural language processing, have also been used in different aspects of astronomy (e.g., see Baron 2019). In our particular case, we wanted to apply a NN to a series of smooth stellar atmospheric models, in a way that the NN could learn the characteristic features of these models for the initial stellar parameters (effective temperature, surface gravity, and metal abundances). Given new values of these parameters, the NN would be able to generate an atmosphere based on the features learned with sufficient precision in order to be used in subsequent tasks such as calculating stellar synthetic spectra. In order to reduce the number of parameters and effectively avoid the so-called curse of dimensionality, we needed to capture in some way the essence of these models with an effective feature-extraction method. In our case, even with a high number of models as we have (see Section 2), the great number of parameters (or dimensions) could render our attempts to obtain high precision futile. Additionally, the characteristic smooth nature of our data (all physical parameters) in optical depth implies that treating each value as independent to the next one is prone to imprecisions and sharp discontinuities. Initially we resorted to using principal component analysis (PCA) for dimensional reduction, as it is a well-known technique for this means and is described in Section 3.1. Although PCA is known to be precise, especially with observed data (i.e., with noise), it is of linear nature. Applying singular value decomposition (SVD) to the matrix of stacked models and keeping a subset of the principal components is what we passed to our NN and this enabled us to train it. With our data being naturally smooth and not presenting any observational noise, there is no evident limit to the number of components to discard. Taking advantage of recent progress in deep learning, we find that a convolutional auto-encoder (CAE) is particularly useful in this regard, as it can improve the results obtained with PCA even with the same number of components. We then proceed to elaborate the optimal solution or state of the art (SOTA), we call iNNterpol, described in Section 3.3, that highly improves on PCA and gives a higher precision when tested with data not seen by the NN. As a sidenote, we study the use of traditional machine-learning (ML) techniques, such as gradient boosting (GB, in particular LightGBM, Ke et al. 2017), that in principle are optimal for tabular-like data. These techniques are simpler, faster, and have allowed for well-known ML challenges to be overcome (e.g., Caruana & Niculescu-Mizil 2006), albeit with a short time for development and thus employing limited data samples. 
We shall prove in Section 3.2 that they are not entirely applicable for the problem at hand. The aim of this work is to deliver both a useful tool and a method that can be applied to all sets of data of a 1D continuous nature, which are numerous in the natural sciences in general and particularly abundant in astrophysics. As the details here are really important and we know it is usually very hard to reproduce models in the literature from general descriptions, we provide all products, that is the tool with the full source code, together with the original data. These are available for full reproducibility of the present work and to serve as a starting point for future work1. Footnote 1: [https://github.com/cwestend/iNNterpol](https://github.com/cwestend/iNNterpol) This work is structured as follows: in Section 2 we present the main characteristics of the model atmospheres involved, and we then explain how we applied PCA for dimensionality reduction and used its results embedded in a NN as described in Section 3.1. We first chose the ATLAS9 family of models as it is the most extensive one. We then study the effect of a classical ML technique such as GB in Section 3.2 and finally present in Section 3.3 the optimal model we call iNNterpol, which includes a CAE as the feature extractor of our NN. Once we had the optimal solution for the ATLAS9 grid, we then applied it to the MARCS and PHOENIX grid of models, where small variations had to be performed. The differences of the MARCS and PHOENIX results with respect to ATLAS9 are presented in Section 3.4. The differences with classical linear interpolation are detailed in Section 3.5. ## 2 Description of the models We chose two families of models obtained through the ATLAS9 and MARCS codes, both described in detail in Meszaros et al. (2012), as they are the most complete, homogeneous, and dense ones available in the literature. The ATLAS9 grid has solar-scaled metallicities [M/H] from -5 to 1.5, carbon [C/M] abundances, and \(\alpha\)-element [\(\alpha\)/M] variations ranging from -1.5 to 1. The effective temperatures span values ranging from 3,500 K to 30,000 K and log g from 0 to 5. These are 1D plane-parallel model atmospheres computed under the assumption of local thermodynamical equilibrium. For the MARCS grid of models, the metallicities [M/H] cover from -2.5 to 1.0, carbon [C/M] abundances, and \(\alpha\)-element [\(\alpha\)/M] variations ranging from -1.0 to 1.0. In the MARCS grid, the effective temperatures range from 2,500 K to 8,000 K and log g from -0.5 to 5.0. For each combination of these five initial parameters, we have the stratification with optical depth of the mass column, temperature, gas pressure, and electron number density in 71 points for the ATLAS9 (56 points for MARCS models). We used the Rosseland optical depth scale: \[\tau_{\rm Ross}(z)=-\int_{\infty}^{z}\kappa_{\rm Ross}\,dz^{\prime}, \tag{1}\] where \(\kappa_{\rm Ross}\) is the Rosseland mean opacity \[\kappa_{\rm Ross}^{-1}=\frac{\int_{0}^{\infty}\kappa_{\nu}^{-1}\,u(\nu,T)\,d\nu}{\int_{0}^{\infty}u(\nu,T)\,d\nu} \tag{2}\] with \(u(\nu,T)=dB_{\nu}/dT\), the temperature derivative of the Planck function, and \(\kappa_{\nu}\) is the frequency-dependent opacity. Not all combinations are physically viable or yield stable stellar atmospheres. For instance, at high effective temperatures, there are only models with high gravities. We have over 853,000 models for ATLAS9 and 381,000 for MARCS covering most stellar types so they sample the parameter space sufficiently.
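As a concrete illustration of Eq. (2), the short sketch below evaluates the Rosseland mean numerically as a harmonic mean of \(\kappa_{\nu}\) weighted by \(u(\nu,T)\); the opacity law used is a made-up placeholder, and only the weighting and the integrals follow the definition above.

```python
import numpy as np

# Numerical sketch of Eq. (2): a harmonic mean of kappa_nu weighted by
# u(nu, T) = dB_nu/dT. The opacity law below is a made-up placeholder.
h, k_B, c = 6.62607e-34, 1.380649e-23, 2.99792458e8

def dB_dT(nu, T):
    """Temperature derivative of the Planck function B_nu(T)."""
    x = h * nu / (k_B * T)
    ex = np.exp(np.clip(x, None, 700.0))            # guard against overflow
    return (2.0 * h**2 * nu**4) / (c**2 * k_B * T**2) * ex / (ex - 1.0)**2

def kappa_rosseland(kappa_nu, nu, T):
    w = dB_dT(nu, T)
    return np.trapz(w, nu) / np.trapz(w / kappa_nu, nu)

nu = np.logspace(13, 16, 4000)                      # frequency grid in Hz
kappa_nu = 1.0e-3 * (nu / 1e14) ** -1.5             # placeholder opacity law
print(kappa_rosseland(kappa_nu, nu, T=5800.0))
```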
We also tried to apply a similar technique to the PHOENIX grid, but the results are not as precise as for the other two, as we discuss below. We think this is due to the fact that these models span one parameter fewer, as they do not sample the carbon abundances, thus providing roughly an order of magnitude fewer models (a total of only 47,000). For the PHOENIX grid, the range of metallicities [M/H] varies from -4.0 to 1.0, \(\alpha\)-element [\(\alpha\)/M] variations range from -0.4 to 1.2, the effective temperatures range from 2,300 K to 15,000 K, and log g varies from -0.5 to 6.5. ## 3 Methods and results We applied the following techniques on the ATLAS9 family of models as they are the largest one and they cover more parameter space. An adaptation of the resulting model applied to the other two datasets is discussed in Sect. 3.4. ### PCA in a NN Principal component analysis, or Karhunen-Loeve transformation, is a multivariate statistical technique that has been widely used in stellar (see Munoz Bermejo et al. 2013; Carroll et al. 2007) and solar spectroscopy (Martinez Gonzalez et al. 2008; Asensio Ramos et al. 2007) for quite some time (Rees et al. 2000; Bailer-Jones et al. 1998). It is a very fast and efficient way of reducing the feature space by means of a linear transformation of the data that finds the direction along which the variance is maximum. Removing second-order dependencies yields an orthonormal basis for which the directions are uncorrelated. It effectively reduces the dimensionality of the data as we chose to keep a subset of these components that is capable of reconstructing the original data within a certain error. PCA works best when the dimensions in the original data space are related to each other and when nonlinear effects are small, but this is just an assumption since it is basically unknown a priori. In the case of noisy data as in real spectra, it is natural to decide on the number of components or eigenvalues to keep since the rest should ideally contain only the information about the noise. In our case, with smooth and noiseless synthetic models, the criterion chosen was to retain as many components as needed to recover the original data within a specific error (ideally less than 2% RMS). For our model atmospheres and especially due to the smooth nature of our data, PCA not only provides a dimensional reduction, which is a natural starting point for a NN analysis (Bishop et al. 1995), but it also ensures that the recovered stratification of all parameters should be both smooth and continuous. To this end we set up our NN as described in Fig. 1, where the inputs are our five model grid parameters (effective temperature, surface gravity, and metal abundances: total [M/H], carbon abundance [C/M], and \(\alpha\) elements [\(\alpha\)/M]) and the outputs are the first 12 PCA components for each of the physical parameters on which SVD was applied. We worked with the logarithm of these quantities to keep the variations within a similar range for all quantities and for the NN to be able to train equivalent weights. We resorted to the highest numerical precision available provided by the language (in our case numpy.longdouble, or 128-bit extended precision, i.e., quadruple precision or 16-byte real numbers) in applying SVD to minimize the rounding error when recovering the original quantities.
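A minimal sketch of this PCA step for a single physical quantity is shown below; the array of stacked models is a random stand-in with the depth dimension used for ATLAS9 (71 points), and plain float64 is used here for brevity even though, as noted above, the actual decomposition was carried out in extended precision.

```python
import numpy as np

# PCA-by-SVD sketch for one physical quantity, e.g. log(T) on 71 depth points
# stacked over all models. "models" is a placeholder array, not real grid data.
rng = np.random.default_rng(0)
models = np.cumsum(rng.normal(size=(1000, 71)), axis=1)   # stand-in for log quantities

mean = models.mean(axis=0)
X = models - mean
U, S, Vt = np.linalg.svd(X, full_matrices=False)

n_comp = 12
coeffs = X @ Vt[:n_comp].T            # 12 numbers per model: the NN targets
reconstructed = coeffs @ Vt[:n_comp] + mean

rms = np.sqrt(np.mean((reconstructed - models) ** 2))
print(f"RMS reconstruction error with {n_comp} components: {rms:.3e}")
```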
We decided 12 components were precise enough since the error recovering the parameters was within the 2% RMS error and previous calculations with only nine PCA components for each of the four physical quantities (PCA36-NN) gave a precision which was only slightly worse but which overestimated the predicted values for the temperature stratification in higher layers (see Fig. 1 in Appendix A). The rest of the hyperparameters in the NN were estimated by trial and error (number of layers, number of neurons per layer, activation functions, learning rates, epoch number, and batch size). The optimal values are shown in Table 1. The parameter inputs for each model were initially normalized by the absolute maximum value in their range. We also experimented with values in the batch size, and found that better convergence was obtained using a small size of 64, which in out network of few input parameters and larger output ones with many layers resulted in a slow training as the usage of the GPUs was not optimized. This meant that for 100 epochs, each training took around 3 hours of computing time on an NVidia Tesla P100 with 12GB, which is perhaps what also hinders the use of NNs as opposed to classical ML where it takes around 10-15 minutes as we shall see in Section 3.2. We trained the NN using the usual strategy of employing 80% of the data for training, 10% for validating, and 10% as test data, which was all unseen by the NN to evaluate the quality of the predictions. All were chosen randomly to avoid training and validating with a specific portion of parameter space with could have certain peculiar characteristics. The quality of the results of this PCA-NN using 12 PCA components for each physical quantity (PCA48-NN) is illustrated in Fig. 2 where the differences between the predicted values of the temperature stratification and the "ground truth" (the actual model atmospheres) are shown for combinations of the input parameters in the test set. For these atmospheres of metal-rich dwarf stars (with [M/H] = 0.5, [C/M], [\(\alpha\)/M] = 0 \(\pm\) 0.25, and log g = 4.5 \(\pm\) 0.5), the errors lie well below the 2% value throughout the atmosphere. ### Gradient boosting A common question that always arises when applying deep networks such as the NN described above is what would happen when applying classical ML techniques. For this, we resorted to the well-known GB methods which are so extremely fast and \begin{table} \begin{tabular}{c c c c} \hline n\({}_{l}\) Layers & n\({}_{\mathrm{e}}\) Neurons/layer & Activation & Batch size \\ \hline 8 to 22 & 20 to 80 & relu/krelu/elu & 256 to 32 \\ \hline Optimal values & & & \\ \hline (PCA36,48-NN) & & & \\ \(12\) & 40 & lkrelu & 64 \\ (CAE48-NN) & & & \\ 16 & 48 & lkrelu & 64 \\ (CAE71-NN & & & \\ or iNNterpol) & & & \\ 16 & 71 & elu & 64 \\ \hline \end{tabular} \end{table} Table 1: Range of parameters tested for the fully connected NN and optimal value found for PCA with 12 components and using a CAE with a bottleneck layer of 48, 64, and 71 (CAE48, CAE64, and the optimal CAE71 we call iNNterpol). This all pertains to ATLAS94 data; for more details on MARCS and PHOENIX, readers can refer to Section 3.4. The activation functions referred to are rectified linear unit (relu), leaky rectified linear unit (lkrelu), and exponential linear unit (elu). Figure 1: General architecture of our fully connected NN model (some connections have been omitted for clarity). 
The input parameters are the five values of the grid, effective temperatures Teff, surface gravity log g, and metalicities [M/H], [C/M], and [\(\alpha\)/M]. The n\({}_{\mathrm{e}}\) output values are the result of the dimensionality reduction used: for PCA the 12 components of each parameter for a specific model atmosphere (48 in total) and when using the encoder of our CAE 48, 64, and up to 71 which constitutes the best configuration. The values of n\({}_{\mathrm{e}}\) neurons per layer and n\({}_{\mathrm{f}}\) layers are described in Table 1. known to work well with tabular data. In our case we resorted to the LightGBM implementation (Ke et al., 2017) which is known to be well suited for large datasets as in our case, is less prone to overfitting, as it has a built-in early-stopping mechanism and especially because it is more accurate. The results, after carrying out a complete hyperparameter search over 500 boosting rounds consuming from 10 to 15 minutes on a 32-core Xeon CPU @ 2GHz in parallel mode (it can also work on GPU, but this was unnecessary), are shown in Table 2. We followed the same strategy of training on 80% of the data, 10% for validation, and 10% for testing. The results are shown in Fig. 3 where different atmospheric models are plotted for different effective temperatures; all have solar-type abundances and low surface gravities. We can see that the predicted values of LightGBM are quite good and precise. The problem comes when predicting models for parameters that do not fall exactly on the grid nodes. This is what can be seen in Fig. 3 where we only vary the effective temperature adding 125 K, keeping the rest of the parameters (([M/H], [C/M], [\(\alpha\)/M], and log g)) with the same values as those of the model points. We note that the model values are equispaced in 250 K in effective temperature for this range. In this case, the predicted values do not fall between the two models with effective temperatures \(\pm\) 125 K as would be expected (and as PCA-NN and CAE-NN interpolate), but exactly over the values of the model before. We conclude that at least in our case, for continuous models in a close equispaced parameter grid as we have, LightGBM acts only as a high-precision classifier (although it was set up to be a regressor). It somehow learns the structure of the grid of models and classifies the value of each temperature for each depth, but it is unable to learn and interpolate values in between. In this case, it works as a nearest neighbor interpolation so a 5D linear interpolation should be preferred as it can effectively recover intermediate values. ### Convolutional auto-encoder in the NN To overcome the limitations of PCA regarding the possible non-linear relations in the manifold that constitutes the parameter space, we resorted to a strategy that involves feature extraction based on a CAE. Here deep learning helps with the introduction of the auto-encoder (AE, Ranzato et al., 2007). It has been shown that in the simplest case, an AE with fully connected layers and only linear activation on the output trains weights that span the same subspace as the one recovered by PCA (Bourlard & Kamp, 1988), so in the worst case we would be obtaining the same gain as with PCA. In a CAE the convolutional layer, as introduced by Fukushima (1980), contains units whose receptive fields cover a patch of the previous layer; the set of adaptive parameters or weight vector of such a unit is often called a filter. 
These convolutional layers can be stacked, each effectively retrieving features from the previous one. This technique was applied to handwritten images by LeCun et al. (1989) and opened a broad field for image recognition that has since become the foundation of modern computer vision. Apart from the great success of convolutional neural networks (CNNs) which are now widely used in image processing and real-time recognition, the application to 1D data is paradoxically much less frequent. Besides notable examples in fields as diverse as medical sciences (Huang et al., 2018), structural damage detection (Abdeljaber et al., 2017), or speech recognition (Abdel-Hamid et al., 2014), many have yet to arise. Some even resort to converting the 1D signals to 2D structures (Ince et al., 2016; Zihlmann et al., 2017) to be able to apply the well-established techniques as used for images. The specificity of customizing the architecture of each CNN is time consuming and inherently challenging as there are no shortcuts. Additionally, the extensive hyperparameter space that needs to be sampled has surely limited the utilization of CNNs. An aspect of convolutions is that apart from reducing the size of the input data at each step in the filter dimension, they absorb the data size by increasing the number of channels, so there needs to be an additional way to create a bottleneck. In our case, we resorted to 1x1 convolution \begin{table} \begin{tabular}{l c c c} \hline boost type & min data in leaf & subsample & max depth \\ \hline gbdt & 1 & 0.5 & 15 \\ \hline num estimators & metric & boost rounds & early stopping \\ \hline 8.000 & rmse & 500 & 10 \\ \end{tabular} \end{table} Table 2: Optimal parameters after a hyperparameter search for the lightGBM regressor. Figure 3: LightGBM fits to model data for test points. Solid lines correspond to actual model data and solid circles illustrate the fit of the LightGBM model to these parameters, predicting the temperature. The crosses show the predictions to the data just adding 125 K to the effective temperature parameter and leaving the rest of the parameters unchanged. We note that the dots and crosses are predicted with the same values. Figure 2: PCA48-NN (12 component PCA) differences in temperature as a percentage between the predicted models and the actual ones for points in the grid never seen by the NN for ATLAS9 model data. The full range of effective temperatures are covered (3,500 K to 30,000 K). with a kernel size of 1 and stride of 1, made famous as used in the GoogleLeNet so-called inception architecture (Szegedy et al. 2014). This layer not only reduces the dimensions but also serves as a fully connected layer in the 1D CNN to which the activation functions enable it to learn even more characteristics of the data and enhance the power of the model. In order to both reduce the dimensions and to extract the nonlinearities in the parameter space, we set up a CAE as depicted in Fig. 4. As an AE is essentially a special type of NN in which, in an unsupervised manner, the output mimics the input in the most accurate way, we thus have an equal number of neurons on both input and output layers. The type of CAE we are interested in is a so-called "under-complete" one, where there is a bottleneck layer that has fewer neurons than the input and output ones. 
To be able to compare it to PCA48-NN results (see Table 1), we first set this bottleneck to the same number of components chosen before (48; 12 are for each of the four physical parameters), then proceeded to increase this number to try to improve our results and to obtain the optimal solution. The bottleneck made the CAE learn the essential features from the input in order to reproduce them accurately. In this way we captured the existing nonlinearities in the model data and go beyond PCA. Also importantly, the bottleneck served as an effective regularizer so the network does not just learn the input values (overfitting) which would hinder it use for data it has not yet seen (i.e., been trained on). We then separated the layers up to the bottleneck and used it as the so-called encoder, the results of which we then fed to the same fully connected NN we used before with PCA. In this way the encoder acts as the feature extractor and dimensionality reducer. Once the NN is trained to predict output values of equal length as the bottleneck given any combination of input parameters, we then used the decoder part to reconstruct the physical parameters that constitute the predicted stellar atmosphere. It is worth noting that in this CAE-NN, the four physical parameters for which we want to extract their features (mass column, temperature, gas pressure, and electronic number density stratification) are fed into the CAE as four channels simultaneously. This means that any relation between these parameters for a specific combination of the grid values (effective temperature, log g, and metalicities) are taken into account combined all together by the NN. We also found it necessary to normalize the input values of each quantity by subtracting the minimum value of all the models for this quantity and dividing by the difference between the maximum and the minimum of all models. This is common practice in training NNs as the learned weights do not have to span great size differences. We also found that the trained CAE used in the NN that worked best in our case was the exponential linear unit (ELU). The NN that incorporates the bottleneck provided by PCA, on the contrary, performed better using the leaky rectified linear unit. The optimal configurations are summarized in Table 1. The results of using the CAE with an output of 48 values inside a similar NN (CAE48-NN) as used before in PCA48-NN Figure 4: Architecture of our CAE for feature extraction. The 1x1 convolutional layer is at the center to act as the bottleneck. The values obtained at this bottleneck are the result of the encoder part, and constitute the outputs of the NN, the last layer from Fig. 1. We note that CID are 1D convolutions and C1TD are 1D transposed convolutions. The activation after each convolutional layer function is always an exponential linear unit (ELU). For convolutional layers the heights were scaled to the number of channels and the depths were scaled to the resulting number of kernels per channel. The detailed code available at [https://github.com/cwestend/iNInterpol](https://github.com/cwestend/iNInterpol) Figure 5: Results for CAE48-NN for ATLAS9 data with a bottleneck of 48 to compare with PCA48-NN (Fig. 2). Differences in temperature as a percentage between the predicted models and the actual ones for points in the grid never seen for the NN. The number of layers chosen were 16 and the number of neurons per layer were 48, instead of 12 layers and 40 neurons per layer we used for PCA. 
The plot uses the same axis scales as Fig. 2 for comparison. Figure 6: Same as Fig. 5, but rescaled for clarity. can be seen in Fig. 5, which is at the same scale as Fig. 2 for comparison. In this case we increased the number of layers of the NN to 16 as opposed to 12 for PCA36-NN or PCA48-NN and the number of nodes to 48 instead of 40 (see Table 1). This small increase in free parameters enables the network to converge to a better solution when using the CAE; while using this exact same configuration for PCA, we find a similar solution for the temperatures but then the pressures are poorly fit, underfitting on average by more than 3%. The same results are shown in more detail in Fig. 6. The results from CAE48-NN already show an improvement of a factor of about 2 compared to PCA48-NN. We then went even deeper (literally) and gave more power to the NN model, probing the hyperparameter space to obtain a better fit. This was done by incrementing the number of total layers from 12 to 16 and the number of nodes per layer from 40 to 71, as well as using a CAE with a bottleneck of 71 values (CAE71-NN). This configuration is what we call the optimal model and shall refer to as iNNterpol. The results are shown in Fig. 7 where another doubling of the gain in precision was obtained over the CAE using 48 components and the previous configuration (CAE48-NN, 16 layers, 48 neurons per layer). Temperature is the best retrieved parameter in all these NNs, both for PCA-NNs and CAE-NNs. In Appendix A we show the results for mass column, gas pressure, and electronic number density in Fig. 10, Fig. 11, and Fig. 12, respectively. The quality of the predictions can be seen in the temperature stratifications shown in Fig. 8. These are test models never seen by the NN in training. The metallicities are not identical, as the test models were chosen randomly. They represent a zone of low effective temperature, near-solar abundances, and low surface gravity (all with an identical log g = 2.0), where the calculated models are more critical and where the predictions could most likely fail. We see that the predictions are nevertheless really hard to distinguish from the actual models (solid lines overlap) as the errors in these temperature values are indeed in the 10-20 K range. The exception is at Teff = 6,500 K for deep layers (\(\tau_{Ross}>1.4\)) where even the calculated models show a "kink" that is not representative of a realistic stellar atmosphere and is likely a numerical artifact. The rest of the fits to the other physical quantities are in Appendix A, Figs. 11, 12, and 13. All cases show that the interpolated values follow the behavior of the grid models in a reasonable way, and we believe it is proof that the iNNterpol NN is able to recover the relations between the parameters and that it reflects them in the obtained models. ### Results for MARCS and PHOENIX models We applied the above CAE configuration to both MARCS and PHOENIX data, starting from the CAE71-NN (iNNterpol) configuration found for ATLAS9 data. The optimal parameters found are described in Table 3. The data dimensionality is very different, starting with the number of model atmospheres (853,000 models for ATLAS9, 381,000 for MARCS, and 47,000 for PHOENIX), the number of points in optical depth in the stellar atmosphere (71 for ATLAS9, 56 for MARCS, and 64 for PHOENIX), up to the specific coverage of parameter space. For this reason we tested for an increased range of values, specifically the number of layers \(n_{l}\).
To be able to perform this, conscious that increasing the number of fully connected layers can lead to an effective loss of the information in the weights known as the "vanishing gradient problem" (Basodi et al., 2020, Hochreiter et al., 2015) which has been addressed introducing residual networks (ResNet: He et al., 2015), we prevented the loss of information in such a deep NN by making skip connections that allow this information to be carried on. These connections were implemented by bypassing every two fully connected layers. This architecture was useful only for MARCS data, yielding no improvement for ATLAS9 or PHOENIX data. The resulting model is fully detailed online2. Footnote 2: [https://github.com/cwestend/iNNterpol](https://github.com/cwestend/iNNterpol) The results are illustrated in Fig. 9 for MARCS data which have an equivalent quality to those for ATLAS9 (Fig. 7), while for PHOENIX the precision is five times worse, as shown in Fig. 10. We believe this can be explained by the grid with PHOENIX having an order of magnitude less models, being much less dense as compared to those of ATLAS9 and MARCS. Figure 8: Fits to various values of temperature with iNNterpol with a bottleneck of 71 (CAE71-NN) for ATLAS9 data. For all values, log g = 2.0. The test values not seen by the NN chosen randomly were used, that is why the metallicities are not exactly the same, but are all in an interval of \(\pm\)0.25 dex. Predicted values are in dash-dotted lines, and we note that interpolation is in temperature and also in metallicities. Figure 7: Results of iNNterpol for ATLAS9 data with a bottleneck of 71 and 16 layer deep (CAE71-NN). Shown are differences in temperature in percentage between the predicted models and the actual ground truth ones for points in the grid never seen for the NN. Effective temperatures cover the whole range from 3,500 K to 30,000 K. ### Comparison to linear interpolation In the specialized literature, it is common practice to interpolate model atmospheres linearly for the desired atmospheric parameters. Although models could be computed afresh for those values, the codes are not always public or easy to use, while precomputed model grids are available. Therefore, an interesting question is what the differences are between using classical linear interpolation and our iNNterpol method on the parameter space. If the grid is sufficiently dense in the sense that the models are close enough together, and thus vary linearly from one to the next, linear interpolation should be a good resort, but this is very hard to know in advance. Furthermore, there are places where a slight variation of even a single parameter can yield a very different model as we discuss below. The problem is further enhanced in places where this grid is not dense enough as near the grid limits or edges, or where there are missing models due to a lack of convergence. To test the validity of our method, we proceeded to create a new grid by linearly interpolating at half-step intervals on the original one. Using this new grid, we could use it again at half-step intervals to be able to compare the calculated values to the original ones used to make the new grid in the first place as described in Bertran de Lis et al. (2022). Making the assumption that the errors in this way are independent and Gaussian, the resulting deviations have to be corrected by a factor of \(\sqrt{2}\). 
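A minimal sketch of this half-step validation is given below; for brevity it uses a toy 2D (Teff, log g) sub-grid and a single scalar quantity in place of the full 5D grid and depth-resolved models, but the procedure (interpolate to half steps, interpolate back, and apply the \(\sqrt{2}\) correction) is the same.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Half-step validation sketch on a toy 2D (Teff, log g) sub-grid for one
# quantity at one depth point; the "truth" function below is a placeholder.
teff = np.arange(3500.0, 8001.0, 250.0)
logg = np.arange(0.0, 5.01, 0.5)
TT, GG = np.meshgrid(teff, logg, indexing="ij")
truth = np.log10(TT) + 0.1 * GG**2            # stand-in for e.g. log T at one depth

interp = RegularGridInterpolator((teff, logg), truth, method="linear")

# Half-step grid obtained by linear interpolation of the original one.
teff_h = 0.5 * (teff[:-1] + teff[1:])
logg_h = 0.5 * (logg[:-1] + logg[1:])
TH, GH = np.meshgrid(teff_h, logg_h, indexing="ij")
half = interp(np.stack([TH.ravel(), GH.ravel()], axis=1)).reshape(TH.shape)

# Interpolate back from the half-step grid onto the interior original nodes.
interp_h = RegularGridInterpolator((teff_h, logg_h), half, method="linear")
TI, GI = np.meshgrid(teff[1:-1], logg[1:-1], indexing="ij")
back = interp_h(np.stack([TI.ravel(), GI.ravel()], axis=1)).reshape(TI.shape)

rms = np.sqrt(np.mean((back - truth[1:-1, 1:-1]) ** 2)) / np.sqrt(2.0)
print(f"sqrt(2)-corrected RMS deviation: {rms:.3e}")
```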
We applied this to the exact same atmospheric models used to test our iNNterpol method, specifically those in which all surrounding models exist and a linear 5D interpolation was in fact possible and did not lead to extrapolation. Because a linear 5D interpolation requires that, for each desired interpolated atmosphere, all 32 (\(2^{5}\)) surrounding models exist, the number of models available was greatly reduced. The errors in linear interpolation are shown in Fig. 11 where, of the initial 170 test models shown in Fig. 7, only 104 remain that have all surrounding model atmospheres to permit a linear interpolation. We note that linear interpolation is very precise for layers at intermediate depths, but the deviations at deep optical depths or high in the stellar atmospheres make this technique less reliable as the RMS deviations become large. The iNNterpol errors over the same grid points, shown in Fig. 12, present a more constant deviation all through the model atmospheres. To evaluate these errors on a different value range, we can for example fix the effective temperature range to that of hot stars for the same (log) gravity range, and this is shown in Fig. 13. We see that while at higher layers the error is very small, models at deep Figure 11: Resulting error of linearly interpolating in a new half-step subgrid of ATLAS9 to compare with the original model atmospheres used for making this new subgrid. Chosen points are the exact same test values used to evaluate iNNterpol (a subset of those shown in Fig. 7) where actual linear interpolation is possible. The same range of Teff, from 3,500 K to 30,000 K, applies. Figure 10: Results of CAE96-NN for PHOENIX data with a bottleneck of 96, 16 layers, and 96 nodes for each layer. Differences in temperature as a percentage between the predicted models and the actual ones for points in the grid never seen for the NN. The full range of effective temperatures are covered (2,500 K to 15,000 K). \begin{table} \begin{tabular}{c c c c} \hline n\({}_{\mathrm{l}}\) Layers & n\({}_{\mathrm{n}}\) Neurons/layer & Activation & Batch size \\ \hline 16 to 28 & 64 to 128 & lkrelu, elu & 128, 64, 32 \\ \hline Optimal values & & & \\ \hline \multicolumn{4}{c}{(MARCS CAE71)} \\ 22 & 71 & elu & 64 \\ \multicolumn{4}{c}{(PHOENIX CAE96)} \\ 16 & 96 & elu & 64 \\ \hline \end{tabular} \end{table} Table 3: Range of parameters tested for the fully connected NN and optimal values for MARCS and PHOENIX data. For both MARCS and PHOENIX, the best configurations have the resulting dimensionality reduction equal to the number of neurons per layer, which is 71 and 96, respectively. Figure 9: Results of iNNterpol for MARCS data with a bottleneck of 71, 22 layers, and 71 nodes for each layer (CAE71-NN). Shown are differences of temperature as a percentage between the predicted models and the actual ground truth ones for points in the grid never seen for the NN. The full range of effective temperatures are covered (2,500 K to 8,000 K). layers present a wider variability and thus large error in linear interpolation. Again, our iNNterpol method is more consistent as it gives similar errors throughout the depth scale of the stellar atmosphere. The same behavior is evidenced if we go to other value ranges, such as for cool dwarfs which is shown in Fig. 8 and Fig. 9 or for values for giant stars as shown in Fig. 10 and Fig.
To test how our iNNterpol performs differently from a 5D linear interpolation, we can sample values where the models change significantly for a small step in one parameter. This is shown in Fig. 15 for MARCS model atmospheres, where all parameters except (log) gravity are kept constant (Teff = 3,200 K, [M/H] = [C/M] = [\(\alpha\)/M] = 0). The interpolated models for intermediate gravity values show that a linear fit and iNNterpol yield significantly different atmospheres, as the differences between both interpolations are well above the RMS errors previously shown. The effect is more significant at lower depths for both log g values, and also throughout the atmosphere as is the case for log g = 3.75. This region is an especially interesting one, as log g = 3.5 marks the transition from spherical models to plane-parallel atmospheres in MARCS models. Here both techniques should naturally differ, as linear interpolation takes into account only the contiguous models, while iNNterpol captures information from the whole grid. One interesting aspect is to be able to go beyond the regions where 5D linear interpolation is not available because there are essentially no surrounding models; our iNNterpol method can still give interesting results there. These have to be taken with caution, but provided the jump in a parameter is not too far away, the results may show what the NN is learning about the latent space of model atmospheres, as it learns not only from nearby grid points but from all available models. This is shown in Fig. 16, where we extrapolated to find models that are off the grid, such as those with high effective temperatures and low gravities. This model atmosphere is at the limit, in the sense that for MARCS atmospheres there are no existing models for those Teff beyond the shown (log) gravity values. A model that extrapolates to both Teff = 8,250 K and log g = 2.5 is also represented (one grid step beyond the edge in each of Teff and log g), and it can be considered a meaningful limit. Going beyond this (i.e., higher Teff or lower log g or both) yields non-smooth temperature stratifications, as is clearly seen in the model with Teff = 9,000 K and log g = 2.5. We consider these variations or inhomogeneities to be artifacts due to the NN not being able to retrieve information for parameter values so far away from the available grid. Our iNNterpol method has the added advantage of being extremely fast and lightweight, as all NN weights and code amount to under 20 MB, while a linear interpolator needs all models, which for the MARCS grid take around 1 GB of data.

## 4 Conclusions

We hereby present a method for effectively extracting the nonlinearities in a grid of model atmospheres and recovering them for any values of the parameter space with great precision. This tool is not only an extremely fast and lightweight way of working with these stellar models, but it also demonstrates a technique that can be employed with other families of models, provided the grid covered by these models in parameter space is dense enough. We provide iNNterpol, a fast and reliable interpolator for the ATLAS9 and MARCS families of models. We have shown that using the encoder from a CAE inside our NN greatly increases the power to capture the existing nonlinearities present in the data, surpassing PCA in this configuration. In contrast, traditional ML techniques such as LightGBM, which are much faster in many cases, fail to capture the specific variations of the data at hand.
We also show that our iNNterpol method provides meaningful information where the variation between atmospheres in the grid departs from linearity. The effort of designing and implementing this specific NN with a CAE for feature extraction should serve as a starting point for others to improve on it and to use it for other purposes and with data of a similar nature, which is abundant in the natural sciences. The full code and the data are freely available, in the tradition of deep learning, so that others can improve on it without having to recreate its results, as all the details are available and open-sourced.

Figure 14: Errors of iNNterpol (CAE71-NN) for the same models as in Fig. 13 for direct comparison.

Figure 12: iNNterpol (CAE71-NN) errors for ATLAS9 for the same models as in Fig. 11 for direct comparison.

Figure 13: Errors of linearly interpolating on a half-step grid for ATLAS9 hot dwarf stars (Teff between 8,000 K and 30,000 K).

###### Acknowledgements.

We appreciate the useful discussions with Thomas Masseron on linear interpolation and also thank Ivana Escala for her implementation of Masseron's code ([https://github.com/zeescala/interp-narcs/](https://github.com/zeescala/interp-narcs/)). We would also like to thank our referee, Mikhail Kovalev, whose insightful comments have helped us improve the clarity of this work. CAP acknowledges financial support from the Spanish Ministry of Science and Innovation (MICINN) project PID2020-1174936B-I00. We acknowledge financial support from the Spanish Ministerio de Ciencia, Innovación y Universidades through project PGC2018-012018-B-I00 and FEDER funds.
2310.06006
Review of control algorithms for mobile robotics
This article presents a comprehensive review of control algorithms used in mobile robotics, a field in constant evolution. Mobile robotics has seen significant advances in recent years, driven by the demand for applications in various sectors, such as industrial automation, space exploration, and medical care. The review focuses on control algorithms that address specific challenges in navigation, localization, mapping, and path planning in changing and unknown environments. Classical approaches, such as PID control and methods based on classical control theory, as well as modern techniques, including deep learning and model-based planning, are discussed in detail. In addition, practical applications and remaining challenges in implementing these algorithms in real-world mobile robots are highlighted. Ultimately, this review provides a comprehensive overview of the diversity and complexity of control algorithms in mobile robotics, helping researchers and practitioners to better understand the options available to address specific problems in this exciting area of study.
Andres-David Suarez-Gomez, Andres A. Hernandez Ortega
2023-10-09T16:47:20Z
http://arxiv.org/abs/2310.06006v1
## Review of control algorithms for mobile robotics

### Abstract

This article presents a comprehensive review of control algorithms used in mobile robotics, a field in constant evolution. Mobile robotics has seen significant advances in recent years, driven by the demand for applications in various sectors, such as industrial automation, space exploration, and medical care. The review focuses on control algorithms that address specific challenges in navigation, localization, mapping, and path planning in changing and unknown environments. Classical approaches, such as PID control and methods based on classical control theory, as well as modern techniques, including deep learning and model-based planning, are discussed in detail. In addition, practical applications and remaining challenges in implementing these algorithms in real-world mobile robots are highlighted. Ultimately, this review provides a comprehensive overview of the diversity and complexity of control algorithms in mobile robotics, helping researchers and practitioners to better understand the options available to address specific problems in this exciting area of study.

### Introduction

The design and optimization of control algorithms are frequently investigated to improve different aspects of a robot's performance, such as trajectory-tracking control for moving from one point to another. There is a large body of research on robot control algorithms, and new approaches are constantly being proposed. However, a problem arises from the difficulty of comparing the results of these published studies and assessing their quality. Moreover, in robotics publications the performance evaluation criteria are often overlooked, which makes an objective comparison of the algorithms difficult. Tests, whether in simulation or experimental, are often limited to measuring the length of the traveled path or the time the robot takes to complete a task. In addition, there are few standard methods for evaluating the capabilities and limitations of these systems in a comparable way (Norton et al., 2019).
Research in this field is usually carried out in controlled laboratory environments to validate proofs of concept and establish useful comparisons. However, it is important to keep in mind that the results obtained may differ to some extent from the real operation of a robot, since the latter is characterized by the presence of uncertainty (Martins et al., 2020). In addition, there is a lack of consensus regarding performance evaluation criteria, which tend to vary from one study to another. This lack of consensus makes it difficult to compare the capabilities of navigation algorithms and detracts from the rigor of evaluating progress in this field. As a consequence, a comprehensive evaluation system is lacking (Ren et al., 2020).

## Materials and Methods

Control of the inputs of a unicycle model normally applies the traditional feedback PID controller and selects the appropriate input, \(u=(v\;\omega)^{T}\), given by the equation:

\[U(t)=PID(e)=K_{P}\,e(t)+K_{I}\int_{0}^{t}e(\tau)\,d\tau+K_{D}\,\frac{de(t)}{dt} \tag{1}\]

In the context of each task detailed below, the term 'e' refers to the error between the desired value and the value obtained as a result. The constants \(K_{P}\), \(K_{I}\), and \(K_{D}\) represent the proportional, integral, and derivative gains, respectively, while 't' refers to time. The control gains used in this study are determined by trying different values with the aim of achieving satisfactory responses. If the vehicle moves at a constant speed, \(v=v_{0}\), then the control input only changes through the angular velocity, \(\omega\), as follows:

\[\omega=PID(e) \tag{2}\]
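As an illustration of Eqs. (1) and (2), the following is a minimal discrete-time sketch of a PID heading controller for a unicycle moving at constant speed. It is not taken from any of the reviewed works; the gains, time step, and goal position are illustrative values chosen only so that the example runs.

```python
import numpy as np

class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt (Eq. 1)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Unicycle driving at constant v0; only the angular velocity w is controlled (Eq. 2)
dt, v0 = 0.01, 0.5
x, y, theta = 0.0, 0.0, 0.0
goal = np.array([2.0, 1.0])
ctrl = PID(kp=2.0, ki=0.05, kd=0.2, dt=dt)   # illustrative gains, tuned by trial and error

for _ in range(2000):
    heading_to_goal = np.arctan2(goal[1] - y, goal[0] - x)
    e = np.arctan2(np.sin(heading_to_goal - theta), np.cos(heading_to_goal - theta))  # wrap to [-pi, pi]
    w = ctrl.step(e)
    x += v0 * np.cos(theta) * dt
    y += v0 * np.sin(theta) * dt
    theta += w * dt

print(f"final position: ({x:.2f}, {y:.2f})")
```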
## Results and Discussion

### Space-Time Criteria

Performance criteria related to the space-time dimensions are widely used and allow a quantitative evaluation and comparison of results in real or simulated experiments. The article by Munoz et al. (2014) describes several criteria typical of navigation and obstacle avoidance, such as mission success, robustness in narrow spaces, path length, time required to complete the mission, control periods, mean distance to the goal, distance to obstacles, and smoothness of the trajectory, among others. Among these, the criteria related to the space-time dimensions are the simplest and most commonly used. An optimal trajectory, from the point of view of reaching the goal, is considered to be one that follows a straight line of the minimum possible length and without curvature between the starting point (xi, yi) and the arrival point (xn, yn), traveled in the shortest time. This approach assumes linearity and a constant speed of the robot on its way to the goal (Munoz-Ceballos, N. D. et al., 2022).

The "Dynamic Window Approach" (DWA) is a well-known navigation algorithm for collision avoidance, initially proposed by Dieter Fox and his team. DWA works in real time and reacts to changing situations as they occur. In recent years, the DWA cost function has undergone several extensions and improvements. This approach directly determines safe and optimal translational (v) and rotational (w) velocities by creating velocity profiles that take into account the robot's dynamics and its velocity and acceleration limits. The search for suitable velocities mainly involves three subspaces, including the space of possible values of v (Mohammadpour et al., 2021). The space of possible velocities is divided into three subspaces according to the kinematic constraints of the robot:

Space of Possible Velocities (Vs): this subspace considers the kinematic limitations of the robot and represents all velocities that are physically possible for the robot given its mechanical characteristics.

Space of Admissible Velocities (Va): this subspace contains the velocities that allow the robot to stop without colliding with an obstacle. These velocities are restricted by the robot's ability to stop safely and without collisions.

Space of Possible Velocities Considering the Robot's Acceleration Limits (Vd): this subspace takes into account the acceleration limits of the robot. It represents the reachable velocities given the robot's limited ability to change its velocity quickly due to its acceleration constraints.

\[V_{r}=V_{s}\cap V_{a}\cap V_{d} \tag{3}\]

where Vr is the search space of optimal velocities, which is selected by maximizing the following objective function:

\[G(v,w)=\alpha\cdot h(v,w)+\beta\cdot d(v,w)+\gamma\cdot v_{F}(v,w) \tag{4}\]

where "h" measures the alignment of the robot with the goal direction, "d" represents the distance to the nearest obstacle, and "\(v_{F}\)" is the forward velocity of the robot. The values of \(\alpha\), \(\beta\), and \(\gamma\) are adjustable constants that determine the relative importance of these three measures in the objective function. In summary, the DWA method generates numerous possible local trajectories online and then selects the most suitable one according to the objective function. Finally, the most suitable velocity is executed to follow the selected local trajectory.
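A minimal sketch of the velocity-selection step of Eqs. (3) and (4) is given below. The admissibility check (Va) is reduced to a simple braking-distance test, the obstacle list and the weights \(\alpha\), \(\beta\), \(\gamma\) are invented for the example, and the forward simulation of each candidate trajectory is deliberately short; a full DWA implementation would follow the robot's actual kinematic and sensor models.

```python
import numpy as np

def dwa_select(v_now, w_now, goal, pose, obstacles, dt=0.1,
               v_lim=(0.0, 1.0), w_lim=(-2.0, 2.0), acc=(0.5, 3.0),
               alpha=0.8, beta=0.2, gamma=0.1):
    """Pick (v, w) maximizing G = alpha*heading + beta*clearance + gamma*velocity (Eq. 4)."""
    # Dynamic window Vr (Eq. 3): speed limits (Vs) intersected with velocities reachable in dt (Vd)
    v_min, v_max = max(v_lim[0], v_now - acc[0]*dt), min(v_lim[1], v_now + acc[0]*dt)
    w_min, w_max = max(w_lim[0], w_now - acc[1]*dt), min(w_lim[1], w_now + acc[1]*dt)

    best, best_score = (0.0, 0.0), -np.inf
    x0, y0, th0 = pose
    for v in np.linspace(v_min, v_max, 11):
        for w in np.linspace(w_min, w_max, 21):
            x, y, th = x0, y0, th0
            for _ in range(10):                      # short forward simulation of the candidate
                x += v*np.cos(th)*dt; y += v*np.sin(th)*dt; th += w*dt
            clearance = min(np.hypot(x - ox, y - oy) for ox, oy in obstacles)
            if clearance < v**2 / (2*acc[0]):        # Va: discard velocities that cannot stop in time
                continue
            ang = np.arctan2(goal[1] - y, goal[0] - x) - th
            heading = np.pi - abs(np.arctan2(np.sin(ang), np.cos(ang)))
            score = alpha*heading + beta*clearance + gamma*v
            if score > best_score:
                best, best_score = (v, w), score
    return best

v, w = dwa_select(0.3, 0.0, goal=(4.0, 2.0), pose=(0.0, 0.0, 0.0),
                  obstacles=[(2.0, 1.0), (3.0, 2.5)])
print(f"selected v = {v:.2f} m/s, w = {w:.2f} rad/s")
```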
_Success in Reaching the Goal_

This criterion is generally given as a percentage (%) and consists of counting the percentage of successes in completing a navigation mission with respect to the total number of attempts. Some researchers also set a time limit, sufficient to successfully complete the given navigation mission, in order to discard trials in which the robot gets stuck navigating in an endless loop (McGuire et al., 2019). An additional challenge for control algorithms is their performance when navigating through narrow passages or corridors; therefore, an additional criterion to consider can be robustness in narrow spaces: the number of narrow passages traversed successfully. As an example, this is used in navigation robots that must find the exit of a maze. In that situation, the position is triangulated through the configuration of the solved triangle, where we wish to compute the height (h) of the triangle from the sides 'a' and 'b' and the angle \(\beta\); 'c' is the unknown side of the triangle, so a formula is developed that only uses 'a', 'b', and \(\beta\):

\[A=\frac{c\,h}{2} \tag{4}\]

\[A=\frac{a\,b\,\sin\beta}{2} \tag{5}\]

The law of cosines is applied to triangulate the height and solve the triangle by combining the previous equations:

\[h=\frac{a\,b\,\sin\beta}{\sqrt{a^{2}+b^{2}-2\,a\,b\,\cos\beta}} \tag{6}\]

#### Time to Reach the Goal

Execution time is a fundamental criterion used in most articles in which algorithms are compared. It consists of measuring the time the robot takes to reach the goal. In simulations of deterministic systems, the same result is obtained under the same simulation conditions. However, to bring the simulation closer to reality, where factors such as battery wear, friction between the wheels and the ground, and environmental conditions exist, among others, noise can be introduced into the system. This is achieved by adding perturbations to the sensor readings or to the control signals sent to the actuators. In simulation, random number generators can be used to model the error or noise. Another related criterion is the number of processing cycles, which corresponds to the approximate number of operations performed to complete a mission. It is important to keep in mind that different types of robots may have different types of processors, that is, different computing capabilities. This influences the total time the robot takes to complete the mission and the comparison of algorithms using this criterion (Tai et al., 2020).

_Control Periods_

Control periods refer to the number of times the planner makes decisions to reach the goal. This measure is related to the number of iterations or steps the robot needs to complete the mission. If the robot moves at a constant linear velocity (v), the control periods provide an estimate of the time spent completing the mission. The larger the number of control periods, the larger the number of decisions made and, potentially, the longer the time required to reach the goal.

_Error-Based Criteria_

In a control system, the error is defined as the difference between the controlled variable (also known as the process variable) and the reference value or set-point. In a control system, the objective is for the error to tend to zero, which indicates good system performance. One way to evaluate the performance of a control system is to quantify the accumulated error. In the case of a mobile robot, the accumulated error provides a numerical measure of how "good" the performance of a specific controller is for the robot's traction or steering system. In discrete-time controllers, it is necessary to know the error e(nT) at each sampling instant, where T is the sampling period and fs is the sampling frequency. The sampling frequency has a direct impact on the accuracy of the measurements and on the controller's ability to detect and correct the error over time. Performance criteria based on the error or on the integral of the error have a well-established theoretical foundation in control systems and are widely used to evaluate and improve controller performance (Domanski, 2020). These criteria make it possible to analyze the behavior of the system in terms of error reduction and control stability.

_Final Error_

The final error refers to the discrepancy between the final position of the vehicle and the end point of an established reference trajectory. It is computed by measuring the distance between the final position of the vehicle and the desired end point of the reference trajectory.
This measure is especially useful in robotic underwater vehicles, since it makes it possible to detect whether the vehicle has lost the trajectory halfway through the tracking or has arrived at an incorrect final position (Perez et al., 2018). As an example, that article mentions that, by means of a developed platform, it is possible to automatically evaluate the performance of the obtained solutions using a specific metric. In this context, a measure that is typical of control algorithms for trajectory tracking is employed: the integrated squared error (ISE) and the final error. The ISE is computed as the sum over time of the distances to the ideal trajectory, which lies two meters above the pipeline. This measure is inversely related to the quality of the tracking, since it considers both the time spent and the tracking accuracy. Its growth rate is significantly higher when the distance between the vehicle and the optimal trajectory increases, but it also penalizes excessively slow tracking of the trajectory. In addition, it is essential to determine whether the end of the pipeline has been reached. To do so, the distance between the final position of the vehicle and the end of the pipeline is computed. In this way, it is possible to identify whether the vehicle has lost the pipeline halfway through the tracking or has incorrectly detected its end. Taking these measurements into consideration, the final evaluation is determined by equation 7:

\[\mathrm{score}=(1-\mathrm{error}^{2})\left(0.1-\frac{\mathrm{error}_{\mathrm{mean}}^{2}}{0.1}+\frac{\mathrm{time}-\mathrm{ref}}{100}\right) \tag{7}\]

where "error" represents the final error, "error\({}_{\mathrm{mean}}\)" is the mean error along the tracking computed from the ISE, "time" indicates the time needed to carry out the intervention, and "ref" refers to the time reference of each scenario, determined as a function of the distance traveled, the turns, and the changes in altitude. The first term of the equation evaluates the final position of the vehicle, penalizing situations in which the vehicle is far from the target. The second term evaluates the mean error along the trajectory. The last term is a bonus that favors fast tracking and penalizes slow tracking as a function of the complexity of the pipeline route. In the general design of controllers, there are commonly used performance criteria, such as indices involving the integral of the error (Suarin et al., 2019). These criteria are based on the accumulated error and can be applied to the tracking of reference trajectories, indicating the error along the entire path between the reference trajectory and the actual trajectory followed by the robot. These indices are also used in the control of position, distance, orientation, and multi-robot formation, among others (Caruntu et al., 2019) (Farias et al., 2020). The smaller the error, the better the traveled trajectory and, consequently, the better the control algorithm.
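As a small illustration of these tracking criteria, the sketch below computes the ISE, the final error, and a combined score following the reconstructed form of equation 7 above. The trajectory data, sampling step, and time reference are invented for the example and do not come from the cited works; the mean error is taken here as the RMS of the distances, which is one simple choice.

```python
import numpy as np

def tracking_metrics(path, ref_path, dt, total_time, time_ref):
    """Integrated squared error (ISE), final error, and a combined score as in Eq. (7)."""
    path, ref_path = np.asarray(path), np.asarray(ref_path)
    dists = np.linalg.norm(path - ref_path, axis=1)        # distance to the ideal trajectory
    ise = np.sum(dists**2) * dt                            # accumulated squared error over time
    mean_error = np.sqrt(np.mean(dists**2))                # mean error along the tracking
    final_error = np.linalg.norm(path[-1] - ref_path[-1])  # distance to the end of the reference
    score = (1 - final_error**2) * (0.1 - mean_error**2 / 0.1 + (total_time - time_ref) / 100)
    return ise, final_error, score

# Illustrative data: a noisy follower of a straight 2D reference trajectory
t = np.linspace(0, 10, 101)
ref = np.column_stack([t, 0.5 * t])
real = ref + np.random.default_rng(1).normal(scale=0.05, size=ref.shape)
ise, err, score = tracking_metrics(real, ref, dt=0.1, total_time=10.0, time_ref=12.0)
print(f"ISE = {ise:.3f}, final error = {err:.3f} m, score = {score:.3f}")
```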
### Safety Criteria

Research related to this topic, addressing performance criteria for robot control and navigation safety, is described in the study by Marvel and Bostelman in 2014. These performance criteria focus on the safety of the robot while it moves along a given trajectory, taking into account factors such as the distance between the vehicle and the obstacles encountered along its path, as well as the number of collisions that occur during navigation (Munoz et al., 2014). This research seeks to guarantee safe motion and to avoid possible accidents or damage during the operation of the robot.

The mean distance to obstacles over the whole navigation mission is another performance criterion used in robot control. This criterion makes it possible to evaluate how close to or far from the obstacles in its environment the robot is over the entire trajectory followed. In an environment without obstacles, this maximum value will be larger, since the robot can move freely without restrictions. On the other hand, if the mean-distance-to-obstacles index deviates less from the maximum value, it means that the route followed by the robot passed through areas more free of obstacles, which indicates better performance in terms of avoidance and safe navigation. This criterion is important to avoid collisions and to guarantee a clear, obstruction-free trajectory for the robot during its navigation mission.

The minimum mean distance to obstacles is another performance criterion used in robot control to evaluate safety during a navigation mission. The minimum distance value measured by each of the robot's n sensors is averaged. This criterion gives an idea of the risk incurred during the mission in terms of proximity to obstacles. In environments without obstacles, where there are no obstacles close to the robot, the minimum mean distance to obstacles will be the same for all sensors. This indicates that the robot has kept a safe distance and has not been exposed to significant risks of collision or contact with obstacles. In contrast, in environments with obstacles, the smaller the minimum mean distance to obstacles, the greater the risk incurred during the mission, since the robot has been closer to the obstacles. Therefore, a smaller value of this criterion indicates a higher probability of collisions or contact with obstacles.

_Energy Consumption_

Energy consumption is a key aspect in mobile robot applications, since it influences the robot's autonomy, that is, the time it can operate optimally before running out of energy. In recent years, special attention has been paid to this topic (Stefek et al., 2020). If a robot does not meet energy consumption requirements, such as the ability to operate independently and the possibility of recharging, its performance, operating time, and autonomy are limited (Heikkinen et al., 2018). Mobile robots rely heavily on batteries as an energy source, but these have a limited energy capacity. As a result, the robot's operating time is usually short, which may be insufficient for tasks or missions that require more time and energy to complete. Increasing the operating time by using more batteries or by directing the robot to a charging station can increase the cost or size of the system, which can cause control problems. An alternative is to improve the energy efficiency of the robot by reducing its energy consumption (Armah et al., 2016).
In summary, energy consumption is a crucial aspect of mobile robots, since it affects their autonomy and performance. It is important to optimize the design, the components, and the control algorithms to reduce energy consumption and improve efficiency. This will make it possible to extend the robot's operating time and its ability to operate without quickly exhausting its energy, without significantly increasing the size or cost of the system.

## Acknowledgements

The authors thank the Universidad Nacional Abierta y a Distancia for its funding under research project ECBTIPIE042022, "Implementacion de una tecnica de control basada en co-diseno H/S para robots moviles basados en FPGA" (Implementation of a control technique based on H/S co-design for FPGA-based mobile robots).
2308.03334
Variational quantum algorithm for ergotropy estimation in quantum many-body batteries
Quantum batteries are predicted to have the potential to outperform their classical counterparts and are therefore an important element in the development of quantum technologies. Of particular interest is the role of correlations in many-body quantum batteries and how these can affect the maximal work extraction, quantified by the ergotropy. In this work we simulate the charging process and work extraction of many-body quantum batteries on noisy-intermediate scale quantum (NISQ) devices, and devise the Variational Quantum Ergotropy (VQErgo) algorithm which finds the optimal unitary operation that maximises work extraction from the battery. We test VQErgo by calculating the ergotropy of a many-body quantum battery undergoing transverse field Ising dynamics following a sudden quench. We investigate the battery for different system sizes and charging times, and analyze the minimum number of ansatz circuit repetitions needed for the variational optimization using both ideal and noisy simulators. We also discuss how the growth of long-range correlations can hamper the accuracy of VQErgo in larger systems, requiring increased repetitions of the ansatz circuit to reduce error. Finally, we optimize part of the VQErgo algorithm and calculate the ergotropy on one of IBM's quantum devices.
Duc Tuan Hoang, Friederike Metz, Andreas Thomasen, Tran Duong Anh-Tai, Thomas Busch, Thomás Fogarty
2023-08-07T06:29:46Z
http://arxiv.org/abs/2308.03334v2
# Variational quantum algorithm for ergotropy estimation in quantum many-body batteries ###### Abstract Quantum batteries are predicted to have the potential to outperform their classical counterparts and are therefore an important element in the development of quantum technologies. In this work we simulate the charging process and work extraction of many-body quantum batteries on noisy-intermediate scale quantum (NISQ) devices, and devise the Variational Quantum Ergotropy (VQErgo) algorithm which finds the optimal unitary operation that maximises work extraction from the battery. We test VQErgo by calculating the ergotropy of a quantum battery undergoing transverse field Ising dynamics. We investigate the battery for different system sizes and charging times and analyze the minimum required circuit depth of the variational optimization using both ideal and noisy simulators. Finally, we optimize part of the VQErgo algorithm and calculate the ergotropy on one of IBM's quantum devices. ## I Introduction The allure of modern quantum technologies relies on leveraging quantum effects such as coherence and entanglement to out-perform their classical counterparts. In recent years this has been motivated by rapid experimental advances which has increased control over quantum states and has allowed to explore fundamental concepts in these devices. In particular, quantum thermal machines allow to explore the foundations of quantum thermodynamics, with devices such as quantum heat engines and refrigerators designed to control work output and heat flow with quantum media [1; 2; 3]. Energy can also be stored in quantum batteries to be extracted at a later time [4; 5; 6; 7; 8; 9; 10; 11; 12], which have the potential to outperform their classical counterparts in terms of total stored energy [13; 14], charging speed [15; 16; 17; 18; 19; 20] and energy extraction [21; 22]. The maximum amount of energy that can be extracted from quantum systems through unitary processes is given by the _ergotropy_[23] which relies on finding the optimal unitary operation which transforms the system to its lowest energy state, known as its _passive_ state. This can be a difficult task as the ergotropy can be sensitive to correlations which can also affect device performance, notably improving efficiency in quantum heat engines coupled to squeezed baths [24; 25; 26; 27], while impairing energy extraction from many-body batteries [28; 29; 30; 31; 7]. Simulation of the latter problem will be the focus of our work. Simulating the dynamics of many-body quantum systems in itself can be a complex problem due to the non-negligible role of quantum correlations which arise from finite couplings between particles. Furthermore, by today numerical calculations carried out on classical hardware are limited to small numbers of particles. This is in contrast to algorithms based on quantum hardware, which promise to alleviate some of this complexity by simulating quantum wave-functions in the Hilbert space of quantum bits, rather than numerically in classical registers. In addition to quantum physics and other fundamental sciences [32; 33; 34; 35; 36; 37; 38], quantum computers promise applications in various technological sectors including chemistry [39; 40; 41; 42; 43; 44] and materials design and research [45; 46; 47; 48; 49]. 
It is for this reason that they have seen unprecedented growth in recent years: reported milestones include simulation of dynamics and calculations of accurate expectation values on a 127 qubit device [50], demonstration of fast converging quantum-enhanced Markov chain Monte-Carlo simulations [51] and generation of large-scale cluster states on superconducting qubit devices [52]. While fault tolerant quantum computation (FTQC) based on error corrected qubits is still not technically possible, currently noisy intermediate-scale quantum (NISQ) processors are available. However, they only have short-lived qubits which are not protected from decoherence [53; 54; 55]. In this NISQ era, quantum algorithms rely on shallow circuits where qubits are measured quickly [56; 57]. There is therefore a need for quantum algorithms which can simulate quantum systems within a limited time-span while still solving problems which exceed the capabilities of their classical counterparts. Variational quantum algorithms [58; 59; 60] (VQA) have been deemed particularly promising for NISQ devices. These are a class of algorithms which can be used to find variational approximate solutions to problems of interest. A famous example is the variational quantum eigensolver (VQE) [56] which is used to determine the ground state of a Hamiltonian through repeated sam pling of an ansatz wave-function in the eigenbases of a set of observables. Amongst others, VQAs have been developed to solve the max-cut problem via the quantum approximate optimization algorithm [61], to find numerous chemical properties of molecules [62; 63; 64], or to perform machine learning tasks like the classification of symmetry protected topological phases [65]. While the performance of these algorithms is limited by barren plateaus [66], i.e., the problem of exponentially vanishing gradients with the system size, in recent years tools have been developed to study and mitigate this phenomenon [67; 68]. In addition, error mitigation has been shown to offer significant improvements when noise is an issue [69; 70; 71], and approaches inspired by FTQC have resulted in partial error correction schemes developed for NISQ devices [72; 73]. In this work we propose a VQA called variational quantum ergotropy (VQErgo) to calculate the ergotropy of a quantum battery on NISQ computers. We use the transverse field Ising spin-chain model to benchmark our algorithm whereby the battery is charged by a sudden quench of the interaction among nearest-neighbor spins. The interactions will create correlations between the spins which results in non-trivial dynamics of the ergotropy. In order to simulate the dynamics we use projected - Variational Quantum Dynamics (p-VQD) [74] to find the time-evolved quantum state and then a variational optimization is carried out to obtain the optimal unitary which prepares the passive state. The performance of the algorithm is analyzed for different system sizes and circuit depths, and its accuracy is assessed by comparison with exact results. We also analyze how the creation of correlations between spins can negatively affect the ergotropy estimation, requiring an increased circuit depth of the variational ansatz. Finally, we evaluate the effectiveness of our scheme in the presence of noise using a noisy simulator as well as real hardware. 
Our work represents one of the first NISQ algorithms designed specifically to calculate the ergotropy of quantum systems, expanding the tools already available to describe quantum thermodynamics and associated devices on quantum computers [75; 76; 77; 78]. The manuscript is organized as follows. In Sec. II, we briefly review the operation of quantum batteries and the concept of ergotropy. We also present the VQErgo algorithm which is used to calculate the ergotropy on quantum hardware, separated into four steps: initialization, time evolution/charging, mean energy calculation and passive energy optimization. Then, the transverse field Ising spin-chain Hamiltonian and the charging protocol for our quantum battery are introduced in Sec. III. We describe our main results in Sec. IV, including the dynamics of the system, the measurement of the total energy and the ergotropy from noise-free (state-vector) simulations, noisy simulations, and from calculations run on IBM quantum devices [79]. Finally, in Sec. V, we draw our conclusions and discuss future prospects for this algorithm.

## II Methods

### The maximal extractable work - ergotropy

We describe a quantum battery made from \(N\) identical quantum cells which are charged through unitary dynamics by suddenly switching on an external field \(V\). Initially the battery is prepared in the ground state \(\ket{\Psi\left(t=0\right)}\) of a local Hamiltonian \(H_{0}\), and during charging it evolves according to

\[H_{1}=H_{0}+V\,. \tag{1}\]

The state of the charged battery is therefore time-dependent, \(\ket{\Psi\left(t\right)}\), and energy is discharged by removing the external field \(V\), with the total work stored in the battery at time \(t\) then given by

\[W(t)=\bra{\Psi\left(t\right)}H_{0}\ket{\Psi\left(t\right)}-\bra{\Psi\left(0\right)}H_{0}\ket{\Psi\left(0\right)}\,. \tag{2}\]

While this is the total energy that is stored in the entire battery after the time \(t\), it is not necessarily all extractable, especially when only considering subsystems of the device. This would correspond to extracting energy from \(M\leq N\) cells of the battery, which could be required due to a restriction on accessing the full state of the system or in order to only partially discharge the battery. In this scenario energy can be locked in correlations between the \(M\) and \(N-M\) cells, thereby reducing the amount of energy that can be extracted [4; 6; 7]. The maximum amount of work that can be extracted from the \(M\)-cell state \(\rho^{M}=\mathrm{tr}_{N-M}\{\ket{\Psi\left(t\right)}\bra{\Psi\left(t\right)}\}=\sum_{j=1}\lambda_{j}\ket{\varphi_{j}}\bra{\varphi_{j}}\) (with \(\lambda_{j}\geq\lambda_{j+1}\)) through unitary transformations is given by the ergotropy [23], which is found by optimizing over all possible unitaries such that the resulting state has the minimum energy with respect to the Hamiltonian \(H_{0}=\sum_{i=1}\varepsilon_{i}\ket{\psi_{i}}\bra{\psi_{i}}\) (with \(\varepsilon_{i}\leq\varepsilon_{i+1}\))

\[\mathcal{E}=\mathrm{tr}\{H_{0}\rho^{M}\}-\min_{U}\{\mathrm{tr}\{H_{0}U\rho^{M}U^{\dagger}\}\}=\mathrm{tr}\{H_{0}\left(\rho^{M}-P_{\rho}\right)\}\,. \tag{3}\]

The minimum-energy state reached in this way is known as the passive state \(P_{\rho}=\sum_{i}\lambda_{i}\ket{\psi_{i}}\bra{\psi_{i}}\), and no further work can be extracted from it by unitary transformations.
The ergotropy can then be expressed in the well-known form [23] \[\mathcal{E}=\sum_{i}\left(p_{i}-\lambda_{i}\right)\varepsilon_{i}, \tag{4}\] where \(p_{i}=\sum\limits_{j}\lambda_{j}|\bra{\varphi_{j}}\psi_{i}\rangle|^{2}\) is the projection of \(\rho^{M}\) on the eigenstates of \(H_{0}\). To extract energy from the battery we therefore require that \(p_{i}\neq\lambda_{i}\). If the reduced state \(\rho^{M}\) is mixed there can be a difference between the work \[W(t)=\mathrm{tr}\{H_{0}\rho^{M}(t)\}-\mathrm{tr}\{H_{0}\rho^{M}(0)\} \tag{5}\] and the ergotropy, \(W(t)\geq\mathcal{E}(t)\), which becomes an equality if \(\rho^{M}\) is pure. In classical simulations of quantum batteries, the ergotropy is conventionally calculated by solving Eq. (4), i.e., by diagonalizing the sub-system Hamiltonian and the reduced density matrix of the battery state and by computing the relevant overlaps. The analogous way to run this sequence of calculations using NISQ hardware would be to first obtain the full spectrum of the Hamiltonian and its eigenstates [80; 81; 82]. Then estimates of the overlaps with the time-evolved wavefunction can be obtained by measurement in the Hamiltonian eigenbasis. However, we can instead consider the optimization problem in Eq. (3) which is naturally expressed in terms of expectation values that can be efficiently computed on a quantum computer. Furthermore, the optimization over unitary operators for the passive state can be naturally phrased in the language of variational quantum algorithms [58; 59; 60] as we detail in the following section. Hence, current state of the art quantum devices allow us to readily simulate and investigate the ergotropy and other properties of many-body quantum batteries. ### The Variational Quantum Ergotropy (VQErgo) algorithm In the following we describe our framework for simulating quantum batteries on quantum hardware and how to extract interesting properties, in particular the ergotropy. We therefore refer to the overall algorithm as the Variational Quantum Ergotropy (VQErgo) algorithm. VQErgo can be divided into 4 subroutines which are (i) battery initialization, (ii) battery charging, (iii) mean energy calculation and (iv) passive energy optimization as shown in Fig. 1. **Battery initialization.** The battery starts off in the uncharged state, corresponding to the ground state of the local Hamiltonian \(H_{0}\). Any ground state preparation routine (e.g. VQE) can be employed for this task. Note that in the rest of this work we choose the local Hamiltonian to be of the form \(H_{0}=-\sum_{i}^{N}\sigma_{i}^{z}\). Thus, the ground state \(\ket{0}^{\otimes N}\) naturally coincides with the initial computational basis state of digital-based quantum computers allowing us to omit a state preparation circuit. **Battery charging.** The battery is charged by time evolving the initial state with the Hamiltonian \(H_{1}\) for a total time \(t\). On digital quantum computers time evolution can be achieved by a Trotter-Suzuki decomposition of the global time evolution unitary into local gates. However, the number of Trotter steps (the number of gates) grows with time \(t\) and hence, generally, only short evolution times can be simulated on noisy hardware. To overcome this limitation, there have been several proposals for performing the time evolution variationally using parameterized circuits of fixed, short depths [83; 84; 85; 86]. Here, we employ the projected-variational quantum dynamics (p-VQD) algorithm due to its efficiency [74]. 
p-VQD iteratively evolves the parameters \(w(t+\delta t)=w(t)+dw\) of a state ansatz \(\ket{\psi_{w(t)}}=U(w(t))\ket{0}\) in short time increments \(\delta t\) by minimizing the infidelity between the ansatz state \(\ket{\psi_{w(t)+dw}}=U(w(t)+dw)\ket{0}\) and the true time evolved state \(\ket{\phi(t+\delta t)}=e^{-iH_{1}\delta t}\ket{\psi_{w(t)}}\)

\[dw=\arg\min_{dw}\left[\frac{1-\left|\left\langle\phi(t+\delta t)\mid\psi_{w(t)+dw}\right\rangle\right|^{2}}{\delta t^{2}}\right]. \tag{6}\]

The unitary \(e^{-iH_{1}\delta t}\) is typically approximated using the Trotter-Suzuki decomposition with a single Trotter step, given that the time step size \(\delta t\) is chosen sufficiently small. Importantly, the depths of the state ansatz circuit and of the circuits used for evaluating Eq. (6) do not grow with time \(t\). For further details regarding the p-VQD optimization, we refer to Appendix A.1. Let us note here that our quantum circuit framework for quantum battery simulation is highly modular and, in principle, any of the subroutine algorithms can be exchanged with other viable quantum algorithms for the respective tasks. Specifically, for time evolution, we mention the time-dependent variational algorithm (TDVA) [84; 85], and subspace variational quantum simulation (SVQS) [87], which may be used in cases where the number of excited states populated during time-evolution does not exceed the number of qubits. Additionally, one could restrict to quenches with Hamiltonians composed of only commuting terms, e.g., \(H_{1}=-\sum_{i}^{N}\sigma_{i}^{x}\sigma_{i+1}^{x}\). In this case, the time evolution operator is trivially decomposed into a single layer of two-qubit gates, which eliminates the need for involved, approximate time evolution algorithms. Finally, the recent advances in analog quantum computing provide yet another promising architecture for quantum battery simulations since time evolution is naturally implemented via global Hamiltonians [88; 89].

**Mean energy calculation.** The ergotropy of the quantum battery is calculated as the difference of the mean and passive energies of the charged state \(\ket{\psi(t)}\) after tracing over \(N-M\) sites (cf. Eq. 3). Specifically, the mean energy can be expressed as an expectation value of the local Hamiltonian acting only on the subsystem of \(M<N\) qubits

\[E_{\text{mean}}=\bra{\Psi\left(t\right)}H_{0}^{M}\otimes\mathbb{I}^{\otimes\left(N-M\right)}\ket{\Psi\left(t\right)}. \tag{7}\]

**Passive energy optimization.** The computation of the passive energy requires us to find the optimal unitary transformation \(U_{\mathcal{E}}\) acting on \(M\) qubits of the charged state that minimizes the expectation value of the local Hamiltonian within the subsystem

\[E_{\text{pass}}=\min_{U_{\mathcal{E}}}\left[\bra{\Psi\left(t\right)}\left(U_{\mathcal{E}}^{\dagger}H_{0}^{M}U_{\mathcal{E}}\right)\otimes\mathbb{I}^{\otimes\left(N-M\right)}\ket{\Psi\left(t\right)}\right]\,. \tag{8}\]

We can efficiently perform the optimization over unitaries on current quantum hardware using the tools of variational quantum algorithms. In particular, we define a circuit ansatz \(U_{\mathcal{E}}(\theta)\) composed of two-qubit and single-qubit gates with a set of parameters \(\theta\). The optimization of the passive state then amounts to finding the optimal parameters that minimize the expectation value in Eq. (8), which can be iteratively achieved by using a classical optimizer like gradient descent and the parameter-shift rule for evaluating gradients [90; 91].
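To make the passive-energy search of Eq. (8) concrete, the following is a small classical emulation in which the circuit ansatz is replaced by an explicit matrix built from layered single-qubit rotations and CNOTs, and the expectation value is minimized with a generic gradient-free optimizer. It is a sketch only: the layer count, optimizer settings, and the two-qubit reduced state are illustrative, and on actual hardware the cost would instead be estimated from measured expectation values (e.g., with SPSA or parameter-shift gradients), as described in the text.

```python
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def ansatz(theta, n_qubits=2, layers=3):
    """Hardware-efficient-style ansatz: Ry-Rz-Ry on each qubit, then a CNOT, per layer."""
    theta = np.reshape(theta, (layers, n_qubits, 3))
    U = np.eye(2 ** n_qubits, dtype=complex)
    for layer in theta:
        rots = [ry(a) @ rz(b) @ ry(c) for a, b, c in layer]
        U = np.kron(rots[0], rots[1]) @ U   # written out for the 2-qubit case
        U = CNOT @ U                        # entangling gate
    return U

def passive_energy(theta, rho_M, H0_M):
    U = ansatz(theta)
    return np.real(np.trace(H0_M @ U @ rho_M @ U.conj().T))

# Illustrative 2-qubit reduced state (diagonal, made-up populations) and H0^M = -(Z1 + Z2)
H0_M = -(np.kron(Z, I2) + np.kron(I2, Z))
rho_M = np.diag([0.1, 0.3, 0.4, 0.2]).astype(complex)

# A few random restarts reduce the chance of stopping in a poor local minimum
best = min((minimize(passive_energy, np.random.default_rng(s).normal(size=18),
                     args=(rho_M, H0_M), method="COBYLA")
            for s in range(5)), key=lambda r: r.fun)
E_mean = np.real(np.trace(H0_M @ rho_M))
print(f"E_mean = {E_mean:.3f}, E_pass = {best.fun:.3f}, ergotropy = {E_mean - best.fun:.3f}")
```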
In order to limit the amount of noise during the quantum simulation to a minimum, we employ a hardware-efficient ansatz for the p-VQD and the passive state variational circuits. This means that the ansatz circuits are composed of several layers each containing arbitrary, parameterized single-qubit rotations (decomposed into \(R_{Y}R_{Z}R_{Y}\) gates) followed by a series of CNOT gates applied only to neighboring qubits (see Fig. 1(b)). Note that the passive state circuit \(U_{\mathcal{E}}\) is only defined on the \(M<N\) subsystem qubits. Appendix A.2 contains additional details about the circuit optimization. ## III Model To model the quantum battery we consider the paradigmatic transverse field Ising spin-chain [92; 93; 6; 22]. The competition between the nearest-neighbour interactions and an external field can result in strongly correlated reduced states \(\rho^{M}\) which give rise to a non-trivial dependence of the ergotropy on the charging time \(t\) and subsystem size \(M\) as we show further below. Moreover, the system is amenable to simulations on currently available NISQ devices for a couple of qubits. The discharged battery is described by the non-interacting Hamiltonian \[H_{0}=-h\sum_{i=1}^{N}\sigma_{i}^{z}, \tag{9}\] where \(h>0\) is the external magnetic field, \(N\) is the number of spins (cells) in the battery, and \(\sigma_{i}^{k}\) with \(k=x,y,z\) denotes the spin-1/2 Pauli matrices. At \(t=0\) the Figure 1: (a) Schematic depiction of the Variational Quantum Ergotropy (VQErgo) algorithm. _Initialization:_ The uncharged battery state \(|\psi(0)\rangle\) given by the ground state of the local Hamiltonian \(H_{0}\) is prepared on the quantum device e.g. using the variational quantum eigensolver (VQE). _Time evolution/charging:_ The battery is charged by time evolving the state with another Hamiltonian \(H_{1}\). As an example, in this work we consider quenches with the transverse field Ising Hamiltonian \(H_{1}=-J\sum_{i=1}^{N-1}\sigma_{i}^{x}\sigma_{i+1}^{x}-h\sum_{i}^{N}\sigma_{i} ^{z}\). We approximate the time evolution unitary \(U_{\rm TE}=\exp(-iH_{1}t)\) via a variational circuit \(U_{\rm TE}(\omega)\) optimized using the projected-variational quantum dynamics (p-VQD) algorithm. We also consider the case in which the magnetic field is switched off (\(h=0\)) and the time evolution unitary can be exactly decomposed into a single layer of two-qubit gates. _Mean energy measurement:_ The ergotropy is the difference between the mean and passive energy (c.f. Eq. (3)). The former can be measured as an expectation value of \(H_{0}^{M}\) on \(M<N\) qubits of the time evolved state \(|\Psi(t)\rangle\). (b) _Passive energy optimization:_ The passive energy is defined as the minimum attainable expectation value of \(H_{0}^{M}\) over all unitary transformations \(U_{\mathcal{E}}\) acting on the time evolved state. We express \(U_{\mathcal{E}}(\theta)\) in terms of a variational circuit with parameters \(\theta\) which are optimized using a typical classical-quantum feedback loop. battery is initialized in the spin polarized ground state \(\left|\Psi(0)\right\rangle=\left|\uparrow\right\rangle^{\otimes N}\equiv\left|0 \right\rangle^{\otimes N}\) and thus, naturally coincides with the initial state of the quantum computer. 
In order to charge the battery we implement a sudden quench \(H_{0}\to H_{1}\) for \(t>0\) which switches on the nearest-neighbor interaction \[H_{1}=-h\sum_{i=1}^{N}\sigma_{i}^{z}-J\sum_{i=1}^{N-1}\sigma_{i}^{x}\sigma_{i+ 1}^{x}, \tag{10}\] where \(J\) is the coupling strength and we consider open boundary conditions. We simulate the time evolved state \(\left|\Psi(t)\right\rangle=\exp(-iH_{1}t)\left|\Psi(0)\right\rangle\) on a quantum computer using the aforementioned p-VQD algorithm. In Appendix B we provide results obtained with an alternative charging protocol in which the external field is turned off during the charging time (\(h=0\)). In this case, the time evolution operator can be trivially decomposed into a single layer of local two-qubit gates and thus, be implemented without the need for a variational optimization. Throughout the remainder of this work we set \(\hbar=1\) and consider the transverse field Ising model with fixed parameters \(h=0.6\) and \(J=2\). The quench dynamics of the state can be computed directly through \[\left|\Psi\left(t\right)\right\rangle=\sum_{j}\langle\Phi_{j}^{F}|\Psi\left(0 \right)\rangle\exp\left(-\frac{iE_{j}^{F}t}{\hbar}\right)|\Phi_{j}^{F}\rangle, \tag{11}\] where \(|\Phi_{j}^{F}\rangle\) and \(E_{j}^{F}\) are the eigenstates and eigenvalues of \(H_{1}\). The stored work and ergotropy are then calculated using Eqs. (5) and (4), respectively. Fig. 2 shows the work \(W\) and ergotropy \(\mathcal{E}\) stored in \(M\) cells as a function of charging time for a total system size of \(N=8\) spins. We also plot their ratio \(\mathcal{E}/W\) describing how efficiently the battery can be discharged which saturates for \(M=N\) as expected. One can see that immediately following the quench the work and the ergotropy rapidly increase as the quench drives the system far out-of-equilibrium, while subsequent oscillations are the result of finite size effects. We note that the maximum ergotropy and injected work can be achieved at short charging times \(t\sim 0.4\) for \(M>2\) and the charging process subsequently stabilizes in the region \(2\lesssim t\lesssim 6\) after which revivals induce further oscillations. In general, the larger the sub-system \(M\), the more work, ergotropy and efficiency can be achieved. However, for smaller cell size \(M\) the efficiency necessarily suffers as correlated cells in the rest of the battery (\(N-M\)) are discarded. This is apparent for times \(t>2\) when the reduced state \(\rho^{M}\) is sufficiently mixed. We also note that, for long intervals for the \(M=1\) system the ergotropy is exactly zero although the total injected work is non-zero (see panel (b) and (c)). This is related to the equivalence of the reduced and passive states when \(M=1\), and will be discussed in detail in the next section. ## IV Results ### VQErgo state-vector simulation First we simulate the quantum battery using our proposed VQErgo algorithm and analyze the variational optimization in an ideal, noise-free setting via statevector simulation. We restrict ourselves to charging times \(0<t<1.4\) which include the first two maxima of the work and ergotropy curves (see Fig. 2). We simulate the time evolution starting from the polarized product state using p-VQD which optimizes the variational circuit parameters iteratively in small time increments and hence we automatically obtain the evolved states at all intermediate times as well. All the details regarding the optimization are reported in Appendix A.1. 
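Since the exact-diagonalization results above serve as the benchmark for the variational data, a compact classical reference implementation of Eqs. (4), (5) and (9)-(11) fits in a few lines. The sketch below is not the authors' code; the helper names are ours, and only the parameters quoted in the text (N = 8, h = 0.6, J = 2) are taken from the paper.

```python
import numpy as np
from functools import reduce

I2, X, Z = np.eye(2), np.array([[0, 1], [1, 0]]), np.diag([1.0, -1.0])

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit chain."""
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

def tfim(n, h, J):
    H0 = -h * sum(op(Z, i, n) for i in range(n))                      # Eq. (9)
    V = -J * sum(op(X, i, n) @ op(X, i + 1, n) for i in range(n - 1))
    return H0, H0 + V                                                  # Eq. (10)

def ergotropy(rho_M, H0_M):
    """Eq. (4): reorder the populations of rho_M onto the spectrum of H0_M."""
    eps, _ = np.linalg.eigh(H0_M)                    # ascending energies
    lam = np.sort(np.linalg.eigvalsh(rho_M))[::-1]   # descending populations
    E_mean = np.real(np.trace(H0_M @ rho_M))
    return E_mean - np.sum(lam * eps)

N, M, h, J, t = 8, 6, 0.6, 2.0, 0.4
H0, H1 = tfim(N, h, J)
E, P = np.linalg.eigh(H1)
psi0 = np.zeros(2 ** N); psi0[0] = 1.0                      # |0...0>, ground state of H0
psi_t = P @ (np.exp(-1j * E * t) * (P.conj().T @ psi0))     # Eq. (11)

W = np.real(psi_t.conj() @ H0 @ psi_t - psi0 @ H0 @ psi0)   # Eq. (5) for the full chain
rho = np.outer(psi_t, psi_t.conj()).reshape(2 ** M, 2 ** (N - M), 2 ** M, 2 ** (N - M))
rho_M = np.einsum('ajbj->ab', rho)                          # trace out the last N-M spins
H0_M, _ = tfim(M, h, J)
print(f"W(t={t}) = {W:.3f},  ergotropy of first {M} cells = {ergotropy(rho_M, H0_M):.3f}")
```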
In particular, for a given number of spins \(N\) we repeat the optimization with different circuit depths, i.e., different numbers of variational parameters, and compare the p-VQD states to the exact time evolved states in Fig. 9 of Appendix A.1. For each simulated system size we choose a final p-VQD circuit depth that gives rise to small errors with respect to the exact state. Once the optimized time evolution circuit is obtained we can measure the mean energy on the circuit (see Eq. (7)), which allows us to calculate the stored work \(W\). Next, we perform the VQErgo optimization to prepare the passive state on a subsystem of \(M<N\) qubits and measure the passive energy from which we can extract the ergotropy \(\mathcal{E}\). In Fig. 3 we show the results obtained for a total system size of \(N=8\) qubits and subsystem sizes \(M=1,\ldots,7\). We compare the exact ergotropy (orange line) to the ergotropies evaluated on optimized circuits \(U_{\mathcal{E}}\) of different depths. Each point represents an average over 100 different runs of the algorithm (i.e., using different random seeds for the circuit initialization). Overall, we find a good agreement of the variationally obtained ergotropies with their exact values, given that the circuit depth is chosen large enough. Note that some of the observed discrepancies have to be attributed to the preceding p-VQD optimization, which also introduces an error in the state. To further understand the dynamics of the ergotropy in Fig. 3 we examine its constituent time-dependent parts, namely the exact probabilities \(p_{i}\) of the reduced state and the \(\lambda_{i}\) of its passive state. These are shown in Fig. 4 for both \(M=1\) and \(M=6\). The simplest case is \(M=1\), as its dynamics is that of a two-level system (there are only two accessible states), with \(\lambda_{i}=p_{i}\) for times \(t<0.4\) and therefore the ergotropy is zero. At \(t=0.4\) there is a crossing in the probability distribution with \(p_{2}>p_{1}\), and finite energy can now be extracted from the battery through reordering of these occupancies (see Fig. 3(a)). For \(t\geq 1.2\) a subsequent crossing restores the original ordering of the probabilities and thus the ergotropy again vanishes. This behaviour is echoed in the dynamics of larger subsystems, albeit with more complexity, as the number of occupied states \(p_{i}\) and \(\lambda_{i}\) is increased. For instance, the dynamics of \(\lambda_{i}\) for \(M=6\) possesses a similar structure (see Fig. 4(c)) with contributions mainly from the two lowest energy eigenstates. On the contrary, the dynamics of \(p_{i}\) is more complex and includes contributions from higher energy eigenstates of \(H_{0}^{M}\). This results in a non-zero ergotropy at all times \(t>0\).

Figure 2: (a) The total injected work \(W\), (b) the ergotropy \(\mathcal{E}\) and (c) the efficiency of the battery \(\mathcal{E}/W\) as a function of the charging time \(t\). The total system is comprised of \(N=8\) spins while each line corresponds to a different sub-system size \(M\) from which energy is extracted.

Figure 4: (a), (c) The passive state probabilities \(\lambda_{i}\) as a function of \(t\) for \(M=1\) and \(M=6\), respectively. In (b), (d) we show the corresponding reduced state probabilities \(p_{i}\). For \(M=6\) we plot \(\lambda_{i}\) at (e) \(t=0.4\) and (f) \(t=0.8\), and \(p_{i}\) at (g) \(t=0.4\) and (h) \(t=0.8\). Data in all figures are obtained numerically via exact diagonalization, with the specific times \(t=\{0,0.2,0.4,0.6,0.8,1,1.2,1.4\}\) denoted by square markers.
Figure 3: The ergotropy \(\mathcal{E}\) as a function of the charging time \(t\) for different subsystem sizes \(M\) and a total system size of \(N=8\). The grey dashed line denotes the work \(W\) stored in the battery cells. The orange line corresponds to the value of the ergotropy computed numerically from exact diagonalization (ED) calculations while the markers show the variationally obtained ergotropies for different circuit depths of the passive state ansatz. Each point is an average over 100 optimization runs using a statevector simulation (the standard deviation is indicated by the error bars).

In Figs. 4(e) and (g) we show \(\lambda_{i}\) and \(p_{i}\) at \(t=0.4\), which corresponds to the maximum ergotropy for \(M=6\). The large difference between the reduced state and its passive state is readily apparent, as \(p_{i}\) is distributed over all possible states, while \(\lambda_{i}\) is again concentrated around the two lowest eigenstates. However, at \(t=0.8\) the ergotropy has a local minimum as the \(p_{i}\)'s occupy lower energy states owing to less work stored in the battery (see Fig. 4(h)). Similarly to other variational quantum algorithms, the circuit depth of the ansatz is an important hyper-parameter of the optimization. Fig. 3 suggests that for VQErgo the required depth depends both on the subsystem size \(M\) and the charging time \(t\). For better visualization, we plot the error in the measured ergotropies as a function of the subsystem size in Fig. 5 at \(t=0.4\) and \(t=0.8\). In the case of a single quantum cell \(M=1\), we always only require one general single-qubit rotation to prepare the passive state. However, with increasing subsystem size \(M>1\) more layers of single- and two-qubit gates are needed to reduce errors. This is due to correlations that are spread over larger distances within the system, which can be quantified through the \(C_{XX}\) and \(C_{ZZ}\) correlations between qubit \(i\) and a second qubit at \(i+\ell\)

\[C_{XX/ZZ}(i,\ell)=|\langle\sigma_{i+\ell}^{x/z}\sigma_{i}^{x/z}\rangle-\langle\sigma_{i+\ell}^{x/z}\rangle\langle\sigma_{i}^{x/z}\rangle|^{2}, \tag{12}\]

where \(\langle\cdot\rangle=\langle\Psi(t)|\cdot|\Psi(t)\rangle\) denotes an expectation value calculated with the time evolved state. We take qubit \(i=4\) at the center of the \(N=8\) spin chain as an example and plot its correlations with the other qubits as a function of time in Figs. 6(a) and (b). The maximum ergotropy coincides with the maximum correlations in the \(x\)-direction, while correlations in the \(z\)-direction vanish. Furthermore, up to times \(t\lesssim 0.6\) the qubit is correlated only with its nearest neighbors at \(\ell=\pm 1\). We observe a lack of long-range correlations also for the other spins in the chain (not shown here) and can thus infer that a single layer of two-qubit gates (paired with parameterized single-qubit rotations) is sufficient to disentangle all qubits of the subsystem and prepare the exact passive state. However, this is not the case for times \(t>0.6\) as long-range correlations and entanglement are built up. We therefore require multiple layers of two-qubit gates to rotate the reduced state into the uncorrelated basis set of the passive state and hence, to increase the accuracy of the ergotropy estimation. This is apparent in Fig. 5(b) which shows a significant increase in error at \(t=0.8\) (note the difference in order of magnitudes between Figs. 5 (a) and (b)).
However, we have found that for the particular Ising system considered here, the errors quickly decrease with circuit depth and two layers are often already sufficient. Any extra layers provide only a small additional advantage, which suggests that the circuit depth scales sub-linearly with the battery cell size \(M\), making the optimization less prone to barren plateaus [94, 66].

Figure 5: The absolute error between the ergotropy calculated variationally via a statevector simulation and its exact value versus the battery subsystem size \(M\). We show the error for two distinct charging times \(t=0.4\) (a) and \(t=0.8\) (b) for different circuit depths of the passive state ansatz. Note the different orders of magnitude in the error for the two considered times, which indicates that the required circuit depth depends not only on the cell size \(M\), but also on the charging time \(t\).

Figure 6: The Pauli-\(XX\) (a) and \(ZZ\) (b) correlations in the charged state between the 4th qubit at the center of the chain and a qubit \(\ell\) sites apart as a function of the charging time \(t\). The insets show the respective correlations as a function of the distance \(\ell\) for two specific times \(t=0.4\) and \(t=0.8\). For charging times \(t\lesssim 0.6\) nearest neighbor correlations dominate, while at later times \(t>0.6\) long-range correlations also appear. All data is from exact diagonalization calculations.

### VQErgo quantum device experiments

Following the analysis of VQErgo under ideal, noise-free conditions, we now evaluate its performance on a real quantum device. To that end, we perform VQErgo on one of the freely accessible quantum computers provided through the IBM Quantum cloud. While the most recent state-of-the-art quantum computers operate on more than 100 qubits and feature small error rates [50], the freely available quantum devices are still small in size and very noisy. Hence, we restrict ourselves to quantum batteries comprised of only a handful of spins that can be simulated with shallow-depth circuits and can be mapped to the device qubit layout without the need for long-range gates (or SWAP gates). We also complement our real-hardware experiments with noisy classical simulations that mimic the device noise model. All our experiments are performed on the 7-qubit ibm_perth device and its classical simulator analog FakePerth.

**Full VQErgo results**

We start by running the full VQErgo pipeline (including the p-VQD and passive state optimization) on the noisy classical simulator using the SPSA optimizer with 250 optimization steps and 2048 shots per measurement. The training curves and any further technical details can be found in Appendix A.2. We report the final measured work and ergotropy as a function of the charging time for a system with \(N=2\) and \(M=1\) battery cells in Fig. 7. Each point is again an average over 100 independent runs of the algorithm. For this small battery system, the ergotropies are in good agreement with their exact values. Any discrepancies and the increased standard deviation compared to the statevector simulation can be attributed to various error sources, such as shot noise, state preparation and measurement (SPAM) errors, and coherent and incoherent noise. Note that the observed error in the work and ergotropy increases slightly with time, which is to be expected since p-VQD iteratively evolves the ansatz state in time and hence errors naturally build up in the charged state.

Figure 7: The total amount of stored work \(W\) (dashed line) and the ergotropy \(\mathcal{E}\) (solid line) as a function of the battery charging time \(t\) computed via exact techniques. We also show their values extracted via VQErgo using an ideal statevector simulation and a noisy classical simulation with FakePerth, which mimics the ibm_perth quantum device. The results are an average over 100 independently run optimizations. The error bars indicate the corresponding standard deviation.
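Since the device runs above rely on SPSA, a bare-bones version of the optimizer is sketched here to illustrate its key property, namely that each step needs only two cost evaluations irrespective of the number of parameters. The default of 250 steps mirrors the setting quoted above, while the gain constants and the toy cost function are generic placeholders, not values used in this work; in the actual optimization the cost would be the shot-based estimate of the passive energy on the ansatz circuit.

```python
import numpy as np

def spsa_minimize(cost, theta0, n_steps=250, a=0.1, c=0.1, seed=0):
    """Minimal SPSA loop: two cost evaluations per iteration, for any dimension."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_steps + 1):
        ak = a / k**0.602                    # standard SPSA gain schedules
        ck = c / k**0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # random +/-1 directions
        grad = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2.0 * ck) * delta
        theta = theta - ak * grad
    return theta

# toy usage with a quadratic stand-in for the measured passive energy
theta_opt = spsa_minimize(lambda th: float(np.sum((th - 1.0) ** 2)), np.zeros(4))
```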
**Noise-free p-VQD optimization**

Using the noisy simulator has not allowed us to obtain converged results for p-VQD with \(N=4\). This can be understood from the fact that p-VQD carries out a variational optimization for each time-step that is simulated. Therefore any error resulting from the noisy hardware, or a simulation of it, compounds with every iteration. As explained in Appendix A.1, when \(N=4\) the depth of the p-VQD circuit must be increased to twice what it is when \(N=2\), while still needing 14 iterations. Although circuits of this depth can be run without the resulting quantum state decohering totally, the accumulated error due to noise is too great to result in accurate time evolution. However, with the optimized p-VQD parameters from state-vector simulations we are still able to show convergence of the passive state optimization using the noisy simulator and the actual device. In the remainder of this work we therefore perform the p-VQD optimization using the classical statevector simulation and only run the optimized time evolution circuit on the quantum device, followed by the variational passive state optimization. Note that in Appendix B we consider a simplified charging protocol that does not require variational time evolution and thus can be executed end-to-end on current hardware.

In Fig. 8 we display the real-device and noisy simulator results obtained for \(N=4\) and subsystem size \(M=2\). The measured injected work is in nearly perfect agreement with its theoretically computed values, with the exception of the point at time \(t=0.6\), which is slightly lower. We believe that this outlier can be attributed to naturally occurring fluctuations in the experimental hardware over time (our simulations have been performed over several weeks). On the other hand, the evaluated ergotropies are consistently lower than their exact values for both the noisy simulator and the real-device experiments. We expect this to be the result of decoherence, as the passive state circuit contains two additional layers of CNOT gates compared to the circuit the work was measured on. It would be interesting to investigate whether quantum error mitigation such as zero-noise extrapolation can improve these results [95, 96, 97]. However, despite the small discrepancies, the qualitative dependence of the ergotropy on time can be successfully inferred. Importantly, VQErgo allows us to determine the time at which the ergotropy becomes maximal, which is crucial for designing many-body quantum batteries that perform optimally.

## V Conclusions

In this work, we have studied the ergotropy, i.e., the maximal extractable work, of quantum batteries. We have shown that the calculation of the ergotropy can be naturally phrased in terms of a variational quantum algorithm and thus the ergotropy is readily amenable to current NISQ device computations.
We have embedded the ergotropy calculation in an end-to-end variational simulation routine for quantum batteries called VQErgo that includes battery initialization, charging, and the ergotropy estimation. Note that due to the modularity of the presented framework, different algorithms for any of its subroutines like initial state preparation or time evolution can be chosen and adapted to the system at hand. We tested VQErgo on a battery undergoing transverse field Ising dynamics. To that end, we investigated the passive state optimization and the required depth of the variational circuit with the battery cell size and charging time. In particular, we showed that a circuit depth larger than unity is necessary beyond a critical charging time after which long-range correlations set in. Subsequently, we demonstrated VQErgo using a noisy classical simulator with noise characteristics from a current IBM quantum device, and demonstrated that the passive state optimization can be carried out on the actual physical device. In both cases, we were able to successfully measure the injected work and ergotropy for different charging times. While the estimated ergotropies were slightly below their true exact values, the qualitative dependence of the ergotropy with time still matched the theoretical predictions. In particular, the results allow us to infer the optimal charging time of the quantum battery that is leading to the maximal ergotropy. Our algorithm is also not restricted to the simulation of quantum batteries, but is also amenable to other thermodynamic devices which depend on the ergotropy, such as quantum heat engines coupled to squeezed baths [24, 25, 26, 27], quantum flywheels [98, 99], and can also be used to measure genuine multipartite entanglement [100]. In this work we have shown a viable path towards studying many-body quantum batteries using quantum hardware. It is remarkable that the non-trivial dynamics of the transverse field Ising model can be probed even on the relatively noisy 7 qubit ibm_perth device as this is a device with a quantum volume of only 32 [79, 101]. In contrast, the state of the art Falcon r10 device is reported to have a quantum volume of 256 [102]. The complexity of the types of systems that can currently be interrogated with VQErgo is therefore expected to exceed the capabilities we have shown here. As a concrete example, it is feasible that the p-VQD optimization, which we had to carry out using a state-vector simulator for \(N=4\), could be carried out on a quantum volume 256 device. ###### Acknowledgements. This work is supported by the Okinawa Institute of Science and Technology Graduate School (OIST). The classical simulations were performed on the high-performance computing cluster (Deigo) provided by the Scientific Computing and Data Analysis section at OIST. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. FM acknowledges support by the NCCR MARVEL, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 205602). TF acknowledges support from JSPS KAKENHI Grant Number JP23K03290. TF and TB are also supported by JST Grant Number JPMJPF2221. Figure 8: The exact stored work \(W\) (dashed line) and ergotropy \(\mathcal{E}\) (solid line) are plotted against the charging time \(t\) for a system with \(N=4,M=2\). 
The results were obtained using state-vector optimized p-VQD parameters together with a noisy simulation of the ibm_perth backend as well as actual device results. For the real-device experiments only a single run of the passive state optimization is carried out, while we display the average and standard deviation of 100 runs in the case of the noisy classical simulator. Due to the limited available quantum computing time, we performed the VQErgo optimization on ibm_perth only on a subset of the shown times. ## Appendix A Technical details of the optimization All quantum simulations were performed using the Qiskit python library [103] and the Qiskit Runtime Estimator primitive. In all shot-based simulations, expectation values were estimated using 2048 shots. Furthermore, we employed readout error mitigation implemented in Qiskit in all simulations that were subject to noise. ### p-VQD We optimize p-VQD using the BFGS optimizer for the state-vector simulations and SPSA [104] for the noisy simulations. While SPSA only performs approximate gradient descent, it also only requires two circuit evaluations per optimization step independent of the number of parameters and can thus be efficiently executed on quantum devices. Moreover, the stochasticity in the perturbation directions make it robust to noise. The fidelity in Eq. (6) is evaluated via sampling and can be replaced by a local cost function to make the optimization less prone to barren plateaus [74, 94]. As a termination condition for the BFGS optimizer we set a precision goal of \(10^{-6}\) in the cost function, i.e., the infidelity in Eq. (6). When using SPSA we set the number of optimization steps per time step to 1000 instead. We run several state-vector simulations of the evolution up to a total time \(t=1.4\) with different time increments \(\delta t\), circuit depths, and 3 random seeds to determine the optimal hyper-parameters that minimize the infidelity with respect to the exact state computed using the QuSpin package [105]. In Fig. 9 we plot the infidelity as a function of time for 4 different system sizes showing only the best out of each run. As expected, we find that the infidelity on average increases with time as errors build up in the time evolved state. Furthermore, the infidelity also grows with system size and as such we require larger circuit depths to faithfully represent the increasingly correlated quantum states. For the \(N=8\) state-vector simulations of Section IV.1 we use a circuit depth of 5, for the \(N=2\) noisy simulations of Fig. 7 we set the depth to 1 and for the \(N=4\) quantum device experiments of Fig. 8 the depth is equal to 2. Note that in the latter case, we used the pre-optimized p-VQD parameters from the state-vector simulation to time-evolve the state on the noisy hardware. Additionally, we experimented with different numbers of time-steps and found that the optimal number of time-steps needed is 7 for \(N=6\) and \(N=8\) while it is 14 for \(N=2\) and \(N=4\). ### Passive state optimization Analogous to p-VQD, we use the BFGS optimizer for the statevector simulations and SPSA for the noisy and real-device simulations. With the exception of the hardware experiments for which only a single data point per time is collected, we repeat each classical simulation with 100 random seeds and take the average. Fig. 
10(a) shows the average number of required BFGS optimization steps to reach a precision of \(10^{-6}\) in the cost as a function of the subsystem size \(M\) for an example of a charged state at \(t=0.4\). The circuit depth was fixed to 2. Note that the number of parameters in the circuit ansatz grows linearly in \(M\). We also display the observed standard deviation of the ergotropy versus the subsystem size (see Fig. 10(b)). In Fig. 11 we show two training curves of the passive state optimization using SPSA that were collected for the two exemplary charging times \(t=0.5\) and \(t=0.9\) of Fig. 7 in the main text. The dark (bright) color corresponds to the mean (standard deviation) over 100 independent noisy simulations using the FakePerth backend, while the dashed line indicates the theoretically exact ergotropy. The optimization usually converged within the first 50 to 100 steps. However, the final value can deviate from its exact prediction due to noise in the mean and passive energy measurements as well as errors arising in the p-VQD time evolution. We also show two typical training curves for the real-device experiment results (see Fig. 8 in the main text) performed on ibm_perth in Fig. 12. Rather than measuring the passive energy after each optimization step, which would require additional circuit evaluations, we instead estimate its value by averaging the two expectation values used by SPSA at each iteration.

## Appendix B Alternative charging protocol

In this section we provide results obtained with a simplified battery charging scheme that does not require Trotterization or variational optimization and thus can be easily implemented on NISQ hardware. Instead of evolving the system with the transverse field Ising Hamiltonian of Eq. (10), we turn off the magnetic field during the quench and only evolve with the term containing the nearest-neighbor coupling \[H_{1}=-J\sum_{i=1}^{N-1}\sigma_{i}^{x}\sigma_{i+1}^{x}. \tag{12}\] Note that \(H_{1}\) is composed of only commuting terms. Therefore, the time evolution operator \(e^{-iH_{1}t}\) can be exactly decomposed into a single layer of two-qubit gates acting only on neighboring spins, leading to \[\ket{\Psi\left(t\right)}=e^{-iH_{1}t}\ket{0}=\prod_{i=1}^{N-1}R_{XX}^{i,i+1} \left(\theta\right)\ket{0}, \tag{13}\] where \(R_{XX}^{i,i+1}\left(\theta\right)=\exp\left(-i\frac{\theta}{2}\sigma_{x}^{i} \otimes\sigma_{x}^{i+1}\right)\) and \(\theta=-2Jt\).

### Statevector simulation

We run VQErgo on charged states of an \(N=10\) spin system and show the achieved final ergotropies for different subsystem sizes \(M\) in Fig. 13. Interestingly, we find that a variational circuit depth of 1 is sufficient to achieve high accuracy with respect to the exactly computed values (orange line) for all considered cell sizes and charging times. This suggests that the evolved state only contains nearest-neighbor correlations irrespective of the charging time \(t\). Moreover, we observe that the stored work and ergotropy coincide at times \(t=(k+1/2)\pi/J\), with \(k=0,1,2,\dots\), reaching an energy \(W=\mathcal{E}=2h\). At these times the charged state is in a fully disentangled product state. The ergotropy in this case depends solely on the number of qubits \(M<N\) from which we want to extract the energy instead of the full system size \(N\). This is shown in Fig. 14 for \(N=6\), whereby the ergotropy and work are the same as in the \(N=10\) case in Fig. 13.
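Eq. (13) can be reproduced with a few lines of dense linear algebra for small chains. The sketch below is a plain NumPy construction (qubit 0 is taken as the leftmost, most significant site, and the battery subsystem is assumed to be the first \(M\) sites; both are conventions chosen here, not specified in the excerpt). The ergotropy of the resulting reduced state can then be obtained with the reordering routine sketched earlier.

```python
import numpy as np

def rxx(theta):
    """Two-qubit gate exp(-i*theta/2 * sigma_x sigma_x) in the computational basis."""
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return np.array([[c, 0, 0, s],
                     [0, c, s, 0],
                     [0, s, c, 0],
                     [s, 0, 0, c]], dtype=complex)

def charged_state(n, J, t):
    """|Psi(t)> = prod_i R_XX^{i,i+1}(theta) |0...0> with theta = -2Jt, cf. Eq. (13)."""
    theta = -2.0 * J * t
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0
    gate = rxx(theta)
    for i in range(n - 1):                          # the gates commute, so the order is irrelevant
        psi = psi.reshape(2**i, 4, 2**(n - i - 2))
        psi = np.einsum('ba,iaj->ibj', gate, psi)   # act on sites i and i+1
        psi = psi.reshape(-1)
    return psi

def reduced_state(psi, m, n):
    """Density matrix of the first m sites (the assumed battery subsystem)."""
    mat = psi.reshape(2**m, 2**(n - m))
    return mat @ mat.conj().T

rho = reduced_state(charged_state(6, J=1.0, t=0.7), m=2, n=6)
```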
Note that for the case of \(M=1\), it is also possible to obtain an analytical expression for the ergotropy \[\mathcal{E}(t)=\begin{cases}0,&\text{if }\tan^{2}(Jt)\leq 1\\ 2h\left[\sin^{2}(Jt)-\cos^{2}(Jt)\right],&\text{if }\tan^{2}(Jt)>1\end{cases}. \tag{14}\] Overall, and unsurprisingly, the dynamics in this case is simpler than that generated by the full transverse field Ising Hamiltonian. The time-evolved state is entangled only over short distances and thus the passive state can be prepared with at most a single layer of nearest-neighbor two-qubit gates. It would be interesting to study the quantum battery with charging protocols interpolating between the simplified case discussed here and the Ising dynamics from the main text, by applying a small number of quenches with alternating non-commuting generators.

Figure 10: (a) The average number of BFGS iterations required to achieve a final precision of \(10^{-6}\) in the cost of the passive state optimization as a function of the subsystem size \(M\). (b) The corresponding standard deviation in the ergotropy over 100 runs. The optimizations were performed using the noise-free statevector simulator, a total system size \(N=8\), a circuit ansatz depth of 2, and a charging time \(t=0.4\).

Figure 9: Infidelity between the p-VQD optimized state (via ideal, noise-free statevector simulations) and the exact time-evolved state for system sizes \(N=2\) (a), \(N=4\) (b), \(N=6\) (c), \(N=8\) (d), and different circuit depths of the variational circuit. The optimizations have been performed using BFGS for the Ising chain dynamics defined in the main text.

### Noisy simulations

We have also tested VQErgo with the simplified charging protocol on noisy simulators and hardware. We choose 6 qubits of the 7-qubit ibm_perth device and plot an average of the measured ergotropies for different subsystem sizes \(M\) obtained on the noisy classical simulator in Fig. 14. The depth of the passive state ansatz circuit is 1. However, extra SWAP gates are required to map the full circuit to the underlying topology of the real device, which ultimately introduces more noise. The error between the exact and variationally obtained ergotropies grows with the battery cell size. On the other hand, the error is independent of the charging time, which is in contrast to the p-VQD based simulation where errors naturally built up over time. Despite the noisy values, we can successfully infer the qualitative dependence of the ergotropy on the charging time. Finally, we also report two results obtained on the ibm_perth quantum device for a system with \(N=2,M=1\) in Fig. 15. Again, we find an overall good agreement between the measured values and their exact prediction.
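As a quick consistency check of Eq. (14): for an edge cell, the state of Eq. (13) leaves the first qubit in a diagonal reduced state with populations \(\cos^{2}(Jt)\) and \(\sin^{2}(Jt)\), so the reordering construction reproduces the piecewise expression exactly. The snippet below verifies this, assuming (as in the sketches above) a local cell Hamiltonian of the form \(-h\sigma^{z}\); this normalization is our assumption, since \(H_{0}\) is not reproduced in this excerpt.

```python
import numpy as np

def ergotropy_m1_closed_form(t, J=1.0, h=1.0):
    """Piecewise expression of Eq. (14)."""
    if np.tan(J * t) ** 2 <= 1.0:
        return 0.0
    return 2.0 * h * (np.sin(J * t) ** 2 - np.cos(J * t) ** 2)

J = h = 1.0
for t in np.linspace(0.05, 1.5, 30):
    p0, p1 = np.cos(J * t) ** 2, np.sin(J * t) ** 2     # edge-qubit populations
    mean_energy = -h * p0 + h * p1                      # w.r.t. the assumed -h*sigma_z
    passive_energy = -h * max(p0, p1) + h * min(p0, p1)
    assert abs((mean_energy - passive_energy) - ergotropy_m1_closed_form(t, J, h)) < 1e-12
```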
2310.16215
Rotational magic conditions for ultracold molecules in the presence of Raman and Rayleigh scattering
Molecules have vibrational, rotational, spin-orbit and hyperfine degrees of freedom or quantum states, each of which responds in a unique fashion to external electromagnetic radiation. The control over superpositions of these quantum states is key to coherent manipulation of molecules. For example, the better the coherence time the longer quantum simulations can last. The important quantity for controlling an ultracold molecule with laser light is its complex-valued molecular dynamic polarizability. Its real part determines the tweezer or trapping potential as felt by the molecule, while its imaginary part limits the coherence time. Here, our study shows that efficient trapping of a molecule in its vibrational ground state can be achieved by selecting a laser frequency with a detuning on the order of tens of GHz relative to an electric-dipole-forbidden molecular transition. Close proximity to this nearly forbidden transition allows to create a sufficiently deep trapping potential for multiple rotational states without sacrificing coherence times among these states from Raman and Rayleigh scattering. In fact, we demonstrate that magic trapping conditions for multiple rotational states of the ultracold $^{23}$Na$^{87}$Rb polar molecule can be created.
Svetlana Kotochigova, Qingze Guan, Eite Tiesinga, Vito Scarola, Brian DeMarco, Bryce Gadway
2023-10-24T22:09:42Z
http://arxiv.org/abs/2310.16215v3
# Magic Traps for Multiple Rotational States of NaRb Molecule ###### Abstract Molecules have vibrational, rotational, spin-orbit and hyperfine degrees of freedom, each of which responds in a unique fashion to external electromagnetic radiation. The coherent control over superpositions of these quantum states is key to manipulation of molecules. For example, the better the coherence time the longer quantum simulations can last. The important quantity for controlling a molecule with laser light is its complex-valued molecular dynamic polarizability. Its real part determines the tweezer potential as felt by the molecule, while its imaginary part contributes to the coherence time. Our studies show that efficient trapping of a molecule in an optical potential can be achieved by a selecting laser frequency that has a small detuning (on the order of tens of GHz) relative to an electric-dipole-forbidden molecular transition. Close proximity to this transition allows us to significantly modify the trapping potentials for multiple rotational states without sacrificing coherences among these states. We demonstrate that magic trapping conditions for multiple rotational states in ultracold \({}^{23}\)Na\({}^{87}\)Rb polar molecule can be created. In addition, we show that spin-decoupled magic trapping can be achieved with an applied static electric field oriented along the magnetic field direction. ## I Introduction Optical tweezers and lattices are convenient experimental tools to trap ultracold molecules, but their role in perturbing molecular internal states needs to be understood and managed. To preserve, for example, a superposition of states within the ground electronic potential, optical tweezers must apply the same force to each of these states by creating a so-called magic condition. For polar diatomic molecules trapped in tweezer potentials one of the natural choices for building quantum computer is to store qubits in rotational levels of the ground vibrational states. However, the aspect of engineering traps that support the confinement and long coherence times of molecular rotational levels in the ground state potentials remain challenging. Therefore, it is crucial to develop theoretical models for creating a practical, optimized molecule-based quantum computer. Molecules offer possibilities not available in other systems. This stems largely from the rich structure of molecular vibrations, rotations, and hyperfine states as well as a non-negligible permanent electric dipole moment for heteronuclear molecules. This dipole moment leads to strong coupling to microwave radiation and static electric fields as well as tuneable long-range electric dipole-dipole interactions between molecules. One of the key challenges for molecule-based quantum science is to engineer optical traps for the molecules to minimize the reduction of their rotational coherence lifetimes due to the trapping lasers, therefore enabling us to exploit the rotational degree of freedom as the quantum bit in quantum information processing. Constructing a "rotational magic trap" is the ideal solution to this problem. In such a laser trap, light-induced energy shifts of two or more rotational states are identical, eliminating dephasing associated with spatial variations in intensity across the trap. The first guiding idea of selecting laser frequencies in the near-resonant region of forbidden transitions between the excited and ground states of the ultracold molecules was proposed and investigated in Ref. [1]. 
It was theoretically demonstrated that in such frequency intervals the light-induced decoherence is kept to a minimum. The further search for an efficient construction of the rotational magic traps for \({}^{23}\)Na\({}^{40}\)K, \({}^{87}\)Rb\({}^{133}\)Cs, and \({}^{23}\)Na\({}^{87}\)Rb were pioneered by Refs. [2; 3; 4], respectively. Using a multi-configuration interaction approach the authors selected a laser frequency that has a small detuning (tens of GHz) relative to a narrow electronic transition between ground and exited electronic states. Close proximity to the forbidden transition poles allowed authors to significantly modify the trapping potentials for rotational levels of molecular ground states. At these laser frequencies rotational states experience the identical light shifts that significantly minimize the dephasing effect of spatially and temporal laser intensity fluctuations. Such a basic scheme can be readily applied to other molecules such as diatomic or related polyatomic molecules. However, using tweezer light that is tuned to a transition energy between hyperfine-resolved rovibrational levels of a ground and an excited electronic state is not always convenient as it can lead to unwanted scattering. Fortunately, tuning conditions can be relaxed when a static electric field is applied and the laser polarization direction or its ellipticity are carefully controlled relative to the quantization axis direction [5; 6; 7; 8; 9; 10; 11]. This paper focuses on quantitative theoretical mod eling of dephasing and decoherence processes of ultracold NaRb molecules prepared in superpositions of rotational states and held in place by tweezer forces. First, we examine the dynamic real and imaginary polarizabilities of vibrationally cold polar \({}^{23}\)Na\({}^{87}\)Rb molecules as functions of the frequency of the trapping laser. Based on the knowledge of these polarizability values and accurate rovibrational transition energies between electronic ground and excited potentials, we determine magic tweezer frequencies where the following decoherence mechanisms, relevant to qubits encoded in rotational levels, are minimized. This dephasing is associated with spatial and temporal fluctuations of the laser intensity. In addition, we derive approximate analytical expressions for the dynamic polarizabilities in order to better understand the origin of magic conditions. As a next step in improving the description of the polarizability of the \({}^{23}\)Na\({}^{87}\)Rb molecule, we studied the mixing of rotational levels of its \(v=0\) X\({}^{1}\Sigma^{+}\) ground state in the presence of electric and magnetic fields as well as the trapping laser field. The anisotropy of the dynamic polarizability of rotational levels then manifests itself as a dependence on the orientation of the laser polarization relative to that of the electric field. Finally, our polar molecules have nonzero nuclear electric-quadrupole and nuclear-magnetic moments and the magnetic field further mixes states. The combined action of these three E&M fields is a powerful tool with which to manipulate and control ultracold molecules. 
## II Magic trapping frequencies due to \(b^{3}\Pi_{0^{+}}\), \(\nu^{\prime}\) = 0 resonances We begin by calculating the dynamic polarizability or ac Stark shift for rotational levels from \(J=0\) to 5 of the \(v=0\) vibrational level \(|\text{X},vJM\rangle\) of the ground \(\text{X}^{1}\Sigma^{+}\) state of the \({}^{23}\)Na\({}^{87}\)Rb molecule absent external electric or magnetic fields and without molecular spin-rotation, hyperfine, and Zeeman interactions. Here, \(M\) is the projection quantum number of angular momentum \(J\) along a laboratory or space-fixed axis to be defined later on. The tweezer laser with laser frequency \(\nu\) is linearly polarized along space-fixed direction \(\vec{\varepsilon}\) throughout this paper. We then determine laser light frequencies that allow simultaneous magic trapping of multiple rotational states using light nearly resonant with rovibrational levels of the \(\text{b}^{3}\Pi_{0^{+}}\) state. Transitions between the \(\text{X}^{1}\Sigma^{+}\) and \(\text{b}^{3}\Pi_{0^{+}}\) states are weak and only allowed through weak spin-orbit coupling with the \(\text{A}^{1}\Sigma^{+}_{0^{+}}\) state. Figure 1(a) schematically shows the NaRb molecule trapped in a tweezer potential, while Fig. 1(b) displays the three relevant relativistic \(\Omega=0^{+}\) potential energy curves of the NaRb molecule, where \(\Omega\) is the absolute value of the projection quantum number of the total electronic angular momentum along the diatomic molecular axis. More precisely, the excited non-relativistic \(\text{A}^{1}\Sigma^{+}_{0^{+}}\) and \(\text{b}^{3}\Pi_{0^{+}}\) states are coupled by the spin-orbit interaction, which leads to \(\Omega=0^{+}\) adiabatic potentials that have a narrow avoided crossing near interatomic separation \(R\approx R_{\text{c}}=7.5a_{0}\)[12], where \(a_{0}\) is the Bohr radius. The energetically lowest \(\Omega=0^{+}\) rovibrational states near the bottom or minimum of the nominally \(\text{b}^{3}\Pi_{0^{+}}\) potential, however, have a small admixture of the \(\text{A}^{1}\Sigma^{+}_{0^{+}}\) state. As electric dipole transitions between the \(\text{X}^{1}\Sigma^{+}\) and \(\text{b}^{3}\Pi_{0^{+}}\) states are forbidden, this leads to weak, but easily observable transitions between to these \(\Omega=0^{+}\) rovibrational levels of the ground electronic \(\text{X}^{1}\Sigma^{+}\) state. We observe that the equilibrium separations and harmonic frequencies of the \(\text{X}^{1}\Sigma^{+}\) and \(\text{b}^{3}\Pi_{0^{+}}\) states are almost the same. We use the non-relativistic \(\text{X}^{1}\Sigma^{+}\), \(\text{A}^{1}\Sigma^{+}_{0^{+}}\) and \(\text{b}^{3}\Pi_{0^{+}}\) potentials, spin-orbit matrix elements, and the \(\text{X}^{1}\Sigma^{+}\) to \(\text{A}^{1}\Sigma^{+}\) electronic transition dipole moment as functions of \(R\) given by Refs. [12; 13]. For rotational states \(J,M\) of the \(v=0\) ground-state of NaRb, the calculation of the sum over intermediate, excited states that appears in the evaluation of \(\alpha_{\text{X},vJM}(\nu,\vec{\varepsilon})\) can be simplified. The relevant laser frequencies are _nearly resonant_ with rovibrational levels near the minimum of the \(\text{b}^{3}\Pi_{0^{+}}\) potential. Consequently, we separate \(\alpha_{\text{X},vJM}(\nu,\vec{\varepsilon})\) into two contributions. The first contribution is due to these near-resonant transitions. 
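For orientation, the near-resonant contribution has the standard second-order (sum-over-states) structure. Since the explicit formula is not reproduced in this excerpt, we write it here only schematically, with convention-dependent prefactors suppressed and with the same dipole matrix element notation used later for the imaginary part:

\[\alpha^{\mathrm{res}}_{\mathrm{X},vJM}(\nu,\vec{\varepsilon})\;\propto\;\sum_{v^{\prime}J^{\prime}M^{\prime}}\frac{2\,\mathcal{E}_{v^{\prime}J^{\prime}}\,\big|\langle\mathrm{Ab},v^{\prime}J^{\prime}M^{\prime}|\,d(R)\,\hat{R}\cdot\vec{\varepsilon}\,|\mathrm{X},vJM\rangle\big|^{2}}{\mathcal{E}_{v^{\prime}J^{\prime}}^{2}-(h\nu)^{2}},\qquad\mathcal{E}_{v^{\prime}J^{\prime}}=E_{\mathrm{Ab},v^{\prime}J^{\prime}}-E_{\mathrm{X},vJ},\]

where the sum runs over the energetically lowest rovibrational levels of the coupled \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0^{+}}\) system described below.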
We will use the rovibrational levels of this potential as well as the corresponding vibrationally averaged transition dipole moments to construct this contribution to \(\alpha_{\text{X},vJM}(\nu,\vec{\varepsilon})\). A second contribution to \(\alpha_{\text{X},vJM}(\nu,\vec{\varepsilon})\) is due to _off-resonant_ transitions to other electronic states. They are computed within a quasi-static approximation, where the so-called parallel and perpendicular polarizabilities \(\alpha_{\parallel}(\nu,R)\) and \(\alpha_{\perp}(\nu,R)\), corresponding to laser polarizations parallel and perpendicular relative to the body-fixed internuclear axis, respectively, are computed as functions of laser frequency and atom-atom separation \(R\) near the equilibrium separation \(R_{\mathrm{e}}\) of the \(\mathrm{X}^{1}\Sigma^{+}\) state using the linear-response-theory formulation within the software package Q-Chem [14]. Q-Chem computes electronic states within a non-relativistic description of the electrons. In practice, we find that these two quasi-static polarizabilities are to good approximation independent of \(R\) over the radial width of the \(v=0\) vibrational level of the \(\mathrm{X}^{1}\Sigma^{+}\) state. We thus only compute \(\alpha_{\parallel}(\nu,R)\) and \(\alpha_{\perp}(\nu,R)\) at \(R=R_{\mathrm{e}}\) and drop the argument \(R\) for the remainder of this article.

Figure 1: (a) Schematic presentation of ground-state heteronuclear NaRb trapped in an optical tweezer potential. (b) The potential energies of the three most-important \(\Omega=0^{+}\) electronic states of NaRb. The two black horizontal lines in the potentials represent the energetically lowest \(v=0\) and \(v^{\prime}=0\) vibrational levels of the \(\text{X}^{1}\Sigma^{+}\) state and the \(\text{A}^{1}\Sigma^{+}\) and \(\text{b}^{3}\Pi_{0^{+}}\) complex, respectively. Relevant rotational levels \(J=0\) to 5 for both vibrational states are shown on the right. Near-resonant optical transitions (orange lines with arrows) are used in the search for magic conditions as a function of tweezer laser frequency.

The two quasi-static polarizabilities of the \(\mathrm{X}^{1}\Sigma^{+}\) state have been obtained with a non-relativistic configuration-interaction electronic-structure calculation using an all-electron basis set for Na. Single and double excitations were allowed from these basis functions. An effective core potential describes the 28 inner electrons of Rb. Single and double excitations were allowed for the remaining electrons of Rb. Figure 2 shows the two quasi-static polarizabilities of the \(\mathrm{X}^{1}\Sigma^{+}\) state of NaRb at \(R=R_{\mathrm{e}}\) as functions of photon energy from zero to \(hc\times 25000\) cm\({}^{-1}\). Here, \(h\) is the Planck constant and \(c\) is the speed of light in vacuum. Over the large photon energy range shown in Fig. 2, several resonances are visible. Each corresponds to a transition between the \(\mathrm{X}^{1}\Sigma^{+}\) state and a \({}^{1}\Lambda\) state. In fact, in our non-relativistic formulation, \(\alpha_{\parallel}(\nu)\) only contains contributions from transitions to singlet \({}^{1}\Sigma^{+}\) electronic states, while \(\alpha_{\perp}(\nu)\) only contains contributions from transitions to singlet \({}^{1}\Pi\) states. Finally, we note that the quasi-static contributions to the polarizability of levels of the \(\mathrm{X}^{1}\Sigma^{+}\) state have only a small photon-energy dependence for photon energies near the minimum of the \(\mathrm{b}^{3}\Pi_{0^{+}}\) potential.
The relevant quasi-static polarizabilities are off resonant. The resonant contribution to the polarizability has been determined in several steps. We compute two-channel radial eigenvalues and eigenfunctions of the spin-orbit coupled and shifted \(\Omega=0^{+}\)\(\mathrm{A}^{1}\Sigma^{+}\) and \(\mathrm{b}^{3}\Pi_{0^{+}}\) states for total angular momentum \(J^{\prime}=0,\,1,\,\dots,\,6\) using a discrete variable representation (DVR) of the radial relative kinetic energy operator [15]. For each \(J^{\prime}\), the eigenvalues \(E_{\mathrm{Ab},\nu^{\prime}J^{\prime}}\) are labeled \(v^{\prime}=0,\,1,\,\dots\) with increasing energy and wavefunctions of the energetically lowest \(v^{\prime}\) levels are to good approximation \(\mathrm{b}^{3}\Pi_{0^{+}}\) states. The energies are independent of projection quantum number \(M^{\prime}\) of \(J^{\prime}\). We then compute rovibrational wavefunctions \(v\) and energies \(E_{\mathrm{X},vJ}\) of the \(\mathrm{X}^{1}\Sigma^{+}\) state for \(J\) up to 5 with the same DVR and radial grid used to compute eigenpairs of the coupled \(\mathrm{A}^{1}\Sigma^{+}\) and \(\mathrm{b}^{3}\Pi_{0^{+}}\) system. The energies are independent of projection quantum number \(M\) of \(J\). The use of the same radial grid avoids interpolation of wavefunctions in the computation of vibrationally-averaged transition dipole moments using the \(R\)-dependent transition dipole moment between the \(\mathrm{X}^{1}\Sigma^{+}\) and \(\mathrm{A}^{1}\Sigma^{+}\) states. Finally, we compute the resonant-part of the polarizability \(\alpha_{\mathrm{X},vJM}(\nu,\vec{\varepsilon})\) of \(v=0,JM\)\(\mathrm{X}^{1}\Sigma^{+}\) states using only the energetically lowest rovibrational levels of the coupled \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0^{+}}\) system that have a large \(\mathrm{b}^{3}\Pi_{0^{+}}\) character and thus a small vibrationally-averaged transition dipole moment. The choice to limit the determination of the resonant-part of the polarizability to a few levels of the \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0^{+}}\) system avoids double counting the effects of the \(\mathrm{A}^{1}\Sigma^{+}\) state when combining the resonant and off resonant contributions of \(\alpha_{\mathrm{X},vJM}(\nu,\vec{\varepsilon})\). In principle, the projection degeneracy is broken by hyperfine interactions between nuclear quadrupole moments and the rotation of the molecule as well as Zeeman interactions for the nuclear spins. The nuclear spin of both \({}^{23}\)Na and \({}^{87}\)Rb is \(3/2\). However, the hyperfine splittings for the \(\Omega=0^{+}\) states are small Ref. [16] compared to the rotational energies described below. Here, we omit the effects of hyperfine interactions on magic conditions. Figure 3 shows dynamic polarizabilities of the \(v=0\), \(J=0,\,1,\dots,\,5\), and \(M=0\), rotational levels of the \({}^{23}\)Na\({}^{87}\)Rb \(\mathrm{X}^{1}\Sigma^{+}\) state absent an external electric field but with a 335 Gauss magnetic field parallel to the linearly polarized light as functions of laser frequency in the neighborhood of the \(v^{\prime}=0\) level of the coupled \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0}\) system. The dynamic polarizabilities include both the resonant and off-resonant contributions. The horizontal axis gives photon energy detuning \(\Delta=h\nu-\mathcal{E}_{\nu^{\prime}=0}\), where molecular transition energy \(\mathcal{E}_{v^{\prime}=0}=E_{\mathrm{Ab},v^{\prime}=0,J^{\prime}=1}-E_{\mathrm{ X},v=0,J=0}\). 
Our estimate provides that \(\mathcal{E}_{v^{\prime}=0}=hc\times 11306.4\) cm\({}^{-1}\), which corresponds to a laser wavelength close to 884 nm. We chose the quantization axis of the molecular angular momentum \(J\) and \(J^{\prime}\) in the same direction as that of the laser polarization. The curves for \(J=0\) and \(J>0\) in Fig. 3 have different behaviors. The polarizability for the \(J=0\) has a single resonance at \(\Delta=0\). Those for \(J>0\) have two resonances located at \(L_{J}(v=0,v^{\prime}=0)\) and \(R_{J}(v=0,v^{\prime}=0)\), where \[L_{J}(v,v^{\prime})=J(J+1)B_{v}-[J(J-1)-2]B_{v^{\prime}} \tag{1}\] Figure 2: The quasi-static parallel (orange curve) and perpendicular (blue curve) electronic polarizabilities of the \(\mathrm{X}^{1}\Sigma^{+}\) state at its equilibrium separation \(R_{\mathrm{e}}=6.885a_{0}\) and photon energies up to \(hc\times 25\,000\) cm\({}^{-1}\). The energetically lowest four resonances are labeled with state \({}^{1}\Lambda\). The data is based on non-relativistic configuration-interaction calculations with the Q-Chem software package. and \[R_{J}(v,v^{\prime})=J(J+1)B_{v}-[(J+1)(J+2)-2]B_{v^{\prime}} \tag{2}\] with rotational constants \(B_{v}\) and \(B_{v^{\prime}}\) for the \(v\) vibrational level of the X\({}^{1}\Sigma^{+}\) state and the \(v^{\prime}=0\) vibrational level of the coupled A\({}^{1}\Sigma^{+}\)-b\({}^{3}\Pi_{0}\) system, respectively. For \({}^{23}\)Na\({}^{87}\)Rb, \(B_{v=0}/hc=0.069\,70\) cm\({}^{-1}\) and \(B_{v^{\prime}=0}/hc=0.069\,88\) cm\({}^{-1}\). The two values agree to better than \(0.5\,\%\). These behaviors follow from photon selection rules \(|J-1|\leq J^{\prime}\leq J+1\) and \(J^{\prime}-J\) is odd. Panel (a) of Fig. 3 shows that there exist two _magic_ laser frequencies. The first is located near \(\Delta/h=-2\) GHz and \(J>0\) rotational levels have nearly the same polarizabilities. The second is located near \(\Delta/h=100\) GHz, all \(J=0,\ldots,5\) rotational levels have nearly the same polarizabilities. Panel (b) looks in more detail at the latter frequency region. In particular, the polarizability of the \(J=0\) level is equal to that of the \(J=1\), 2, 3, 4, 5 rotational levels at detuning \(\Delta/h=103\) GHz, 105 GHz, 108 GHz, 112 GHz, and 116 GHz, respectively. In fact, the differences between these \(\Delta\) with \(J>0\) is increasing with \(J\). The origin of this effect lies in the increase of rotational energies with \(J\). ## III Analytical results for magic polarizability We find it useful to derive analytical expression for the dynamic polarizabilities shown in Fig. 3 in order to better understand the origin of magic conditions as well as simplifying their determination for wide variety of molecules. 
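Before turning to these analytical expressions, the resonance offsets of Eqs. (1) and (2) can be evaluated directly from the quoted rotational constants. The short script below is our own illustration; the only added ingredient is the standard conversion 1 cm\({}^{-1}\) \(\approx\) 29.9792458 GHz. It makes explicit that the two resonances flank \(\Delta=0\) and move apart with increasing \(J\):

```python
# Resonance offsets L_J and R_J of Eqs. (1)-(2), converted from cm^-1 to GHz.
cm1_to_GHz = 29.9792458
B_v = 0.06970 * cm1_to_GHz     # X(1)Sigma+, v = 0
B_vp = 0.06988 * cm1_to_GHz    # coupled A-b complex, v' = 0

for J in range(1, 6):
    L_J = J * (J + 1) * B_v - (J * (J - 1) - 2) * B_vp
    R_J = J * (J + 1) * B_v - ((J + 1) * (J + 2) - 2) * B_vp
    print(f"J={J}: L_J = {L_J:+6.2f} GHz, R_J = {R_J:+6.2f} GHz")
# J=1 gives roughly +8.4 GHz and -4.2 GHz, and the splitting grows with J.
```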
For rovibrational level \(v=0\)\(JM\) of the X\({}^{1}\Sigma^{+}\) state, the dynamic polarizability for linearly-polarized laser light near transitions to vibrational states \(v^{\prime}=0\) or \(v^{\prime}=1\) of the coupled A\({}^{1}\Sigma^{+}\)-b\({}^{3}\Pi_{0^{+}}\) system is well approximated by \[\alpha_{\text{X},v=0JM}(\nu,\vec{\varepsilon})=-\frac{3\pi c^{2} }{2\omega_{v^{\prime}}^{3}}\bigg{[}A_{J,M}(\theta_{\text{p}})\frac{\hbar \Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}+L_{J}(0,v^{\prime})} \tag{3}\] \[+B_{J,M}(\theta_{\text{p}})\frac{\hbar\Gamma_{0,v^{\prime}}}{ \Delta_{v^{\prime}}+R_{J}(0,v^{\prime})}\bigg{]}\] \[+\left[A_{J,M}(\theta_{\text{p}})+B_{J,M}(\theta_{\text{p}}) \right]\left(\alpha_{\text{bg},\parallel}-\alpha_{\text{bg},\perp}\right)+ \alpha_{\text{bg},\perp}\,,\] where \(\theta_{\text{p}}\) is the angle between the polarization of the laser \(\vec{\varepsilon}\) with respect to the quantization axis for the molecular states, our laboratory-fixed \(z\) axis. The energy detuning \[\Delta_{v^{\prime}}=h\nu-\hbar\omega_{v^{\prime}} \tag{4}\] with \(\hbar\omega_{v^{\prime}}=E_{\text{Ab},v^{\prime},J^{\prime}=1}-E_{\text{X},v =0,J=0}\). The terms on the first two lines of Eq. (3) lead to resonances in the dynamic polarizability. In fact, there are one and two resonances for \(J=0\) and \(J>0\), respectively. The \(\Gamma_{0,v^{\prime}}\) Figure 3: (a) \({}^{23}\)Na\({}^{87}\)Rb dynamic polarizabilities of rotational \(v=0\) levels of the X\({}^{1}\Sigma^{+}\) state in atomic units for \(z\)-linear polarized light as functions of laser frequency detuning \(\Delta/h\) near transitions from the \(v=0\) X\({}^{1}\Sigma^{+}\) state to the \(v^{\prime}=0\) level of the coupled A \({}^{1}\Sigma^{+}\)-b\({}^{3}\Pi_{0}\) system. Each curve corresponds to the dynamic polarizability of a different rotational level \(J\) with \(M=0\). Zero detuning \(\Delta\) corresponds to the resonant transition from \(v=0\), \(J=0\) level of the X\({}^{1}\Sigma^{+}\) state to the \(v^{\prime}=0,J^{\prime}=1\) level of the coupled A\({}^{1}\Sigma^{+}\)-b\({}^{3}\Pi_{0}\) system. The purple arrows indicate magic detunings, where multiple rotational states have the same or nearly the same polarizabilities. (b) Schematically magnified region of dynamic polarizabilities, near the frequency detuning \(\Delta/h=100\) GHz. The colors of the curves are the same as those in panel (a). We observe that the curve for \(J=0\) crosses those for \(J>0\). are linewidths of the vibrational levels \(v^{\prime}\) of the coupled \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0^{+}}\) system. The parallel and perpendicular \(\alpha_{\mathrm{bg},\parallel}\) and \(\alpha_{\mathrm{bg},\perp}\) are body-fixed background polarizabilities. The dimensionless angular factor \(A_{J,M}(\theta_{\mathrm{p}})\) is given by \[A_{J,M}(\theta_{\mathrm{p}}) = \frac{J(J+1)-3M^{2}}{2(2J+1)(2J-1)}\cos^{2}\theta_{\mathrm{p}}\] \[\qquad+\frac{(J-1)J+M^{2}}{2(2J+1)(2J-1)}\] for \(|M|<J\) and \[A_{J,M}(\theta_{\mathrm{p}})=\frac{(J+|M|)(J+|M|-1)}{4(2J+1)(2J-1)}\sin^{2} \theta_{\mathrm{p}} \tag{6}\] for \(|M|=J\). Note that \(A_{J,M}(\theta_{\mathrm{p}})=0\) for \(J=0\). 
Finally, the dimensionless \(B_{J,M}(\theta_{\mathrm{p}})\) is given by \[B_{J,M}(\theta_{\mathrm{p}}) = \frac{J(J+1)-3M^{2}}{2(2J+1)(2J+3)}\cos^{2}\theta_{\mathrm{p}}\] \[\qquad+\frac{(J+1)(J+2)+M^{2}}{2(2J+3)(2J+1)}\,.\] Equation (3) can only be used for energy detunings that are much smaller than the vibrational spacing between different \(v^{\prime}\) states of the \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0^{+}}\) system. On the other hand, the energy detunings must be much larger than any hyperfine and Zeeman splittings in the coupled \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0^{+}}\) system and for energy detunings much larger than \(\hbar\Gamma_{0,v^{\prime}}\). Finally, a Taylor expansion of the right hand side of Eq. (3) assuming \(|\Delta_{v^{\prime}}|\gg|L_{J}|\) and \(|\Delta_{v^{\prime}}|\gg|R_{J}|\) gives \[\alpha_{\mathrm{X},v=0JM}(\nu,\vec{\varepsilon})=[A_{J,M}(\theta _{\mathrm{p}})+B_{J,M}(\theta_{\mathrm{p}})]\] \[\quad\times\left(-\frac{3\pi c^{2}}{2\omega_{v^{\prime}}^{3}} \frac{\hbar\Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}}+\alpha_{\mathrm{bg}, \parallel}-\alpha_{\mathrm{bg},\perp}\right)+\alpha_{\mathrm{bg},\perp},+\cdots\,.\] From an inspection of Eq. (II), we realize that we can always find an energy detuning independent of \(\theta_{\mathrm{p}}\) and \(J\) such that the term in parenthesis vanishes. At this energy detuning the dynamic polarizability is \(\alpha_{\mathrm{bg},\perp}\), the same for all \(\theta_{\mathrm{p}}\) and \(J\) within our approximations, and the optical trap is magic for all rotational states. Higher-order terms in Eq. (II) will add small \(\theta_{\mathrm{p}}\)- and \(J\)-dependent corrections and are observed in Fig. 3. ## IV Vibrationally-resolved imaginary polarizabilities of the \(\mathrm{X}^{1}\Sigma^{+}\) state In addition, we developed an approach based on ideas of Refs. [1] to evaluate the imaginary dynamic polarizabilities of rovibrational levels of the ground state in vicinity of rovibrational levels of the excited state potentials. The imaginary part of \(\alpha_{i}\) describes incoherent decay that leads to loss of molecules from the optical trap. Our calculation is based on perturbation theory method with specific focus on relativistic spin-orbit coupling between \(\mathrm{A}^{1}\Sigma^{+}\)-\(\mathrm{b}^{3}\Pi_{0}\) complex. The simulations are performed using electronic potentials, permanent, and transition dipole moments of NaRb determined in Refs. [12; 13; 17]. We analyze the imaginary dynamic polarizability in the wide range laser frequency from 10000 to 20000 \(\mathrm{cm}^{-1}\) that is in the resonance with multiple vibrational levels of the excited state potentials. The molecular dynamic \(\alpha_{i}(h\nu,\vec{\epsilon})\) at frequency \(\nu\) and laser polarizability \(\vec{\epsilon}\) of state \(i\) is complex valued. To good approximation its imaginary part is \[\mathrm{Im}\left[\alpha(h\nu,\vec{\epsilon})\right] = -\frac{1}{\epsilon_{0}c}\sum_{f}\frac{\hbar\gamma_{f}/2}{(E_{f}-E_ {i})^{2}-(h\nu)^{2}}\] \[\times|\langle f|d(R)\hat{R}\cdot\vec{\epsilon}\,|i\rangle|^{2}\,,\] where kets \(|i\rangle\) and \(|f\rangle\) are simplified labels for initial rovibrational wavefunctions of the \(\mathrm{X}^{1}\Sigma^{+}\) potential and those of excited electronic states, respectively. Their energies are \(E_{i}\) and \(E_{f}\), respectively, and \(\gamma_{f}\) is the natural line width of excited rovibrational levels. 
Figure 4, panels (a), (b), and (c) demonstrate imaginary dynamic polarizabilities of the \(J=0,1\), and \(2\) rotational levels, respectively, of \(\mathrm{X}^{1}\Sigma^{+}\) with projection quantum number \(M=0\). It turns out that, to good approximation, the three curves are the same except for a frequency-independent scale factor. Deviations from these scalings occur very close to the resonances, i.e. on the order of the rotational spacing of the molecule. We calculate imaginary polarizabilities for these levels taking into account the rovibrational structure of lower excited states in the units of \(\mathrm{MHz/[W/cm^{2}]}\), which are often used in experimental measurements. Note that one atomic unit of polarizability corresponds to 4.68645 \(\times\)\(10^{-8}\)\(\mathrm{MHz/[W/cm^{2}]}\). We evaluated the rovibrational molecular line widths of excited electronic states dissociating to either a singly-excited Na or Rb atom by first computing a \(R\)-dependent optical potential \(-i\Gamma(R)/2\)[18] for each excited electronic state. Here, \(\Gamma(R)\) is positive and proportional to \(|\delta E(R)|^{3}d^{2}(R)\), where \(\delta E(R)\) and \(d(R)\) at internuclear separation \(R\) are the potential energy difference and the transition electronic dipole moment between an excited state and the ground state, respectively. Finally, the energies \(E_{i}\) and \(E_{f}\) and line widths \(\gamma_{f}\) of rovibrational levels of electronic states were found by computing radial rovibrational wavefunctions, energies, and matrix elements of \(\Gamma(R)\). By construction, the imaginary part is negative. Its value is seven orders of magnitude smaller than the real part. For \(J=1\) and \(2\), \(M=0\) the polarizability depends on the polarization direction of the trapping light. The imaginary part of the polarizabilities are slowly varying with frequency in regions outside multiple closely spaced resonant features, where \(\alpha(h\nu,\vec{\epsilon})\) is orders of magnitude larger than in the slowly varying regions. The resonant like features are due to the rovibrational bound states of excited electronic potentials. In fact, we could assign the resonances as due to the b\({}^{3}\Pi\), A\({}^{1}\Sigma^{+}\), and B\({}^{1}\Pi\) states. These resonances are strongest when the inner- or outer-turning point of rovibrational wavefunctions of the excited electronic potentials coincides with the equilibrium separation of the X\({}^{1}\Sigma^{+}\) potential. The calculations of the imaginary part of the polarizability allowed us predict the role of unwanted decoherence processes. In particular, optical fields can transfer population from a rovibrational level of the electronic ground state to rovibrational levels of an excited electronic state, which then by the spontaneous emission decays to many rovibrational levels of the X\({}^{1}\Sigma^{+}\) state. As a result, we lose control over the molecule. ## III Effect of external electric field on the magic condition We extended the ideas of Refs. [11] and simulated the rovibrational and hyperfine quantum states of \({}^{23}\)Na\({}^{87}\)Rb molecules when both a magnetic field \(\vec{B}\) and electric field \(\vec{E}\) are present at a fixed laser wavelength of \(\lambda=1064\) nm or wavenumber of \(E/hc\approx 9400\) cm\({}^{-1}\). For the X\({}^{1}\Sigma^{+}\) electronic state hyperfine effects are due to the nuclear spins of the atoms. 
Understanding the effect of changing the relative orientation or polarization of the E&M fields is of crucial importance for the creation of decoherence-free sub-spaces built from two or more rovibrational and hyperfine states. The effective Hamiltonian for the \(v=0\) rotational-hyperfine levels of the X\({}^{1}\Sigma^{+}\) state of \({}^{23}\)Na\({}^{87}\)Rb is obtained using the formalism developed in our previous studies [3; 6; 7]. We computed the eigenenergies \(\mathcal{E}_{i}\) of \(H\), including the lowest \(J=0\) and \(J=1\) rotational states of our molecule, for many relative orientations of \(\vec{B}\), \(\vec{E}\), and \(\vec{\epsilon}\) as well as for various magnitudes of \(\vec{B}\) and \(\vec{E}\) and intensity \(I_{\rm trap}\). For the physically relevant \(B\), \(E\), and \(I_{\rm trap}\), the energy shifts due to the Zeeman, electric-dipole, nuclear quadrupole, and polarization interactions are much smaller than those due to the \(B_{v=0}\vec{J}^{2}\) rotational interaction.

Figure 4: Minus one times the imaginary part of the dynamic polarizabilities of the \(J=0\), 1, and 2 rotational levels of the vibrational ground state of \({}^{23}\)Na\({}^{87}\)Rb with projection quantum number \(M=0\) along the \(z\) axis as functions of laser frequency in panels (a), (b), and (c), respectively. Imaginary polarizabilities are presented for laser polarization \(\sigma_{s}\) along the \(\vec{z}\) direction.

Without an electric field, nuclear spin states mix with the three projections of \(J=1\) and the polarizabilities of these eigenstates differ significantly from each other. Figure 5 shows the dynamic polarizabilities of the lowest 64 eigenstates of \(v=0\) X\({}^{1}\Sigma^{+}\)\({}^{23}\)Na\({}^{87}\)Rb, corresponding to all nuclear hyperfine states of the \(J=0\) and 1 rotational states, as functions of the angle \(\theta\) between the linear laser polarization and a space-fixed \(z\) axis. The laser intensity is \(I_{\rm trap}=2\) kW/cm\({}^{2}\), and a magnetic field of 335.6 G along the \(z\) axis is present. Panel (a) shows data for an electric field of 0.0 kV/cm, whereas panels (b) and (c) show data for an electric field of 0.5 kV/cm oriented along the \(x\) and \(z\) axis, respectively. In all three panels the polarizabilities have been obtained including quadrupole couplings between nuclear spins and the molecular rotation. In panel (c) the dynamic polarizabilities of the \(J=1,M=0\) hyperfine states coalesce into a single curve. Those for the \(M=\pm 1\) states still have a complex dependence on hyperfine states. Near Feshbach resonances, atom pairs can be associated into weakly-bound \({}^{23}\)Na\({}^{87}\)Rb molecules with time-dependent magnetic field ramps.

Figure 5: Dynamic polarizabilities of all \(J=0\) and 1 hyperfine eigenstates of \(v=0\) X\({}^{1}\Sigma^{+}\)\({}^{23}\)Na\({}^{87}\)Rb as functions of the angle \(\theta\) between the linear laser polarization and a space-fixed \(z\) axis, when the nuclear quadrupole interaction is present and a magnetic field of 335.6 G is applied along the \(z\) axis, at a fixed laser wavelength of \(\lambda=1064\) nm or wavenumber of \(E/hc\approx 9400\) cm\({}^{-1}\). Panels (a), (b), and (c) show polarizabilities for an electric field of 0.0 kV/cm and of 0.5 kV/cm applied along the \(x\) and the \(z\) axis, respectively. Black and red curves correspond to hyperfine states with dominant \(J=0,M=0\) and \(J=1,M=0\) character, respectively. Purple curves correspond to hyperfine states with \(J=1,M=\pm 1\) character.
In fact, the polarizabilities of \(J=0,M=0\) and \(J=1,M=0\) states are equal for \(\theta\approx 55^{\circ}\), a magic condition. The \(M=+1\) and \(M=-1\) degeneracy of \(J=1\) eigenstates, however, is lifted and eigenstates are labeled as either \(M=-1\) or 1. In our formalism we include the nuclear quadrupole interaction \(H_{\rm Q}=\sum_{k}(eqQ)_{k}(C_{2}(\alpha,\beta)\cdot T_{2}(\vec{i}_{k},\vec{i}_ {k}))/[i_{k}(i_{k}-1)]\) with one contribution for each atom. It has strengths \((eqQ)_{k}\) and couples nuclear spins to rotational states \(J\). Here, \(C_{2m}(\alpha,\beta)\) is a spherical harmonic function that depends on angle \(\alpha\) and \(\beta\) orienting the molecular axis and \(T_{2m}(\vec{i}_{k},\vec{i}_{k})\) is a rank-2 spherical tensor constructed from spin \(\vec{i}_{k}\). For \({}^{23}\)Na\({}^{87}\)Rb, the parameters \((eqQ)_{k}\) were first given in Ref. [19] as \((eqQ)_{\rm Na}/h=0.132\) MHz and \((eqQ)_{\rm Rb}/h=-2.984\) MHz. Finally, the polarization interaction \[H_{\rm pol} = -\frac{1}{3}\left[\alpha_{||}(\nu)+2\alpha_{\perp}(\nu)\right]I_ {\rm trap}-\frac{\sqrt{6}}{3}\left[\alpha_{||}(\nu)-\alpha_{\perp}(\nu)\right]T _{2}(\vec{\epsilon},\vec{\epsilon})\cdot C_{2}(\alpha,\beta)I_{\rm trap}\,, \tag{10}\] where \(I_{\rm trap}\) is the intensity of the trapping laser. The rank-2 tensor operators in \(H_{\rm pol}\) capture its dependence on (linear) laser polarization \(\vec{\epsilon}\) and rotational state of the molecule [5]. The Hamiltonian \(H_{\rm pol}\) involves the frequency-dependent \(v=0\) vibrationally-averaged parallel and perpendicular polarizabilities, which for \({}^{23}\)Na\({}^{87}\)Rb are \(\alpha_{||}/h=57.904\) Hz/(W/cm\({}^{2}\)) and \(\alpha_{\perp}/h=19.079\) Hz/(W/cm\({}^{2}\)) at a laser wavelength of 1064 nm. Typically, the laser intensity \(I_{\rm trap}\) is of order 1 kW/cm\({}^{2}\). We neglect contributions from centrifugal distortions, the rotational Zeeman interaction, and other hyperfine terms. ## IV Conclusion In this paper we developed a theoretical approach to construct an optical trap for a single NaRb molecule where molecular rotational-hyperfine states have so called magic conditions. Constructing a rotational magic trap is the ideal solution the long rotational coherence times needed to exploit the rotational degree of freedom as the quantum bit in quantum information processing. In such a laser trap, light-induced energy shifts of multiple rotational states of the ground configuration are the same, eliminating dephasing associated with spatial variations in intensity across the trap. This opens up the prospect of using the rotational degree of freedom of the molecule to encode a synthetic dimension in addition to having multiple molecule-containing traps in real space. We used several ways to reach this goal: a) changing a trapping laser frequency in the region that close to or in between the narrow transitions from \(v=0,J=0\) of the X\({}^{1}\Sigma^{+}\) state to the \(v^{\prime}=0\) and \(v^{\prime}=1\) vibrational levels of the spin-orbit coupled A\({}^{1}\Sigma^{+}\)-b\({}^{3}\Pi_{0}\) complex. No external electric field is present. The magnetic field strength is 335.6 G; b) changing field orientation relative to polarization of trapping light with magnetic field of 335.6 G is on and a static electric field is on and off. 
For case a), we predict nearly magic conditions for the lowest six rotational states of the \(v=0\) level at detuning \(\Delta/h=-2\) GHz and 100 GHz from the \(v^{\prime}\)=0, \(J^{\prime}=1\) level of the b\({}^{3}\Pi_{0}\) potential. Case b) focuses on finding the magic conditions taking into account the nonzero nuclear spins of \({}^{23}\)Na and \({}^{87}\)Rb, which align along the magnetic field through the Zeeman interaction. Moreover, nuclear quadrupole interactions mix nuclear spin states with the rotation of the molecule. This causes the rotating molecules to dephase quickly in the inhomogeneous trapping laser field. This dephasing can be canceled to first order by selecting a specific angle between the angular momentum of the molecule \(J\) and the trapping field polarization direction such that the differential polarizability vanishes. We have shown that applying an electric field along the magnetic field direction decouples the hyperfine states and thus reduces second-order differential light shifts. ## Acknowledgements Our research is supported by the U.S. Air Force Office of Scientific Research Grants No. FA9550-19-1-0272. Work at Temple University is also supported by the U.S. Air Force Office of Scientific Research Grants No. FA9550-21-1-0153 and the NSF Grant No. PHY-1908634.
2302.01578
Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning
Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of combinatorial optimization problems. Recently, it has been shown that Large Neighborhood Search (LNS), as a heuristic algorithm, can find high quality solutions to ILPs faster than Branch and Bound. However, how to find the right heuristics to maximize the performance of LNS remains an open problem. In this paper, we propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks measured by metrics including the primal gap, the primal integral, survival rates and the best performing rate. Specifically, CL-LNS collects positive and negative solution samples from an expert heuristic that is slow to compute and learns a new one with a contrastive loss. We use graph attention networks and a richer set of features to further improve its performance.
Taoan Huang, Aaron Ferber, Yuandong Tian, Bistra Dilkina, Benoit Steiner
2023-02-03T07:15:37Z
http://arxiv.org/abs/2302.01578v1
# Searching Large Neighborhoods for ###### Abstract Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of combinatorial optimization problems. Recently, it has been shown that Large Neighborhood Search (LNS), as a heuristic algorithm, can find high quality solutions to ILPs faster than Branch and Bound. However, how to find the right heuristics to maximize the performance of LNS remains an open problem. In this paper, we propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks measured by metrics including the primal gap, the primal integral, survival rates and the best performing rate. Specifically, CL-LNS collects positive and negative solution samples from an expert heuristic that is slow to compute and learns a more efficient one with contrastive learning. We use graph attention networks and a richer set of features to further improve its performance. Machine Learning, ICML ## 1 Introduction Algorithm designs for combinatorial optimization problems (COPs) are important and challenging tasks. A wide variety of real-world problems are COPs, such as vehicle routing (Toth and Vigo, 2002), network design (Johnson et al., 1978), path planning (Pohl, 1970) and mechanism design (De Vries and Vohra, 2003) problems, and a majority of them are NP-hard to solve. In the past few decades, algorithms, including optimal algorithms, approximation algorithms and heuristic algorithms, have been studied extensively due to the importance of COPs. Those algorithms are mostly designed by human through costly processes that often require deep understanding of the problem domains and their underlying structures as well as considerable time and effort. Recently, there has been an increased interest to automate algorithm designs for COPs with machine learning (ML). Many ML approaches learn to either construct or improve solutions within an algorithmic framework, such as greedy search, local search or tree search, for a specific COP, such as the traveling salesman problem (TSP) (Xin et al., 2021; Zheng et al., 2021), vehicle routing problem (VRP) (Kool et al., 2018) or independent set problem (Li et al., 2018), and are often not easily applicable to other COPs. In contrast, Integer Linear Programs (ILPs) can flexibly encode and solve a broad family of COPs, such as minimum vertex cover, set covering and facility location problems. ILPs can be solved by Branch and Bound (BnB) (Land and Doig, 2010), an optimal tree search algorithm that can achieve state-of-the-art for ILPs. Over the past decades, BnB has been improved tremendously to become the core of many popular ILP solvers such as SCIP (Bestuzheva et al., 2021), CPLEX (Cplex, 2009) and Gurobi (Gurobi Optimization, LLC, 2022). However, due to its exhaustive search nature, it is hard for BnB to scale to large instances (Khalil et al., 2016; Gasse et al., 2019). On the other hand, Large Neighborhood Search (LNS) has recently been shown to find high quality solutions much faster than BnB for large ILP instances (Song et al., 2020; Wu et al., 2021; Sonnerat et al., 2021; Huang et al., 2022). LNS starts from an initial solution (i.e., a feasible assignment of values to variables) and then improves the current best solution by iteratively picking a subset of variables to reoptimize while leaving others fixed. Picking which subset to reoptimize, i.e., the _destroy heuristic_, is a critical component in LNS. 
Hand-crafted destroy heuristics, such as the randomized heuristic (Song et al., 2020; Sonnerat et al., 2021) and the Local Branching (LB) heuristic (Fischetti and Lodi, 2003), are often either inefficient (slow to find good subsets) or ineffective (find subsets of bad quality). ML-based destroy heuristics have also been proposed and outperform hand-crafted ones. State-of-the-art approaches include IL-LNS (Sonnerat et al., 2021) that uses imitation learning (IL) to imitate the LB heuristic and RL-LNS (Wu et al., 2021) that uses a similar framework to IL-LNS but is trained with reinforcement learning (RL). In this paper, we propose a novel ML-based LNS for ILPs, namely _CL-LNS_, that uses contrastive learning (CL) (Chen et al., 2020; Khosla et al., 2020) to learn efficient and effective destroy heuristics. Similar to IL-LNS (Sonnerat et al., 2021), we learn to imitate the _Local Branching (LB)_ heuristic, a destroy heuristic that selects the optimal subset of variables within a Hamming ball around the incumbent solution. LB requires solving another ILP with the same size as the original problem and thus is computationally expensive. We not only use the optimal subsets provided by LB as the expert demonstration (as in IL-LNS), but also leverage intermediate solutions and perturbations. When solving the ILP for LB, intermediate solutions are found and those that are close to optimal in terms of effectiveness become _positive samples_. We also collect _negative samples_ by randomly perturbing the optimal subset. With both positive and negative samples, instead of a classification loss as in IL-LNS, we use a contrastive loss that encourages the model to predict the subset similar to the positive samples but dissimilar to the negative ones with similarity measured by dot products (Oord et al., 2018; He et al., 2020). Finally, we also use a richer set of features and use graph attention networks (GAT) instead of GCN to further boost performance. Empirically, we show that CL-LNS outperforms state-of-the-art ML and non-ML approaches at different runtime cutoffs ranging from a few minutes to an hour in terms of multiple metrics, including the primal gap, the primal integral, the best performing rate and the survival rate, demonstrating the effectiveness and efficiency of CL-LNS. In addition, CL-LNS shows great generalization performance on test instances two times larger than training instances.

## 2 Background

In this section, we first define ILPs and then introduce LNS for ILP solving and the Local Branching (LB) heuristic.

### ILPs

An _integer linear program (ILP)_ is defined as \[\min\mathbf{c}^{\mathsf{T}}\mathbf{x}\text{ s.t. }\mathbf{A}\mathbf{x}\leq\mathbf{b}\text{ and }\mathbf{x}\in\{0,1\}^{n}, \tag{1}\] where \(\mathbf{x}=(x_{1},\dots,x_{n})^{\mathsf{T}}\) denotes the \(n\) binary variables to be optimized, \(\mathbf{c}\in\mathbb{R}^{n}\) is the vector of objective coefficients, \(\mathbf{A}\in\mathbb{R}^{m\times n}\) and \(\mathbf{b}\in\mathbb{R}^{m}\) specify \(m\) linear constraints. A _solution_ to the ILP is a feasible assignment of values to the variables. In this paper, we focus on the formulation above that consists of only binary variables, but our methods can be applied to mixed integer linear programs with continuous variables and/or non-binary integer variables.

### LNS for ILP solving

LNS is a heuristic algorithm that starts with an initial solution and then iteratively destroys and reoptimizes a part of the solution until a runtime limit is exceeded or some stopping condition is met.
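As a concrete illustration of this destroy-and-repair loop, the following is a minimal sketch, not the authors' implementation. It assumes a hypothetical helper `solve_ilp(c, A, b, fixed, time_limit)` that wraps an off-the-shelf MIP solver and returns the best feasible 0/1 assignment found within the time limit with the variables listed in `fixed` clamped to the given values; `c`, `A`, `b`, and solutions are NumPy arrays. The adaptive neighborhood-size update uses the rule and the values (\(\gamma=1.02\), \(\beta=0.5\)) reported later in the paper.

```python
# Minimal LNS sketch for a binary ILP: min c^T x  s.t.  Ax <= b, x in {0,1}^n.
import time
import numpy as np

def lns(c, A, b, solve_ilp, destroy, k0, time_budget, sub_time_limit,
        gamma=1.02, beta=0.5):
    n = len(c)
    x = solve_ilp(c, A, b, fixed={}, time_limit=sub_time_limit)   # initial solution
    k, deadline = k0, time.time() + time_budget
    while time.time() < deadline:
        free = destroy(c, A, b, x, k)                             # variables to re-optimize
        fixed = {i: x[i] for i in range(n) if i not in free}
        x_new = solve_ilp(c, A, b, fixed=fixed, time_limit=sub_time_limit)
        if c @ x_new < c @ x:
            x = x_new                                             # improved incumbent: keep k
        else:                                                     # otherwise grow the neighborhood
            k = min(int(np.ceil(gamma * k)), int(beta * n))
    return x

def random_destroy(c, A, b, x, k):
    """RANDOM baseline: pick k variables uniformly at random without replacement."""
    return set(np.random.choice(len(c), size=k, replace=False))
```

The `destroy` argument is exactly the component this paper learns; `random_destroy` mirrors the RANDOM baseline used in the experiments.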
Let \(\mathcal{I}=(\mathbf{A},\mathbf{b},\mathbf{c})\) be the input ILP, where \(\mathbf{A},\mathbf{b}\) and \(\mathbf{c}\) are the coefficients defined in Equation (1), and \(\mathbf{x}^{0}\) be the initial solution (typically found by running BnB for a short runtime). In iteration \(t\geq 0\) of LNS, given the _incumbent solution_\(\mathbf{x}^{t}\), defined as the best solution found so far, a _destroy heuristic_ selects a subset of \(k^{t}\) variables \(\mathcal{X}^{t}=\{x_{i_{1}},\dots,x_{i_{k^{t}}}\}\). The reoptimization is done by solving a sub-ILP with \(\mathcal{X}^{t}\) being the variables while fixing the values of \(x_{j}\notin\mathcal{X}^{t}\) the same as in \(\mathbf{x}^{t}\). The solution to the sub-ILP is the new incumbent solution \(\mathbf{x}^{t+1}\) and then LNS proceeds to iteration \(t+1\). Compared to BnB, LNS is more effective in improving the objective value \(\mathbf{c}^{\mathsf{T}}x\) especially on difficult instances (Song et al., 2020; Sonnerat et al., 2021; Wu et al., 2021). Compared to other local search methods, LNS explores a large neighborhood in each step and thus, is more effective in avoiding local minima. Adaptive Neighborhood SizeAdaptive methods are commonly used to set the neighborhood size \(k^{t}\) in previous work (Sonnerat et al., 2021; Huang et al., 2022). The initial neighborhood size \(k^{0}\) is set to a constant or a fraction of the number of variables. In this paper, we consider the following adaptive method (Huang et al., 2022): in iteration \(t\), if LNS finds an improved solution, we let \(k^{t+1}=k^{t}\), otherwise \(k^{t+1}=\min\{\gamma\cdot k^{t},\beta\cdot n\}\) where \(\gamma>1\) is a constant and we upper bound \(k^{t}\) to a constant fraction \(\beta<1\) of the number of variables to make sure the sub-ILP is not too large (thus, too difficult) to solve. Adaptively setting \(k^{t}\) helps LNS escape local minima by expanding the search neighborhood when it fails to improve the solution. ### LB Heuristic The LB Heuristic (Fischetti and Lodi, 2003) is originally proposed as a primal heuristic in BnB but also applicable in LNS for ILP solving (Sonnerat et al., 2021; Liu et al., 2022). Given the incumbent solution \(\mathbf{x}^{t}\) in iteration \(t\) of LNS, LB aims to find the subset of variables to destroy \(\mathcal{X}^{t}\) such that it leads to the optimal \(\mathbf{x}^{t+1}\) that differs from \(\mathbf{x}^{t}\) on at most \(k^{t}\) variables, i.e., it computes the optimal solution \(\mathbf{x}^{t+1}\) that sits within a given Hamming ball of radius \(k^{t}\) centered around \(\mathbf{x}^{t}\). To find \(\mathbf{x}^{t+1}\), the LB heuristic solves the LB ILP that is exactly the same ILP from input but with one additional constraint that limits the distance between \(\mathbf{x}^{t}\) and \(\mathbf{x}^{t+1}\): \(\sum_{i\in[n]:x_{i}^{t}=0}x_{i}^{t+1}+\sum_{i\in[n]:x_{i}^{t}=1}(1-x_{i}^{t+1 })\leq k^{t}\). The LB ILP is of the same size of the input ILP (i.e., it has the same number of variables and one more constraint), therefore, it is often too slow to be useful in practice. ## 3 Related Work In this section, we summarize related work on LNS for ILPs and other COPs, learning to solve ILPs with BnB and contrastive learning for COPs. We also summarize additional related work on LNS-based primal heuristics for BnB and learning to solve other COPs in Appendix. ### LNS for ILPs and Other COPs Huge effort has been made to improve BnB for ILPs in the past decades, but LNS for ILPs has not been studied extensively. Recently, Song et al. 
(2020) show that even a randomized destroy heuristic in LNS can outperform state-of-the-art BnB. They also show that an ML-guided decomposition-based LNS can achieve even better performance, where they apply RL and IL to learn destroy heuristics that decompose the set of variables into equally-sized subsets using a classification loss. Sonnerat et al. (2021) learn to select variables by imitating LB. RL-LNS (Wu et al., 2021) uses a similar framework but trained with RL and outperforms Song et al. (2020). Both Wu et al. (2021) and Sonnerat et al. (2021) use the bipartite graph representations of ILPs to learn the destroy heuristics represented by GCNs. Another line of related work focuses on improving LB. Liu et al. (2022) use ML to tune the runtime limit and neighborhood sizes for LB. Huang et al. (2022) propose LB-RELAX to select variables by solving the LP relaxation of LB. Besides ILPs, LNS has been applied to solve many COPs, such as VRP (Ropke and Pisinger, 2006; Azi et al., 2014), TSP (Smith and Imeson, 2017), scheduling (Kovacs et al., 2012; Zulj et al., 2018) and path planning problems (Li et al., 2022; 2021). ML methods have also been applied to improve LNS for those applications (Chen and Tian, 2019; Lu et al., 2019; Hottung and Tierney, 2020; Li et al., 2021; Huang et al., 2022). ### Learning to Solve ILPs with BnB Several studies have applied ML to improve BnB. The majority of works focus on learning to either select variables to branch on (Khalil et al., 2016; Gasse et al., 2019; Gupta et al., 2020; Zarpellon et al., 2021) or select nodes to expand (He et al., 2014; Labassi et al., 2022). There are also works on learning to schedule and run primal heuristics (Khalil et al., 2017; Chmiela et al., 2021) and to select cutting planes (Tang et al., 2020; Paulus et al., 2022; Huang et al., 2022). ### Contrastive Learning for COPs While contrastive learning of visual representations (Hjelm et al., 2019; He et al., 2020; Chen et al., 2020) and graph representations (You et al., 2020; Tong et al., 2021) have been studied extensively, it has not been explored much for COPs. Mulamba et al. (2021) derive a contrastive loss for decision-focused learning to solve COPs with uncertain inputs that can be learned from historical data, where they view non-optimal solutions as negative samples. Duan et al. (2022) use contrastive pre-training to learn good representations for the boolean satisfiability problem. ## 4 Contrastive Learning for LNS Our goal is to learn a policy, a destroy heuristic represented by an ML model, that selects a subset of variables to destroy and reoptimize in each LNS iteration. Specifically, let \(\mathbf{s}^{t}=(\mathcal{I},\mathbf{x}^{t})\) be the current state in iteration \(t\) of LNS where \(\mathcal{I}=(\mathbf{A},\mathbf{b},\mathbf{c})\) is the ILP and \(\mathbf{x}^{t}\) is the incumbent solution, the policy predicts an action \(\mathbf{a}^{t}=(a^{t}_{1},\ldots,a^{t}_{n})\in\{0,1\}^{n}\), a binary representation of the selected variables \(\mathcal{X}^{t}\) indicating whether \(x_{i}\) is selected (\(a^{t}_{i}=1\)) or not (\(a^{t}_{i}=0\)). We use contrastive learning to learn to predict high quality \(\mathbf{a}^{t}\) such that, after solving the sub-ILP derived from \(\mathbf{a}^{t}\) (or \(\mathcal{X}^{t}\)), the resulting incumbent solution \(\mathbf{x}^{t+1}\) is improved as much as possible. 
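For illustration only (not the authors' code), here is how such an action can be materialized from the per-variable scores: either the greedy top-\(k^{t}\) rule, or the sequential sampling rule with probabilities proportional to \(v_{i}^{\eta}\), both of which are described later when the learned policy is applied (\(\eta=0.5\) in the experiments).

```python
# Turning per-variable scores pi_theta(s) in (0,1) into a destroy set of size k.
import numpy as np

def select_greedy(scores, k):
    """Greedy rule: take the k highest-scoring variables."""
    return set(np.argsort(scores)[-k:])

def select_sampling(scores, k, eta=0.5, rng=None):
    """Sequential sampling: pick variables one by one with probability ~ score**eta."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(scores, dtype=float) ** eta
    available = list(range(len(scores)))
    chosen = []
    for _ in range(k):
        p = weights[available] / weights[available].sum()
        chosen.append(available.pop(rng.choice(len(available), p=p)))
    return set(chosen)
```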
Next, we describe how we prepare data for contrastive learning, the policy network and the contrastive loss used in training, and finally introduce how the learned policy is used in CL-LNS. ### Data Collection Following previous work by Sonnerat et al. (2021), we use LB as the expert policy to collect good demonstrations to learn to imitate. Formally, for a given state \(\mathbf{s}^{t}=(\mathcal{I},\mathbf{x}^{t})\), we use LB to find the optimal action \(\mathbf{a}^{t}\) that leads to the minimum \(\mathbf{c}^{\mathsf{T}}\mathbf{x}^{t+1}\) after solving the sub-ILP. Different from the previous work, we use contrastive learning to learn to make discriminative predictions of \(\mathbf{a}^{t}\) by contrasting positive and negative samples (i.e., good and bad examples of actions \(\mathbf{a}^{t}\)). In the following, we describe how we collect the positive sample set \(\mathcal{S}^{t}_{\mathsf{p}}\) and the negative sample set \(\mathcal{S}^{t}_{\mathsf{n}}\). Collecting Positive Samples \(\mathcal{S}^{t}_{\mathsf{p}}\)During data collection, given \(\mathbf{s}^{t}=(\mathcal{I},\mathbf{x}^{t})\), we solve the LB ILP with the incumbent solution \(\mathbf{x}^{t}\) and neighborhood size \(k^{t}\) to find the optimal \(\mathbf{x}^{t+1}\). LNS proceeds to iteration \(t+1\) with \(\mathbf{x}^{t+1}\) until no improving solution \(\mathbf{x}^{t+1}\) could be found by the LB ILP within a runtime limit. In experiments, the LB ILP is solved with SCIP 8.0.1 (Bestuzheva et al., 2021) with an hour runtime limit and \(k^{t}\) is fine-tuned for each type of instances. After each solve of the LB ILP, in addition to the best solution found, SCIP records all intermediate solutions found during the solve. We look for intermediate solutions \(\mathbf{x}^{\prime}\) whose resulting improvements on the objective value is at least \(0<\alpha_{\mathsf{p}}\leq 1\) times the best improvement (i.e., \(\mathbf{c}^{\mathsf{T}}(\mathbf{x}^{t}-\mathbf{x}^{\prime})\geq\alpha_{\mathsf{p}}\cdot\bm {c}^{\mathsf{T}}(\mathbf{x}^{t}-\mathbf{x}^{t+1})\)) and consider their corresponding actions as positive samples. We limit the number of the positive samples \(|\mathcal{S}^{t}_{\mathbf{p}}|\) to \(u_{\text{p}}\). If more than \(u_{\text{p}}\) positive samples are available, we record the top \(u_{\text{p}}\) ones to avoid large computational overhead with too many samples when computing the contrastive loss (see Section 4.3). \(\alpha_{\text{p}}\) and \(u_{\text{p}}\) are set to \(0.5\) and \(10\), respectively, in experiments. Collecting Negative Samples \(\mathcal{S}^{t}_{n}\)Negative samples are critical parts of contrastive learning to help distinguish between good and bad demonstrations. We collect a set of \(c^{t}_{n}\) negative samples \(\mathcal{S}^{t}_{n}\), where \(c^{t}_{n}=\kappa|\mathcal{S}^{t}_{\mathbf{p}}|\) and \(\kappa\) is a hyperparameter to control the ratio between the numbers of positive and negative samples. Suppose \(\mathcal{X}^{t}\) is the optimal set of variables selected by LB. We then perturb \(\mathcal{X}^{t}\) to get \(\hat{\mathcal{X}}^{t}\) by replacing \(5\%\) of the variables in \(\mathcal{X}^{t}\) with the same number of those not in \(\mathcal{X}^{t}\) uniformly at random. We then solve the corresponding sub-ILP derived from \(\hat{\mathcal{X}}^{t}\) to get a new incumbent solution \(\hat{\mathbf{x}}^{t+1}\). 
If the resulting improvement of \(\hat{\mathbf{x}}^{t+1}\) is less than \(0\leq\alpha_{\text{n}}<1\) times the best improvement (i.e., \(\mathbf{c}^{\intercal}(\mathbf{x}^{t}-\hat{\mathbf{x}}^{t+1})\leq\alpha_{\text{n}} \cdot\mathbf{c}^{\intercal}(\mathbf{x}^{t}-\mathbf{x}^{t+1})\)), we consider its corresponding action as a negative sample. We repeat this \(c^{t}_{\text{n}}\) times to collect negative samples. If less than \(c^{t}_{\text{n}}\) negative samples is collected, we increase the perturbation rate from \(5\%\) to \(10\%\) and generate another \(c^{t}_{\text{n}}\) samples. We keep increasing the perturbation rate at an increment of \(5\%\) until \(c^{t}_{n}\) negative samples are found or it reaches \(100\%\). In experiments, we set \(\kappa=9\) and \(\alpha_{\text{n}}=0.05\), and it takes less than 3 minutes to collect negative samples for each state. ### Policy Network Following previous work on learning for ILPs (Gasse et al., 2019; Sonnerat et al., 2021; Wu et al., 2021), we use a bipartite graph representation of ILP to encode a state \(\mathbf{s}^{t}\). The bipartite graph consists of \(n+m\) nodes representing the \(n\) variables and \(m\) constraints on two sides, respectively, with an edge connecting a variable and a constraint if the variable has a non-zero coefficient in the constraint. Following Sonnerat et al. (2021), we use features proposed in Gasse et al. (2019) for node features and edge features in the bipartite graph and also include a fixed-size window of most recent incumbent values as variable node features with the window size set to 3 in experiments. In addition to features used in Sonnerat et al. (2021), we include features proposed in Khalil et al. (2016) computed at the root node of BnB to make it a richer set of variable node features. We learn a policy \(\pi_{\mathbf{\theta}}(\cdot)\) represented by a graph attention network (GAT) (Brody et al., 2022) parameterized by learnable weights \(\mathbf{\theta}\). The policy takes as input the state \(\mathbf{s}^{t}\) and output a score vector \(\pi_{\mathbf{\theta}}(\mathbf{s}^{t})\in[0,1]^{n}\), one score per variable. To increase the modeling capacity and to manipulate node interactions proposed by our architecture, we use embedding layers to map each node feature and edge feature to space \(\mathbb{R}^{d}\). Let \(\mathbf{v}_{j},\mathbf{c}_{i},\mathbf{e}_{i,j}\in\mathbb{R}^{d}\) be the embeddings of the \(i\)-th variable, \(j\)-th constraint and the edge connecting them output by the embedding layers. Since our graph is bipartite, following previous work (Gasse et al., 2019), we perform two rounds of message passing through the GAT. In the first round, each constraint node \(\mathbf{c}_{i}\) attends to its neighbors \(\mathcal{N}_{i}\) using an attention structure with \(H\) attention heads to get updated constraint embeddings \(\mathbf{c}^{\prime}_{i}\) (computed as a function of \(\mathbf{v}_{j},\mathbf{c}_{i},\mathbf{e}_{i,j}\)). In the second round, similarly, each variable node attends to its neighbors to get updated variable embeddings \(\mathbf{v}^{\prime}\) (computed as a function of \(\mathbf{v}_{j},\mathbf{c}^{\prime}_{i},\mathbf{e}_{i,j}\)) with another set of attention weights. After the two rounds of message passing, the final representations of variables \(\mathbf{v}^{\prime}\) are passed through a multi-layer perceptron (MLP) to obtain a scalar value for each variable and, finally, we apply the sigmoid function to get a score between 0 and 1. 
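Below is a drastically simplified sketch of this bipartite scoring network, intended only to make the two rounds of message passing and the final per-variable sigmoid concrete: it uses a dense incidence matrix, a single attention head, and no edge features, whereas the actual model uses GAT-style attention (Brody et al., 2022) with \(H=8\) heads, edge embeddings, and sparse message passing.

```python
# Simplified, dense sketch of the two-round bipartite attention policy (one head).
import torch
import torch.nn as nn

class BipartitePolicy(nn.Module):
    def __init__(self, var_feats, con_feats, d=64):
        super().__init__()
        self.embed_v = nn.Linear(var_feats, d)   # variable-node embedding
        self.embed_c = nn.Linear(con_feats, d)   # constraint-node embedding
        self.att_c = nn.Linear(2 * d, 1)         # attention logits: constraint <- variable
        self.att_v = nn.Linear(2 * d, 1)         # attention logits: variable <- constraint
        self.out = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, v_feats, c_feats, adj):
        # v_feats: (n, var_feats), c_feats: (m, con_feats), adj: (m, n) 0/1 incidence matrix.
        v = torch.relu(self.embed_v(v_feats))            # (n, d)
        c = torch.relu(self.embed_c(c_feats))            # (m, d)
        neg_inf = torch.finfo(v.dtype).min

        # Round 1: each constraint attends to its incident variables.
        pair_cv = torch.cat([c.unsqueeze(1).expand(-1, v.size(0), -1),
                             v.unsqueeze(0).expand(c.size(0), -1, -1)], dim=-1)
        logits = self.att_c(pair_cv).squeeze(-1).masked_fill(adj == 0, neg_inf)
        c = torch.softmax(logits, dim=1) @ v              # (m, d) updated constraint embeddings

        # Round 2: each variable attends to its incident constraints.
        pair_vc = torch.cat([v.unsqueeze(1).expand(-1, c.size(0), -1),
                             c.unsqueeze(0).expand(v.size(0), -1, -1)], dim=-1)
        logits = self.att_v(pair_vc).squeeze(-1).masked_fill(adj.t() == 0, neg_inf)
        v2 = torch.softmax(logits, dim=1) @ c              # (n, d) updated variable embeddings

        # Per-variable score in (0, 1).
        return torch.sigmoid(self.out(torch.cat([v, v2], dim=-1))).squeeze(-1)
```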
Full details of the network architecture are provided in Appendix. In experiments, \(d\) and \(H\) are set to \(64\) and \(8\), respectively. ### Training with a Contrastive Loss Given a set of ILP instance for training, we follow the expert's trajectory to collect training data. Let \(\mathcal{D}=\{(\mathbf{s},\mathcal{S}_{\text{p}},\mathcal{S}_{\text{n}})\}\) be the set of states with their corresponding sets of positive and negative samples in the training data. A contrastive loss is a function whose value is low when the predicted action \(\pi_{\mathbf{\theta}}(\mathbf{s})\) is similar to the positive samples \(\mathcal{S}_{\text{p}}\) and dissimilar to the negative samples \(\mathcal{S}_{\text{n}}\). With similarity measured by dot products, a form of supervised contrastive loss, called InfoNCE (Oord et al., 2018; He et al., 2020), is used in this paper: \[\mathcal{L}(\mathbf{\theta})=\sum_{(\mathbf{s},\mathcal{S}_{\text{p}},\mathcal{S}_{ \text{n}})\in\mathcal{D}}\frac{-1}{|\mathcal{S}_{\text{p}}|}\sum_{\mathbf{a}\in \mathcal{S}_{\text{p}}}\log\frac{\exp(\mathbf{a}^{\intercal}\pi_{\mathbf{\theta}}(\mathbf{s })/\tau)}{\sum_{\mathbf{a}^{\prime}\in\mathcal{S}_{\text{n}}\cup(\mathbf{a})}\exp(\bm {a}^{\prime\intercal}\pi_{\mathbf{\theta}}(\mathbf{s})/\tau)}\] where \(\tau\) is a temperature hyperparameter set to 0.07 (He et al., 2020) in experiments. ### Applying Learned Policy \(\pi_{\mathbf{\theta}}\) We apply the learned policy \(\pi_{\mathbf{\theta}}\) in LNS. In iteration \(t\), let \((v_{1},\cdots,v_{n}):=\pi_{\mathbf{\theta}}(\mathbf{s}^{t})\) be the variable scores output by the policy. To select \(k^{t}\) variables, CL-LNS greedily selects those with the highest scores. Previous works (Sonnerat et al., 2021; Wu et al., 2021) commonly use sampling methods to select the variables, but those sampling methods are empirically worse than our greedy method in CL-LNS. However, when the adaptive neighborhood size \(k^{t}\) reaches its upper bound \(\beta\cdot n\), CL-LNS may repeat the same prediction due to deterministic selection process. When this happens, we switch to the sampling method introduced in (Sonnerat et al., 2021). The sampling method selects variables sequentially: at each step, a variable \(x_{i}\) that has not been selected yet is selected with probability proportional to \(v_{i}^{\eta}\), where \(\eta\) is a temperature parameter set to \(0.5\) in experiments. ## 5 Empirical Evaluation In this section, we introduce our evaluation setup and then present the results. Our code will be made available to the public upon publication. ### Setup Instance GenerationWe evaluate on four NP-hard problem benchmarks that are widely used in existing studies (Wu et al., 2021; Song et al., 2020; Scavuzzo et al., 2022), which consist of two graph optimization problems, namely the minimum vertex cover (MVC) and maximum independent set (MIS) problems, and two non-graph optimization problems, namely the combinatorial auction (CA) and set covering (SC) problems. We first generate a test set of 100 _small instances_ for each problem, namely MVC-S, MIS-S, CA-S and SC-S. MVC-S instances are generated according to the Barabasi-Albert random graph model (Albert and Barabasi, 2002), with 1,000 nodes and average degree 70 following (Song et al., 2020). MIS-S instances are generated according to the Erdos-Renyi random graph model (Erdos et al., 1960), with 6,000 nodes and average degree 5 following (Song et al., 2020). 
CA-S instances are generated with 2,000 items and 4,000 bids according to the arbitrary relations in Leyton-Brown et al. (2000). SC-S instances are generated with 4,000 variables and 5,000 constraints following Wu et al. (2021). We then generate another test set of 100 _large instances_ for each problem by doubling the number of variables, namely MVC-L, MIS-L, CA-L and SC-L. For each test set, Table 1 shows its average numbers of variables and constraints. More details of instance generation are included in Appendix. For data collection and training, we generate another set of 1,024 small instances for each problem. We split these instances into training and validation sets, each consisting of 896 and 128 instances, respectively.

Baselines. We compare CL-LNS with five baselines: (1) BnB: using SCIP (v8.0.1), the state-of-the-art open-source ILP solver, with the aggressive mode fine-tuned to focus on improving the objective value; (2) RANDOM: LNS which selects the neighborhood by uniformly sampling \(k^{t}\) variables without replacement; (3) LB-RELAX (Huang et al., 2022): LNS which selects the neighborhood with the LB-RELAX heuristics; (4) IL-LNS (Sonnerat et al., 2021); (5) RL-LNS (Wu et al., 2021). We compare with two more baselines in Appendix. For each ML approach, a separate model is trained for each problem on the small training set and tested on both small and large test sets. We implement IL-LNS and fine-tune its hyperparameters for each problem since the authors do not fully open source the code. IL-LNS uses the same training dataset as CL-LNS but uses only the positive samples. For RL-LNS, we use the code and hyperparameters provided by the authors and train the models with five random seeds to select one with the best performance on the validation sets. We do not compare to the approach by Song et al. (2020) since it performs worse than RL-LNS on multiple problems (Wu et al., 2021).

\begin{table} \begin{tabular}{c|c c c c|c c c c} \hline & \multicolumn{4}{c|}{Small Instances} & \multicolumn{4}{c}{Large Instances} \\ \hline Name & MVC-S & MIS-S & CA-S & SC-S & MVC-L & MIS-L & CA-L & SC-L \\ \hline \#Variables & 1,000 & 6,000 & 4,000 & 4,000 & 2,000 & 12,000 & 8,000 & 8,000 \\ \#Constraints & 65,100 & 23,977 & 2,675 & 5,000 & 135,100 & 48,027 & 5,353 & 5,000 \\ \hline \end{tabular} \end{table} Table 1: Names and the average numbers of variables and constraints of the test instances.

Figure 1: The primal gap (the lower the better) as a function of runtime, averaged over 100 test instances. For ML approaches, the policies are trained on only small training instances but tested on both small and large test instances.

Metrics. We use the following metrics to evaluate all approaches: (1) The _primal bound_ is the objective value of the ILP; (2) The _primal gap_ (Berthold, 2006) is the normalized difference between the primal bound \(v\) and a precomputed best known objective value \(v^{*}\), defined as \(\frac{|v-v^{*}|}{\max(|v|,|v^{*}|,\epsilon)}\) if \(v\) exists and \(v\cdot v^{*}\geq 0\), or 1 otherwise. We use \(\epsilon=10^{-8}\) to avoid division by zero; (3) The _primal integral_ (Achterberg et al., 2012) at time \(q\) is the integral on \([0,q]\) of the primal gap as a function of runtime.
It captures the quality of and the speed at which solutions are found; (4) The _survival rate_ to meet a certain primal gap threshold is the fraction of instances with primal gaps below the threshold (Sonnerat et al., 2021); (5) The _best performing rate_ of an approach is the fraction of instances on which it achieves the best primal gap (including ties) compared to all approaches at a given runtime cutoff. Since BnB and LNS are both anytime algorithms, we show these metrics as a function of runtime or the number of iterations in LNS (when applicable) to demonstrate their anytime performance. HyperparametersWe conduct experiments on 2.5GHz Intel Xeon Platinum 8259CL CPUs with 32 GB memory. Trainings are done on a NVIDIA A100 GPU with 40 GB memory. All experiments use the hyperparameters described below unless stated otherwise. We use SCIP (v8.0.1) (Bestuzheva et al., 2021) to solve the sub-ILP in every iteration of LNS. To run LNS, we find an initial solution by running SCIP for 10 seconds. We set the time limit to 60 minutes to solve each instance and 2 minutes for solving the sub-ILP in every LNS iteration. All approaches require a neighborhood size \(k^{t}\) in LNS, except for BnB and RL-LNS (\(k^{t}\) in RL-LNS is defined implicitly by how the policy is used). For LB-RELAX, IL-LNS and CL-LNS, the initial neighborhood size \(k^{0}\) is set to \(100,3000,1000\) and \(150\) for MVC, MIS, CA and SC, respectively, except \(k^{0}\) is set to \(150\) for SC for IL-LNS; for RANDOM, it is set to \(200,3000,1500\) and \(200\) for MVC, MIS, CA and SC, respectively. All approaches use adaptive neighborhood sizes with \(\gamma=1.02\) and \(\beta=0.5\), except for BnB and RL-LNS. For IL-LNS, when applying its learned policies, we use the sampling methods on MVC and CA instances and the greedy method on SC and MIS instances. For CL-LNS, the greedy method is used on all instances. Additional details on hyperparameter tunings are provided in Appendix. For data collection, we use different neighborhood sizes \(k^{0}=50,500,200\) and \(50\) for MVC, MIS, CA and SC, respectively, which we justify in Section 5.2. We set \(\gamma=1\) and run LNS with LB until no new incumbent solution found. The runtime limit for solving LB in every iteration is set to 1 hour. For training, we use the Adam optimizer (Kingma and Ba, 2015) with learning rate \(10^{-3}\). We use a batch size of 32 and train for 30 epochs (the training typically converges in less than 20 epochs and 24 hours). ### Results Figure 1 shows the primal gap as a function of runtime. Table 2 presents the average primal gap and primal integral at 60 minutes runtime cutoff on small and large instances, respectively (see results at 15, 30 and 45 minutes runtime cutoff in Appendix). Note that we were not able to reproduce the results on CA-S and CA-L reported in Wu et al. (2021) for RL-LNS despite using their code and repeating training with five random seeds. CL-LNS shows significantly better anytime performance than all baselines on all problems, achieving the smallest average primal gap and primal integral. It also demonstrates strong generalization performance on large instances unseen during training. 
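For reference, the primal gap and primal integral used in these comparisons can be computed from an instance's incumbent trajectory as in the short sketch below. This is not the authors' evaluation code; it treats the gap as a step function of time that equals 1 before the first incumbent is found.

```python
# Primal gap and primal integral for one instance.
# `incumbents` is a time-sorted list of (time_in_seconds, objective_value) pairs,
# `v_star` is the precomputed best known objective value.

def primal_gap(v, v_star, eps=1e-8):
    if v is None or v * v_star < 0:
        return 1.0
    return abs(v - v_star) / max(abs(v), abs(v_star), eps)

def primal_integral(incumbents, v_star, horizon):
    """Integrate the (piecewise-constant) primal gap over [0, horizon]."""
    times = [0.0] + [t for t, _ in incumbents] + [horizon]
    gaps = [1.0] + [primal_gap(v, v_star) for _, v in incumbents]
    return sum(g * (t2 - t1) for g, t1, t2 in zip(gaps, times[:-1], times[1:]))

# Example: incumbents found at 10 s and 100 s, best known value 90, 60-minute cutoff.
print(primal_integral([(10.0, 120.0), (100.0, 95.0)], v_star=90.0, horizon=3600.0))
```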
Figure 2 shows the survival rate to meet the \(1.00\%\) primal gap \begin{table} \begin{tabular}{c|c c c c} \hline & PG (\%) \(\downarrow\) & PI (\%) \(\downarrow\) & PI \(\downarrow\) \\ \hline \multicolumn{5}{c}{MVC-S} \\ \hline BnB & 1.32\(\pm\)0.43 & 66.1\(\pm\)13.1 & 5.10\(\pm\)0.69 & 222.8\(\pm\)25.9 \\ RANDOM & 0.96\(\pm\)1.26 & 38.0\(\pm\)44.8 & 0.24\(\pm\)0.14 & 22.1\(\pm\)5.0 \\ LB-RELAX & 1.38\(\pm\)1.51 & 57.0\(\pm\)5.12 & 0.65\(\pm\)0.20 & 46.9\(\pm\)5.5 \\ IL-LNS & 0.29\(\pm\)0.23 & 19.2\(\pm\)10.12 & 0.22\(\pm\)0.17 & 19.4\(\pm\)5.8 \\ RL-LNS & 0.61\(\pm\)0.34 & 29.6\(\pm\)11.5 & 0.22\(\pm\)0.14 & 17.2\(\pm\)5.2 \\ **CL-LNS** & **0.17\(\pm\)0.09** & **87.6\(\pm\)6.7** & **0.15\(\pm\)0.15** & **12.8\(\pm\)5.4** \\ \hline \multicolumn{5}{c}{CA-S} \\ \hline BnB & 2.28\(\pm\)0.59 & 137.4\(\pm\)25.9 & 1.13\(\pm\)0.95 & 86.7\(\pm\)37.9 \\ RANDOM & 5.90\(\pm\)1.02 & 235.6\(\pm\)34.9 & 2.67\(\pm\)129 & 124.3\(\pm\)45.4 \\ LB-RELAX & 1.65\(\pm\)0.57 & 140.5\(\pm\)18.3 & 0.86\(\pm\)0.83 & 63.2\(\pm\)31.6 \\ IL-LNS & 1.09\(\pm\)0.51 & 90.0\(\pm\)20.8 & 1.33\(\pm\)0.97 & 63.2\(\pm\)34.3 \\ RL-LNS & 6.32\(\pm\)1.03 & 249.2\(\pm\)35.9 & 1.10\(\pm\)0.77 & 77.8\(\pm\)28.9 \\ **CL-LNS** & **0.65\(\pm\)0.32** & **59.7\(\pm\)22.7** & **0.50\(\pm\)0.58** & **26.2\(\pm\)12.8** \\ \hline \multicolumn{5}{c}{MVC-I} \\ \hline BnB & 2.41\(\pm\)0.40 & 130.2\(\pm\)11.1 & 6.29\(\pm\)1.62 & 285.1\(\pm\)18.2 \\ RANDOM & 0.38\(\pm\)0.24 & 22.7\(\pm\)8.0 & **0.11\(\pm\)0.08** & 19.0\(\pm\)3.1 \\ LB-RELAX & 0.46\(\pm\)0.23 & 48.4\(\pm\)7.5 & 9.0\(\pm\)0.16 & 98.6\(\pm\)65.5 \\ IL-LNS & 0.27\(\pm\)0.23 & 21.2\(\pm\)8.1 & 0.29\(\pm\)0.15 & 27.1\(\pm\)5.5 \\ RL-LNS & 0.59\(\pm\)0.30 & 37.3\(\pm\)9.6 & 0.14\(\pm\)0.12 & 18.9\(\pm\)4.1 \\ **CL-LNS** & **0.05\(\pm\)0.04** & **91.3\(\pm\)3.4** & 0.12\(\pm\)0.11 & **12.9\(\pm\)4.4** \\ \hline \multicolumn{5}{c}{CA-I} \\ \hline BnB & 2.74\(\pm\)1.87 & 320.9\(\pm\)83.1 & 1.54\(\pm\)13.3 & 115.0\(\pm\)42.5 \\ RANDOM & 5.37\(\pm\)0.75 & 229.2\(\pm\)24.4 & 3.31\(\pm\)17.9 & 166.4\(\pm\)61.3 \\ LB-RELAX & 1.61\(\pm\)1.50 & 153.0\(\pm\)50.3 & 1.91\(\pm\)1.42 & 88.3\(\pm\)48.9 \\ IL-LNS & 4.56\(\pm\)0.98 & 254.2\(\pm\)33.4 & 1.72\(\pm\)1.9 & 79.1\(\pm\)4.24 \\ RL-LNS & 4.91\(\pm\)0.81 & 197.0\(\pm\)28.5 & 0.66\(\pm\)0.72 & 116.2\(\pm\)22.7 \\ **CL-LNS** & **0.09\(\pm\)0.10** & **116.1\(\pm\)18.0** & **0.58\(\pm\)0.45** & **39.2\(\pm\)23.2** \\ \hline \end{tabular} \end{table} Table 2: Primal gap (PG) (in percent), primal integral (PI) at 60 minutes runtime cutoff, averaged over 100 test instances and their standard deviations. “\(\downarrow\)” means the lower the better. For ML approaches, the policies are trained on only small training instances but tested on both small and large test instances. threshold. CL-LNS achieves the best survival rate at 60 minutes runtime cutoff on all instances, except that, on SC-L, its final survival rate is slightly worse than RL-LNS but it achieves the rate with much shorter runtime. On MVC-L, MIS-S and MIS-L instances, several baselines achieve the same survival rate as CL-LNS but it always achieves the rates with the shortest runtime. Figure 3 shows the best performing rate on the small test instances where CL-LNS consistently performs best on 50% to 100% of the instances. In Appendix, we present strong results in comparison with two more baselines and on one more performance metric. Comparison with LB (the Expert)Both IL-LNS and CL-LNS learn to imitate LB. 
On the small test instances, we run LB with two different neighborhood sizes, one that is fine-tuned in data collection and the other the same as CL-LNS, for 10 iterations and compare its per-iteration performance with IL-LNS and CL-LNS. This allows us to compare the quality of the learned policies to the expert independently of their speed. The runtime limit per iteration for LB is set to 1 hour. Figure 4 shows the primal bound as a function of the number of iterations. The table in the figure summarizes the neighborhood sizes and the average runtime per iteration. For LB, the result shows that the neighborhood size affects the overall performance. Intuitively, using a larger neighborhood size in LB allows LNS to find better incumbent solutions due to being able to explore larger neighborhoods. However, in practice, LB becomes less efficient in finding good incumbent solutions as the neighborhood size increases, and sometimes even performs worse than using a smaller neighborhood size (the one for data collection). The neighborhood size for data collection is fine-tuned on validation sets to achieve the best primal bound upon convergence, allowing the ML models to observe demonstrations that lead to as good primal bounds as possible in training. However, when using the ML models in testing, we have the incentive to use a larger neighborhood size and fine-tune it since we no longer suffer from the bottleneck of LB. We therefore fine-tune the neighborhood sizes for IL-LNS and CL-LNS separately on validation sets. CL-LNS has a strong per-iteration performance that is consistently better than IL-LNS. With the fine-tuned neighborhood size, CL-LNS even outperforms the expert that it learns from (LB for data collection) on MIS-S and CA-S.

Figure 2: The survival rate (the higher the better) over 100 test instances as a function of runtime to meet primal gap threshold 1.00%. For ML approaches, the policies are trained on only small training instances but tested on both small and large test instances.

Figure 3: The best performing rate (the higher the better) as a function of runtime on 100 small instances (see Appendix for results on large instances). The best performing rates at a given runtime might sum to more than 1 since ties are counted multiple times.

Ablation Study. We evaluate how contrastive learning and two enhancements contribute to CL-LNS's performance. Compared to IL-LNS, CL-LNS uses (1) additional features from Khalil et al. (2016) and (2) GAT instead of GCN. We denote by "FF" the full feature set used in CL-LNS and "PF" the partial feature set in IL-LNS. In addition to IL-LNS and CL-LNS, we evaluate the performance of IL-LNS with FF and GAT (denoted by IL-LNS-GAT-FF), CL-LNS with GCN and PF (denoted by CL-LNS-GCN-PF) as well as CL-LNS with GAT and PF (denoted by CL-LNS-GAT-PF) on MVC-S and CA-S. Figure 5 shows the primal gap as a function of runtime. Table 3 presents the primal gap and primal integral at 60 minutes runtime cutoff. The result shows that IL-LNS-GAT-FF, imitation learning with the two enhancements, still performs worse than CL-LNS-GCN-PF without any enhancements. CL-LNS-GCN-PF and CL-LNS-GAT-PF perform similarly in terms of the primal gaps but CL-LNS-GAT-PF has better primal integrals, showing the benefit of replacing GCN with GAT. On MVC-S, the three variants of CL-LNS have similar average primal gaps and on CA-S, CL-LNS has a better average primal gap than the other two variants.
But adding the two enhancements helps improve the primal integral, leading to the overall best performance of CL-LNS on both MVC-S and CA-S.

Figure 4: The primal bound (the lower the better) as a function of the number of iterations, averaged over 100 small test instances. LB and LB (data collection) are LNS with LB using the neighborhood sizes fine-tuned for CL-LNS and for data collection, respectively. The table shows the neighborhood size (NH size) and the average runtime in seconds (with standard deviations) per iteration for each approach.

\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline & \multicolumn{2}{c|}{MVC-S} & \multicolumn{2}{c|}{MIS-S} & \multicolumn{2}{c|}{CA-S} & \multicolumn{2}{c}{SC-S} \\ \hline & NH size & Runtime & NH size & Runtime & NH size & Runtime & NH size & Runtime \\ \hline LB & 100 & 3600\(\pm\)0 & 3,000 & 3600\(\pm\)0 & 1,000 & 3600\(\pm\)0 & 100 & 3600\(\pm\)0 \\ LB (data collection) & 50 & 3600\(\pm\)0 & 500 & 3600\(\pm\)0 & 200 & 3600\(\pm\)0 & 50 & 3600\(\pm\)0 \\ IL-LNS & 100 & 2.1\(\pm\)0.1 & 3,000 & 1.3\(\pm\)0.2 & 1,000 & 20.8\(\pm\)13.1 & 150 & 120.9\(\pm\)13 \\ CL-LNS & 100 & 2.2\(\pm\)0.1 & 3,000 & 1.3\(\pm\)0.1 & 1,000 & 25.1\(\pm\)15.3 & 100 & 50.1\(\pm\)10.4 \\ \hline \end{tabular} \end{table}

Table 3: Ablation study: Primal gap (PG) (in percent) and primal integral (PI) at 60 minutes runtime cutoff, averaged over 100 small test instances and their standard deviations. “\(\downarrow\)” means the lower the better.

Figure 5: Ablation study: The primal gap (the lower the better) as a function of time, averaged over 100 small test instances.

## 6 Conclusion

We proposed CL-LNS, which uses a contrastive loss to learn efficient and effective destroy heuristics in LNS for ILPs. We presented a novel data collection process tailored for CL-LNS and used GAT with a richer set of features to further improve its performance. Empirically, CL-LNS significantly outperformed state-of-the-art approaches on four ILP benchmarks w.r.t. the primal gap, the primal integral, the best performing rate and the survival rate. CL-LNS achieved good generalization performance on out-of-distribution instances. It is future work to learn policies that can generalize across problem domains. CL-LNS does not guarantee optimality and it is also interesting future work to integrate it in BnB, for which many other learning techniques are developed. Our approach is closely related to and could be useful for many problems of identifying substructures in combinatorial searches, for example, identifying backdoor variables in ILPs (Ferber et al., 2022) and selecting neighborhoods in LNS for other COPs.
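As a compact illustration of the contrastive objective summarized above (the InfoNCE-style loss of the training section, with similarity given by dot products and temperature \(\tau=0.07\)), a minimal single-state sketch, not the authors' code, could look like:

```python
# InfoNCE-style loss for one state: positive/negative actions are 0/1 vectors,
# similarity is the dot product with the predicted per-variable scores.
import torch

def infonce_loss(scores, positives, negatives, tau=0.07):
    """scores: (n,) policy output pi_theta(s); positives: (P, n); negatives: (N, n)."""
    pos_logits = positives @ scores / tau                      # (P,)
    neg_logits = negatives @ scores / tau                      # (N,)
    # For each positive a: -log softmax of a against {a} union all negatives.
    denom = torch.logsumexp(
        torch.cat([pos_logits.unsqueeze(1),
                   neg_logits.expand(len(pos_logits), -1)], dim=1), dim=1)
    return -(pos_logits - denom).mean()

# Toy usage with random tensors (10 variables, 3 positive and 9 negative samples).
scores = torch.rand(10)
loss = infonce_loss(scores,
                    torch.randint(0, 2, (3, 10)).float(),
                    torch.randint(0, 2, (9, 10)).float())
print(float(loss))
```

Averaging this quantity over all states in the training set recovers the loss \(\mathcal{L}(\boldsymbol{\theta})\) given in the paper.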
2310.18238
Order-2 Delaunay Triangulations Optimize Angles
The local angle property of the (order-$1$) Delaunay triangulations of a generic set in $\mathbb{R}^2$ asserts that the sum of two angles opposite a common edge is less than $\pi$. This paper extends this property to higher order and uses it to generalize two classic properties from order-$1$ to order-$2$: (1) among the complete level-$2$ hypertriangulations of a generic point set in $\mathbb{R}^2$, the order-$2$ Delaunay triangulation lexicographically maximizes the sorted angle vector; (2) among the maximal level-$2$ hypertriangulations of a generic point set in $\mathbb{R}^2$, the order-$2$ Delaunay triangulation is the only one that has the local angle property. We also use our method of establishing (2) to give a new short proof of the angle vector optimality for the (order-1) Delaunay triangulation. For order-$1$, both properties have been instrumental in numerous applications of Delaunay triangulations, and we expect that their generalization will make order-$2$ Delaunay triangulations more attractive to applications as well.
Herbert Edelsbrunner, Alexey Garber, Morteza Saghafian
2023-10-27T16:21:33Z
http://arxiv.org/abs/2310.18238v3
# Order-2 Delaunay triangulations optimize angles ###### Abstract. The _local angle property_ of the (order-1) Delaunay triangulations of a generic set in \(\mathbb{R}^{2}\) asserts that the sum of two angles opposite a common edge is less than \(\pi\). This paper extends this property to higher order and uses it to generalize two classic properties from order-1 to order-2: (1) among the complete level-2 hypertriangulations of a generic point set in \(\mathbb{R}^{2}\), the order-2 Delaunay triangulation lexicographically maximizes the sorted angle vector; (2) among the maximal level-2 hypertriangulations of a generic point set in \(\mathbb{R}^{2}\), the order-2 Delaunay triangulation is the only one that has the local angle property. For order-1, both properties have been instrumental in numerous applications of Delaunay triangulations, and we expect that their generalization will make order-2 Delaunay triangulations more attractive to applications as well.

Key words and phrases: Triangulations, higher order Delaunay triangulations, hypertriangulations, angle vectors, optimality. 2020 Mathematics Subject Classification: 05B45, 52C20, 68R05. Work by the first and third authors is partially supported by the European Research Council (ERC), grant no. 788183, by the Wittgenstein Prize, Austrian Science Fund (FWF), grant no. Z 342-N31, and by the DFG Collaborative Research Center TRR 109, Austrian Science Fund (FWF), grant no. I 02979-N35. Work by the second author is partially supported by the Alexander von Humboldt Foundation.

## 1. Introduction

The order-\(k\) Delaunay triangulation of a finite point set in \(\mathbb{R}^{2}\) generalizes the classic (order-1) Delaunay triangulation.
With the exception of Eppstein's result--which is specific to the farthest-point Delaunay triangulation--there is a paucity of optimality properties known for higher-order Delaunay triangulations, a gap which we address with three inter-related contributions:

**I:** we extend the local angle property from order-1 to order-\(k\), for \(1\leq k\leq n-1\), and show that the order-\(k\) Delaunay triangulation has this property;

**II:** we prove that among all complete level-2 hypertriangulations of a finite generic set in \(\mathbb{R}^{2}\), the order-2 Delaunay triangulation lexicographically maximizes the sorted angle vector;

**III:** we show that among all maximal level-2 hypertriangulations of a finite generic set in \(\mathbb{R}^{2}\), the order-2 Delaunay triangulation is the only one that has the local angle property.
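Not part of the paper, but as a quick numerical illustration of the order-1 local angle property referenced in contribution I: for a Delaunay triangulation of a generic planar point set, the two angles opposite every interior edge sum to less than \(\pi\). The sketch below checks this with SciPy on random points.

```python
# Numerical check of the order-1 local angle property on a Delaunay triangulation.
import numpy as np
from collections import defaultdict
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((50, 2))          # random points are generic with probability 1
tri = Delaunay(pts)

def angle_at(c, a, b):
    """Angle at vertex c in the triangle (a, b, c)."""
    u, v = pts[a] - pts[c], pts[b] - pts[c]
    return np.arccos(np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))

opposite = defaultdict(list)       # edge -> angles opposite to it
for i, j, k in tri.simplices:
    for (a, b), c in [((i, j), k), ((j, k), i), ((i, k), j)]:
        opposite[tuple(sorted((a, b)))].append(angle_at(c, a, b))

interior = {e: angs for e, angs in opposite.items() if len(angs) == 2}
print(all(sum(angs) < np.pi for angs in interior.values()))   # True for a Delaunay triangulation
```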
For ordinary triangulations, the proofs of the properties analogous to II and III follow from the existence of a sequence of edge-flips that connects any initial (complete) triangulation to the (order-1) Delaunay triangulation, such that every flip lexicographically increases the sorted angle vector. While the level-2 hypertriangulations are connected by flips introduced in [6], there are cases in which every connecting sequence contains flips that lexicographically decrease the sorted angle vector; see Section 6. Without this tool at hand, the relation between the local angle property and the sorted angle vectors is unclear, and the proofs of Properties II and III fall back to an exhaustive analysis of elementary geometric cases. This paper is organized as follows. Section 2 provides the necessary background, including level-\(k\) hypertriangulations (maximal, complete, and otherwise) and the aging function. Section 3 introduces our extension of the local angle property to order \(k\), and in Theorem 3.3 shows that the order-\(k\) Delaunay triangulation has this property. Section 4 proves Property II in Theorem 4.4 and discusses possible extensions to the class of maximal level-2 hypertriangulations and to levels beyond 2. Section 5 proves Property III in Theorem 5.4 and extends it to order 3 for points in convex position in Theorem 5.5. Finally, Section 6 concludes the paper with discussions of open questions and conjectures related to the geometry and combinatorics of Delaunay and more general hypertriangulations. ## 2. Background We follow the standard approach to points in general position used in the literature: a finite set, \(A\subseteq\mathbb{R}^{2}\), is _generic_ if no three points are collinear and no four points are cocircular. ### Triangulations and Hypertriangulations We first define the families of all triangulations and hypertriangulations of \(A\), which include the order-1 and order-\(k\) Delaunay triangulations discussed in Section 3. We write \(\operatorname{conv}A\) for the convex hull of the set \(A\). **Definition 2.1** (Triangulations).: _For a finite \(A\subseteq\mathbb{R}^{2}\), a triangulation, \(P\), of \(A\) is an edge-to-edge subdivision of \(\operatorname{conv}A\) into triangles whose vertices are points in \(A\). It is usually identified with the set of its triangles, so we write \(P=\{T_{1},T_{2},\ldots,T_{m}\}\). The triangulation is complete if every point of \(A\) is a vertex of at least one triangle, partial if it is not complete, and maximal if there is no other triangulation of the same points that subdivides it._ It is easy to see that a triangulation is maximal iff it is complete. We nevertheless introduce both concepts because they generalize to different notions for hypertriangulations, which we introduce next. For a set of \(k\) points, \(I\), we write \([I]=\frac{1}{k}\sum_{x\in I}x\) for the average of the points and, assuming \(a\not\in I\) and \(J\cap I=\emptyset\), we write \([Ia]\) and \([IJ]\) for the averages of \(I\cup\{a\}\) and \(I\cup J\), respectively. While \([I]\) is a point, we sometimes think of it as the set \(I\), in which case we call it a _label_. **Definition 2.2** (Hypertriangulations [6]).: _Let \(A\subseteq\mathbb{R}^{2}\) be generic, \(n=\#A\), \(k\) an integer between \(1\) and \(n-1\), and \(A^{(k)}=\{[I]\mid I\subseteq A,\#I=k\}\) the set of \(k\)-fold averages of the points in \(A\).
A level-\(k\) hypertriangulation of \(A\) is a possibly partial triangulation of \(A^{(k)}\) such that every edge with endpoints \([I]\) and \([J]\) satisfies \(\#(I\cap J)=k-1\)._ Observe that every triangulation of \(A\) is a level-\(1\) hypertriangulation of \(A\), and vice versa, but for \(k>1\), only a subset of the triangulations of \(A^{(k)}\) are level-\(k\) hypertriangulations of \(A\). Note also that it is possible that a point can be written as the average of more than one subset of \(k\) points in \(A\): for example, the center of a square is the \(2\)-fold average of two pairs of diagonally opposite vertices. If a level-\(k\) hypertriangulation uses such a point as a vertex, then it can use only one of the possible labels. An alternative approach to these concepts is via induced subdivisions; see [22, Chapter 9] for details, including the definitions of induced subdivisions and tight subdivisions. According to this approach, a triangulation of \(A=\{a_{1},a_{2},\ldots,a_{n}\}\) is a tight subdivision of \(\operatorname{conv}A\) induced by the projection \(\pi\colon\Delta_{n}\to\mathbb{R}^{2}\), in which \(\Delta_{n}=\operatorname{conv}\{e_{1},e_{2},\ldots,e_{n}\}\subseteq\mathbb{R} ^{n}\) is the standard \((n-1)\)-simplex, and \(\pi(e_{i})=a_{i}\), for \(i=1,2,\ldots,n\). To generalize, Olarte and Santos [15] use the level-\(k\) hypersimplex, \(\Delta_{n}^{(k)}\), which is the convex hull of the \(k\)-fold averages of the \(e_{i}\) in \(\mathbb{R}^{n}\), and define a level-\(k\) hypertriangulation of \(A\) as a tight subdivision of \(A^{(k)}\) induced by the same projection \(\pi\) restricted to \(\Delta_{n}^{(k)}\). In this setting, the constraint to use only one label for each vertex is implicit. ### The Aging Function A triangle in a level-\(k\) hypertriangulation can be classified into two types. Letting \([I],[J],[K]\) be its vertices, each the average of \(k\) points, we say the triangle is * _black_, if \(\#(I\cap J\cap K)=k-2\); * _white_, if \(\#(I\cap J\cap K)=k-1\). In other words, vertices of black triangles are labeled \([Xab],[Xac],[Xbc]\), for some \(X\) of size \(k-2\), and vertices of white triangles are labeled \([Ya],[Yb],[Yc]\), for some \(Y\) of size \(k-1\). Our next definition allows for transformations from white to black triangles. **Definition 2.3** (Aging Function).: _Letting \(T\) be a white triangle with vertices \([Ya],[Yb],[Yc]\), the aging function maps \(T\) to the black triangle, \(F(T)\), with vertices \([Yab],[Yac],[Ybc]\)._ The aging function increases the level of the triangle by one, hence the name. Correspondingly, the inverse aging function maps a black triangle to a white triangle one level lower. To extend this definition to hypertriangulations, we say a level-\(k\) hypertriangulation, \(P_{k}\), _ages_ to a level-\((k+1)\) hypertriangulation, \(P_{k+1}\), denoted \(P_{k+1}=F(P_{k})\). if the aging function defines a bijection between the white triangles in \(P_{k}\) and the black triangles in \(P_{k+1}\). Note however that the aging of \(P_{k}\) is not unique as it says nothing about the white triangles of \(P_{k+1}\). This notion is useful to obtain structural results for the family of all level-\(k\) hypertriangulations. For example, [6] has shown that every level-\(2\) hypertriangulation is an aging of a level-\(1\) hypertriangulation. For the special case in which the points are in convex position, [9] has extended this result to all levels, \(k\). 
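For concreteness, the label bookkeeping behind \(A^{(k)}\), the black/white classification, and the aging function can be sketched in a few lines of Python. The representation of labels as frozensets of point indices and the function names below are illustrative choices only, not notation used elsewhere in the paper.

```python
from itertools import combinations

def k_fold_averages(points, k):
    """Map each label (a frozenset of k point indices) to the average of those points."""
    averages = {}
    for idx in combinations(range(len(points)), k):
        xs = [points[i] for i in idx]
        averages[frozenset(idx)] = (sum(p[0] for p in xs) / k, sum(p[1] for p in xs) / k)
    return averages

def classify(I, J, K):
    """Classify a triangle with vertex labels I, J, K (frozensets of equal size k):
    black if the labels share k-2 indices, white if they share k-1 indices."""
    k = len(I)
    common = I & J & K
    if len(common) == k - 2:
        return "black"
    if len(common) == k - 1:
        return "white"
    raise ValueError("not a triangle of a level-k hypertriangulation")

def age(I, J, K):
    """Aging function: send the white triangle [Ya], [Yb], [Yc] to the black
    triangle [Yab], [Yac], [Ybc] one level higher."""
    if classify(I, J, K) != "white":
        raise ValueError("aging is defined for white triangles only")
    Y = I & J & K
    a, b, c = I - Y, J - Y, K - Y        # the three single extra indices
    return (Y | a | b, Y | a | c, Y | b | c)

# Example: the white level-1 triangle {0}, {1}, {2} ages to the black
# level-2 triangle {0,1}, {0,2}, {1,2}.
print(age(frozenset({0}), frozenset({1}), frozenset({2})))
```

In particular, applying `age` to every triangle of an ordinary (level-1) triangulation produces exactly the black triangles of one of its agings, in line with the definition above.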
However, for points in possibly non-convex position, there are obstacles to applying the aging function. An example of a level-\(2\) hypertriangulation, \(P_{2}\), for which \(F(P_{2})\) does not exist is given in [6, 15]. For later reference, we compile several results about the relation between level-\(1\) and level-\(2\) hypertriangulations obtained in [6]. Given a vertex, \(x\), in a triangulation, \(P\), we define the _star_ of \(x\) as the union of triangles that share \(x\), denoted \(\operatorname{st}(P,x)\), and shrinking the star by a factor two toward \(x\), we get \([\operatorname{st}(P,x),x]=\frac{1}{2}(\operatorname{st}(P,x)+x)\), which is the set of midpoints between \(x\) and any point \(y\in\operatorname{st}(P,x)\). Observe that the shrunken star is contained in \(\operatorname{conv}A^{(2)}\) iff \(x\) is an interior vertex of \(P\). Indeed, \(x\) necessarily belongs to the shrunken star, but if \(x\) is a convex hull vertex, then \(x\) lies outside \(\operatorname{conv}A^{(2)}\). **Lemma 2.4** (Aging Function for Triangulations).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, and recall that every level-\(1\) hypertriangulation is just a triangulation._ * _For every level-_\(1\) _hypertriangulation,_ \(P\)_, of_ \(A\)_, there exists a level-_\(2\) _hypertriangulation,_ \(P_{2}\)_, such that_ \(P_{2}=F(P)\)_._ * _For every level-_\(2\) _hypertriangulation,_ \(P_{2}\)_, of_ \(A\)_, there exists unique level-_\(1\) _hypertriangulation,_ \(P\)_, such that_ \(P_{2}=F(P)\)_._ * _If_ \(P_{2}=F(P)\) _and_ \(x\in A\) _is a vertex of_ \(P\)_, then the union of white triangles in_ \(P_{2}\) _that have_ \(x\) _in all their vertex labels is_ \([\operatorname{st}(P,x),x]\cap\operatorname{conv}A^{(2)}\) Since \([\operatorname{st}(P,x),x]\cap\operatorname{conv}A^{(2)}\neq[\operatorname{st}(P,x),x]\) iff \(x\) is a convex hull vertex, the third claim implies that for each interior vertex, \(x\), scaled versions of the mentioned white triangles in \(P_{2}\) tile the star of \(x\) in \(P\). ### Maximal and Complete Hypertriangulations The Delaunay triangulation of a finite set is optimal among all complete triangulations, but not necessarily among the larger family of possibly partial triangulations of the set. In this section, we introduce two families of level-\(2\) hypertriangulations to which we compare the order-\(2\) Delaunay triangulation. **Definition 2.5** (Complete and Maximal Level-\(2\) Hypertriangulations).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic. A level-\(2\) hypertriangulation of \(A\) is complete if its black triangles are the images under the aging function of the triangles in a complete triangulation of \(A\), and it is maximal if no other level-\(2\) hypertriangulation subdivides it._ The notion of maximality extends to level-\(k\) hypertriangulations, while completeness does not since there are counterexamples to the existence of the aging function from level \(2\) to level \(3\); see Figure 8 in [6], which is based on Example 5.1 in [15]. For \(k=1\), a triangulation of a finite and generic set is complete iff it is maximal. An easy way to see this is by counting the triangles in a possibly partial triangulation of \(A\subseteq\mathbb{R}^{2}\). Write \(H\subseteq A\) for the vertices of the convex hull of \(A\), and set \(n=\#A\) and \(h=\#H\). The vertex set of a partial triangulation can be any subset of \(A\) that contains all points in \(H\). Let \(m\) be the number of extra points, so the triangulation has \(m+h\) vertices. 
We can add \(h-3\) (curved) edges to turn the triangulation into a maximally connected planar graph, which has \(3(m+h)-6\) edges and \(2(m+h)-4\) faces, including the outside. Hence, the triangulation has \(3(m+h)-6-(h-3)=3m+2h-3\) edges and \(2(m+h)-4-(h-2)=2m+h-2\) triangles. For a complete triangulation, we have \(m=n-h\) and therefore \(2n-h-2\) triangles. If a triangulation has fewer than this number, then its vertex set misses at least one point, which we can add by subdivision. Hence, the triangulation is complete iff it is maximal. The situation is slightly more complicated for level-\(2\) hypertriangulations. **Lemma 2.6** (Complete Implies Maximal).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic. Then any two maximal level-\(2\) hypertriangulations have the same number of triangles, and every complete level-\(2\) hypertriangulation is maximal._ Proof.: To prove the first claim, let \(n=\#A\), \(h=\#H\), and consider a level-\(2\) hypertriangulation, \(P_{2}\), aged from a possibly partial triangulation, \(P\), with \(m+h\leq n\) vertices. Note that \(P\) has \(2m+h-2\) triangles, so \(P_{2}\) has the same number of black triangles. To count the white triangles in \(P_{2}\), we recall that each white region corresponds to the star of a vertex of \(P\). If \(a\) is a vertex in the interior of \(\operatorname{conv}A\), then the white region is the shrunken star, \([\operatorname{st}(P,a),a]\). We modify \(P_{2}\) so this is also true for each vertex, \(b\), of \(\operatorname{conv}A\). To this end, we consider all boundary edges of \(P_{2}\) that connect vertices \(a^{\prime}=[ba]\) and \(c^{\prime}=[bc]\), and add the triangle \(a^{\prime}bc^{\prime}\) to \(P_{2}\). The number of thus added triangles depends on the convex hull of the midpoints of pairs but not on how this convex hull is decomposed into triangles. The benefit of this modification is that we now have exactly \(m+h\) white regions, each a star-convex polygon, and each edge of \(P\) contributes a vertex to exactly two of the white regions. Not forgetting the \(h\) vertices added during the modification, this implies that the total number of edges of the \(m+h\) white regions is \(2(3m+2h-3)+h=6m+5h-6\). Every triangulation of a \(j\)-gon has \(j-2\) triangles, so the total number of triangles in the white regions is \((6m+5h-6)-2(m+h)=4m+3h-6\). We now turn our attention to the \(n-h-m\) points of \(A\) that are not vertices of \(P\). Let \(x\) be such a point and \(abc\) the triangle in \(P\) that contains \(x\) in its interior. Hence, \([xa]\) lies in the interior of \([\operatorname{st}(P,a),a]\), and similarly for \(b\) and \(c\). To maximally subdivide \(P_{2}\), we thus add \(3(n-h-m)\) points in the interiors of the white regions, which increases the number of white triangles to \((4m+3h-6)+6(n-h-m)=6n-2m-3h-6\). Adding to this the \(2m+h-2\) black triangles, we get a total of \(6n-2h-8\) triangles. To get the number of triangles in this maximal triangulation, we still need to correct for the triangles added during the initial modification of \(P_{2}\). But their number does not depend on \(m\), so neither does the final triangle count. Hence, all maximal level-2 hypertriangulations of \(A\) have the same number of triangles. To get the second claim, observe that we have \(m=0\) whenever \(P_{2}\) is complete. Hence, we get the same number of triangles as just calculated, but without subdivision. It follows that \(P_{2}\) is maximal. ## 3. 
The Local Angle Property In this section, we define order-\(k\) Delaunay triangulations as special level-\(k\) hypertriangulations, introduce the local angle property for level-\(k\) hypertriangulations, and show that the order-\(k\) Delaunay triangulations have the local angle property. This property specializes to the standard local angle property that characterizes (order-1) Delaunay triangulations as well as their constrained versions. ### Higher Order Delaunay Triangulations We introduce the order-\(k\) Delaunay triangulation of a finite set as a special level-\(k\) hypertriangulation of this set; but see [1] for a more geometric definition. **Definition 3.1** (Order-\(k\) Delaunay Triangulation).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, and \(k\) an integer between \(1\) and \(\#A-1\). We construct a particular level-\(k\) hypertriangulation of \(A\):_ * _a black triangle with vertices_ \([Xab],[Xac],[Xbc]\) _belongs to this hypertriangulation if_ \(X\subseteq A\) _is the set of points inside the circumcircle of_ \(abc\)_, and_ \(\#X=k-2\) * _a white triangle with vertices_ \([Ya],[Yb],[Yc]\) _belongs to this hypertriangulation if_ \(Y\subseteq A\) _is the set of points inside the circumcircle of_ \(abc\)_, and_ \(\#Y=k-1\)_._ _This hypertriangulation is called the order-\(k\) Delaunay triangulation of \(A\) and denoted \(\operatorname{Del}_{k}(A)\)._ While it may not be obvious that the above triangles form a triangulation of \(A^{(k)}\), it can be seen, for example, by lifting the points of \(A\) onto a paraboloid in \(\mathbb{R}^{3}\), and then considering the lower surface of the convex hull of the \(k\)-fold averages, which project to the points in \(A^{(k)}\). Another way to construct \(\operatorname{Del}_{k}(A)\) is from the dual order-\(k\) Voronoi tessellation, as illustrated for \(k=2\) in Figure 1. Note that for \(k=1\), we get precisely the Delaunay triangulation of \(A\), as all triangles are white and satisfy the empty circle criterion. For \(k=\#A-1\), we get the (scaled and centrally inverted copy of) the farthest-point Delaunay triangulation [7]. Each of its triangles is black, and every point of \(A\) is either a vertex or inside the circumcircle of the triangle. Moreover, the aging function applies, and we have \(\operatorname{Del}_{k+1}(A)=F(\operatorname{Del}_{k}(A))\) for every \(1\leq k<\#A-1\). ### Angles of Black and White Triangles We now generalize the local angle property from order-\(1\) to order-\(k\). For \(2\leq k\leq\#A-2\), we have black as well as white triangles. Hence, there are three types of interior edges: those shared by two white triangles, two black triangles, and a white and a black triangle. We have a different condition for each type. **Definition 3.2** (Local Angle Property).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic. A level-\(k\) hypertriangulation of \(A\) has the local angle property if_ Figure 1: The (_blue_) order-\(2\) Delaunay triangulation drawn on top of the (_black_) order-\(2\) Voronoi tessellation. Not all parts of the order-\(2\) Voronoi tessellation are visible in the rectangular window. 
* (ww) _for every edge shared by two white triangles, the sum of the two angles opposite the edge is at most_ \(\pi\)_;_ * (bb) _for every edge shared by two black triangles, the sum of the two angles opposite the edge is at least_ \(\pi\)_;_ * (bw) _for every edge shared by a black triangle and a white triangle, the angle opposite the edge in the black triangle is bigger than the angle opposite the edge in the white triangle._ For \(k=1\), there are no black triangles, so (bb) and (bw) are void. Delaunay [4] proved that the local angle property characterizes the (closest-point) Delaunay triangulation among all (complete) triangulations of a finite point set, and this was used by Lawson [12] to construct the triangulation by repeated edge flipping. Symmetrically, for \(k=\#A-1\), there are no white triangles, so (ww) and (bw) are void. Eppstein [7] proved the local angle property for the (farthest-point) Delaunay triangulation, and the convergence of the flip-algorithm implies that it is the only (not necessarily complete) triangulation of the points that has this property. The goal of this section is to extend these results to level-\(k\) hypertriangulations. ### All Delaunay Triangulations Have the Local Angle Property We prove that the Delaunay triangulations of any order have the local angle property. This extends the results from \(k=1,\#A-1\) to any order between these limits. **Theorem 3.3** (Order-\(k\) Delaunay Triangulations have Local Angle Property).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic. Then for every integer \(1\leq k\leq\#A-1\), the order-\(k\) Delaunay triangulation of \(A\) has the local angle property._ Proof.: Recall that white triangles of the order-\(k\) Delaunay triangulation of \(A\) have vertices \([Ya]\), \([Yb]\), \([Yc]\), in which \(Y\subseteq A\) with \(\#Y=k-1\), such that all points of \(Y\) are inside and all other points of \(A\) are outside the circumcircle of \(abc\). Similarly, its black triangles have vertices labeled \([Xab]\), \([Xac]\), \([Xbc]\), in which \(X\subseteq A\) with \(\#X=k-2\), such that all points of \(X\) are inside and all other points of \(A\) are outside this circumcircle. We establish each of the three conditions separately. (ww): Let \([Ya],[Yb],[Yc]\) and \([Yb],[Yc],[Yd]\) be the vertices of two adjacent white triangles in the order-\(k\) Delaunay triangulation of \(A\), and note that the points of \(Y\) lie inside and \(d\) lies outside the circumcircle of \(abc\); see the left panel of Figure 2. The triangles \(abc\) and \(bcd\) are homothetic copies of these two white triangles, which implies that \(a\) and \(d\) lie on opposite sides of \(bc\). Hence, \(\measuredangle bac+\measuredangle bdc<\pi\), because \(d\) is outside the circumcircle. (ww) follows. (bb): Let \([Zabc]\), \([Zabd]\), \([Zacd]\) and \([Zabd]\), \([Zacd]\), \([Zbcd]\) be the vertices of adjacent black triangles in the order-\(k\) Delaunay triangulation of \(A\), and note that the points of \(Z\) and \(d\) lie inside the circumcircle of \(abc\); see the middle panel of Figure 2. The triangles \(bcd\) and \(abc\) are homothetic copies of these two black triangles, which implies that \(a\) and \(d\) are on opposite sides of \(bc\). Hence, \(\measuredangle bac+\measuredangle bdc>\pi\), because \(d\) is inside the circumcircle. (bb) follows.
(bw): Let \([Xab]\), \([Xac]\), \([Xbc]\) and \([Xab]\), \([Xac]\), \([Xad]\) be the vertices of a black triangle and an adjacent white triangle in the order-\(k\) Delaunay triangulation of \(A\), and note that the points of \(X\) lie inside while \(d\) lies outside the circumcircle of \(abc\); see the right panel of Figure 2. The triangles \(abc\) and \(bcd\) are homothetic copies of the black and white triangles, with negative and positive homothety coefficients, respectively, which implies that \(a\) and \(d\) lie on the same side of \(bc\). Thus, \(\measuredangle bac>\measuredangle bdc\), because \(d\) is outside the circumcircle. (bw) follows. We conjecture that the order-\(k\) Delaunay triangulation is the only level-\(k\) hypertriangulation with maximally many triangles that has the local angle property. For later reference, we refer to this as the _Local Angle Conjecture_ for hypertriangulations. ### Constrained Delaunay Triangulations Given a bounded polygonal region, \(R\), it is always possible to find a triangulation, \(P\), of its vertices (the endpoints of its edges) that contains all edges of the region. Hence, every triangle of \(P\) lies either completely inside or completely outside the region. The _restriction_ of \(P\) to \(R\) consists of the triangles inside \(R\), and we call this restriction a _triangulation_ of \(R\). For some choices of \(P\), the restriction to \(R\) looks locally like the Delaunay triangulation, namely when every edge that passes through the interior of \(R\) satisfies (ww). It is not difficult Figure 2: From _left_ to _right:_ an edge shared by two white triangles, two black triangles, a black triangle and a white triangle. _Top row:_ the adjacent triangles in the order-\(k\) Delaunay triangulation. The vertex labels encode the locations of the vertices as averages of the listed points. _Bottom row:_ the corresponding triangles spanned by the original points. to see that such choices of triangulations exist and that their restriction to \(R\) is generically unique: run Lawson's algorithm on an initial triangulation of \(R\), flipping an interior edge whenever the sum of the two opposite angles exceeds \(\pi\). This is the _constrained Delaunay triangulation_ of \(R\), as introduced in 1989 by Paul Chew [2], but see also [11]. A triangle \(uvw\) belongs to this specific triangulation iff it is contained in \(R\) and its circumcircle does not enclose any vertex that is visible from points inside the triangle. We state a weaker sufficient condition for later reference. **Lemma 3.4** (Triangles and Edges in Constrained Delaunay Triangulation).: _Let \(R\) be a bounded polygonal region in \(\mathbb{R}^{2}\), assume its vertex set is generic, and let \(u,v,w\) be vertices of \(R\). If the triangle \(uvw\) is contained in \(R\), and its circumcircle does not enclose any vertex of \(R\), then \(uvw\) is a triangle in the constrained Delaunay triangulation of \(R\). Similarly, if the edge \(uv\) is contained in \(R\) but is not an edge of \(R\), and it has a circumcircle that does not enclose any vertex of \(R\), then \(uv\) is an edge of the constrained Delaunay triangulation of \(R\)._ We use constrained Delaunay triangulations to decompose white regions in aged hypertriangulations.
To explain, let \(P\) be a complete triangulation of a finite and generic set, \(A\subseteq\mathbb{R}^{2}\), let \(x\in A\) be a vertex of this triangulation, call \(\operatorname{wh}(P,x)=\operatorname{st}(P,x)\cap\operatorname{conv}\left(A \setminus\{x\}\right)\) the _white region_ of \(x\) in \(P\), and let \(P(x)\) be a triangulation of \(\operatorname{wh}(P,x)\). Note that \(\operatorname{wh}(P,x)=\operatorname{st}(P,x)\) if \(x\) is an interior vertex, and \(\operatorname{wh}(P,x)\subsetneq\operatorname{st}(P,x)\) if \(x\) is a convex hull vertex. In the special case in which \(P\) is the order-1 Delaunay triangulation and \(P(x)\) is the constrained Delaunay triangulation of \(\operatorname{wh}(P,x)\) for each \(x\in A\), these sets contains all white triangles in the order-2 Delaunay triangulation, albeit the latter are only half the size. More generally, we use the constrained Delaunay triangulations of the white regions to disambiguate the aging function. This is done extensively in the proofs of our main results in Sections 4 and 5. ## 4. Optimality of the Sorted Angle Vector In this section, we prove the first main result of this paper in an exhaustive case analysis. With the exception of Section 4.4, we work only with complete level-2 hypertriangulations. To aid the discussion, we begin by introducing convenient terminology and stating a few elementary lemmas. ### Triangulations and Angle Vectors Let \(A\subseteq\mathbb{R}^{2}\) be a finite set of points, and let \(P\) be a complete triangulation of \(A\), and write \(P_{2}=F(P)\) for the (complete) level-2 hypertriangulation whose white regions are decomposed by constrained Delaunay triangulations. We prefer to work with the original points of \(A\), rather than the midpoints of its pairs. We therefore write \(\Phi_{2}=f(P)\) for the collection of triangles in \(P\), together with the triangles in the constrained Delaunay triangulations of the \(\operatorname{wh}(P,x)\), with \(x\in A\). Consistent with the earlier convention, we call the triangles of \(\Phi_{2}\) in \(P\)_black_ and the other triangles of \(\Phi_{2}\)_white_. Accordingly, we write \(\operatorname{Black}(\Phi_{2})\) for the black triangles in \(\Phi_{2}\), and \(\operatorname{White}(\Phi_{2},x)\) for the white triangles in \(\Phi_{2}\) that triangulate \(\operatorname{wh}(P,x)\). There is a bijection between \(\Phi_{2}\) and \(P_{2}\) such that the corresponding triangles are similar (scaled by a factor \(\frac{1}{2}\) and possibly inverted), so the triangles in \(\Phi_{2}\) and \(P_{2}\) define the same angles. Letting \(m\) be the number of triangles, we write \(\operatorname{Vector}(P_{2})=\operatorname{Vector}(\Phi_{2})=(\varphi_{1}, \varphi_{2},\dots,\varphi_{3m})\) for the vector of angles, which we order such that \(\varphi_{i}\leq\varphi_{i+1}\) for \(1\leq i\leq 3m-1\). Repeating the construction with another (maximal) triangulation \(Q\) of \(A\), we get another (complete) level-2 hypertriangulation of \(m\) black and white triangles, \(Q_{2}\), and another increasing angle vector, \(\operatorname{Vector}(Q_{2})=\operatorname{Vector}(\Psi_{2})=(\psi_{1}, \psi_{2},\dots,\psi_{3m})\), in which \(\Psi_{2}=f(Q)\). It is _lexicographically larger_ than the vector of \(\Phi_{2}\), denoted \(\operatorname{Vector}(\Phi_{2})\prec\operatorname{Vector}(\Psi_{2})\), if there exists an index \(1\leq p\leq m\) such that \(\varphi_{i}=\psi_{i}\), for \(1\leq i\leq p-1\), and \(\varphi_{p}<\psi_{p}\). 
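In computational terms, the increasing angle vector and the comparison \(\prec\) are straightforward to evaluate. The Python sketch below is illustrative only: the plain-tuple representation of triangles and the helper names are our own assumptions.

```python
import math

def triangle_angles(a, b, c):
    """The three interior angles (in radians) of the triangle abc."""
    def angle_at(p, q, r):
        # angle at vertex p between the rays p->q and p->r
        v = (q[0] - p[0], q[1] - p[1])
        w = (r[0] - p[0], r[1] - p[1])
        cos = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
        return math.acos(max(-1.0, min(1.0, cos)))
    return [angle_at(a, b, c), angle_at(b, a, c), angle_at(c, a, b)]

def sorted_angle_vector(triangles):
    """Increasing vector of all 3m angles of m triangles (each a tuple of 3 points)."""
    return sorted(angle for t in triangles for angle in triangle_angles(*t))

def lex_smaller(phi, psi):
    """True iff phi strictly precedes psi in the lexicographic order on angle vectors."""
    for x, y in zip(phi, psi):
        if x != y:
            return x < y
    return False    # equal vectors are not strictly smaller

# Example: two triangulations of a convex quadrilateral, one per diagonal.
a, b, c, d = (0.0, 0.0), (3.0, 0.0), (3.5, 1.0), (0.0, 1.0)
P = [(a, b, d), (b, c, d)]
Q = [(a, b, c), (a, c, d)]
print(lex_smaller(sorted_angle_vector(P), sorted_angle_vector(Q)))
```

The same routine applies verbatim to the triangles of \(\Phi_{2}\) and \(\Psi_{2}\), since corresponding triangles in \(P_{2}\) and \(\Phi_{2}\) are similar and therefore define the same angles.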
We write \(\operatorname{Vector}(\Phi_{2})\preceq\operatorname{Vector}(\Psi_{2})\) to allow for the possibility of equal angle vectors. This notation is useful because it is possible that two different triangulations, \(P\neq Q\), have the same angle vector. For example, if \(A\) has only 4 points and they are in convex position, then there are only two different triangulations of \(A\), and the black triangles in the level-2 hypertriangulation of one are the white triangles in the level-2 hypertriangulations of the other, and vice versa. ### Elementary Lemmas If \(uvw\) is a triangle in \(\operatorname{White}(\Phi_{2},x)\), then it is not possible that \(u\) lies inside \(xvw\). This is true independent of how we triangulate \(\operatorname{wh}(P,x)\): **Lemma 4.1** (Star-convex Triangulation).: _Let \(uvw\) be a triangle in \(\operatorname{White}(\Phi_{2},x)\). Then either \(x\) is inside \(uvw\) or \(x,u,v,w\) are the vertices of a convex quadrangle._ Proof.: Assume first that \(x\) is an interior vertex, so \(\operatorname{conv}\left(A\setminus\{x\}\right)=\operatorname{conv}A\). Since \(\operatorname{wh}(P,x)\) is star-convex, with \(x\) in its kernel, every half-line emanating from \(x\) intersects the boundary of \(\operatorname{wh}(P,x)\) in exactly one point. Now suppose \(u\) lies inside the triangle \(xvw\), and consider the half-line emanating from \(x\) that passes through \(u\). Since \(x\) lies in the interior of \(\operatorname{wh}(P,x)\), the half-line goes from inside to outside the region as it passes through \(u\). But it also enters the triangle \(uvw\), which lies inside \(\operatorname{wh}(P,x)\). This is a contradiction because entering and leaving \(\operatorname{st}(P,x)\) at the same time is impossible. Assume second that \(x\) is a vertex of \(\operatorname{conv}A\), so \(\operatorname{conv}\left(A\setminus\{x\}\right)\neq\operatorname{conv}A\). Since \(uvw\) is a triangle in \(\operatorname{wh}(P,x)\), it is also a triangle in \(\operatorname{st}(P,x)\). Furthermore, \(u,v,w\) are points on the boundary of \(\operatorname{st}(P,x)\), and every half-line emanating from \(x\) that has a non-empty intersection with the interior of \(\operatorname{conv}A\) intersects this boundary in exactly one point. Assuming \(u\) lies inside \(xvw\), we can now repeat the argument of the first case and get a contradiction because the half-line passing through \(u\) both enters and leaves \(\operatorname{st}(P,x)\) when it passes through \(u\) Every point \(x\in A\) belongs to at least two edges in \(P\). However, if \(x\) belongs to only two edges, then every line that crosses both edges necessarily separates \(x\) from all points in \(A\setminus\{x\}\). We state and prove a generalization of this observation. **Lemma 4.2** (Splitting a Triangulation).: _Let \(P\) be a triangulation of a finite set \(A\subseteq\mathbb{R}^{2}\), let \(L\) be a line, and let \(Q\) be the vertices and edges of \(P\) that are disjoint of \(L\). Then \(Q\) consists of at most two connected components, one on each side of \(L\)._ Proof.: Assume without loss of generality that \(L\) is horizontal, and let \(A^{\prime}\subseteq A\) contain all points strictly above \(L\). The boundary of \(\operatorname{conv}A\) is a closed convex curve, \(\gamma\), and we write \(\gamma^{\prime}\subseteq\gamma\) for the vertices and edges strictly above \(L\). 
Every point \(a\in A^{\prime}\) is either a vertex of \(\gamma^{\prime}\), or there is an edge \(ab\) in \(P\), with \(b\) above \(L\) and further from \(L\) than \(a\). Hence, \(ab\in Q\). We can therefore trace a path from \(a\) that eventually reaches a vertex of \(\gamma^{\prime}\) in \(Q\), which implies that the part of \(Q\) strictly above \(L\) is either empty or connected. Symmetrically, the part of \(Q\) strictly below \(L\) is either empty or connected, which implies the claim. By construction, the interior points of a black triangle, \(abc\in P\), belong to \(\operatorname{st}(P,a)\), \(\operatorname{st}(P,b)\), \(\operatorname{st}(P,c)\) but not to the stars of any other vertices. Hence, only the white triangles used in the triangulation of these three stars can possibly share interior points with \(abc\). If a white triangle shares one or two of the vertices with \(abc\), then this further restricts the stars this white triangle may help triangulate. **Lemma 4.3** (Shared Interior Points).: _Let \(P\) be a triangulation of a finite set \(A\subseteq\mathbb{R}^{2}\), let \(abc\) be a black triangle and \(uvw\) a white triangle in \(\Phi_{2}=f(P)\), and suppose that \(abc\) and \(uvw\) share interior points._ 1. _If_ \(u=a\) _and_ \(v=b\)_, then_ \(uvw\in\operatorname{White}(\Phi_{2},c)\)_._ 2. _If_ \(v=b\) _is the only shared vertex between_ \(abc\) _and_ \(uvw\)_, then_ \(uw\) _cannot cross_ \(ab\) _and_ \(bc\)_._ 3. _If_ \(v=b\) _and_ \(uw\) _crosses_ \(bc\)_, then_ \(uvw\in\operatorname{White}(\Phi_{2},c)\)_._ 4. \(uvw\in\operatorname{White}(\Phi_{2},x)\) _for only one point_ \(x\in A\)_._ Proof.: (1) is immediate because \(c\) is the only vertex of \(abc\) that is not also a vertex of \(uvw\). To see (2), assume that \(uw\) crosses \(ab\) and also \(bc\). Then \(uvw\) shares interior points with three black triangles in \(\Phi_{2}\), namely \(abc\) and the neighboring triangles that share \(ab\) and \(bc\) with \(abc\). The only common vertex of the three black triangles is \(b\), so \(uvw\in\operatorname{White}(\Phi_{2},b)\), but this is impossible because \(b=v\). To see (3), note that \(uvw\) shares interior points with two black triangles: \(bac\) and the black triangle on the other side of \(bc\). Hence, \(uvw\) is contained in \(\operatorname{st}(P,b)\) or \(\operatorname{st}(P,c)\). Since \(b=v\), the only remaining choice is \(uvw\in\operatorname{White}(\Phi_{2},c)\). To see (4), consider first the case that \(uvw\) shares interior points with only two black triangles, \(abc\) and \(bcd\). Then one of its edges, say \(uv\) crosses \(bc\), so \(u=a\) and \(w=d\). But \(v\) cannot lie in the interior of the two black triangles or its edges, so \(v=b\). Then \(c\) is the only remaining point such that \(uvw\in\mathrm{White}(\Phi_{2},c)\). If \(uvw\) shares interior points with three or more black triangles, then the black triangles share only one common vertex, \(x\), hence \(uvw\in\mathrm{White}(\Phi_{2},x)\). ### Global Optimality The first main result of this paper asserts that Sibson's theorem on increasing angle vectors extends from order-1 to order-2 Delaunay triangulations. **Theorem 4.4** (Angle Vector Optimality).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, \(P\) a complete triangulation of \(A\), \(\Phi_{2}=f(P)\), and \(\Delta_{2}=f(\mathrm{Del}(A))\). Then \(\mathrm{Vector}(\Phi_{2})\preceq\mathrm{Vector}(\Delta_{2})\)._ Proof.: Write \(D=\mathrm{Del}(A)\), so \(\Delta_{2}=f(D)\). 
The genericity of \(A\) implies that \(D\) and \(\Delta_{2}\) are unique, but there may be two or more triplets of points that define the same angle. It will be convenient to have distinct angles, so we first apply a perturbation that preserves the order of unequal angles while making equal angles different. The relation for the perturbed points implies the same but possibly non-strict relation for the original points since undoing the perturbation does not change the order of any two angles. So assume that the angles defined by the points in \(A\) are distinct, and to derive a contradiction, assume \(\mathrm{Vector}(\Delta_{2})\prec\mathrm{Vector}(\Phi_{2})\). More specifically, we write \(\alpha_{1}<\alpha_{2}<\ldots<\alpha_{3m}\) and \(\varphi_{1}<\varphi_{2}<\ldots<\varphi_{3m}\) for the angles of \(\Delta_{2}\) and \(\Phi_{2}\), respectively, and we assume \(\alpha_{i}=\varphi_{i}\), for \(1\leq i\leq p-1\), and \(\alpha_{p}<\varphi_{p}\), for some \(1\leq p\leq m\). In other words, \(p\) is the first index at which the two angle vectors differ, and the \(p\)-th angle of \(\Delta_{2}\) is smaller than the \(p\)-th angle of \(\Phi_{2}\). Write \(\alpha=\alpha_{p}\) and let \(bac\in\Delta_{2}\) be the triangle with \(\alpha=\measuredangle bac\). By the assumption of distinct angles, \(bac\not\in\Phi_{2}\). To simplify the discussion of the various cases, we assume without loss of generality that * the line, \(L\), that passes through \(b\) and \(c\) is horizontal; * the triangle \(bac\), and therefore the vertex \(a\), lie above \(L\); see Figures 3 and 4. We first consider the case in which \(bac\) is a black triangle. There are three subcases, and in each we get a contradiction by constructing two triangles that share interior points. Note that two white triangles may share interior points, but not if they triangulate the same star. Case 1: \(bac\) **is a black triangle in \(\Delta_{2}\).** By definition of \(D=\mathrm{Del}(A)\), \(bac\) does not contain a point of \(A\) in its interior, and if \(x\in A\setminus\{a\}\) lies above \(L\), then the angle \(\measuredangle bxc\) is strictly smaller than \(\alpha\). We say a collection of triangles _covers the upper side_ of the edge \(bc\) if every interior point of \(bc\) has an open neighborhood whose intersection with the closed half-plane above \(L\) is contained in the union of these triangles. The black triangles in \(\Phi_{2}\) cover the entire convex hull of \(A\) and therefore also the upper side of \(bc\). It is possible that a single black triangle in \(\Phi_{2}\) suffices for this purpose, and this is our first subcase. Case 1.1: **the upper side of \(bc\) is covered by a single triangle, \(bxc\in\mathrm{Black}(\Phi_{2})\),** as in Figure 3 on the left. Since \(\measuredangle bxc<\alpha\), \(bxc\) must be a white triangle in \(\Delta_{2}\). Specifically, since \(a\) and \(x\) are both above \(L\), and \(a\) lies inside the circumcircle of \(bxc\), we have \(bxc\in\mathrm{White}(\Delta_{2},a)\). To get a contradiction, we construct a second such white triangle. Since there are at least two points of \(A\) above \(L\), Lemma 4.2 implies that \(P\) contains an edge connecting \(x\) to another point, \(x^{\prime}\neq x\), above \(L\). Hence, \(\mathrm{wh}(P,x)\) has a non-empty overlap with the open half-plane above \(L\). Since \(bc\) belongs to the boundary of \(\mathrm{wh}(P,x)\), there is a triangle \(bx^{\prime}c\) in \(\mathrm{White}(\Phi_{2},x)\).
We have \(x^{\prime}\neq x\) by construction, and \(x^{\prime}\neq a\) because this would imply that \(\measuredangle bx^{\prime}c=\alpha\) is an angle in \(\mathrm{Vector}(\Phi_{2})\), which we assumed it is not. Since \(x^{\prime}\) lies outside the circumcircle of \(bac\), we have \(\measuredangle bx^{\prime}c<\alpha\), so \(bx^{\prime}c\in\mathrm{White}(\Delta_{2},a)\). But \(bxc\) and \(bx^{\prime}c\) share interior points, which is a contradiction. Case 1.2: **to cover the upper side of \(bc\) requires two or more triangles in \(\mathrm{Black}(\Phi_{2})\)**, as in Figure 3 in the middle and on the right. Among these triangles, let \(bxy\) and \(cx^{\prime}y^{\prime}\) be the ones that share the vertices \(b\) and \(c\) with \(bac\). Assuming \(x,x^{\prime}\) lie above \(L\) and \(y,y^{\prime}\) lie below \(L\), we have \(\measuredangle bxy<\alpha\) and \(\measuredangle cx^{\prime}y^{\prime}<\alpha\), which implies \(bxy,cx^{\prime}y^{\prime}\in\Delta_{2}\). The two triangles share interior points with \(bac\), so they cannot be black and are therefore white in \(\Delta_{2}\). Case 1.2.1: **at least one of \(x,x^{\prime}\) differs from \(a\).** Assume \(x\neq a\). Since \(xy\) crosses \(bc\), it must cross another edge of \(bac\), which by Lemma 4.3 (2) can only be \(ac\). If \(x^{\prime}=a\), then \(x^{\prime}c=ac\), and if \(x^{\prime}\neq a\), then \(x^{\prime}y^{\prime}\) crosses \(ab\) Figure 3: Edges of black and white triangles are _bold_ and _fine_, respectively, and edges of triangles in \(\Delta_{2}\) and \(\Phi_{2}\) are _pink_ and _green_, respectively. _Left:_ two overlapping triangles in \(\mathrm{White}(\Delta_{2},a)\) constructed in Case 1.1. _Middle:_ two crossing edges of black triangles in \(\Phi_{2}\) constructed in Case 1.2.1. _Right:_ two overlapping triangles in \(\mathrm{White}(\Delta_{2},c)\) constructed in Case 1.2.2. and \(bc\), again by Lemma 4.3 (2). In either case, \(bxy\) and \(cx^{\prime}y^{\prime}\) share interior points inside triangle \(abc\), which contradicts \(bxy,cx^{\prime}y^{\prime}\in\operatorname{Black}(\Phi_{2})\). Case 1.2.2: **both \(x\) and \(x^{\prime}\) are equal to \(a\).** Then \(bay,cay^{\prime}\in\operatorname{Black}(\Phi_{2})\). Since \(\measuredangle bay<\alpha\) and \(\measuredangle cay^{\prime}<\alpha\), both are white triangles in \(\Delta_{2}\). By Lemma 4.3 (1), \(bay\in\operatorname{White}(\Delta_{2},c)\) and \(cay^{\prime}\in\operatorname{White}(\Delta_{2},b)\), which implies that \(cy\) and \(by^{\prime}\) are edges in \(\operatorname{Del}(A)\). If \(y\neq y^{\prime}\), then there are three possible choices for the points \(b\), \(c\), \(y\), \(y^{\prime}\). First, they form a convex quadrangle, \(byy^{\prime}c\), with the points ordered as they are seen from \(a\). But then \(by^{\prime}\) and \(cy\) cross, which contradicts that they both belong to \(\operatorname{Del}(A)\). Second, \(y\) lies inside \(bcy^{\prime}\). Since \(cay^{\prime}\in\operatorname{White}(\Delta_{2},b)\), the circumcircle of \(cay^{\prime}\) encloses \(b\) and therefore \(y\), which is one point too many for a white triangle in \(\Delta_{2}\). Third, \(y^{\prime}\) lies inside \(bcy\), but this is symmetric to the second choice. Since we get a contradiction for all three choices, we conclude that \(y=y^{\prime}\). To get a contradiction, we use Lemma 4.2 to construct yet another triangle \(baz\in\operatorname{White}(\Delta_{2},c)\). 
Specifically, we let \(L\) be the line that passes through \(a\) and \(b\), and rotate the picture so \(L\) is horizontal and \(c,y\) lie above \(L\). Hence, there is a point \(z\) above \(L\) such that \(yz\) is an edge in \(P\) and \(baz\in\operatorname{White}(\Phi_{2},y)\). We have \(z\neq y\) by construction, and \(z\neq c\) by assumption on angle \(\alpha\). Since \(ba\) and \(ac\) are both edges in the boundary of \(\operatorname{st}(P,y)\), \(za\) crosses \(bc\), so \(\measuredangle baz<\alpha\), which implies that \(baz\) is a white triangle in \(\Delta_{2}\), and by Lemma 4.3 (1), \(baz\in\operatorname{White}(\Delta_{2},c)\). But \(bay\) and \(baz\) share interior points, which is a contradiction. This concludes the proof of the first case. Case 2: \(bac\) **is a white triangle in \(\Delta_{2}\).** Let \(d\) be the point such that \(bac\in\operatorname{White}(\Delta_{2},d)\). Then \(da\), \(db\), \(dc\) are edges of black triangles in \(\Delta_{2}\). We distinguish between the cases in which \(d\) lies below and above \(L\). Case 2.1: \(d\) **lies below \(L\);** see the left and middle panels of Figure 4. Then \(\measuredangle bxc<\measuredangle bac\) for all \(x\in A\) above \(L\), and \(\measuredangle byc<\measuredangle bdc\) for all \(y\in A\) Figure 4. As before, we draw edges of black and white triangles _bold_ and _fine_, respectively. To simplify, we show only edges of triangles in \(\Delta_{2}\). _Left:_ two overlapping triangles in \(\operatorname{White}(\Delta_{2},a)\) constructed in Case 2.1.1. _Middle:_ similar two overlapping triangles in \(\operatorname{White}(\Delta_{2},a)\) constructed in a chain of deductions in Case 2.1.2. _Right:_ a white triangle whose circumcircle encloses two points constructed in Case 2.2. below \(L\). Similar to Case 1.1, we distinguish between the upper side of \(bc\) being covered by one or requiring two or more black triangles in \(\Phi_{2}\). In both cases, we derive a contradiction by constructing triangles in \(\mathrm{White}(\Delta_{2},a)\) that share interior points. Case 2.1.1: **the upper side of \(bc\) is covered by a single triangle, \(bxc\in\mathrm{Black}(\Phi_{2})\)**; see the left panel of Figure 4. Then \(\measuredangle bxc<\alpha\), so \(bxc\) is a triangle in \(\Delta_{2}\), and since \(a\) lies inside its circumcircle, we have \(bxc\in\mathrm{White}(\Delta_{2},a)\). Using Lemma 4.2, we find a point \(x^{\prime}\) above \(L\) such that \(xx^{\prime}\) is an edge in \(P\) and \(bx^{\prime}c\) is a triangle in \(\mathrm{White}(\Phi_{2},x)\). We have \(x^{\prime}\neq x\) by construction, and \(x^{\prime}\neq a\), else \(\measuredangle bx^{\prime}c=\alpha\) would be an angle in \(\mathrm{Vector}(\Phi_{2})\). Again \(\measuredangle bx^{\prime}c<\alpha\), so \(bx^{\prime}c\in\mathrm{White}(\Delta_{2},a)\). This is a contradiction because \(bxc\) and \(bx^{\prime}c\) share interior points. Case 2.1.2: **to cover the upper side of \(bc\) requires at least two triangles in \(\mathrm{Black}(\Phi_{2})\)**. Among these triangles, let \(bxy\) and \(cx^{\prime}y^{\prime}\) be the ones that share \(b\) and \(c\) with \(bac\), respectively, and assume that \(x,x^{\prime}\) are above \(L\) and \(y,y^{\prime}\) are below \(L\). We first prove that \(d\) is connected to \(b\) and \(c\) by edges of black triangles in \(\Phi_{2}\), and thereafter derive a contradiction by constructing two triangles in \(\mathrm{White}(\Delta_{2},a)\) that share interior points. _Claim:_ \(bd\) and \(cd\) are edges of triangles in \(\mathrm{Black}(\Phi_{2})\).
Proof.: To derive a contradiction, assume the claim is false and \(bd\) is not an edge of any black triangle in \(\Phi_{2}\). Hence \(y\neq d\). Since \(\measuredangle bxy<\alpha\), \(bxy\) is also in \(\Delta_{2}\). It shares interior points with the star of \(d\) without having \(d\) as a vertex, which implies that \(bxy\) must be white in \(\Delta_{2}\). Consider \(bdc\), which is not necessarily a triangle in \(\Delta_{2}\) or \(\Phi_{2}\). However, since \(d\) is the only point inside the circumcircle of \(bac\), there is no point of \(A\) inside \(bdc\). Since \(xy\) crosses \(bc\), it must cross either \(bd\) or \(cd\). Assuming \(xy\) crosses \(bd\), \(bxy\) shares interior points with the two black triangles with common edge \(bd\) in \(\Delta_{2}\), so \(bxy\in\mathrm{White}(\Delta_{2},d)\) by Lemma 4.3 (3). This is not possible since \(bxy\) and \(bac\) share interior points. Thus, \(xy\) crosses \(cd\). Since \(bxy\in\mathrm{Black}(\Phi_{2})\), this implies that \(cd\) cannot be an edge of any black triangle in \(\Phi_{2}\). Hence \(y^{\prime}\neq d\), so we can use the symmetric argument to conclude that \(x^{\prime}y^{\prime}\) crosses \(bd\). But this is a contradiction since in this case \(bxy\) and \(cx^{\prime}y^{\prime}\) share interior points inside the triangle \(bcd\); see the middle panel of Figure 3 where the situation is similar. This completes the proof of the claim. Since \(bd\) and \(cd\) are edges of triangles in \(\mathrm{Black}(\Phi_{2})\), we have \(y=y^{\prime}=d\). Consider \(\mathrm{st}(P,d)\), which contains \(b\) and \(c\) on its boundary. The black triangles in \(\Phi_{2}\) that cover the upper side of \(bc\) all share \(d\) as a vertex, which implies that \(bc\) lies inside this star. Indeed, by Lemma 3.4, it is an edge of a triangle in \(\mathrm{White}(\Phi_{2},d)\). Thus, there exists a triangle \(bzc\in\mathrm{White}(\Phi_{2},d)\) with \(z\) above \(L\). We have \(z\neq a\) by assumption on \(\alpha\), so \(\measuredangle bzc<\alpha\), which implies that \(bzc\) is also a white triangle in \(\Delta_{2}\), and since its circumcircle encloses \(a\), \(bzc\in\mathrm{White}(\Delta_{2},a)\). To construct a second such white triangle, note that this implies that \(ab\) and \(ac\) are edges of triangles in \(\operatorname{Black}(\Delta_{2})\).
We have \(z^{\prime}\neq z\) by construction, and \(z^{\prime}\neq a\) by assumption on \(\alpha\). Again, \(\measuredangle bz^{\prime}c<\alpha\), so \(bz^{\prime}c\in\Delta_{2}\), and since its circumcircle encloses \(a\), we have \(bz^{\prime}c\in\operatorname{White}(\Delta_{2},a)\). But this is a contradiction because \(bzc\) and \(bz^{\prime}c\) share interior points. Case 2.2: \(d\) **lies above \(L\);** see the right panel of Figure 4. Similar to Case 2.1.2, we begin by proving that \(d\) is connected to \(b\) and \(c\) by edges of black triangles in \(\Phi_{2}\). _Claim:_\(bd\) and \(cd\) are edges of triangles in \(\operatorname{Black}(\Phi_{2})\). Proof.: To derive a contradiction, assume the claim is false and \(bd\) is not edge of any black triangle in \(\Phi_{2}\). Among the one or more black triangles needed to cover the upper side of \(bc\), let \(bxy\in\operatorname{Black}(\Phi_{2})\) be the triangle that shares \(b\) with \(bac\). Letting \(x\) be the vertex above \(L\), we have \(x\neq d\) by assumption. If \(bxy\) covers the upper side of \(bc\) by itself, then \(y=c\), and otherwise, \(y\) lies below \(L\). In either case, \(\measuredangle bxy<\alpha\), so \(bxy\) is also a triangle in \(\Delta_{2}\). It cannot be black because it shares interior points with \(\operatorname{st}(D,d)\) without having \(d\) as a vertex, so \(bxy\) is a white triangle in \(\Delta_{2}\). But this implies \(y\neq c\). Indeed, if \(y=c\), then either \(bxy=bac\), which contradicts the assumption on \(\alpha\), or the circumcircle of \(bxy\) encloses \(a\) as well as \(d\), which is one point too many for a white triangle in \(\Delta_{2}\). So \(y\) is below \(L\). Note that the circumcircle of \(bac\) encloses \(d\) and therefore \(bdc\), and since \(x\) lies on or outside this circle, it cannot lie inside \(bdc\). Since \(xy\) crosses \(bc\), it thus must cross another edge of this triangle, either \(bd\) or \(cd\). Assuming \(xy\) crosses \(bd\), which is common to two black triangles in \(\Delta_{2}\), we get \(bxy\in\operatorname{White}(\Delta_{2},d)\) from Lemma 4.3 (3). But \(bxy\) and \(bac\in\operatorname{White}(\Delta_{2},d)\) share interior points, which is a contradiction. Hence, \(xy\) crosses \(bc\) and \(cd\), so \(cd\) cannot be an edge of a black triangle in \(\Phi_{2}\). Let now \(cx^{\prime}y^{\prime}\) be among the triangles in \(\operatorname{Black}(\Phi_{2})\) needed to cover the upper side of \(bc\) that shares \(c\) with \(bac\). By a symmetric argument, we conclude that \(x^{\prime}y^{\prime}\) crosses \(bc\) and \(bd\). But this is a contradiction because \(bxy\) and \(cx^{\prime}y^{\prime}\) share interior points inside the triangle \(bcd\); see again the middle panel of Figure 3 but substitute \(d\) for \(a\). This completes the proof of the claim. Hence, \(bd\) and \(cd\) are edges of black triangles in \(\Phi_{2}\). This implies that \(b\) and \(c\) are points in the boundary of \(\operatorname{st}(P,d)\). As argued above, there are no points of \(A\) inside \(bdc\), so \(\operatorname{st}(P,d)\) covers the upper side of \(bc\). There is a circle that passes through \(b\) and \(c\) and encloses \(d\) but no other points of \(A\), so by Lemma 3.4, \(bc\) is an edge of a triangle in \(\operatorname{White}(\Phi_{2},d)\). Let \(z\) be the point above \(L\) such that \(bzc\in\operatorname{White}(\Phi_{2},d)\). We have \(z\neq d\) by construction, and \(z\neq a\) by assumption on \(\alpha\). 
Hence, \(\measuredangle bzc<\alpha\), which implies that \(bzc\) is also a triangle in \(\Delta_{2}\). However, the circumcircle of \(bzc\) encloses \(a\) and \(d\), which is one too many for a white triangle in \(\Delta_{2}\). This furnishes the final contradiction and completes the proof of the theorem. ### Counterexamples Can Theorem 4.4 be extended or strengthened? In this subsection, we present examples that contradict the extension to order beyond 2 and the strengthening to order-2 hypertriangulations obtained from possibly incomplete triangulations. **Order beyond 2.** Four points in convex position permit only two triangulations: \(D=\operatorname{Del}(A)\), and \(P\), which consists of the other two triangles spanned by the four points. As illustrated in Figure 5, \(\operatorname{Del}_{2}(A)\) consists of shrunken and possibly inverted copies of all four triangles, and \(\operatorname{Del}_{3}(A)\) consists of shrunken and inverted copies of the two triangles in \(P\). Assuming \(A\) is generic, Sibson's theorem implies \(\operatorname{Vector}(P)\prec\operatorname{Vector}(D)\). There are two level-3 hypertriangulations: the order-3 Delaunay triangulation, with \(\operatorname{Vector}(\operatorname{Del}_{3}(A))=\operatorname{Vector}(P)\), and another, with \(\operatorname{Vector}(P_{3})=\operatorname{Vector}(D)\). Hence, \(\operatorname{Vector}(\operatorname{Del}_{3}(A))\prec\operatorname{Vector}(P_{3})\). In words, the vector inequality asserted in Theorem 4.4 for order-2 Delaunay triangulations does not even extend to order 3. Compare this with Eppstein's theorem [7], which asserts that for \(n\) points in convex position in \(\mathbb{R}^{2}\), the order-\((n-1)\) Delaunay triangulation lexicographically minimizes the increasing angle vector. For \(n=4\) and points in convex position, the above conclusion is a consequence of this theorem. Figure 5. From _left_ to _right_: the order-1, order-2, and order-3 Delaunay triangulations of four points, interleaved with the two possible triangulations of these points. **Incomplete hypertriangulations.** Theorem 4.4 compares the order-2 Delaunay triangulation with all _complete_ level-2 hypertriangulations, each aged from a triangulation that contains each point in \(A\) as a vertex. Enlarging this collection to possibly incomplete level-2 hypertriangulations is problematic since they do not necessarily have the same number of angles as \(\operatorname{Del}_{2}(A)\). We can still compare the smallest angles, but there are counterexamples. Indeed, Figure 6 shows a set of nine points whose order-2 Delaunay triangulation does not maximize the minimum angle if incomplete level-2 hypertriangulations participate in the competition. We note that for these particular nine points, the angle vectors of \(\operatorname{Del}_{2}(A)\) and the displayed level-2 hypertriangulation have the same length. This implies that the requirement of _completeness_ cannot be weakened to _maximality_, which is equivalent to having the same number of triangles. ### Corollary for MaxMin Angle Theorem 4.4 implies that among all complete level-2 hypertriangulations, the order-2 Delaunay triangulation is Figure 6: The minimum angle in the displayed level-2 hypertriangulation is larger than the minimum angle of the order-2 Delaunay triangulation of the same points. Indeed, the smallest angle in the hypertriangulation of about 9 degrees is defined by the vertices \([eh],[dh],[gh]\).
For comparison, the circle in the picture proves that the angle of about 6.4 degrees defined by the vertices \([bc],[cd],[ac]\) belongs to the order-2 Delaunay triangulation (not shown). distinguished by maximizing the minimum angle. Using Sibson's result for level-\(1\) hypertriangulations [20], there is a short proof of this corollary. No such similarly short proof is known for the angle vector optimality of order-\(2\) Delaunay triangulations. **Corollary 4.5** (MaxMin Angle Optimality).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, and \(P\) a complete triangulation of \(A\). Then the minimum angle of the triangles in \(\Phi_{2}=f(P)\) is smaller than or equal to the minimum angle of the triangles in \(\Delta_{2}=f(\operatorname{Del}(A))\)._ Proof.: Write \(D=\operatorname{Del}(A)\), for each \(x\in A\), write \(D(x)=\operatorname{Del}(A\setminus\{x\})\), and let \(P(x)\) be the triangulation of \(A\setminus\{x\}\) obtained by removing the triangles that share \(x\) from \(P\) and adding the triangles in the constrained Delaunay triangulation of \(\operatorname{wh}(P,x)\). By Sibson's theorem, the smallest angle in \(P\) is smaller than or equal to the smallest angle in \(D\), and for each \(x\in A\), the smallest angle in \(P(x)\) is smaller than or equal to the smallest angle in \(D(x)\). The smallest angle in \(\Delta_{2}\) is the minimum angle in \(D\) and all \(D(x)\), and the smallest angle in \(\Phi_{2}\) is the minimum angle in \(P\) and all \(P(x)\), for \(x\in A\). Hence, the smallest angle in \(\Phi_{2}\) is smaller than or equal to the smallest angle in \(\Delta_{2}\). ## 5. Uniqueness of Local Angle Property In this section, we prove the second main result of this paper, which supports the Local Angle Conjecture formulated at the end of Section 3.3 by proving it for the case \(k=2\). We begin with three basic lemmas on hypertriangulations that satisfy some or all of the conditions in Definition 3.2. ### Useful Lemmas To streamline the discussion, we call a union of black triangles a _black region_ if its interior is connected and it is not contained in a larger black region of the same triangulation. Similarly, we define _white regions_. Furthermore, we refer to _black_ or _white angles_ when we talk about the angles inside a black or white triangle. **Lemma 5.1** (Black Regions are Convex).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, and let \(P_{k}\) be a level-\(k\) hypertriangulation of \(A\) that satisfies (bb). Then every black region of \(P_{k}\) is convex, and all vertices of the restriction of \(P_{k}\) to the black region lie on the boundary of that region._ Proof.: Let \(a\) be a boundary vertex of a black region, with edges \(ab_{0}\), \(ab_{1},\dots,ab_{p+1}\) bounding the \(p+1\) incident black triangles in the region. (bb) implies \(\measuredangle ab_{i-1}b_{i}+\measuredangle ab_{i+1}b_{i}>\pi\) for \(1\leq i\leq p\), so the sum of the \(2(p+1)\) angles is larger than \(p\pi\). Hence, the sum of the remaining \(p+1\) angles at \(a\) is less than \(\pi\), as required for the black region to be convex at \(a\). The same calculation shows that a ring of black triangles around a vertex in the interior of the black region is not possible. **Lemma 5.2** (Total Black Angles).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, and let \(P_{k}\) be a level-\(k\) hypertriangulation of \(A\) that has the local angle property. 
Then the sum of black angles at any vertex of \(P_{k}\) is less than \(\pi\)._ Proof.: Let \(a\) be a vertex of \(P_{k}\). If \(a\) is a boundary vertex, then the claim is trivial. If \(a\) is an interior vertex and incident to at most one black region, then the claim follows from Lemma 5.1. So assume that \(a\) is interior and incident to \(p\geq 2\) black and therefore the same number of white regions. Let \(ab_{1},ab_{2},\ldots,ab_{2p}\) be the edges separating the black and white regions around \(a\), with the region between \(ab_{1}\) and \(ab_{2}\) being black. We also assume that the angle between any two consecutive edges is less than \(\pi\), else the claim is obvious. We look at the edge \(ab_{2}\) and claim that \(\measuredangle ab_{1}b_{2}>\measuredangle ab_{3}b_{2}\). The black region between \(ab_{1}\) and \(ab_{2}\) satisfies (bb), so its triangulation is the farthest-point Delaunay triangulation. In it, every triangle that shares an edge with the boundary of the region has the property that the angle opposite to the boundary edge is minimal over all choices of third vertex [7]. Therefore, \(\measuredangle ab_{1}b_{2}\) is greater than or equal to the angle opposite to \(ab_{2}\) inside the black triangle. Similarly, the triangulation of the white region between \(ab_{2}\) and \(ab_{3}\) satisfies (ww), so its triangulation is the constrained Delaunay triangulation of the region. Thus, \(\measuredangle ab_{3}b_{2}\) is smaller than or equal to the angle opposite to \(ab_{2}\) inside the white triangle. Applying (bw) to \(ab_{2}\), we get the claimed inequality. We repeat the same argument for all other edges separating black from white regions around \(a\), and compare the sum of black and white angles opposite these edges: \[\sum\nolimits_{i=0}^{p-1}\left(\measuredangle ab_{2i+1}b_{2i+2}+\measuredangle ab_{2i+2}b_{2i+1}\right)>\sum\nolimits_{i=0}^{p-1}\left(\measuredangle ab_{2i}b_{2i+1}+\measuredangle ab_{2i+1}b_{2i}\right), \tag{1}\] in which the indices are modulo \(2p\). The sum of black angles at \(a\) is \(p\pi\) minus the first sum in (1), and the sum of white angles at \(a\) is \(p\pi\) minus the second sum in (1). Therefore the sum of black angles at \(a\) is less than the sum of white angles at \(a\), and since the black and white angles at the interior vertex \(a\) together sum to \(2\pi\), the sum of black angles at \(a\) is less than \(\pi\). **Lemma 5.3** (Local Angle Property and Aging Function).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, \(P_{k}\) a level-\(k\) hypertriangulation of \(A\), and \(P_{k-1}=F^{-1}(\operatorname{Black}(P_{k}))\) a level-\((k-1)\) hypertriangulation of \(A\). If \(P_{k}\) has the local angle property, then \(P_{k-1}\) satisfies (ww)._ Proof.: We consider two adjacent white triangles with vertices \([\text{X}a]\), \([\text{X}b]\), \([\text{X}c]\) and \([\text{X}b]\), \([\text{X}c]\), \([\text{X}d]\) in \(P_{k-1}\). Applying the aging function, we get two black triangles of \(P_{k}\) with vertices \([\text{X}ab]\), \([\text{X}ac]\), \([\text{X}bc]\) and \([\text{X}bc]\), \([\text{X}bd]\), \([\text{X}cd]\). They share \([\text{X}bc]\), which implies that the sum of their angles at this vertex is less than \(\pi\) by Lemma 5.2. The two black triangles are homothetic copies of \(abc\) and \(bcd\), and so are the corresponding two white triangles in \(P_{k-1}\), so (ww) follows.
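Definition 3.2 is not restated here, but the three conditions enter Lemmas 5.1 to 5.3 only through comparisons of the angles opposite a shared edge. The sketch below (plain Python, illustrative only) encodes that reading: for two black triangles the two opposite angles sum to more than \(\pi\), for two white triangles to less than \(\pi\), and across a black-white pair the white angle is at most the black angle. This is our paraphrase of how the conditions are used in the proofs above, not a verbatim transcription of Definition 3.2, and the coordinates at the end are toy values.

```python
# Sketch: pairwise angle conditions, in the form the lemmas above use them.
# This encodes an assumed reading of (bb), (ww), (bw); it is not the paper's definition.
import math

def angle_at(p, q, r):
    """Angle at vertex p of triangle pqr, in radians."""
    v1 = (q[0] - p[0], q[1] - p[1])
    v2 = (r[0] - p[0], r[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(max(-1.0, min(1.0, dot / (math.hypot(*v1) * math.hypot(*v2)))))

def opposite_angle(tri, edge):
    """Angle of `tri` at the vertex not on `edge` (tri: 3 points, edge: 2 points of tri)."""
    apex = [v for v in tri if v not in edge]
    assert len(apex) == 1, "edge must be an edge of tri"
    return angle_at(apex[0], edge[0], edge[1])

def pair_ok(tri1, col1, tri2, col2, edge):
    """Check the assumed (bb)/(ww)/(bw) condition for two triangles sharing `edge`."""
    a1, a2 = opposite_angle(tri1, edge), opposite_angle(tri2, edge)
    if col1 == col2 == "black":
        return a1 + a2 > math.pi          # (bb), as used in Lemma 5.1
    if col1 == col2 == "white":
        return a1 + a2 < math.pi          # (ww), i.e. locally Delaunay
    white = a1 if col1 == "white" else a2  # (bw): white angle <= black angle
    black = a2 if col1 == "white" else a1
    return white <= black

# Toy usage: two triangles sharing the edge pq.
p, q, r, s = (0.0, 0.0), (2.0, 0.0), (1.0, 1.5), (1.0, -2.0)
print("as two white triangles:", pair_ok((p, q, r), "white", (p, q, s), "white", (p, q)))
print("as black over white   :", pair_ok((p, q, r), "black", (p, q, s), "white", (p, q)))
```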
### Level-2 Hypertriangulations We are now ready to confirm the Local Angle Conjecture for level-2 hypertriangulations. **Theorem 5.4** (Local Angle Conjecture for Level 2).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, and let \(P_{2}\) be a maximal level-\(2\) hypertriangulation of \(A\). Then \(P_{2}\) has the local angle property iff it is the order-\(2\) Delaunay triangulation of \(A\)._ Proof.: No two black triangles in \(P_{2}\) share an edge, which implies that (bb) is void. On the other hand, there are pairs of adjacent white triangles that belong to the triangulation of white regions in \(P_{2}\). In complete level-2 hypertriangulations, each such region is a polygon without points (vertices) inside, but in the more general case of maximal level-2 hypertriangulations considered here, there may be such points or vertices. In either case, (ww) implies that the restriction of \(P_{2}\) to each white region is the constrained Delaunay triangulation of this region. Let \(P\) be the underlying (order-1) triangulation of \(A\), which consists of the images of the black triangles in \(P_{2}\) under the inverse aging function. We begin by establishing that \(P\) is maximal and therefore \(P_{2}\) is complete. Suppose \(x\in A\) is not a vertex of \(P\), and let \(abc\) be the triangle in \(P\) that contains \(x\) in its interior. Consider the triangle with vertices \(c^{\prime}=[ab]\), \(b^{\prime}=[ac]\), and \(a^{\prime}=[bc]\) in \(\operatorname{Black}(P_{2})\). The edge connecting \(b^{\prime}\) and \(c^{\prime}\) is shared with \([\operatorname{wh}(P_{2},a)]\), and this white region contains \(x^{\prime}=[ax]\). Since \(P_{2}\) is maximal, by assumption, \(x^{\prime}\) is a vertex of the restriction of \(P_{2}\) to this white region. Recall that the triangle \(b^{\prime}d^{\prime}c^{\prime}\) in the constrained Delaunay triangulation of the white region has the property that the angle at \(d^{\prime}\) is maximal over all possible choices of \(d^{\prime}\) visible from \(b^{\prime}\) and \(c^{\prime}\). Hence, \(\measuredangle b^{\prime}d^{\prime}c^{\prime}\geq\measuredangle b^{\prime}x^{\prime}c^{\prime}\), but also \(\measuredangle b^{\prime}x^{\prime}c^{\prime}=\measuredangle bxc>\measuredangle bac=\measuredangle b^{\prime}a^{\prime}c^{\prime}\) because \(x\) is inside \(abc\). This implies \(\measuredangle b^{\prime}d^{\prime}c^{\prime}>\measuredangle b^{\prime}a^{\prime}c^{\prime}\), which contradicts (bw) for \(P_{2}\), so \(P\) is necessarily maximal. Applying Lemma 5.3 to \(P_{2}\), we conclude that \(P\) satisfies (ww). Since \(P\) is maximal, the only choice left is that \(P\) is the Delaunay triangulation of \(A\). The black triangles in \(P_{2}\) thus coincide with the black triangles in the order-\(2\) Delaunay triangulation of \(A\), and \(P_{2}\) restricted to each of its white regions is the constrained Delaunay triangulation of this region. Hence, \(P_{2}\) is the order-\(2\) Delaunay triangulation of \(A\). ### Level-3 Hypertriangulations We say \(A\subseteq\mathbb{R}^{2}\) is in _convex position_ if all its points are vertices of \(\operatorname{conv}A\). For such sets, we can extend Theorem 5.4 to level-3 hypertriangulations. The main differences from general finite sets are that all triangulations have the same number of triangles, and the aging function exists, as established by Galashin in [9]; see also [6]. We use this function together with the characterization of the order-\(2\) Delaunay triangulation as the only level-\(2\) hypertriangulation that has the local angle property.
**Theorem 5.5** (Local Angle Conjecture for Level \(3\)).: _Let \(A\subseteq\mathbb{R}^{2}\) be finite, generic, and in convex position, and let \(P_{3}\) be a hypertriangulation of \(A\). Then \(P_{3}\) has the local angle property iff it is the order-\(3\) Delaunay triangulation of \(A\)._ Proof.: By Theorem 3.3, the order-\(3\) Delaunay triangulation has the local angle property. Let \(P_{3}\) be a possibly different level-\(3\) hypertriangulation that also has the local angle property, and let \(P_{2}=F^{-1}(\operatorname{Black}(P_{3}))\), which exists because \(A\) is in convex position [9]. By Lemma 5.3, \(P_{2}\) satisfies (ww). Recall that (bb) is void for level-\(2\) hypertriangulations, so if in addition to (ww), \(P_{2}\) also satisfies (bw), then it has the local angle property. By Theorem 5.4, this implies that \(P_{2}\) is the order-\(2\) Delaunay triangulation of \(A\). Its white triangles are in bijection with the triplets of points whose circumcircles enclose exactly one point of \(A\), and since \(\operatorname{Black}(P_{3})=F(\operatorname{White}(P_{2}))\), so are the black triangles of \(P_{3}\). Thus, \(P_{3}\) has the same black triangles as the order-\(3\) Delaunay triangulation of \(A\). Furthermore, the white regions of \(P_{3}\) coincide with the white regions of the order-\(3\) Delaunay triangulation, and because the restriction of either triangulation to a white region is the constrained Delaunay triangulation of that region, we conclude that \(P_{3}\) _is_ the order-\(3\) Delaunay triangulation of \(A\). Figure 7. The superposition of three levels. _Left:_ part of the star of \(a\) in \(P\) on level \(1\), the (white) triangles in this star aging to black triangles in \(P_{2}\) on level \(2\), and the only two white triangles in the star of \([av]\) aging to two black triangles in \(P_{3}\) on level \(3\). One is similar to \(uvw\) and the other to \(auw\), which is assumed to be unique. _Right:_ compared to the configuration on the _left_, there are two extra white triangles, which increase the star of \([av]\) in \(P_{2}\) from two to four triangles. Accordingly, we see a white quadrangle on level \(3\). It remains to show that \(P_{2}\) indeed satisfies (bw). To derive a contradiction, we assume it does not. Let \([ab]\), \([ac]\), \([bc]\) and \([ab]\), \([ac]\), \([ad]\) be the vertices of a black triangle and an adjacent white triangle that violate (bw), so \(\measuredangle bac<\measuredangle bdc\). Let \(P=F^{-1}(\text{Black}(P_{2}))\), and consider the star of \(a\) in \(P\). All vertices are in convex position, including \(a,b,c,d\), so we may assume that \(ac\) crosses \(bd\), as in Figure 7 on the left. Let \(ax_{1}=ab,ax_{2}=ac,\ldots,ax_{p}=ad\) be the sequence of edges in the star of \(a\) that intersect \(bd\). We consider the polygon with vertices \(a,x_{1},x_{2},\ldots,x_{p}\). Since \(A\) is in convex position, the polygon is convex, which implies that its constrained Delaunay triangulation is also the Delaunay triangulation of the \(p+1\) points. Denote this Delaunay triangulation by \(\Delta\), and note that it includes \(bcd=x_{1}x_{2}x_{p}\): \(a\) is outside the circumcircle of \(bcd\), because \(abc\) and \(bcd\) violate (bw), and so is every \(x_{i}\) with \(3\leq i\leq p-1\), because \(bcd\) is a triangle in \(\text{White}(P_{2},a)\). The rest of \(\Delta\) consists of \(abd=ax_{1}x_{p}\) and the triangles of \(\text{White}(P_{2},a)\) on the other side of \(x_{2}x_{p}\). 
An _ear_ of \(\Delta\) is a triangle that has two of its edges in the boundary of the polygon. For example, \(ax_{1}x_{p}\) is an ear, but since every triangulation of a polygon with at least four vertices has at least two ears, there is another one, and we write \(uvw=x_{i-1}x_{i}x_{i+1}\) for a second ear of \(\Delta\). The corresponding triangle in \(P_{2}\) has vertices \([au]\), \([av]\), \([aw]\) and is adjacent to black triangles with vertices \([au]\), \([av]\), \([uv]\) and \([av]\), \([aw]\), \([vw]\). Both pairs violate (bw) because \(a\) lies outside the circumcircle of \(uvw\). Looking closely at this configuration, we note that \([av]\) is shared by the two black triangles and also belongs to \([\text{wh}(P_{2},a)]\) and \([\text{wh}(P_{2},v)]\); see again Figure 7 on the left. We distinguish between two cases: when \([av]\) belongs to only one triangle in the triangulation of the latter white region, and when it belongs to two or more such triangles. Assuming the first case, we apply the aging function to the two white triangles sharing \([av]\), which gives two black triangles with vertices \([auv]\), \([auw]\), \([awv]\) and \([auv]\), \([awv]\), \([uwv]\) in \(P_{3}\). They share an edge, and since \(a\) lies outside the circumcircle of \(uvw\), they violate (bb), which is the desired contradiction. There is still the second case, when \([av]\) belongs to two or more triangles in the triangulation of \([\text{wh}(P_{2},v)]\). Let \([uv]=[y_{1}v],[y_{2}v],\ldots,[y_{q}v]=[wv]\) be the vertices of \([\text{wh}(P_{2},v)]\) connected to \([av]\); see Figure 7 on the right. These \(q\) edges bound \(q-1\) white triangles in \(P_{2}\). Consider their images under the aging function, which are \(q-1\) black triangles in \(P_{3}\). Together with the black triangle with vertices \([auv]\), \([auw]\), \([awv]\), these black triangles surround a convex \(q\)-gon with vertices \([auv]=[ay_{1}v],[ay_{2}v],\ldots,[ay_{q}v]=[awv]\); see again Figure 7 on the right. The \(q\)-gon is convex because \(A\) is in convex position, and we claim it is a white region in \(P_{3}\). If there is any black triangle, \(T\), inside this \(q\)-gon, then we consider any generic segment connecting \(T\) to the boundary of the \(q\)-gon, and the part of that segment closest to the boundary that is colored black in \(P_{3}\). By construction, the triangle \(T^{\prime}\) containing this part has two vertices labeled \([avz_{1}]\) and \([avz_{2}]\), for some \(z_{1}\) and \(z_{2}\). Hence, \(F^{-1}(T^{\prime})\) is a white triangle of \(P_{2}\) incident to \([av]\), which is impossible, as all white triangles in \(P_{2}\) incident to \([av]\) age to black triangles surrounding the \(q\)-gon. Recall that \(P_{3}\) satisfies (ww), so the restriction of \(P_{3}\) to the \(q\)-gon is the (constrained) Delaunay triangulation of the \(q\)-gon. Consider the edge connecting \([auv]=[ay_{1}v]\) and \([awv]=[ay_{q}v]\) of the \(q\)-gon, and let \([ay_{i}v]\) be the third vertex of the incident white triangle. Because this triangle is part of the (constrained) Delaunay triangulation, we have \(\measuredangle uy_{j}w<\measuredangle uy_{i}w\) for all \(j\neq i\), and because \(P_{3}\) satisfies (bw), we have \(\measuredangle uy_{i}w<\measuredangle uvw\). Recall that \(a\) lies outside the circumcircle of \(uvw\), so \(\measuredangle uvw+\measuredangle uaw<\pi\). This implies \(\measuredangle uy_{i}w+\measuredangle uaw<\pi\). Hence, the circumcircle of the triangle with vertices \([uv],[y_{i}v],[wv]\) does not enclose any of the other vertices.
It follows that the triangle belongs to the constrained Delaunay triangulation of the polygon with vertices \([uv]=[y_{1}v],[y_{2}v],\ldots,[y_{q}v]=[wv]\), but it does not because this polygon is triangulated with edges that all share \([av]\). This gives the final contradiction. ## 6. Concluding Remarks In this last section, we discuss open questions about hypertriangulations. The obvious one is whether optimality properties other than angles can be generalized from level 1 to higher levels: for example the smallest circumcircle [3], the smallest enclosing circle [17], roughness [18], and other functionals [5, Chapter 3] and [14], which are all optimized by the order-1 Delaunay triangulation. In addition, we list a small number of more specific questions and conjectures directly related to the discussions in the technical sections of this paper. **Flipping as a proof technique.** Sibson's original proof for the angle vector optimality of the Delaunay triangulation [20] uses the sequence of edge-flips provided by Lawson's algorithm [12]. There is such a sequence for every complete triangulation, and each flip lexicographically increases the vector. The authors of this paper pursued a similar approach to prove Theorem 4.4 using the flips of Types I to IV developed in [6]; see Figure 8 on the right. While these flips connect all level-2 hypertriangulations of a finite generic set (Theorem 4.4 in [6]), they do not necessarily lexicographically increase the angle vector. Indeed, there is a level-2 hypertriangulation of six points, \(Q_{2}\), different from the order-2 Delaunay triangulation, such that every applicable flip lexicographically decreases the sorted angle vector. The six points in this example are \(a,b,c,g,h,i\) in Figure 8, and we obtain \(Q_{2}\) from the shown hypertriangulation by removing the vertices \([ad],[dg],[be],[eh],[cf],[fi]\). In \(Q_{2}\), there are only three possible flips, all of Type I, and all three lexicographically decrease the sorted angle vector. Incidentally, six is the smallest number of points for which such a counterexample to using flips as a proof technique for level-2 hypertriangulations exists. Let \(P_{2}\) be the level-\(2\) hypertriangulation in Figure 8 (without removing points \(d,e,f\)). It provides a counterexample to using a local retriangulation operation more powerful than a flip as a proof technique. To explain, let \(P\) and \(P^{\prime}\) be two complete level-\(1\) hypertriangulations of the same set. Let \(P_{2}=F(P)\) and \(P^{\prime}_{2}=F(P^{\prime})\) be the aged level-\(2\) hypertriangulations such that the restriction to any white region is the constrained Delaunay triangulation of that region. Equivalently, \(P_{2}\) and \(P^{\prime}_{2}\) satisfy (ww). If \(P\) and \(P^{\prime}\) are connected by a single flip of Type I, we say that \(P_{2}\) and \(P^{\prime}_{2}\) are connected by a _compound flip_. It consists of a sequence of Type I flips affecting white regions in \(P_{2}\), followed by a Type III flip, followed by a sequence of Type I flips affecting white regions in \(P^{\prime}_{2}\). Such a compound flip may increase the sorted angle vector even if some of its elementary flips do not. Nevertheless, all compound flips applicable to \(P_{2}\) in Figure 8 decrease the sorted angle vector, thus spoiling the hope for an elegant proof of Theorem 4.4 using compound flips. This motivates the following question. 
**Question A**.: _Does there exist a flip-like approach to proving Theorem 4.4 on the angle vector optimality for complete level-\(2\) hypertriangulations?_ **Angle vector optimality and local angle property.** Recall that Theorem 4.4 proves the optimality of the Delaunay triangulation only for order-\(2\) and among all complete level-\(2\) hypertriangulations. Indeed, Section 4.4 shows counterexamples for order-\(3\) and for relaxing to maximal level-\(2\) hypertriangulations. This motivates the following two questions: Figure 8: _Right:_ the four types of flips that connect the level-\(2\) hypertriangulations of a given set. _Left:_ a complete level-\(2\) hypertriangulation such that every applicable compound flip decreases the sorted angle vector. The _dashed_ edges appear after removing vertices \([ad],[dg],[be],[eh],[cf],[fi]\). * Is there a sense in which the order-\(k\) Delaunay triangulations optimize angles for all \(k\)? * Among all maximal level-2 hypertriangulations, which one lexicographically maximizes the sorted angle vector? Recall also that Theorem 5.4 proves that the local angle property characterizes the order-2 Delaunay triangulation among all maximal level-2 hypertriangulations, leaving the case of higher orders open. We venture the following conjecture, while keeping in mind that some condition on the family of competing hypertriangulations is needed to avoid Delaunay triangulations of proper subsets of the given points. **Conjecture B**.: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic, and for every \(1\leq k\leq\#A-1\) let \(\mathcal{F}_{k}\) be the family of level-\(k\) hypertriangulations that have the local angle property. Then \(P_{k}\in\mathcal{F}_{k}\) has the maximum number of triangles iff \(P_{k}\) is the order-\(k\) Delaunay triangulation of \(A\)._ In the formulation of this conjecture, we maximize the number of triangles over all members of \(\mathcal{F}_{k}\), and not over all level-\(k\) hypertriangulations of \(A\), because the latter may not contain any that have the local angle property. To see this, let \(A\) be any finite set that is not in convex position. For \(k=\#A-1\), all triangles are black, and by Lemma 5.1, condition (bb) of the local angle property implies that no point in the interior of \(\operatorname{conv}A\) is a vertex of the triangulation. Thus every hypertriangulation on this level that has the local angle property does not have the maximum number of triangles. Also note that Theorem 5.5 shows that the conjecture holds for the case \(k=3\) and points in convex position. More generally, for such points all level-\(k\) hypertriangulations have the same number of triangles; see [6] for interpretation of results from [9, 16]. **Maximal and maximum hypertriangulations.** Recall that a hypertriangulation is _maximal_ if no other hypertriangulation of the same level subdivides it. We say a hypertriangulation is _maximum_ if no other hypertriangulation of the same level has more triangles. In an attempt to generalize Lemma 2.6 to levels beyond 2, we conjecture that the number of triangles in a maximum hypertriangulation depends on the given points but not on how these points are triangulated. **Conjecture C**.: _Let \(A\subseteq\mathbb{R}^{2}\) be finite and generic. Then any two maximal level-\(k\) hypertriangulations of \(A\) have the same and therefore maximum number of triangles. 
In other words, every maximal level-\(k\) hypertriangulation is maximum._ The conjecture holds for points in convex position [9, 16], and we have verified it for a few small configurations in non-convex position. If true, this might have combinatorial meaning as the vertices of maximal hypertriangulations would then encode data from the matroid defined by the point set. We refer to [10] for an extensive discussion of this topic in connection to zonotopal tilings and collections of separated subsets, in particular for points in convex position.
2308.12239
Cyclic Orderings of Paving Matroids
A matroid M of rank r is cyclically orderable if there is a cyclic permutation of the elements of M such that any r consecutive elements form a basis in M. An old conjecture of Kajitani, Miyano, and Ueno states that a matroid M is cyclically orderable if and only if for all nonempty subsets X of E(M), |X|/r(X) is less than or equal to |E(M)|/r(M). In this paper, we verify this conjecture for all paving matroids.
Sean McGuinness
2023-08-23T16:27:09Z
http://arxiv.org/abs/2308.12239v2
# Cyclic Orderings of Paving Matroids ###### Abstract A matroid \(M\) of rank \(r\) is _cyclically orderable_ if there is a cyclic permutation of the elements of \(M\) such that any \(r\) consecutive elements form a basis in \(M\). An old conjecture of Kajitani, Miyano, and Ueno states that a matroid \(M\) is cyclically orderable if and only if for all \(\O\neq X\subseteq E(M),\ \frac{|X|}{r(X)}\leq\frac{|E(M)|}{r(M)}.\) In this paper, we verify this conjecture for all paving matroids. AMS Subject Classifications (2012): 05D99, 05B35. ## 1 Introduction A matroid \(M\) of rank \(r\) is **cyclically orderable** if there is a cyclic permutation of the elements of \(M\) such that any \(r\) consecutive elements form a base. For a matroid \(M\) and a subset \(\O\neq X\subseteq E(M)\), we define \(\beta(X):=\frac{|X|}{r(X)}\), if \(r(X)\neq 0\); otherwise, \(\beta(X):=\infty.\) Let \(\gamma(M)=\max_{\O\neq X\subseteq E(M)}\beta(X).\) In [6], Kajitani, Miyano, and Ueno made the following conjecture: **1.1 Conjecture** A matroid \(M\) is cyclically orderable if and only if \(\gamma(M)=\beta(E(M)).\) Despite having been around for decades, the above conjecture is only known to be true for a few special classes of matroids. In [2], the conjecture was shown to be true for sparse paving matroids. Perhaps the strongest result thus far can be found in [8] where it was shown that Conjecture 1.1 is true when \(r(M)\) and \(|E(M)|\) are relatively prime. **1.2 Theorem** (Van Den Heuvel and Thomasse) Let \(M\) be a matroid for which \(\gamma(M)=\beta(E(M)).\) If \(|E(M)|\) and \(r(M)\) are relatively prime, then \(M\) has a cyclic ordering. It follows from recent results in [1] on _split matroids_, a class which includes paving matroids, that the conjecture is true for paving matroids \(M\) where \(|E(M)|\leq 2r(M).\) Coupled with Theorem 1.2, we can replace \(2r(M)\) by \(2r(M)+1\) in this bound since \(|E(M)|\) and \(r(M)\) are relatively prime when \(|E(M)|=2r(M)+1.\) In this paper, we verify Conjecture 1.1 for all paving matroids. **1.3 Theorem** Let \(M\) be a paving matroid where \(\gamma(M)=\beta(E(M)).\) Then \(M\) is cyclically orderable. For concepts, terminology, and notation pertaining to matroids, we shall follow Oxley [7] when possible. For a matroid \(M\), \(\mathscr{C}(M)\) will denote the set of all circuits of \(M.\) For a finite set \(A\) and integer \(k\leq|A|,\) we let \(\binom{A}{k}\) denote the set of all \(k\)-subsets of \(A.\) For a collection of subsets \(\mathscr{A}\) and integer \(k\) we let \(\binom{k}{\mathscr{A}}\) denote the set of all sets in \(\mathscr{A}\) having cardinality \(k.\) For a set \(A\) and elements \(x_{1},\ldots,x_{k}\) we will often write, for convenience, \(A+x_{1}+x_{2}+\cdots+x_{k}\) (resp. \(A-x_{1}-x_{2}-\cdots-x_{k}\)) in place of \(A\cup\{x_{1},\ldots,x_{k}\}\) (resp. \(A\backslash\{x_{1},\ldots,x_{k}\}\)). For a positive integer \(n,\) we let \([n]\) denote the set \(\{1,\ldots,n\}.\)
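For small ground sets, the quantities appearing in Conjecture 1.1 can be checked by brute force directly from the list of bases. The sketch below (plain Python) does this for a toy sparse paving matroid of our own choosing, namely rank 3 on six elements with circuit-hyperplanes \(\{0,1,2\}\) and \(\{3,4,5\}\); it is not an example taken from the paper.

```python
# Brute-force sketch of beta, gamma and cyclic orderability for a tiny matroid.
# The matroid below is a toy example (an assumption), defined by its bases.
from itertools import combinations, permutations

E = range(6)
RANK = 3
NON_BASES = [{0, 1, 2}, {3, 4, 5}]
BASES = [set(B) for B in combinations(E, RANK) if set(B) not in NON_BASES]

def rank(X):
    """r(X) = size of the largest intersection of X with a basis."""
    X = set(X)
    return max(len(X & B) for B in BASES) if X else 0

def beta(X):
    return len(set(X)) / rank(X)

def gamma():
    """gamma(M) = max beta(X) over all nonempty subsets X of E(M)."""
    return max(beta(X) for k in range(1, len(list(E)) + 1)
                        for X in combinations(E, k))

def is_cyclic_ordering(order):
    """Every window of RANK cyclically consecutive elements is a basis."""
    n = len(order)
    return all(set(order[(i + j) % n] for j in range(RANK)) in BASES
               for i in range(n))

print("gamma(M) =", gamma(), " beta(E(M)) =", beta(E))
ordering = next((p for p in permutations(E) if is_cyclic_ordering(p)), None)
print("cyclic ordering found:", ordering)
```

Here \(\gamma(M)=\beta(E(M))=2\) and the search finds a cyclic ordering, as Theorem 1.3 predicts.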
### Idea behind the proof To prove the main theorem, we shall use induction on \(|E(M)|\). To do this, we shall first remove a basis \(S\) from \(M\) so that the resulting matroid \(M^{\prime}\) satisfies \(\gamma(M^{\prime})=\beta(E(M)-S).\) While generally such a basis \(S\) may not exist, we will show that such bases exist when \(|E(M)|\geq 2r(M)+2.\) Applying the inductive assumption, \(M^{\prime}\) is cyclically orderable, with a cyclic ordering say \(e_{1}e_{2}\cdots e_{m}.\) We will show that for some \(i\in[m]\) and some ordering of \(S,\) say \(s_{1}s_{2}\cdots s_{r}\) (where \(r=r(M)\)), the ordering \(e_{1}\cdots e_{i}s_{1}s_{2}\cdots s_{r}e_{i+1}\cdots e_{m}\) is a cyclic ordering of \(M\). To give a rough idea of how to prove this, we will illustrate the proof in the case where \(r(M)=3.\) Suppose \(S=\{s_{1},s_{2},s_{3}\}\) is a basis of \(M\) where \(\gamma(M\backslash S)=\beta(E(M)-S)\) and \(r(M\backslash S)=3.\) Assume that \(M^{\prime}=M\backslash S\) has a cyclic ordering \(e_{1}e_{2}\cdots e_{m}.\) Suppose we try to insert the elements of \(S\), in some order, between \(e_{m}\) and \(e_{1}\), so as to achieve a cyclic ordering for \(M.\) Assume this is not possible. Then for every permutation \(\pi\) of \(\{1,2,3\},\)\(e_{1}e_{2}\cdots e_{m}s_{\pi(1)}s_{\pi(2)}s_{\pi(3)}\) is not a cyclic ordering of \(M.\) Thus for all permutations \(\pi\) of \(\{1,2,3\},\) at least one of \(\{e_{m-1},e_{m},s_{\pi(1)}\},\)\(\{e_{m},s_{\pi(1)},s_{\pi(2)}\},\{s_{\pi(2)},s_{\pi(3)},e_{1}\},\) or \(\{s_{\pi(3)},e_{1},e_{2}\}\) is a circuit. As an exercise for the reader, one can now show that there exist distinct \(i,j\in\{1,2,3\}\) such that \(\{s_{i},e_{m-1},e_{m}\},\{s_{j},e_{1},e_{2}\},S-s_{i}+e_{m},\) and \(S-s_{j}+e_{1}\) are circuits. We may assume that \(i=1\) and \(j=2\). If instead, one were to assume that one could not insert the elements of \(S\) in some order between \(e_{1}\) and \(e_{2}\) so as to achieve a cyclic ordering of \(M\), then as above, there exist distinct \(i^{\prime},j^{\prime}\in\{1,2,3\},\) such that \(\{s_{i^{\prime}},e_{m},e_{1}\},\{s_{j^{\prime}},e_{2},e_{3}\},S-s_{i^{\prime}}+e_{1},\) and \(S-s_{j^{\prime}}+e_{2}\) are circuits. If \(i^{\prime}=1,\) then \(\{s_{1},e_{m-1},e_{m}\}\) and \(\{s_{1},e_{m},e_{1}\}\) are circuits. The circuit elimination axiom (together with the fact that \(M\) is a paving matroid) would then imply that \((\{s_{1},e_{m-1},e_{m}\}\cup\{s_{1},e_{m},e_{1}\})-s_{1}=\{e_{m-1},e_{m},e_{1}\}\) is a circuit, contradicting our assumption that \(e_{1}e_{2}\cdots e_{m}\) is a cyclic ordering of \(M^{\prime}.\) Also, if \(i^{\prime}=2,\) then \(\{s_{2},e_{m},e_{1}\}\) and \(\{s_{2},e_{1},e_{2}\}\) are circuits and hence by the circuit elimination axiom, \(\{e_{m},e_{1},e_{2}\}\) is a circuit, a contradiction. Thus \(i^{\prime}\not\in\{1,2\}\) and hence \(i^{\prime}=3\) and \(\{e_{m},e_{1},s_{3}\}\) and \(\{s_{1},s_{2},e_{1}\}\) are circuits. Given that \(\{s_{2},e_{1},e_{2}\}\) is also a circuit, it follows that \(\{e_{1},e_{2}\}\subset\mathrm{cl}(\{s_{1},s_{2}\}).\) Now \(j^{\prime}\in\{1,2\},\) and \(\{s_{j^{\prime}},e_{2},e_{3}\}\) is a circuit, implying that \(e_{3}\in\mathrm{cl}(\{s_{1},s_{2}\}).\) However, this is impossible since (by assumption) \(\{s_{1},s_{2},s_{3}\}\) is a basis. Thus there must be some ordering of \(S\) so that when the elements of \(S\) are inserted (in this order) between \(e_{m}\) and \(e_{1}\) or between \(e_{1}\) and \(e_{2}\), the resulting ordering is a cyclic ordering for \(M.\)
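The insertion step sketched above is easy to phrase as a search: given a cyclic ordering of \(M\backslash S\) and the removed basis \(S\), try every gap and every ordering of \(S\). The snippet below (plain Python) does exactly that; `is_basis` is a placeholder oracle and the data are toy values (a uniform matroid), not an example from the paper. The content of the argument above is that, when \(S\) is chosen as in Proposition 2.3, some gap always admits a suitable ordering of \(S\).

```python
# Sketch of the insertion step: extend a cyclic ordering of M\S by the basis S.
# is_basis and the toy data below are assumptions, not taken from the paper.
from itertools import permutations

R = 3                                   # rank of M
S = [6, 7, 8]                           # the removed basis
ORDER_M_PRIME = [0, 1, 2, 3, 4, 5]      # assumed cyclic ordering of M \ S

def is_basis(X):
    """Basis oracle; here: any R distinct elements (uniform matroid U_{3,9})."""
    return len(set(X)) == R

def is_cyclic_ordering(order):
    n = len(order)
    return all(is_basis([order[(i + j) % n] for j in range(R)]) for i in range(n))

def insert_basis(order, S):
    """Try every gap and every ordering of S; return an extended cyclic ordering."""
    for i in range(len(order)):
        for perm in permutations(S):
            candidate = order[:i + 1] + list(perm) + order[i + 1:]
            if is_cyclic_ordering(candidate):
                return candidate
    return None

print(insert_basis(ORDER_M_PRIME, S))
```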
## 2 Removing a basis from a matroid Let \(M\) be a paving matroid where \(\gamma(M)=\beta(E(M)).\) As a first step in the proof of Theorem 1.3, we wish to find a basis \(B\) of \(M\) where \(\gamma(M\backslash B)=\beta(E(M)-B).\) Unfortunately, there are matroids where there is no such basis, as for example, the Fano plane. In this section, we will show that, despite this, such bases exist when \(|E(M)|\geq 2r(M)+2.\) The following is an elementary observation which we will refer to in a number of places. **2.1 Observation** For a basis \(B\) in a matroid \(M\) and an element \(x\in E(M)-B,\) the set \(B+x\) has a unique circuit which contains \(x.\) We will need the following strengthening of Edmonds' matroid partition theorem [3] given in [4]: **2.2 Theorem** Let \(M\) be a matroid where \(\gamma(M)=k+\varepsilon,\) with \(k\in\mathbb{N}\) and \(0\leq\varepsilon<1.\) Then \(E(M)\) can be partitioned into \(k+1\) independent sets with one set of size at most \(\varepsilon r(M).\) We are now in a position to prove the main result of this section. **2.3 Proposition** Let \(M\) be a paving matroid where \(|E(M)|\geq 2r(M)+2\) and \(r(M)\geq 3.\) Then there is a basis \(B\subset E(M)\) where \(\gamma(M\backslash B)=\beta(E(M)-B)\) and \(r(M\backslash B)=r(M).\) Proof.: Let \(\gamma(M)=k+\frac{\ell}{r(M)}\) where \(0\leq\ell<r(M)\) and \(k\geq 2.\) Then \(|E(M)|=kr(M)+\ell\) and it follows by Theorem 2.2 that one can partition \(E(M)\) into \(k\) bases \(F_{1},\ldots,F_{k}\) and one independent set \(F_{k+1}\) having \(\ell\) elements. Let \(r=r(M).\) If \(\ell=0,\) then \(|E(M)|=kr\geq 3r.\) In this case, we can take \(B=F_{k}\) since for \(M^{\prime}=M\backslash F_{k},\) it is seen that \(\gamma(M^{\prime})=k-1=\beta(E(M^{\prime})).\) Thus we may assume that \(\ell>0.\) Let \(F_{k}=\{x_{1},x_{2},\ldots,x_{r}\}.\) Suppose there exist distinct \(i,j\in[r]\) for which \(r((F_{k}-x_{i})\cup F_{k+1})=r((F_{k}-x_{j})\cup F_{k+1})=r-1.\) Let \(x\in F_{k+1}.\) Then \(x+(F_{k}-x_{i})\) and \(x+(F_{k}-x_{j})\) are (distinct) circuits, contradicting Observation 2.1. Thus there is at most one \(i\in[r]\) for which \(r((F_{k}-x_{i})\cup F_{k+1})=r-1.\) As such, we may assume that for \(i=1,\ldots,r-1,\) \(r((F_{k}-x_{i})\cup F_{k+1})=r.\) Thus for \(i=1,\ldots,r-1,\) there is a subset \(A_{i}\subseteq F_{k}-x_{i}\) such that \(B_{i}=A_{i}\cup F_{k+1}\) is a basis for \(M.\) We shall show that there exists \(i\in[r-1]\) such that \(B=B_{i}\) is a basis satisfying the proposition. Suppose to the contrary that this is false.
Then for all \(i\in[r-1],\) there is a subset \(X_{i}\subseteq E(M)-B_{i}\) for which \(\beta(X_{i})>\beta(E-B_{i}).\) Since \(k>1,\) we have that \(F_{1}\subseteq E(M\backslash B_{i})\) and hence \(r(M\backslash B_{i})=r.\) Thus we have \(\beta(E-B_{i})=k-1+\frac{\ell}{r}.\) If \(r(X_{i})<r-1,\) then \(X_{i}\) is independent and hence \(\beta(X_{i})=1\leq\beta(E-B_{i}).\) Thus \(r(X_{i})\geq r-1\) and seeing as \(\beta(X_{i})>\beta(E(M)-B_{i}),\) we have \(r(X_{i})\leq r-1.\) Consequently, \(r(X_{i})=r-1\) and \(\beta(X_{i})=\frac{|X_{i}|}{r-1}>k-1+\frac{\ell}{r}\) Since \(r(X_{i})=r-1,\) it follows that for \(j=1,\ldots,k-1,\)\(|X_{i}\cap F_{j}|\leq r-1.\) Consequently, \(|X_{i}|\leq(k-1)(r-1)+\ell.\) If \(|X_{i}|<(k-1)(r-1)+\ell,\) then \(\beta(X_{i})\leq k-1+\frac{\ell-1}{r-1},\) implying that \(\beta(X_{i})\leq k-1+\frac{\ell}{r}\) (because \(\ell\leq r-1),\) contradicting our assumptions. Thus it follows that \(|X_{i}|=(k-1)(r-1)+\ell\) and for all \(i\in[r-1]\) and for all \(j\in[k-1],\)\(|X_{i}\cap F_{j}|=r-1,\) and \(F_{k}-A_{i}\subset X_{i}\). Thus for all \(i\in[r-1]\) and for all \(j\in[k-1],\)\(X_{ij}=X_{i}\cap F_{j}\) spans \(X_{i}.\) Since all circuits in \(M\) have size at least \(r\), it follows that for all \(j\in[k-1]\), and for all \(x\in X_{i}-X_{ij}\), \(X_{ij}+x\) is a circuit. Suppose \(k\geq 3\). Let \(i,j\in[r-1]\) where \(i\) and \(j\) are distinct. Then for all \(x\in X_{i2}\cap X_{j2}\), we have that \(x+X_{i1}\) and \(x+X_{j1}\) are circuits. It follows by Observation 2.1 that \(X_{i1}=X_{j1}\) and thus \(\operatorname{cl}(X_{i})=\operatorname{cl}(X_{j})\). Let \(X=\operatorname{cl}(X_{i})\). Then \(\{x_{i},x_{j}\}\subset X\), and since this applies to all \(j\in[r-1]-i\), it follows that \(F_{k}-x_{r}\subset X\). If \(r((F_{k}-x_{r})\cup F_{k+1})=r\), then it would also follow that \(x_{r}\in X\), implying that \(F_{k}\subset X\), an impossibility (since \(r(X)=r-1\)). Thus \(r((F_{k}-x_{r})\cup F_{k+1})=r-1\), and hence \(F_{k+1}\subset X\). Now it is seen that \(\beta(X)=\frac{|X|}{r(X)}=\frac{k(r-1)+\ell}{r-1}=k+\frac{\ell}{r-1}>\gamma(M)\), a contradiction. From the above, we have \(k=2\). Since \(|E(M)|\geq 2r(M)+2\), we have \(\ell\geq 2\). Let \(i,j\in[r-1]\) be distinct integers. Suppose there exists \(x\in(F_{2}-A_{i})\cap(F_{2}-A_{j})\). Then \(x+X_{i1}\) and \(x+X_{j1}\) are circuits, implying that \(X_{i1}=X_{j1}\) and \(\operatorname{cl}(X_{i})=\operatorname{cl}(X_{j})\). Suppose instead that \((F_{2}-A_{i})\cap(F_{2}-A_{j})=\O\). That is, \(F_{2}-A_{i}\subseteq A_{j}\) (and \(F_{2}-A_{j}\subseteq A_{i}\)). Since \(\ell\geq 2\), there exists \(x_{s}\in F_{2}-A_{i}-x_{i}\). Now \(x_{s}+B_{i}\) contains a (unique) circuit \(C\) where \(x_{s}\in C\). Also, there exists \(x_{t}\in C\cap(F_{2}-A_{j})\) since \(|C\cap(F_{2}-A_{j})|\geq|C|-(r-\ell)-1\geq r-(r-\ell)-1\geq\ell-1\geq 1\). Noting that \(B_{i}-x_{t}+x_{s}\) is also a basis, we can take \(A_{t}=A_{i}-x_{t}+x_{s}\) and \(B_{t}=B_{i}-x_{t}+x_{s}\). Consequently, \(x_{t}\in(F_{2}-A_{t})\cap(F_{2}-A_{j})\) and also, \(x_{i}\in(F_{2}-A_{t})\cap(F_{2}-A_{i})\). Thus by the above we have \(\operatorname{cl}(X_{i})=\operatorname{cl}(X_{t})=\operatorname{cl}(X_{j})\). It follows that for all \(j\in[r-1]-1\), \(\operatorname{cl}(X_{1})=\operatorname{cl}(X_{j})\). Let \(X=\operatorname{cl}(X_{1})\). We can now adopt the same arguments as in the case \(k\geq 3\) to show that \(\beta(X)=k+\frac{\ell}{r-1}>\gamma(M)\), yielding a contradiction. 
Thus for at least one integer \(i\in[r-1]\), \(B=B_{i}\) is a basis satisfying the proposition. This completes the proof. ## 3 S-Pairs In the second part of the proof of Theorem 1.3, we will need to establish the existence of certain circuits. More specifically, suppose \(S\) is a basis as described in Proposition 2.3 where we assume that \(S=\{s_{1},\ldots,s_{r}\}\). Suppose \(e_{1}e_{2}\ldots e_{m}\) is a cyclic ordering for \(M^{\prime}=M\backslash S\). Suppose that we wish to extend this ordering to a cyclic ordering for \(M\) by inserting the elements of \(S\), in some order, between \(e_{m}\) and \(e_{1}\). Assuming this is not possible, then it turns out (as in the case where \(r(M)=3\)) that there must be certain circuits. For example, there are subsets \(\{B_{1},B_{2}\}\subset\binom{S}{r-2}\) such that for all \(s_{i}\in B_{1},\ \{s_{i},e_{m-r+2},\ldots,e_{m}\}\in\mathscr{C}(M)\) and for all \(s_{i}\in B_{2},\ \{s_{i},e_{1},\ldots,e_{r-1}\}\in\mathscr{C}(M)\). The results in this section and its successor lay the groundwork to prove the existence of such circuits. Let \(S\) be a finite, nonempty set. For \(i=1,2,\) let \(\mathscr{S}_{i}\subseteq 2^{S}.\) We call the pair \((\mathscr{S}_{1},\mathscr{S}_{2})\) an **S-pair** if it has the following properties. * (**S1**) For \(i=1,2,\) if \(A,B\in\mathscr{S}_{i}\) where \(|A|=|B|+1\) and \(B\subset A,\) then \(\binom{A}{|B|}\subseteq\mathscr{S}_{i}.\) * (**S2**) For \(i=1,2,\) if \(A,B\in\mathscr{S}_{i}\) where \(|A|=|B|\) and \(|A\cap B|=|A|-1,\) then \(A\cup B\in\mathscr{S}_{i}.\) * (**S3**) For \(i=1,2,\) \(\binom{S}{1}\not\subseteq\mathscr{S}_{i}\) and \(S\not\in\mathscr{S}_{i}.\) * (**S4**) For \(k=1,\ldots,|S|-1,\) if \(\binom{S-x}{k}\subseteq\mathscr{S}_{1}\) for some \(x\in S,\) then \(\binom{S-x}{|S|-k}\not\subseteq\mathscr{S}_{2}.\) In the next section, we shall need the following observations for an \(S\)-pair \((\mathscr{S}_{1},\mathscr{S}_{2})\) where \(|S|=n.\) **3.1 Observation** Let \(A\subseteq S\) where \(\alpha=|A|.\) Suppose that for some \(i\in\{1,2\}\) and some \(j\in[\alpha],\) \(\binom{A}{j}\subseteq\mathscr{S}_{i}.\) Then for \(k=j,\ldots,\alpha,\) \(\binom{A}{k}\subseteq\mathscr{S}_{i}.\) Proof.: We may assume that \(j<\alpha.\) Suppose that for some \(k\in\{j,\ldots,\alpha-1\},\) \(\binom{A}{k}\subseteq\mathscr{S}_{i}.\) Let \(B\in\binom{A}{k+1}.\) Let \(\{b_{1},b_{2}\}\subseteq B\) and for \(s=1,2,\) let \(B_{s}=B-b_{s}.\) By assumption, for \(s=1,2,\) \(B_{s}\in\mathscr{S}_{i}.\) It now follows by (**S2**) that \(B=B_{1}\cup B_{2}\in\mathscr{S}_{i}.\) Hence \(\binom{A}{k+1}\subseteq\mathscr{S}_{i}.\) Arguing inductively, we see that for \(k=j,\ldots,\alpha,\) \(\binom{A}{k}\subseteq\mathscr{S}_{i}.\) **3.2 Observation** Let \(A\in\mathscr{S}_{i}\) where \(\alpha=|A|.\) Suppose that for some \(j\in[\alpha-1]\) and \(x\in A,\) we have \(\binom{A-x}{j}\subseteq\mathscr{S}_{i}.\) Then \(\binom{A}{j}\subseteq\mathscr{S}_{i}.\) Proof.: Suppose first that \(j=\alpha-1.\) Then \(A^{\prime}=A-x\in\mathscr{S}_{i}.\) It follows by (**S1**) that \(\binom{A}{\alpha-1}\subseteq\mathscr{S}_{i}.\) Assume that \(j<\alpha-1\) and the assertion holds for \(j+1;\) that is, if \(\binom{A-x}{j+1}\subseteq\mathscr{S}_{i},\) then \(\binom{A}{j+1}\subseteq\mathscr{S}_{i}.\) Suppose \(\binom{A-x}{j}\subseteq\mathscr{S}_{i}.\) Then by Observation 3.1, \(\binom{A-x}{j+1}\subseteq\mathscr{S}_{i}.\) Thus by assumption, \(\binom{A}{j+1}\subseteq\mathscr{S}_{i}.\) Let \(B\in\binom{A}{j},\) where \(x\in B.\) Let \(y\in A-B\) and let \(B^{\prime}=B-x+y.\) Since
\(B^{\prime}\in\binom{A-x}{j},\) it follows that \(B^{\prime}\in\mathscr{S}_{i}.\) However, we also have that \(B+y\in\mathscr{S}_{i}.\) Thus it follows by (**S1**) that \(B\in\mathscr{S}_{i}.\) We now see that \(\binom{A}{j}\subseteq\mathscr{S}_{i}.\) The assertion now follows by induction. **3.3 Observation** Let \(A\subseteq S.\) Suppose for some \(x\in A,\) \(i\in\{1,2\},\) and \(j\geq 2,\) we have that \(\{B\in\binom{A}{j}\ \big{|}\ x\in B\}\subseteq\mathscr{S}_{i}.\) Then \(\binom{A}{j}\subseteq\mathscr{S}_{i}\) and \(A\in\mathscr{S}_{i}.\) Proof.: We may assume that \(|A|\geq j+1.\) Let \(B^{\prime}\in{A-x\choose j}.\) Let \(\{y_{1},y_{2}\}\subseteq B^{\prime}\) and for \(s=1,2,\) let \(B_{s}=B^{\prime}-y_{s}+x.\) By assumption, \(\{B_{1},B_{2}\}\subset\mathscr{S}_{i}.\) It follows by \((\mathbf{S2})\) that \(B=B^{\prime}+x=B_{1}\cup B_{2}\in\mathscr{S}_{i}.\) Thus by \((\mathbf{S1})\) we have that \({B\choose j}\subseteq\mathscr{S}_{i}\) and hence \(B^{\prime}\in\mathscr{S}_{i}.\) It now follows that \({A\choose j}\subseteq\mathscr{S}_{i},\) and moreover, \(A\in\mathscr{S}_{i}\) (by Observation 3.1). ## 4 Order consistent pairs Let \(S\) be a set of \(n\) elements and let \(\mathscr{S}_{1}\subseteq 2^{S}\) and \(\mathscr{S}_{2}\subseteq 2^{S}.\) We say that the pair \((\mathscr{S}_{1},\mathscr{S}_{2})\) is **order consistent** with respect to \(S\) if for any ordering \(s_{1}s_{2}\cdots s_{n}\) of \(S\), there exists \(i\in[n]\) for which either \(\{s_{1},\cdots,s_{i}\}\in\mathscr{S}_{1}\) or \(\{s_{i},\ldots,s_{n}\}\in\mathscr{S}_{2}.\) Let \(\Pi\) denote the set of all permutations of \([n].\) Let \(\pi\in\Pi.\) We say that a subset \(A\in\mathscr{S}_{1}\) (resp. \(B\in\mathscr{S}_{2}\)) is \(\pi\)**-relevant** if there exists \(i\in[n]\) such that \(A=\{s_{\pi(1)},\ldots,s_{\pi(i)}\}\) (resp. \(B=\{s_{\pi(i)},\ldots,s_{\pi(n)}\}\)). Let \(\Pi^{\prime}\subseteq\Pi\) be a subset of permutations. We say that a subset \(\mathscr{A}\subseteq\mathscr{S}_{1}\) (resp. \(\mathscr{B}\subseteq\mathscr{S}_{2}\)) is \(\Pi^{\prime}\)**-relevant** if for all \(A\in\mathscr{A}\) (resp. \(B\in\mathscr{B}\)), there exists \(\pi\in\Pi^{\prime}\) such that \(A\) (resp. \(B\)) is \(\pi\)-relevant. We say that \((\mathscr{A},\mathscr{B})\) is order consistent relative to \(\Pi^{\prime}\) if for all \(\pi\in\Pi^{\prime},\) either there exists \(A\in\mathscr{A}\) for which \(A\) is \(\pi\)-relevant, or there exists \(B\in\mathscr{B}\) for which \(B\) is \(\pi\)-relevant. For \(i\in[n],\) we let \(\Pi_{i}\) denote the set of permutations \(\pi\in\Pi\) where \(\pi(1)=i.\) The following theorem will be instrumental in the proof of the main theorem. **4.1 Theorem** Let \(S\) be a set where \(|S|=n\geq 3\) and let \((\mathscr{S}_{1},\mathscr{S}_{2})\) be an \(S\)-pair. Then \((\mathscr{S}_{1},\mathscr{S}_{2})\) is order consistent if and only if there exists \((A_{1},A_{2})\in{n-1\choose\mathscr{S}_{1}}\times{n-1\choose\mathscr{S}_{2}},\) \(A_{1}\neq A_{2},\) and \(\{B_{1},B_{2}\}\subset{S\choose n-2}\) where for \(i=1,2,\) \(B_{i}\cap A_{i}=B_{1}\cap B_{2}\in{A_{1}\cap A_{2}\choose n-3}\) and \({B_{i}\choose 1}\subset\mathscr{S}_{i}.\) Proof.: To prove sufficiency, suppose \(A_{i},B_{i},\ i=1,2\) are as described in the theorem.
Note that since \(A_{1}\neq A_{2},\) we have that for \(i=1,2,\) \(B_{i}\subset A_{3-i}\) and \(A_{i}-A_{3-i}\subseteq B_{3-i}.\) For \(i=1,2,\) let \(\mathscr{T}_{i}=\{A_{i}\}\cup{B_{i}\choose 1}.\) We need only show that \((\mathscr{T}_{1},\mathscr{T}_{2})\) is order consistent. Suppose it is not. Clearly it is order consistent relative to the set of permutations \(\pi\) for which \(s_{\pi(1)}\in B_{1}\) or \(s_{\pi(n)}\in B_{2}.\) Let \(\pi\in\Pi\) where \(s_{\pi(1)}\not\in B_{1}\) and \(s_{\pi(n)}\not\in B_{2}.\) If \(s_{\pi(1)}\not\in A_{2},\) then \(A_{2}=\{s_{\pi(2)},\ldots,s_{\pi(n)}\}\) and \(A_{2}\) is \(\pi\)-relevant. Thus we may assume that \(s_{\pi(1)}\in A_{2},\) and since \(s_{\pi(1)}\not\in B_{1}\) and \(B_{1}\subset A_{2},\) we get \(A_{2}-B_{1}=\{s_{\pi(1)}\}=(A_{1}\cap A_{2})-B_{1}\). By similar reasoning, we also have \(A_{1}-B_{2}=\{s_{\pi(n)}\}=(A_{1}\cap A_{2})-B_{2}\). However, our assumptions imply that \((A_{1}\cap A_{2})-B_{1}=(A_{1}\cap A_{2})-B_{2}\), and consequently, \(s_{\pi(1)}=s_{\pi(n)}\). This yields a contradiction. It follows that \((\mathscr{T}_{1},\mathscr{T}_{2})\) is order consistent. To prove necessity, we shall use induction on \(n\). It is a straightforward exercise to verify the assertion for \(n=3\). We shall assume that \(n\geq 4\) and the assertion is valid for all values less than \(n\). That is, if \(|S|<n\), and \((\mathscr{S}_{1},\mathscr{S}_{2})\) is an \(S\)-pair which is order consistent, then there exist sets \(A_{i},B_{i},\ i=1,2\) as described in the theorem. Assume now that \(|S|=n\) and \((\mathscr{S}_{1},\mathscr{S}_{2})\) is an \(S\)-pair which is order consistent. For all \(k\in[n]\), let \(S^{k}=S-s_{k}\) and let \(\mathscr{S}_{1}^{k}=\{A-s_{k}\ \big{|}\ A\in\mathscr{S}_{1}\ \text{and}\ s_{k}\in A\}\) and \(\mathscr{S}_{2}^{k}=\{A\in\mathscr{S}_{2}\ \big{|}\ s_{k}\notin A\}\). We observe that properties (**S1**) and (**S2**) still hold for the pair \((\mathscr{S}_{1}^{k},\mathscr{S}_{2}^{k})\) whereas (**S3**) and (**S4**) may not. **(A)** For all \(k\in[n]\), one of the following holds: **(a1)**: \(\{s_{k}\}\in\mathscr{S}_{1}\). **(a2)**: \(S^{k}\in\mathscr{S}_{2}\). **(a3)**: \(\binom{S^{k}}{1}\subseteq\mathscr{S}_{2}\). **(a4)** For some \(D\in\binom{S^{k}}{n-2}\), and positive integers \(i,j\) where \(i+j=n-1\), \(\binom{D}{i}\subseteq\mathscr{S}_{1}^{k}\) and \(\binom{D}{j}\subseteq\mathscr{S}_{2}^{k}\). **(a5)** There exist \((A_{1}^{k},A_{2}^{k})\in\binom{n-2}{\mathscr{S}_{1}^{k}}\times\binom{n-2}{\mathscr{S}_{2}^{k}}\), \(A_{1}^{k}\neq A_{2}^{k}\), and \(\{B_{1}^{k},B_{2}^{k}\}\subseteq\binom{S^{k}}{n-3}\) where for \(i=1,2\), \(B_{i}^{k}\cap A_{i}^{k}=B_{1}^{k}\cap B_{2}^{k}\in\binom{A_{1}^{k}\cap A_{2}^{k}}{n-4}\) and \(\binom{B_{i}^{k}}{1}\subseteq\mathscr{S}_{i}^{k}\). Proof.: Let \(k\in[n]\). Assume that none of (**a1**) - (**a4**) hold for \(k\). We will show that (**a5**) must hold for \(k\). Clearly \(S^{k}\not\in\mathscr{S}_{1}^{k}\), for otherwise this would mean that \(S\in\mathscr{S}_{1}\) which is not allowed by (**S3**). We also have that \(\binom{S^{k}}{1}\not\subseteq\mathscr{S}_{1}^{k}\). For if this was the case, then it would follow that for all \(i\in[n]-k\), \(\{s_{i},s_{k}\}\in\mathscr{S}_{1}\). It would then follow by Observation 3.3 that \(S\in\mathscr{S}_{1}\), violating (**S3**). Given that (**a2**) - (**a4**) do not hold, \((\mathscr{S}_{1}^{k},\mathscr{S}_{2}^{k})\) is seen to be an \(S^{k}\)-pair. Let \(\pi\in\Pi_{k}\) and let \(\pi^{\prime}=\pi(2)\pi(3)\cdots\pi(n)\).
Since \((\mathscr{S}_{1},\mathscr{S}_{2})\) is order preserving, there exists \(A\in\mathscr{S}_{1}\) or \(B\in\mathscr{S}_{2}\) and \(i\in[n]\) such that either \(A=\{s_{\pi(1)},\ldots,s_{\pi(i)}\}\) or \(B=\{s_{\pi(i)},\ldots,s_{\pi(n)}\}\). Given that (**a1**) and (**a2**) do not hold, it follows that in the former case, \(i\geq 2\), \(A^{\prime}=\{s_{\pi(2)},\ldots,s_{\pi(i)}\}\in\mathscr{S}_{1}^{k}\) and hence \(A^{\prime}\) is \(\pi^{\prime}\)-relevant. In the latter case, \(i\geq 3\) and \(B^{\prime}=\{s_{\pi(i)},\ldots,s_{\pi(n)}\}\in\mathscr{S}_{2}^{k}\) and \(B^{\prime}\) is \(\pi^{\prime}\)-relevant. Given that \(\pi\) was arbitrarily chosen from \(\Pi_{k}\), we see that \((\mathscr{S}_{1}^{k},\mathscr{S}_{2}^{k})\) is order preserving with respect to \(S^{k}.\) By the inductive assumption, there exist \((A_{1}^{k},A_{2}^{k})\in\binom{n-2}{\mathscr{S}_{1}^{k}}\times\binom{n-2}{ \mathscr{S}_{2}^{k}},\)\(A_{1}^{k}\neq A_{2}^{k},\) and \(\{B_{1}^{k},B_{2}^{k}\}\subset\binom{S^{k}}{n-3}\) where for \(i=1,2,\)\(B_{i}^{k}\cap A_{i}^{k}=B_{1}^{k}\cap B_{2}^{k}\in\binom{A_{1}^{k}\cap A_{2}^{k}}{n-4}\) and \(\binom{B_{1}^{k}}{1}\subset\mathscr{H}_{i}^{k}.\) Thus \((\mathbf{a5})\) holds for \(k.\) **(B)** There is at most one integer \(k\) for which \((\mathbf{a2})\) or \((\mathbf{a3})\) holds. Proof.: It suffices to prove that \((\mathbf{a2})\) can hold for at most one integer \(k;\) if \((\mathbf{a3})\) holds for some integer \(k,\) then it follows by Observation 3.1 that \(S^{k}\in\mathscr{S}_{2},\) and hence \((\mathbf{a2})\) holds for \(k.\) Suppose to the contrary that \((\mathbf{a2})\) holds for distinct integers \(k\) and \(\ell.\) Then \(S^{k}\in\mathscr{S}_{2}\) and \(S^{\ell}\in\mathscr{S}_{2}.\) It then follows by \((\mathbf{S2})\) that \(S=S^{k}\cup S^{\ell}\in\mathscr{S}_{2}.\) However, this violates \((\mathbf{S3})\). Thus no two such integers can exist. **(C)** Property \((\mathbf{a4})\) holds for at most one integer \(k.\) Proof.: Suppose \((\mathbf{a4})\) holds for distinct integers \(k\) and \(\ell.\) Then for some \(i,j,i^{\prime},j^{\prime}\) where \(i+j=n-1,\)\(i^{\prime}+j^{\prime}=n-1,\) and subsets \(D\in\binom{S^{k}}{n-2}\) and \(D^{\prime}\in\binom{S^{\ell}}{n-2},\) we have \(\binom{D}{i}\subseteq\mathscr{S}_{1}^{k},\)\(\binom{D}{j}\subseteq\mathscr{S}_{2}^{k},\)\(\binom{D^{\prime}}{i^{\prime}}\subseteq\mathscr{S}_{1}^{\ell},\) and \(\binom{D^{\prime}}{j^{\prime}}\subseteq\mathscr{S}_{2}^{\ell}.\) We have that \(F_{1}=D+s_{k}\in\mathscr{S}_{1}\) and \(F_{2}=D^{\prime}+s_{\ell}\in\mathscr{S}_{1}.\) If \(F_{1}\neq F_{2},\) then by property \((\mathbf{S2}),\)\(F_{1}\cup F_{2}=S\in\mathscr{S}_{1},\) violating \((\mathbf{S3})\). 
Thus \(F_{1}=F_{2}=S-s=S^{\prime}\) for some \(s\in S-s_{k}-s_{\ell}\) and \(S^{\prime}\in\mathscr{S}_{1}.\) Let \(i^{*}=\max\{i,i^{\prime}\}\) and \(j^{*}=\min\{j,j^{\prime}\}.\) We claim that \(\binom{S^{\prime}}{i^{*}+1}\subseteq\mathscr{S}_{1}\) and \(\binom{S^{\prime}}{j^{*}}\subseteq\mathscr{S}_{2}.\) To prove the first assertion, we first note that it is true when \(i^{*}=n-2\) since \(S^{\prime}\in\mathscr{S}_{1}.\) We may assume that \(i^{*}<n-2.\) Then \(i^{*}\leq n-3=|D\cap D^{\prime}|=|S^{\prime}-s_{k}-s_{\ell}|.\) By assumption, \(\binom{D\cap D^{\prime}}{i^{*}}\subset\mathscr{S}_{1}^{k}.\) Thus for all \(X\in\binom{S^{\prime}-s_{k}-s_{\ell}}{i^{*}+1},\ X+s_{k}\in\mathscr{S}_{1}.\) It now follows by Observation 3.3 that \(\binom{S^{\prime}-s_{\ell}}{i^{*}+1}\subseteq\mathscr{S}_{1}.\) Now Observation 3.2 implies that \(\binom{S^{\prime}}{i^{*}+1}\subseteq\mathscr{S}_{1}.\) To prove that \(\binom{S^{\prime}}{j^{*}}\subseteq\mathscr{S}_{2},\) first suppose that \(j^{*}=j=j^{\prime}=n-2\). In this case, \(D,D^{\prime}\in\mathscr{S}_{2}\) and hence \(S^{\prime}=D\cup D^{\prime}\in\mathscr{S}_{2}\) by \((\mathbf{S2})\). It would then follow by \((\mathbf{S1})\) that \(\binom{S^{\prime}}{n-2}\subseteq\mathscr{S}_{2}.\) Thus we may assume that \(j^{*}<n-2.\) We have that \(\binom{D\cap D^{\prime}}{j^{*}}\subseteq\mathscr{S}_{2}.\) Given that \(D\cap D^{\prime}=S^{\prime}-s_{k}-s_{\ell},\) it follows by Observation 3.2 that \(\binom{S^{\prime}-s_{\ell}}{j^{*}}\subseteq\mathscr{S}_{2}\) and this in turn implies that \(\binom{S^{\prime}}{j^{*}}\subseteq\mathscr{S}_{2}.\) Given that \(i+j=i^{\prime}+j^{\prime}=n-1,\) it follows that \(i^{*}\leq n-1-j^{*},\) and hence \(i^{*}+1+j^{*}\leq n.\) However, this would mean that \((\mathbf{S4})\) is violated. Thus \((\mathbf{a4})\) can hold for at most one integer \(k.\) **(D)** There exists \(T\in\binom{S}{n-3}\) such that either \(\binom{T}{1}\subseteq\mathscr{S}_{1}\) or \(\binom{T}{1}\subseteq\mathscr{S}_{2}.\) Proof.: Assume that the assertion is false. Then there are at least three integers \(k\) for which (**a1**) does not hold. Furthermore, (**a3**) does not hold for any integer \(k.\) By (**B**) and (**C**), (**a2**) or (**a3**) holds for at most one integer \(k\) and (**a4**) holds for at most one integer \(k.\) Thus there exists \(k\in[n]\) such that none of (**a1**) - (**a4**) hold. By (**A**), (**a5**) holds for \(k.\) Thus there exists \((A_{1}^{k},A_{2}^{k})\in{n-2\choose{\mathscr{S}_{1}^{k}}}\times{n-2\choose{ \mathscr{S}_{2}^{k}}},\)\(A_{1}^{k}\neq A_{2}^{k},\) and \(\{B_{1}^{k},B_{2}^{k}\}\subset{S^{k}\choose{n-3}}\) where for \(i=1,2,\)\(B_{i}^{k}\cap A_{i}^{k}=B_{1}^{k}\cap B_{2}^{k}\in{A_{1}^{k}\cap A_{2}^{k}\choose{n-4}}\) and \({B_{i}^{k}\choose{1}}\subset{\mathscr{S}_{i}^{k}}.\) Thus we see that \({B_{2}^{k}\choose{1}}\subseteq{\mathscr{S}_{2}^{k}}\subseteq{\mathscr{S}_{2}}.\) Given that \(|B_{2}^{k}|=n-3,\) this contradicts our assumption. 
**(E)** There exists \(T\in{S\choose{n-2}}\) such that either \({T\choose{1}}\subseteq{\mathscr{S}_{1}}\) or \({T\choose{1}}\subseteq{\mathscr{S}_{2}}.\) Proof.: By (**D**), we may assume (by symmetry) that for all \(i\in[n-3],\)\(\{s_{i}\}\in{\mathscr{S}_{1}}.\) Next, we will show that either \(\{s_{i}\}\in{\mathscr{S}_{1}}\) for some \(i\in\{n-2,n-1,n\},\) or \({S^{\prime}\choose{1}}\subseteq{\mathscr{S}_{2}}\) for some \(S^{\prime}\in{S\choose{n-2}}.\) We may assume that (**a1**) and (**a3**) do not hold for all \(k\in\{n-2,n-1,n\}.\) Furthermore, by (**B**) and (**C**), (**a2**) and (**a4**) hold for at most one integer \(k\in\{n-2,n-1,n\}.\) As such, we may assume that (**a2**) and (**a4**) do not hold for \(k=n-2.\) Thus by By (A), (**a5**) holds for \(k=n-2\). Thus there exist \((A_{1}^{n-2},A_{2}^{n-2})\in{n-2\choose{\mathscr{S}_{1}^{n-2}}}\times{n-2 \choose{\mathscr{S}_{2}^{n-2}}},\)\(A_{1}^{n-2}\neq A_{2}^{n-2},\) and \(\{B_{1}^{n-2},B_{2}^{n-2}\}\subset{S^{n-2}\choose{n-3}}\) where for \(i=1,2,\)\(B_{i}^{n-2}\cap A_{i}^{n-2}=B_{1}^{n-2}\cap B_{2}^{n-2}\in{A_{1}^{n-2}\cap A_{2}^{n-2} \choose{n-4}}\) and \({B_{i}^{n-2}\choose{1}}\subset{\mathscr{S}_{i}^{n-2}}.\) Suppose \(s_{i}\in B_{1}^{n-2}\cap\{s_{1},\ldots,s_{n-3}\}.\) By assumption, \(\{s_{i}\}\in{\mathscr{S}_{1}}.\) However, given that \(s_{i}\in B_{1}^{n-2},\) we also have that \(\{s_{i}\}\in{\mathscr{S}_{1}^{n-2}}\) and hence \(\{s_{i},s_{n-2}\}\in{\mathscr{S}_{1}}.\) By (**S1**), \(\{s_{n-2}\}\in{\mathscr{S}_{1}},\) a contradiction. Thus \(B_{1}^{r-2}\cap\{s_{1},\ldots,s_{n-3}\}=\O.\) Thus \(B_{1}^{n-2}\subseteq\{s_{n-1},s_{n}\}\) and consequently, \(n-3\leq 2\) and hence \(n\leq 5\). To complete the proof, we need only consider two cases: **Case 1**: \(n=5.\) We have \(B_{1}^{n-2}=B_{1}^{3}=\{s_{4},s_{5}\}.\) We may assume that \(A_{2}^{3}=\{s_{1},s_{4},s_{5}\},\) where \(B_{1}^{3}\cap B_{2}^{3}=\{s_{4}\}.\) Thus \(A_{1}^{3}=\{s_{1},s_{2},s_{4}\}\) and \(B_{2}^{3}=\{s_{2},s_{4}\}.\) Then \(A_{1}^{3}+s_{3}=\{s_{1},s_{2},s_{3},s_{4}\}\in{\mathscr{S}_{1}}\) and \(B_{1}^{3}+s_{3}=\{s_{3},s_{4},s_{5}\}\in{\mathscr{S}_{1}}.\) Also, since \({B_{1}^{3}\choose{1}}\subset{\mathscr{S}_{1}^{3}},\) it follows that \(\{s_{3},s_{4}\},\{s_{3},s_{5}\}\in{\mathscr{S}_{1}}.\) By (**A**), for all \(k\in\{4,5\},\) one of (**a2**) - (**a5**) must hold. Clearly, we may assume that (**a3**) does not hold for \(k=4\) or \(k=5\). Also by (**B**), (**a2**) can hold for at most one value of \(k\). Thus we may assume the (**a4**) or (**a5**) holds for \(k=4.\) However, given that (**a5**) holds for \(k=3,\) it follows by (**C**) that (**a4**) holds for \(k=4.\) Then there exists a subset \(D^{\prime}\in{S^{4}\choose{3}}\) and integers \(i,j\) where \(i+j=4\) such that \(\binom{D^{\prime}}{i}\subseteq\mathscr{S}_{1}^{4}\) and \(\binom{D^{\prime}}{j}\subseteq\mathscr{S}_{2}^{4}\). We observe that \(D=D^{\prime}\cup\{s_{4}\}\in\mathscr{S}_{1}\). If \(D\neq\{s_{1},s_{2},s_{3},s_{4}\}\), then property (**S2**) implies that \(D\cup\{s_{1},s_{2},s_{3},s_{4}\}=\{s_{1},\ldots,s_{5}\}=S\in\mathscr{S}_{1}\), yielding a contradiction. Thus we must have that \(D=\{s_{1},s_{2},s_{3},s_{4}\}\), implying that \(D^{\prime}=\{s_{1},s_{2},s_{3}\}\). Suppose \(i=1\). Then \(\binom{D^{\prime}}{1}\subseteq\mathscr{S}_{1}^{4}\) and \(\{s_{1}\}\in\mathscr{S}_{1}^{4}\). This in turn implies that \(\{s_{1},s_{4}\}\in\mathscr{S}_{1}\). However, given that \(\{s_{1}\}\in\mathscr{S}_{1}\), it would follow that \(\{s_{4}\}\in\mathscr{S}_{1}\), a contradiction. Thus \(i\geq 2\). 
Also, if \(j=1\), then \(\binom{D^{\prime}}{1}\subseteq\mathscr{S}_{2}^{4}\subseteq\mathscr{S}_{2}\). In this case, the assertion is proved since \(|D^{\prime}|=3=n-2\). Thus we may assume that \(i=j=2\). Now we have \(\binom{D^{\prime}}{2}\subseteq\mathscr{S}_{2}^{4}\subseteq\mathscr{S}_{2}\). Given that \(\{s_{2}\}\in\mathscr{S}_{2}\) and for all \(i^{\prime}\in\{1,3\}\), \(\{s_{2},s_{i^{\prime}}\}\in\mathscr{S}_{2}\), it follows that for all \(i^{\prime}\in[3]\), \(\{s_{i^{\prime}}\}\in\mathscr{S}_{2}\). This yields a contradiction. This completes the case \(n=5\). **Case 2**: \(n=4\). We may assume that \(B_{1}^{n-2}=B_{1}^{2}=\{s_{4}\},\ A_{1}^{2}=\{s_{1},s_{3}\}\). There are two possible cases to consider for \(A_{2}^{2}\) and \(B_{2}^{2}\): either \(A_{2}^{2}=\{s_{1},s_{4}\}\) and \(B_{2}^{2}=\{s_{3}\}\) or \(A_{2}^{2}=\{s_{3},s_{4}\}\) and \(B_{2}^{2}=\{s_{1}\}\). We shall assume the former - the latter case can be handled similarly. We have that \(A_{1}^{2}+s_{2}=\{s_{1},s_{2},s_{3}\}\in\mathscr{S}_{1}\) and \(B_{1}^{2}+s_{2}=\{s_{2},s_{4}\}\in\mathscr{S}_{1}\). We also have that \(A_{2}^{2}=\{s_{1},s_{4}\}\in\mathscr{S}_{2}\) and \(\{s_{3}\}\in\mathscr{S}_{2}\). We may assume that (**a3**) does not hold for \(k=3\) or \(k=4\). Furthermore, (**B**) implies that (**a2**) holds for at most one of \(k=3\) or \(k=4\). Thus we may assume that (**a4**) or (**a5**) holds for \(k=3\). Since (**a5**) already holds for \(k=2\), it follows by (**C**) that (**a5**) can not hold for \(k=3\). Thus (**a4**) holds for \(k=3\). Then there exists a subset \(D^{\prime}\in\binom{S^{3}}{2}\) and integers \(i,j\) where \(i+j=3\) such that \(\binom{D^{\prime}}{i}\subseteq\mathscr{S}_{1}^{3}\) and \(\binom{D^{\prime}}{j}\subseteq\mathscr{S}_{2}^{3}\). We have that \(D=D^{\prime}\cup\{s_{3}\}\in\mathscr{S}_{1}\). Given that \(\{s_{1},s_{2},s_{3}\}\in\mathscr{S}_{1}\), if \(D\neq\{s_{1},s_{2},s_{3}\}\), then by (**S2**), \(S=D\cup\{s_{1},s_{2},s_{3}\}\in\mathscr{S}_{1}\), a contradiction. Thus we must have that \(D=\{s_{1},s_{2},s_{3}\}\), and thus \(D^{\prime}=\{s_{1},s_{2}\}\). If \(j=1\), then \(\binom{D^{\prime}}{1}\subseteq\mathscr{S}_{2}^{3}\subseteq\mathscr{S}_{2}\). Given \(|D^{\prime}|=2=n-2\), the assertion holds in this case. Thus we may assume that \(j=2\) and \(i=1\). However, this means that \(\{s_{1}\}\in\mathscr{S}_{1}^{3}\), implying that \(\{s_{1},s_{2}\}\in\mathscr{S}_{1}\). This in turn implies that \(\{s_{2}\}\in\mathscr{S}_{1}\) (since \(\{s_{1}\}\in\mathscr{S}_{1}\)) yielding a contradiction. This completes the case for \(n=4\). By (**E**), there exists \(i\in\{1,2\}\) and \(T\in\binom{S}{n-2}\) for which \(\binom{T}{1}\subseteq\mathscr{S}_{i}\). Without loss of generality, we may assume that \(i=1\) and \(T=\{s_{1},\ldots,s_{n-2}\}\); that is, for all \(j\in[n-2],\ \{s_{j}\}\in\mathscr{S}_{1}\). Suppose (**a1**) holds for \(k=n-1\); that is, \(\{s_{n-1}\}\in\mathscr{S}_{1}\). Then (**a1**) does not hold for \(k=n\) (otherwise (**S3**) is violated). If (**a2**) or (**a3**) holds for \(k=n\), then \(S^{n}\in\mathscr{S}_{2}\). In this case, (**S4**) is violated. Thus either (**a44**) or (**a5**) holds for \(k=n\). In either case, there exists \(D^{\prime}\in\binom{S^{n}}{n-2}\) where \(D^{\prime}\in\mathscr{S}_{1}^{n}\). Then \(D=D^{\prime}+s_{n}\in\mathscr{S}_{1}\). Given that \(\binom{D^{\prime}}{1}\subseteq\mathscr{S}_{1}\), it follows that \(D^{\prime}\in\mathscr{S}_{1}\). Furthermore, Observation 3.2 implies that \(\binom{D}{1}\subseteq\mathscr{S}_{1}\). 
However, this yields a contradiction since \(\{s_{n}\}\not\in\mathscr{S}_{1}\). Thus (**a1**) does not hold for \(k=n-1\) and the same holds for \(k=n\). Generally, the above argument demonstrates that for all \(i\in[2]\), and for all \(S^{\prime}\in\binom{S}{n-1}\), \(\binom{S^{\prime}}{1}\not\subseteq\mathscr{S}_{i}\). Suppose (**a2**) holds for \(k=n-1\). Then \(S^{n-1}=\{s_{1},\ldots,s_{n-2},s_{n}\}\in\mathscr{S}_{2}\). By (**B**), neither (**a2**) nor (**a3**) holds for \(k=n\). Suppose (**a5**) holds for \(k=n\). If \(n>4\), then \(|B_{1}^{n}|=n-3>1\) and there exists \(s_{i}\in B_{1}^{n}\cap\{s_{1},\ldots,s_{n-2}\}\). In this case, \(\{s_{i}\}\in\mathscr{S}_{1}^{n}\), and hence \(\{s_{i},s_{n}\}\in\mathscr{S}_{1}\). However, since \(\{s_{i}\}\in\mathscr{S}_{1}\), it follows that \(\{s_{n}\}\in\mathscr{S}_{1}\), a contradiction. Thus \(n=4\). Furthermore, \(B_{1}^{n}\cap\{s_{1},\ldots,s_{n-2}\}=B_{1}^{4}\cap\{s_{1},s_{2}\}=\emptyset\). It follows that \(B_{1}^{4}=\{s_{3}\}\) and \(A_{1}^{4}=\{s_{1},s_{2}\}\). Thus \(S^{3}=\{s_{1},s_{2},s_{4}\}\in\mathscr{S}_{1}\). Since for \(i=1,2\), \(\{s_{i}\}\in\mathscr{S}_{1}\), it follows by Observation 3.2 that \(\binom{S^{3}}{1}\subseteq\mathscr{S}_{1}\). However, this implies that \(\{s_{4}\}\in\mathscr{S}_{1}\), a contradiction. It follows from the above that (given (**a2**) holds for \(k=n-1\)) (**a4**) holds for \(k=n\). Thus there exists \(D^{\prime}\in\binom{S^{n}}{n-2}\) and integers \(i,j,\ i+j=n-1\), such that \(\binom{D^{\prime}}{i}\subseteq\mathscr{S}_{1}^{n}\) and \(\binom{D^{\prime}}{j}\subseteq\mathscr{S}_{2}^{n}\). Then \(D=D^{\prime}+s_{n}\in\mathscr{S}_{1}\). If \(D^{\prime}=\{s_{1},\ldots,s_{n-2}\}\), then \(D^{\prime}\in\mathscr{S}_{1}\), (since \(\binom{D^{\prime}}{1}\subseteq\mathscr{S}_{1}\)). It now follows by Observation 3.2 that \(\binom{D}{1}\subseteq\mathscr{S}_{1}\). However, this implies that \(\{s_{n}\}\in\mathscr{S}_{1}\), a contradiction. Thus \(s_{n-1}\in D^{\prime}\). Suppose \(i\leq n-3\). Then \(\binom{D^{\prime}-s_{n-1}}{1}\subseteq\mathscr{S}_{1}\) and hence \(D^{\prime}-s_{n-1}\in\mathscr{S}_{1}\). However, since \(i\leq n-3\), we also see that by Observation 3.1, \(\binom{D^{\prime}-s_{n-1}}{i}\subseteq\mathscr{S}_{1}^{n}\) and thus \(D^{\prime}-s_{n-1}\in\mathscr{S}_{1}^{n}\). Consequently, we have \(D^{\prime}-s_{n-1}+s_{n}\in\mathscr{S}_{1}\). Again, since \(\binom{D^{\prime}-s_{n-1}}{1}\subset\mathscr{S}_{1}\), it follows by Observation 3.2 that \(\binom{D^{\prime}-s_{n-1}+s_{n}}{1}\subseteq\mathscr{S}_{1}\), implying that \(\{s_{n}\}\in\mathscr{S}_{1}\), a contradiction. From the above, we have \(i=n-2\) and \(j=1\). Then \(D^{\prime}\in\mathscr{S}_{1}^{n}\) and \(\binom{D^{\prime}}{1}\subseteq\mathscr{S}_{2}\). Let \(A_{1}=D,\ A_{2}=S^{n-1},\ B_{1}=S-s_{n-1}-s_{n}\), and \(B_{2}=D^{\prime}\). Then by the above, \((A_{1},A_{2})\in\binom{n-1}{\mathscr{S}_{1}}\times\binom{n-1}{\mathscr{S}_{2}}\) and \(A_{1}\neq A_{2}\). Furthermore, we have that for \(i=1,2,\ \binom{B_{i}}{1}\subseteq\mathscr{S}_{i}\). We also see that \(B_{1}\cap B_{2}=D^{\prime}\cap\{s_{1},\ldots,s_{n-2}\}=A_{1}\cap B_{1}=A_{2} \cap B_{2}\). Thus in this case, the theorem is satisfied. To finish the proof, we will show that no other options are possible. Suppose now that (**a2**) does not hold for \(k=n-1\), and we may assume the same is true for \(k=n\). Thus (**a3**) does not hold for \(k=n-1\) or \(k=n\). Suppose (**a4**) holds for \(k=n-1\). 
Then there exists \(D^{\prime}\in\binom{S^{n-1}}{n-2}\) and integers \(i,j,\ i+j=n-1\), such that \(\binom{D^{\prime}}{i}\subseteq\mathscr{S}_{1}^{n-1}\) and \(\binom{D^{\prime}}{j}\subseteq\mathscr{S}_{2}^{n-2}\subseteq\mathscr{S}_{2}\). Then \(D=D^{\prime}+s_{n-1}\in\mathscr{S}_{1}\). As before, \(D^{\prime}\neq\{s_{1},\ldots,s_{n-2}\}\). Thus \(s_{n}\in D^{\prime}\) and we may assume without loss of generality that \(D^{\prime}=\{s_{1},\ldots,s_{n-3},s_{n}\}\). By (**C**), (**a4**) does not hold for \(k=n\). Thus (**a5**) holds for \(k=n\) and there exist \((A_{1}^{n},A_{2}^{n})\in{n-2\choose\mathscr{S}_{1}^{n}}\times{n-2\choose\mathscr{S }_{2}^{n}}\), \(A_{1}^{n}\neq A_{2}^{n}\), and \(\{B_{1}^{n},B_{2}^{n}\}\subseteq{S^{n}\choose n-3}\) where for \(i=1,2\), \(B_{i}^{n}\cap A_{i}^{n}=B_{1}^{n}\cap B_{2}^{n}\in{A_{1}^{n}\cap A_{2}^{n}\choose n -4}\) and \({B_{1}^{n}\choose 1}\subseteq\mathscr{S}_{i}^{n}\). Arguing as before, we have \(B_{1}^{n}\cap\{s_{1},\ldots,s_{n-2}\}=\O\). This in turn implies that \(B_{1}^{n}=\{s_{n-1}\}\) and hence \(n=4\). Furthermore, we have that \(A_{1}^{n}=A_{1}^{4}=\{s_{1},s_{2}\}\), implying that \(\{s_{1},s_{2},s_{4}\}\in\mathscr{S}_{1}\). However, we also have that \(D=\{s_{1},s_{3},s_{4}\}\in\mathscr{S}_{1}\). It follows by (**S2**) that \(S=D\cup\{s_{1},s_{2},s_{4}\}\in\mathscr{S}_{1}\), violating (**S3**). Thus (**a4**) does not hold for \(k=n-1\) and the same holds for \(k=n\). From the above, (**a5**) must hold for both \(k=n-1\) and \(k=n\), contradicting (**C**). This completes the proof of the theorem. ## 5 Proof of Theorem 1.3 Let \(M\) be a paving matroid where \(\gamma(M)=\beta(E(M))\) and \(|E(M)|=n\). ### The case \(r(M)=2\) Suppose \(r(M)=2\). We shall prove by induction on \(n\) that \(M\) is cyclically orderable. Theorem 1.3 is seen to be true when \(n=2\). Assume that it is true when \(n=m-1\geq 2\). We shall prove that it is also true for \(n=m\). Assume that \(M\) is a paving matroid where \(r(M)=2\), \(|E(M)|=m\) and \(\gamma(M)=\beta(E(M))=\frac{m}{2}\). For all elements \(e\in E(M)\), let \(X_{e}\) denote the parallel class containing \(e\) and let \(m(e)=|X_{e}|\). Then for all \(e\in E(M)\), \(\beta(X_{e})=m(e)\leq\gamma(M)=\frac{m}{2}\). Thus for all \(e\in E(M)\), \(m(e)\leq\frac{m}{2}\). There exists \(f\in E(M)\) for which \(r(M\backslash f)=2\). Let \(M^{\prime}=M\backslash f\). We claim that \(\gamma(M^{\prime})=\beta(E(M^{\prime}))=\frac{m-1}{2}\). For if there exists \(X\subseteq E(M^{\prime})\) for which \(\beta(X)>\frac{m-1}{2}>1\), then \(r(X)=2\), and \(\beta(X)=\frac{|X|}{2}>\frac{m-1}{2}\). It follows that \(|X|=m\), which is impossible since \(|E(M^{\prime})|=m-1\). Thus \(\gamma(M^{\prime})=\beta(E(M^{\prime}))\). By assumption, there is a cyclic ordering for \(M^{\prime}\), say \(e_{1}e_{2}\cdots e_{m-1}\). Since \(m(f)\leq\frac{m}{2}\), there exists \(i\in[m-1]\) such that \(\{e_{i},e_{i+1}\}\cap X_{f}=\O\). Consequently, \(e_{1}\cdots e_{i}fe_{i+1}\ldots e_{m-1}\) is seen to be a cyclic ordering for \(M\). The proof now follows by induction. ### The case where \(|E(M)|\leq 2r(M)+1\) Suppose \(|E(M)|\leq 2r(M)+1\). As mentioned earlier, if \(|E(M)|=2r(M)+1\), then \(|E(M)|\) and \(r(M)\) are relatively prime and hence it follows by Theorem 1.2 that \(M\) has a cyclic ordering. 
Thus we may assume that \(|E(M)|\leq 2r(M).\) It now follows by Theorem 2.2 that there are bases \(A\) and \(B\) for which \(A\cup B=E(M).\) In [1], the authors verify, among other things, the following conjecture of Gabow [5] for _split matroids_, a class which contains paving matroids. **5.1 Conjecture** (Gabow) Suppose that \(A\) and \(B\) are bases of a matroid \(N\) of rank \(r.\) Then there are orderings \(a_{1}a_{2}\cdots a_{r}\) and \(b_{1}b_{2}\cdots b_{r}\) of the elements of \(A\) and \(B\), respectively, such that for \(i=1,\ldots,r-1,\)\(\{a_{1},\ldots,a_{i},b_{i+1},\ldots,b_{r}\}\) and \(\{a_{i+1},\ldots,a_{r},b_{1},\ldots,b_{i}\}\) are bases. We observe that in the special case of Conjecture 5.1 where \(E(N)\) is the union of two bases, the conjecture implies that \(N\) has a cyclic ordering. Given that the above conjecture is true for split matroids (and hence also paving matroids) and \(E(M)=A\cup B,\) it follows that \(M\) has a cyclic ordering. ### The case where \(|E(M)|\geq 2r(M)+2\) and \(r(M)\geq 3\). In this section, we shall assume that \(|E(M)|\geq 2r(M)+2\) and \(r(M)\geq 3\). By Proposition 2.3, there exists a basis \(S\) of \(M\) for which \(\gamma(M\backslash S)=\beta(E(M)-S)\) and \(r(M\backslash S)=r(M).\) Let \(r=r(M)\) and let \(S=\{s_{1},\ldots,s_{r}\}.\) Let \(M^{\prime}=M-S\) and let \(m=|E(M^{\prime})|=n-r.\) By assumption, \(M^{\prime}\) is cyclically orderable and we will assume that \(e_{1}e_{2}\cdots e_{m}\) is a cyclic ordering. Our goal is to show that the cyclic ordering for \(M^{\prime}\) can be extended to a cyclic ordering of \(M\). To complete the proof of Theorem 1.3, we need only prove the following: **5.2 Proposition** There exists \(i\in[m]\) and a permutation \(\pi\) of \([r]\) such that \(e_{1}e_{2}\cdots e_{i}s_{\pi(1)}s_{\pi(2)}\cdots s_{\pi(r)}e_{i+1}\cdots e_{m}\) is a cyclic ordering of \(M.\) _Proof._ Assume to the contrary that for all \(i\in[m]\) and for all permutations \(\pi\) of \([r],\)\(e_{1}e_{2}\cdots e_{i}s_{\pi(1)}s_{\pi(2)}\cdots s_{\pi(r)}e_{i+1}\cdots e_{m}\) is not a cyclic ordering of \(M.\) For all \(j\in[m],\) we shall define a pair \((\mathscr{H}^{j}_{1},\mathscr{H}^{j}_{2}),\) where for \(i=1,2,\)\(\mathscr{H}^{j}_{i}\subseteq 2^{S}.\) Let \(x_{1}=e_{j-1},x_{2}=e_{j-2},\ldots,x_{r-1}=e_{j-r+1},\) and let \(y_{1}=e_{j},y_{2}=e_{j+1},\ldots,y_{r-1}=e_{j+r-2}\) where for all integers \(k,\) we define \(e_{k}:=e_{\ell}\) where \[\ell:=\left\{\begin{array}{ll}k\bmod m&\mbox{if $k\bmod m\neq 0$}\\ m&\mbox{otherwise}\end{array}\right.\] Let \(X=\{x_{1},\ldots,x_{r-1}\}\) and \(Y=\{y_{1},\ldots,y_{r-1}\}.\) Let \(\pi\) be a permutation of \([r]\). By assumption, \(e_{1}\cdots e_{j-1}s_{\pi(1)}s_{\pi(2)}\cdots s_{\pi(r)}e_{j}\cdots e_{m}\) is not a cyclic ordering for \(M.\) Given that \(M\) is a paving matroid, there exists \(i\in[r-1]\) such that either \(\{x_{1},\ldots,x_{i}\}\cup\{s_{\pi(1)},\ldots,s_{\pi(r-i)}\}\in\mathscr{C}(M)\) or \(\{y_{1},\ldots,y_{i}\}\cup\{s_{\pi(r-i+1)},\ldots,s_{\pi(r)}\}\in\mathscr{C}(M).\) Let \(\mathscr{C}_{1}^{j}\) be the set of all \(r\)-circuits which occur in the former case, and let \(\mathscr{C}_{2}^{j}\) be the set of all \(r\)-circuits occurring in the latter case.
That is, \(\mathscr{C}_{1}^{j}\) is the set of all \(r\)-circuits \(C\) where for some \(i\in[r-1]\), \(\{x_{1},\ldots,x_{i}\}\subset C\subseteq\{x_{1},\ldots,x_{i}\}\cup S\), and \(\mathscr{C}_{2}^{j}\) is the set of all \(r\)-circuits \(C\) where for some \(i\in[r-1]\), \(\{y_{1},\ldots,y_{i}\}\subset C\subseteq\{y_{1},\ldots,y_{i}\}\cup S.\) For \(i=1,2\), let \(\mathscr{H}_{i}^{j}=\{C\cap S\ \big{|}\ C\in\mathscr{C}_{i}^{j}\}.\) **(A)** For all \(j,\) the pair \((\mathscr{H}_{1}^{j},\mathscr{H}_{2}^{j})\) is an \(S\)-pair which is order consistent. Proof.: It suffices to prove the assertion for \(j=1.\) For convenience, we let \(\mathscr{H}_{1}=\mathscr{H}_{1}^{1},\)\(\mathscr{H}_{2}=\mathscr{H}_{2}^{1},\)\(\mathscr{C}_{1}=\mathscr{C}_{1}^{1},\) and \(\mathscr{C}_{2}=\mathscr{C}_{2}^{1}.\) It follows from the definition of \((\mathscr{H}_{1},\mathscr{H}_{2})\) that it is order consistent. We need only show that it is an \(S\)-pair. Suppose \(A,B\in\mathscr{H}_{1}\) where \(|A|=|B|+1\) and \(B\subset A.\) Then for some \(i\in[r-1],\)\(C_{1}=A\cup\{x_{1},\ldots,x_{i}\}\in\mathscr{C}_{1}\) and \(C_{2}=B\cup\{x_{1},\ldots,x_{i+1}\}\in\mathscr{C}_{1}.\) Let \(x\in B.\) Then \(x\in C_{1}\cap C_{2}\) and hence by the circuit elimination axiom there is a circuit \(C\subseteq(C_{1}\cup C_{2})-x=(A-x)\cup\{x_{1},\ldots,x_{i+1}\}\). Thus \(C=(A-x)\cup\{x_{1},\ldots,x_{i+1}\}\) and hence \(A-x\in\mathscr{H}_{1}.\) Since this applies to any element \(x\in B,\) it follows that \(\binom{A}{|B|}\subseteq\mathscr{H}_{1}.\) The same arguments can be applied to \(\mathscr{H}_{2}\). Thus \((\mathbf{S1})\) holds. To show that \((\mathbf{S2})\) holds, suppose \(A,B\in\mathscr{H}_{1}\) where \(|A|=|B|\) and \(|A\cap B|=|A|-1.\) There exists \(i\in[r]\) such that \(C_{1}=\{x_{1},\ldots,x_{i}\}\cup A\in\mathscr{C}_{1}\) and \(C_{2}=\{x_{1},\ldots,x_{i}\}\cup B\in\mathscr{C}_{1}.\) By the circuit elimination axiom, there exists a circuit \(C\subseteq(C_{1}\cup C_{2})-x_{i}=(A\cup B)\cup\{x_{1},\ldots,x_{i-1}\}\). Thus \(C=(A\cup B)\cup\{x_{1},\ldots,x_{i-1}\}\) is a circuit and hence \(A\cup B\in\mathscr{H}_{1}.\) The same reasoning applies if \(A,B\in\mathscr{H}_{2}.\) Thus \((\mathbf{S2})\) holds. To show that \((\mathbf{S3})\) holds, suppose \(\binom{S}{1}\subseteq\mathscr{H}_{1}.\) Then for \(i=1,\ldots,r-1,\ C_{i}=X\cup\{s_{i}\}\) is a circuit, and consequently, \(S\subseteq\operatorname{cl}(X).\) However, this is impossible since \(|X|=r-1<r(S)=r.\) Thus \(\binom{S}{1}\not\subseteq\mathscr{H}_{1}\) and likewise, \(\binom{S}{1}\not\subseteq\mathscr{H}_{2}.\) Also, we clearly have that for \(i=1,2\), \(S\not\in\mathscr{H}_{i}\) since \(S\) is a base of \(M.\) Thus \((\mathbf{S3})\) holds. Lastly, to show that \((\mathbf{S4})\) holds, let \(S^{\prime}=S-s_{r}.\) Suppose first that \(\binom{S^{\prime}}{r-1}\subseteq\mathscr{H}_{1}\) and \(\binom{S^{\prime}}{1}\subseteq\mathscr{H}_{2}.\) Then \(S^{\prime}\in\mathscr{H}_{1}\) and hence \(S^{\prime}+x_{1}\in\mathscr{C}_{1}.\) Also, for all \(i\in[r-1],\)\(Y+s_{i}\in\mathscr{C}_{2}\). Thus \(x_{1}\in\operatorname{cl}(S^{\prime})\) and \(S^{\prime}\subseteq\operatorname{cl}(Y)\).
Given that \(S^{\prime}\) is independent and \(|S^{\prime}|=|Y|=r-1,\) it follows that \(\operatorname{cl}(S^{\prime})=\operatorname{cl}(Y).\) However, this implies that \(Y+x_{1}=\{x_{1},y_{1},\ldots,y_{r-1}\}=\{e_{m},e_{1},\ldots,e_{r-1}\}\subseteq\operatorname{cl}(S^{\prime}),\) which is impossible since by assumption, \(\{e_{m},e_{1},\ldots,e_{r-1}\}\) is a basis of \(M.\) Suppose now that for some \(k\in[r-2]\), \(\binom{S^{\prime}}{k}\subseteq\mathscr{H}_{1}\) and \(\binom{S^{\prime}}{r-k}\subseteq\mathscr{H}_{2}\). We claim that \(\{x_{1},\ldots,x_{r-k}\}\cup\{y_{1},\ldots,y_{k}\}\subseteq\operatorname{cl}(S^{\prime})\). By \((\mathbf{S2})\) (as in the proof of Observation 3.1), we have that for \(j=k,\ldots,r-1\), \(\binom{S^{\prime}}{j}\subseteq\mathscr{H}_{1}.\) In particular, \(S^{\prime}\in\mathscr{H}_{1},\) and hence \(C_{1}=S^{\prime}+x_{1}\in\mathscr{C}_{1}.\) This implies that \(x_{1}\in\operatorname{cl}(S^{\prime}).\) However, seeing as \(\binom{S^{\prime}}{r-2}\subseteq\mathscr{H}_{1},\) we see that \(C_{2}=(S^{\prime}-s_{r-1})\cup\{x_{1},x_{2}\}\in\mathscr{C}_{1}.\) Given that \(x_{1}\in\operatorname{cl}(S^{\prime}),\) it follows that \(x_{2}\in\operatorname{cl}(S^{\prime}).\) Continuing, we see that \(\{x_{1},\ldots,x_{r-k}\}\subseteq\operatorname{cl}(S^{\prime}).\) By similar arguments, it can be shown that \(\{y_{1},\ldots,y_{k}\}\subseteq\operatorname{cl}(S^{\prime}).\) This proves our claim. However, this is impossible since by assumption \(\{x_{1},\ldots,x_{r-k}\}\cup\{y_{1},\ldots,y_{k}\}\) is a basis but \(r(S^{\prime})=|S^{\prime}|=r-1.\) Thus no such \(k\) exists. More generally, the same arguments can be applied to any \(j\in[r],\) and \(S^{\prime}=S-s_{j}.\) Thus \((\mathbf{S4})\) holds. By \((\mathbf{A}),\)\((\mathscr{H}_{1}^{1},\mathscr{H}_{2}^{1})\) is an \(S\)-pair which is order consistent. Thus it follows by Theorem 4.1 that there exists \((A_{1},A_{2})\in\binom{r-1}{\mathscr{H}_{1}^{1}}\times\binom{r-1}{\mathscr{H}_{2}^{1}},\)\(A_{1}\neq A_{2},\) and \(\{B_{1},B_{2}\}\subseteq\binom{S}{r-2}\) where for \(i=1,2\), \(B_{i}\cap A_{i}=B_{1}\cap B_{2}\in\binom{A_{1}\cap A_{2}}{r-3}\) and \(\binom{B_{i}}{1}\subseteq\mathscr{H}_{i}^{1}.\) Also, \((\mathscr{H}_{1}^{2},\mathscr{H}_{2}^{2})\) is also an \(S\)-pair which is order consistent and hence by Theorem 4.1, there exists \((A_{1}^{\prime},A_{2}^{\prime})\in\binom{r-1}{\mathscr{H}_{1}^{2}}\times\binom{r-1}{\mathscr{H}_{2}^{2}},\)\(A_{1}^{\prime}\neq A_{2}^{\prime},\) and \(\{B_{1}^{\prime},B_{2}^{\prime}\}\subseteq\binom{S}{r-2}\) where for \(i=1,2\), \(B_{i}^{\prime}\cap A_{i}^{\prime}=B_{1}^{\prime}\cap B_{2}^{\prime}\in\binom{A_{1}^{\prime}\cap A_{2}^{\prime}}{r-3}\) and \(\binom{B_{i}^{\prime}}{1}\subseteq\mathscr{H}_{i}^{2}.\) Suppose \(r>4.\) Given that \(|B_{2}|=|B_{2}^{\prime}|=r-2,\) it follows that there exists \(s_{i}\in B_{2}\cap B_{2}^{\prime}.\) Then \(C_{1}=\{s_{i},e_{1},\ldots,e_{r-1}\}\) and \(C_{2}=\{s_{i},e_{2},e_{3},\ldots,e_{r}\}\) are distinct circuits in \(M.\) By the circuit elimination axiom, there exists a circuit \(C\subseteq(C_{1}\cup C_{2})-s_{i}=\{e_{1},\ldots,e_{r}\}.\) However, this is impossible since, by assumption, \(\{e_{1},\ldots,e_{r}\}\) is a basis. Thus \(r\leq 4.\) Suppose \(r=3\). Without loss of generality, we may assume that \(A_{1}=\{s_{1},s_{2}\}\) and \(B_{1}=\{s_{3}\}\), and \(A_{2}=\{s_{2},s_{3}\}\) and \(B_{2}=\{s_{1}\}\).
Following similar reasoning as that given in the case \(r>4,\) we have that \(\{s_{3}\}\not\in\mathscr{H}_{1}^{2}\) and \(\{s_{1}\}\not\in\mathscr{H}_{2}^{2}.\) Suppose that \(\{s_{1}\}\in\mathscr{H}_{1}^{2}.\) Then given that \(\{s_{1}\}\in\mathscr{H}_{2}^{1},\) we have \(C_{1}=\{s_{1},e_{1},e_{2}\}\) and \(C_{2}=\{s_{1},e_{m},e_{1}\}\) are circuits. This would imply that \(\{e_{m},e_{1},e_{2}\}=(C_{1}\cup C_{2})-s_{1}\) is a circuit, which is false since, by assumption, \(\{e_{m},e_{1},e_{2}\}\) is a basis. Thus \(\{s_{1}\}\not\in\mathscr{H}_{1}^{2}.\) It follows that \(B_{1}^{\prime}=\{s_{2}\}\) and \(A_{1}^{\prime}=\{s_{1},s_{3}\}.\) Since \(\{s_{1}\}\not\in\mathscr{H}_{2}^{2},\) it follows that \(B_{2}^{\prime}=\{s_{3}\}\) and \(A_{2}^{\prime}=\{s_{1},s_{2}\}.\) Since \(A_{1}=A_{2}^{\prime}=\{s_{1},s_{2}\},\) it follows that \(C_{1}=\{s_{1},s_{2},e_{m}\}\) and \(C_{2}=\{s_{1},s_{2},e_{2}\}\) are circuits. Furthermore, since \(B_{1}^{\prime}=\{s_{2}\},\) it follows that \(C_{3}=\{s_{2},e_{1},e_{m}\}\) is a circuit. It is now seen that \(\{e_{m},e_{1},e_{2}\}\subset\operatorname{cl}(\{s_{1},s_{2}\}),\) which is impossible since by assumption, \(\{e_{m},e_{1},e_{2}\}\) is a basis. Suppose \(r=4\). Again, if \(B_{1}\cap B_{1}^{\prime}\neq\O\), then it would follow by the circuit elimination axiom that \(\{e_{m-2},e_{m-1},e_{m},e_{1}\}\) is a circuit, contradicting our assumptions. Thus \(B_{1}\cap B_{1}^{\prime}=\O\) and similarly, \(B_{2}\cap B_{2}^{\prime}=\O\). Since for \(i=1,2\), \(|B_{i}|=|B_{i}^{\prime}|=2\), it follows that \(B_{i}\cup B_{i}^{\prime}=S\). Without loss of generality, we may assume \(B_{1}=\{s_{1},s_{2}\},A_{1}=\{s_{2},s_{3},s_{4}\}\) and \(B_{2}=\{s_{2},s_{3}\},A_{2}=\{s_{1},s_{2},s_{4}\}\). It now follows that \(B_{1}^{\prime}=\{s_{3},s_{4}\}\) and \(B_{2}^{\prime}=\{s_{1},s_{4}\}\). We now see that \(A_{1}^{\prime}=\{s_{1},s_{2},s_{4}\}\) and \(A_{2}^{\prime}=\{s_{2},s_{3},s_{4}\}\). Then \(\{\{s_{1}\},\{s_{2}\},\{s_{2},s_{3},s_{4}\}\}\subseteq\mathscr{H}_{1}^{1},\ \{\{s_{2}\},\{s_{3}\},\{s_{1},s_{2},s_{4}\}\}\subseteq\mathscr{H}_{2}^{1},\ \{\{s_{3}\},\{s_{4}\},\{s_{1},s_{2},s_{4}\}\}\subseteq\mathscr{H}_{1}^{2}\) and \(\{\{s_{1}\},\{s_{4}\},\{s_{2},s_{3},s_{4}\}\}\subseteq\mathscr{H}_{2}^{2}\). By \(\mathbf{(S2)}\) it follows that \(\{s_{1},s_{2}\}\in\mathscr{H}_{1}^{1}\) and \(\{s_{2},s_{3}\}\in\mathscr{H}_{2}^{1}\). Thus \(C_{1}=\{s_{1},s_{2},e_{m-1},e_{m}\}\) and \(C_{2}=\{s_{2},s_{3},e_{1},e_{2}\}\) are circuits. Since \((\mathscr{H}_{1}^{3},\mathscr{H}_{2}^{3})\) is an \(S\)-pair which is order consistent, we can argue with \((\mathscr{H}_{1}^{2},\mathscr{H}_{2}^{2})\) in place of \((\mathscr{H}_{1}^{1},\mathscr{H}_{2}^{1})\) and \((\mathscr{H}_{1}^{3},\mathscr{H}_{2}^{3})\) in place of \((\mathscr{H}_{1}^{2},\mathscr{H}_{2}^{2})\), to show that \(\{\{s_{1}\},\{s_{2}\}\}\subseteq\mathscr{H}_{1}^{3}\) and \(\{\{s_{2}\},\{s_{3}\}\}\subseteq\mathscr{H}_{2}^{3}\). Again, we have that \(\{s_{1},s_{2}\}\in\mathscr{H}_{1}^{3}\) and \(\{s_{2},s_{3}\}\in\mathscr{H}_{2}^{3}\) and thus \(C_{3}=\{s_{1},s_{2},e_{1},e_{2}\}\) and \(C_{4}=\{s_{2},s_{3},e_{3},e_{4}\}\) are circuits. By symmetry, we have that \(C_{5}=\{s_{2},s_{3},e_{m-1},e_{m}\}\) is also a circuit. By the circuit elimination axiom, there exists a circuit \(C\) where \(C\subseteq(C_{1}\cup C_{5})-e_{m-1}\). We must have that \(C=\{s_{1},s_{2},s_{3},e_{m}\}\), implying that \(e_{m}\in\mathrm{cl}(\{s_{1},s_{2},s_{3}\})\). We now see that \(e_{m-1}\in\mathrm{cl}(\{s_{1},s_{2},s_{3}\})\).
Using the circuits \(C_{2}\) and \(C_{3}\), one can argue in the same manner to show that \(\{e_{1},e_{2}\}\subset\mathrm{cl}(\{s_{1},s_{2},s_{3}\})\). Thus it follows that \(\{e_{m-1},e_{m},e_{1},e_{2}\}\subset\mathrm{cl}(\{s_{1},s_{2},s_{3}\})\), which is impossible since (by assumption) \(\{e_{m-1},e_{m},e_{1},e_{2}\}\) is a basis. This concludes the case for \(r=4\).
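For readers who want to experiment with the objects manipulated above, the following minimal Python sketch checks the defining property of a cyclic ordering used throughout this proof: every \(r\) cyclically consecutive elements of the ordering must form a basis. The matroid is represented naively by an explicit list of its bases, and all function and variable names are our own illustrative choices, not notation from the paper.

```python
from itertools import combinations

def is_cyclic_ordering(ordering, bases, r):
    """Return True if every r cyclically consecutive elements of `ordering`
    form a basis of the matroid whose bases are listed in `bases`."""
    base_set = {frozenset(b) for b in bases}
    n = len(ordering)
    for start in range(n):
        window = frozenset(ordering[(start + t) % n] for t in range(r))
        if window not in base_set:
            return False
    return True

# Toy check with the uniform matroid U_{2,4} on {0, 1, 2, 3}:
# every 2-element subset is a basis, so any ordering of the ground set works.
ground = [0, 1, 2, 3]
u24_bases = [set(c) for c in combinations(ground, 2)]
assert is_cyclic_ordering(ground, u24_bases, 2)
```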
2306.02586
Internet of Things Meets Robotics: A Survey of Cloud-based Robots
This work presents a survey of existing literature on the fusion of the Internet of Things (IoT) with robotics and explores the integration of these technologies for the development of the Internet of Robotics Things (IoRT). The survey focuses on the applications of IoRT in healthcare and agriculture, while also addressing key concerns regarding the adoption of IoT and robotics. Additionally, an online survey was conducted to examine how companies utilize IoT technology in their organizations. The findings highlight the benefits of IoT in improving customer experience, reducing costs, and accelerating product development. However, concerns regarding unauthorized access, data breaches, and privacy need to be addressed for successful IoT deployment.
Chrisantus Eze
2023-06-05T04:26:16Z
http://arxiv.org/abs/2306.02586v4
# Internet of Things Meets Robotics: A Survey of Cloud-based Robots ###### Abstract This work presents a survey of existing literature on the fusion of the Internet of Things (IoT) with robotics and explores the integration of these technologies for the development of the Internet of Robotics Things (IoRT). The survey focuses on the applications of IoT in healthcare and agriculture, while also addressing key concerns regarding the adoption of IoT and robotics. Additionally, an online survey was conducted to examine how companies utilize IoT technology in their organizations. The findings highlight the benefits of IoT in improving customer experience, reducing costs, and accelerating product development. However, concerns regarding unauthorized access, data breaches, and privacy need to be addressed for successful IoT deployment. ## 1 Introduction The Internet of Things has continued to experience increased adoption, with numerous research efforts focusing on its deployment in various aspects of our daily lives. Similarly, robotics, which has been around for a while, has played a crucial role in diverse domains. However, there was a period during which both fields underwent significant development independently. It has now become evident that current scenarios require the integration of these two disciplines and a collaborative approach from their respective communities. Over the years, areas such as healthcare and agriculture have witnessed significant impacts from IoT and robotics. [1] discussed how IoT and robotics have transformed healthcare services, including rehabilitation, assistive surgery, elderly care, and prosthetics. In another study by [2], a model called Automatic Agricultural field Robot - Agro-bot was introduced, which utilizes robotics and automation to perform various agricultural operations such as soil digging, seed sowing, precise watering, spraying, and weeding. Figure 1: This illustrates the relationship between IoT and IoRT This paper provides an overview of IoT and Robotics technologies and explores their integration for the development of the Internet of Robotics Things (IoRT). The convergence of IoT and robotics is evident in various distributed and heterogeneous robot control paradigms, such as network robot systems [3] and robot ecologies [4]. Additionally, approaches like ubiquitous robotics [5] and cloud robotics [6; 7; 8] have emerged, utilizing server-side resources to handle resource-intensive functionalities. The concept of the "Internet of Robotic Things" (IoRT) was introduced to describe a scenario where sensor data from diverse sources is fused and processed using local and distributed intelligence to control physical objects. This cyber-physical perspective of IoRT leverages IoT's sensor and data analytics technologies to enhance robots' situational awareness, resulting in improved task execution. Notable use cases include intelligent transportation [9] and companion robots [10]. Subsequent literature on IoRT has explored alternative perspectives, such as emphasizing robust team communication [11] and considering robots as additional sensors within the IoT ecosystem [12; 13]. Furthermore, cloud computing now finds increased applications in human-robot interaction, which is a field that seeks to develop techniques to enhance how humans interact and collaborate with robots. Notable works in this area include those by [14; 15; 16]. [14] proposed a system that provides companionship to the elderly using robotics, which can be controlled physically or remotely via cloud infrastructures.
[15] presents a detailed concept of Human-Robot Interaction systems architecture, focusing on acquiring information about the robot's interlocutor. It includes an IoT-based sensor subsystem for interlocutor identification and integrates additional subsystems for human and visual identification. [16] proposes a new system architecture for SIGVerse, addressing key problems in robotics simulation. By adopting Unity middleware for VR applications and ROS for robot software development, the proposed architecture enables faster development of human-robot interaction applications. Additionally, [17] proposed techniques and guidelines to design Augmented Reality (AR) interfaces, enhancing the experience of humans interacting with robots. The rest of the paper is organized as follows: the existing literature is reviewed in Section 2, Section 3 presents some of the core challenges being faced, and finally, in Section 4 the work is summarized and concluded. ## 2 Review of Literature In this section, I will explore some existing works that focus on integrating IoT with robotics for various applications in healthcare, agriculture, and other fields. ### Design and Implementation of Ploughing and Seeding of Agriculture Robot Using IOT [18] proposed a system that involves the utilization of a robot for automating the processes of sowing crops, plowing the field, and automatically sprinkling water on the crops. The robot is equipped with motors that enable its movement in both forward and backward directions. To train the robot, an embedded instructing coding system is employed, enabling it to carry out the tasks of sowing, plowing, and watering the crops. By setting specific time intervals, these operations can be executed automatically without the need for manual labor. ### Internet of Things-Based Devices/Robots in Agriculture 4.0 [19] examines the work conducted by researchers and engineers, as well as the major application areas of Internet of Things (IoT)-based devices in smart agriculture. The article also discusses the various equipment and technologies employed in smart agriculture. Additionally, it highlights the challenges encountered when implementing IoT-based devices in agriculture. The study's findings are valuable and helpful for researchers, agriculturists, and professionals working in the field of modern agriculture. The utilization of IoT brings benefits to farming and agriculture by introducing concepts such as live tracking, pest control, irrigation management, soil analysis, and more. Consequently, this paper provides a critical review of IoT-based devices deployed in agricultural fields, with the introduction section exploring the growth of IoT-based devices in agriculture. ### Agrobot - An IoT-Based Automated Multi-Functional Robot The authors in [2] presented an Agrobot, a robot designed to meet agricultural requirements and enhance various aspects of farming operations. This Agrobot incorporates an automated seed-planting machine, thereby significantly improving agricultural output. With the ability to perform tasks such as drilling, fertilizing, seed sowing, and watering, this single robot serves as an optimal solution to overcome challenges arising from the limited availability of working capital and increasing labor costs. By introducing automated equipment in farming, it becomes possible to scale up production within a limited timeframe.
### Internet of Things and Robotics in Transforming Current-Day Healthcare Services The study in [1] examines the role of the Internet of Things (IoT) and robotics in transforming healthcare services. It presents the various functionalities of ideal IoT-aided robotic systems and highlights their importance in healthcare applications. The research focuses on the application of IoT and robotics in healthcare services such as rehabilitation, assistive surgery, elderly care, and prosthetics. It provides a detailed analysis of recent developments, the current status, limitations, and challenges in this field. Additionally, the study discusses the role and applications of these technologies in managing the COVID-19 pandemic. It offers comprehensive knowledge of the functionality, application, challenges, and future scope of IoT-aided robotic systems in healthcare services. This research provides valuable insights for future researchers aiming to enhance healthcare services using these technologies. ### Internet of Robotic Things: Context-Aware and Personalized Interventions of Assistive Social Robots In our daily lives, versatile and capable assistive service and companion robots are present, operating within our living environment. These robots possess the ability to manipulate physical objects, navigate their surroundings, and engage in conversations. However, human behavior is often unpredictable and dynamic, necessitating the assistance of a cloud-backend system. This system serves several purposes, including analyzing data from sensors and wearables, determining the required robotic tasks, and providing the necessary support for executing these tasks effectively in our everyday environment. The authors in [10] presented a system architecture design for an Internet-of-Robotic-Things (IoRT) framework. To showcase the practical application of this framework, they focused on a case study involving personal interactions facilitated by a companion robot. The objective of the case study was to alleviate behavioral disturbances experienced by individuals with dementia. ### Innovative and efficient method of robotics for helping the Parkinson's disease patient using IoT in big data analytics Big data has accumulated a massive amount of stored data in various fields, including robotics, the Internet of Things (IoT), and healthcare systems. While IoT-based healthcare systems play a crucial role in the big data industry, there are instances where accurately predicting results through sensing can be challenging. The proposed system in [20], which combines artificial intelligence and IoT for Parkinson's disease, has the potential to significantly enhance gait performance. This research provides a clear understanding of the role of robots in Parkinson's disease and their interaction with big data analytics. The research scheme involves collecting data from big data sources and introducing a novel approach that utilizes laser scanning combined with piecewise linear Gaussian dynamic time warp machine learning. A laser scanning system is used to scan the path for obstacles and identify safe areas. The primary role of the robot is to predict the motion of the patient using the walker and provide appropriate physical training. As both the patient and the robot have fixed sensors, the robot walks alongside the patient to accurately predict the patient's walker motion.
### Healthcare Robots Enabled with IoT and Artificial Intelligence for Elderly Patients The authors in [21] identified the needs of elderly patients and proposed solutions using personalized robots. The utilization of IoT devices enables the quick prediction of emergency situations by providing vital information, while artificial intelligence aids in suggesting necessary actions. Health data from patients is obtained through IoT-based wearable devices and processed using AI for decision-making. The designed robot then carries out the required actions based on these decisions. Humanoid robots can be designed to provide healthcare and physical assistance to elderly patients and those with chronic conditions. Additionally, animal-like robots can be developed to act as pets and offer companionship to individuals with psychosocial issues. The main objective is to review the capabilities of robots and lay the groundwork for future advancements, envisioning a robot that can prevent interventions, perform multiple functions, engage in motivational interactions, provide enhanced educational data, and promptly alert an ambulance in case of emergencies. ### Distributed Perception by Collaborative Robots The authors of the study in [22] proposed a framework to leverage the combined computational power of multiple low-power robots in order to enable efficient, dynamic, and real-time recognition. The method is designed to adapt to the availability of computing devices during runtime and adjust to the inherent dynamics of the network. The framework is applicable to any distributed robot system. To demonstrate its effectiveness, the researchers implemented the framework using several Raspberry-Pi3-based robots equipped with cameras (up to 12 robots). They successfully implemented a state-of-the-art action recognition model for videos and two recognition models for images. The results show that this approach allows a group of low-power robots to achieve similar performance (measured in terms of the number of images or video frames processed per second) compared to a high-end embedded platform, Nvidia Tegra TX2. ### Robot Cloud: Bridging the power of robotics and cloud computing Cloud computing has become a transformative force in the cyber world, serving as a prominent computing and service platform for resource sharing. This includes sharing platforms, software applications, and various services. The authors in [23] tried to merge the cyber world with the physical world through the concept of the "Robot Cloud," which aims to combine the capabilities of robotics with cloud computing. To realize this vision, the researchers propose a novel Robot Cloud stack, designed to support and facilitate the integration. They employ a service-oriented architecture (SOA) to enhance the flexibility, extensibility, and reusability of the functional modules within the Robot Cloud. To showcase their design approach, a prototype of the Robot Cloud is developed using the widely-used Google App Engine. Additionally, simulation experiments are conducted in the context of a "robot show" application scenario. These experiments evaluate the proposed scheduling policy and investigate the impact of different request distributions and robot center solutions. ### Internet of Things (IoT) based Robotic Arm In this project [24], the authors achieved control over a robotic arm not only through traditional wired controls but also by utilizing the emerging technology of the Internet of Things (IoT). 
By incorporating an IoT interface, they were able to remotely control the robotic arm, making it applicable to various industrial settings where machines require control from distant locations. This project not only enables real-time response to commands but also records and reproduces specific movements, thereby reducing the need for human intervention and effort in repetitive tasks. ### ROSLink: Bridging ROS with the Internet-of-Things for Cloud Robotics The Internet of Things (IoT) and cloud robotics may be seamlessly integrated with ROS-enabled robots thanks to a new communication protocol called ROSLink [25]. ROSLink does away with the necessity for Network Address Translation (NAT) and enables any robot to be mapped to any user via the Internet by implementing the server in a widely accessible cloud. The JSON serialization-based protocol has been shown to be effective and dependable for operating robots over the cloud. Given the accessibility of fast internet connections and an abundance of bandwidth, ROSLink offers a compact and scalable solution with little extra network overhead. The evaluation research looked at how various network topologies affected performance. ## 3 Challenges and Issues Several issues could arise when smart objects and devices are interconnected with robots. This section describes some of the key issues and challenges currently facing the adoption of IoT and robotics. ### Security and Privacy It is important to ensure that the connection is secure and user data is protected against malicious hackers. To achieve optimal security and privacy, there is a need to explore a novel access control mechanism that works in conjunction with robot authentication, with a focus on defining and managing robot identity [26]. Additionally, implementing algorithms for data confidentiality and message integrity is essential, along with advanced approaches to identify and restrict the involvement of untrusted devices and robots within the system. ### Computational Complexity Interconnecting multiple devices in a shared network often leads to various computational issues, which are further exacerbated when highly computationally intensive devices like robots are added to the network. These issues can manifest in different forms, such as implementing network protocols, designing algorithms for message sharing among devices, real-time knowledge sharing, handling big data, and optimizing bandwidth usage. ### Ethical There have been several ethical concerns regarding the deployment of IoT and robots in various settings. [27] discussed ethical issues related to the deployment of IoT technology, including informed consent, privacy, trust, and physical safety. [28] outlined ethical concerns associated with deploying robot applications for assisting the elderly, such as reduced human contact, feelings of objectification and loss of control, loss of privacy, loss of personal liberty, deception, and infantilization, and the circumstances of elderly people controlling robots. ## 4 Summary and Conclusion In this study, I conducted a literature review on the integration of the Internet of Things (IoT) with robotics. The aim of this paper is to present an overview of IoT and robotics technologies and investigate their integration for the development of the Internet of Robotics Things (IoRT). The survey primarily focused on the application of IoRT in healthcare and agriculture. Additionally, I addressed some of the major concerns related to the adoption of IoT and robotics.
Furthermore, I conducted an online survey to examine the usage of IoT technology in companies. Here are the findings from the survey: **Customer experience:** The company used IoT to provide customers with real-time information about products and services, such as product availability and delivery status. This helps to improve customer satisfaction and loyalty. **Costs:** IoT can be used to automate tasks, such as monitoring and controlling equipment, which can help to reduce costs. The company uses IoT to monitor and regulate the temperature in the building. They also ensure the proper regulation of the temperature in unoccupied rooms to conserve energy and reduce electricity costs. **Speed of product development:** IoT can also be used to collect data from products in the field, which can be used to improve the design and development of new products. For example, the company used IoT to collect information about how users use their products and used that data to improve the next iteration of the products. They also stated some of the concerns they have with the deployment of IoT, such as: **Unauthorized access:** IoT devices are often connected to the internet, which makes them vulnerable to unauthorized access. **Data breaches:** IoT devices can collect a lot of data about users, which can be used for malicious purposes if it is not properly secured. **Privacy concerns:** Some users may be concerned about the amount of data that IoT devices collect about them. Therefore, it is noteworthy that organizations that successfully adopt IoT need to address these challenges in order to protect their users and their data.
2302.06203
Fast Algorithms for Discrete Differential Equations
Discrete Differential Equations (DDEs) are functional equations that relate polynomially a power series $F(t,u)$ in $t$ with polynomial coefficients in a "catalytic" variable $u$ and the specializations, say at $u=1$, of $F(t,u)$ and of some of its partial derivatives in $u$. DDEs occur frequently in combinatorics, especially in map enumeration. If a DDE is of fixed-point type then its solution $F(t,u)$ is unique, and a general result by Popescu (1986) implies that $F(t,u)$ is an algebraic power series. Constructive proofs of algebraicity for solutions of fixed-point type DDEs were proposed by Bousquet-M\'elou and Jehanne (2006). Bostan et. al (2022) initiated a systematic algorithmic study of such DDEs of order 1. We generalize this study to DDEs of arbitrary order. First, we propose nontrivial extensions of algorithms based on polynomial elimination and on the guess-and-prove paradigm. Second, we design two brand-new algorithms that exploit the special structure of the underlying polynomial systems. Last, but not least, we report on implementations that are able to solve highly challenging DDEs with a combinatorial origin.
Alin Bostan, Hadrien Notarantonio, Mohab Safey El Din
2023-02-13T09:21:05Z
http://arxiv.org/abs/2302.06203v2
# Fast Algorithms for Discrete Differential Equations ###### Abstract. Discrete Differential Equations (DDEs) are functional equations that relate algebraically a power series \(F(t,u)\) in \(t\) with polynomial coefficients in a "catalytic" variable \(u\) and the specializations, say at \(u=1\), of \(F(t,u)\) and of some of its partial derivatives in \(u\). DDEs occur frequently in combinatorics, especially in map enumeration. If a DDE is of fixed-point type then its solution \(F(t,u)\) is unique, and a general result by Popescu (1986) implies that \(F(t,u)\) is an algebraic power series. Constructive proofs of algebraicity for solutions of fixed-point type DDEs were proposed in 2006 by Bousquet-Mélou and Jehanne. Last year, Bostan et al. initiated a systematic algorithmic study of such DDEs of order 1. We generalize this study to DDEs of arbitrary order. First, we propose nontrivial extensions of algorithms based on polynomial elimination and on the guess-and-prove paradigm. Second, we design two brand-new algorithms that exploit the special structure of the underlying polynomial systems. Last, but not least, we report on implementations that are able to solve highly challenging DDEs with a combinatorial origin. Functional equations; Discrete differential equations; Algorithms; Complexity; Catalytic variables; Algebraic functions. + Footnote †: \(F(t,u)\) has polynomial coefficients in \(u\) since for a fixed number of black faces, the outer degree is finite.
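As a concrete illustration of the kind of functional equation described in the abstract, a classical discrete differential equation of order 1 with one catalytic variable is Tutte's equation for rooted planar maps, counted by edges (variable \(t\)) and root-face degree (variable \(u\)); we quote it here only as a standard example from the map-enumeration literature, without claiming it is among the specific examples treated in this paper: \[F(t,u)\;=\;1\;+\;t\,u^{2}\,F(t,u)^{2}\;+\;t\,u\,\frac{u\,F(t,u)-F(t,1)}{u-1}.\] The divided difference \((u\,F(t,u)-F(t,1))/(u-1)\) is where the specialization at \(u=1\) enters, which is what makes such an equation "discrete differential" rather than purely algebraic in \(F(t,u)\).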
2308.13133
AccFlow: Backward Accumulation for Long-Range Optical Flow
Recent deep learning-based optical flow estimators have exhibited impressive performance in generating local flows between consecutive frames. However, the estimation of long-range flows between distant frames, particularly under complex object deformation and large motion occlusion, remains a challenging task. One promising solution is to accumulate local flows explicitly or implicitly to obtain the desired long-range flow. Nevertheless, the accumulation errors and flow misalignment can hinder the effectiveness of this approach. This paper proposes a novel recurrent framework called AccFlow, which recursively backward accumulates local flows using a deformable module called as AccPlus. In addition, an adaptive blending module is designed along with AccPlus to alleviate the occlusion effect by backward accumulation and rectify the accumulation error. Notably, we demonstrate the superiority of backward accumulation over conventional forward accumulation, which to the best of our knowledge has not been explicitly established before. To train and evaluate the proposed AccFlow, we have constructed a large-scale high-quality dataset named CVO, which provides ground-truth optical flow labels between adjacent and distant frames. Extensive experiments validate the effectiveness of AccFlow in handling long-range optical flow estimation. Codes are available at https://github.com/mulns/AccFlow .
Guangyang Wu, Xiaohong Liu, Kunming Luo, Xi Liu, Qingqing Zheng, Shuaicheng Liu, Xinyang Jiang, Guangtao Zhai, Wenyi Wang
2023-08-25T01:51:26Z
http://arxiv.org/abs/2308.13133v1
# AccFlow: Backward Accumulation for Long-Range Optical Flow ###### Abstract Recent deep learning-based optical flow estimators have exhibited impressive performance in generating local flows between consecutive frames. However, the estimation of long-range flows between distant frames, particularly under complex object deformation and large motion occlusion, remains a challenging task. One promising solution is to accumulate local flows explicitly or implicitly to obtain the desired long-range flow. Nevertheless, the accumulation errors and flow misalignment can hinder the effectiveness of this approach. This paper proposes a novel recurrent framework called AccFlow, which recursively backward accumulates local flows using a deformable module called as AccPlus. In addition, an adaptive blending module is designed along with AccPlus to alleviate the occlusion effect by backward accumulation and rectify the accumulation error. Notably, we demonstrate the superiority of backward accumulation over conventional forward accumulation, which to the best of our knowledge has not been explicitly established before. To train and evaluate the proposed AccFlow, we have constructed a large-scale high-quality dataset named CVO, which provides ground-truth optical flow labels between adjacent and distant frames. Extensive experiments validate the effectiveness of AccFlow in handling long-range optical flow estimation. Codes are available at [https://github.com/mulns/AccFlow](https://github.com/mulns/AccFlow). + Footnote †: dagger}\) Corresponding authors. \(\ddagger\) Equal contribution. \(\star\) Work was partially finished at the University of Electronic Science and Technology of China. ## 1 Introduction Optical flow is ideally a dense field of motion vectors that depicts the pixel-wise correspondence of two video frames. Since a variety of downstream applications (_e.g._, video editing [3, 14, 54], action recognition [42], and object tracking [1]) significantly benefit from the accuracy of flow estimation, optical flow estimation turns out to be a long-standing fundamental task in computer vision [41, 39, 40, 27, 26, 55, 25, 13]. Recent advances [6, 47, 48] resort to deep learning to estimate optical flow and achieve promising accuracy. Although remarkable performance has been achieved in the _local_ flow estimation between two adjacent frames, it is non-trivial to estimate the _long-range_ flow that records the pixel correspondence between two distant frames. The long-range optical flow is a grounded research topic that has plenty of practical applications. For instance, in video completion [8], long-range optical flow is beneficial to the detail compensation between distant frames; in video key-point propagation [11], since the long-range optical flow performs holistic pixel-tracking in nature, it frees the quantity limitation of tracked pixels; in video super-resolution [31], it enables better inter-frame alignment in one sliding window; and in segmentation mask propagation [51], it provides an explicit approach to propagate masks to distant frames, improving the interpretability compared to the implicit matching. The above examples take a glance at the wide applications of long-range optical flow. More significantly, the success of this task has the potential to break through the performance bottleneck of relevant tasks. Figure 1: Comparisons of our method with RAFT [48] and GMA [19] on HS-Sintel dataset [18]. Zoom-in regions are annotated in red boxes. 
Our method outperforms other methods especially for occluded areas. Surprisingly, even though the long-range optical flow is significant and can benefit many related tasks, few works have put effort into this line of research. One possible reason is the lack of public datasets that provide ground-truth bidirectional cross-frame optical flows for training and validation. In the literature, the early attempt to address this long-range optical flow task is that of Lim [22], who proposed a method based on the _forward_ flow accumulation, in which the flows of adjacent frames are added successively along the motion trajectories. The recent work [18] follows this idea and reasons about occlusion regions using high-frame-rate frames. Apart from these, one can simply estimate the long-range flow by employing methods designed for local flow [19, 48]. As shown in Figure 1, since the influence of occlusion is positively related to the time interval between two frames, the accuracy of flow estimation from these methods would deteriorate severely or even become unacceptable when the time interval is beyond a threshold. In addition, one can also traverse all pixels in a frame and employ the pixel-tracking methods [38, 11] to produce the long-range dense flow, which has huge computational overheads and cannot be used in applications requiring dense flow. To sum up, a well-designed long-range optical flow method should address the following challenging issues: 1. **Occlusion** As the time interval increases, the flow estimation of two distant frames suffers significant degradation owing to the inter-frame occlusion. Therefore, without a specific design, the common methods that aim at dealing with local flows perform poorly. Janai [18] formulated it as an energy minimization problem and found it highly non-convex, so they exploit the linearity of small motions and reason about occlusions from multiple frames. However, this strategy is based on high-frame-rate videos (\(\geq 240\) FPS) and is not applicable to regular videos. 2. **Accumulation error** Although flow accumulation is a promising solution to tackle long-range flow estimation, it also brings the accumulation error, resulting in inaccurate estimation in non-occluded regions. Therefore, the effectiveness of accumulation error compensation is critical. Lim [22] and Janai [18] constrained the photo consistency of warped frames to shrink the accumulated error. However, the photo consistency loss is not comprehensive for flow estimation as revealed in [17, 24]. 3. **Efficiency** The computational complexity of long-range optical flow should be controlled at an appropriate level to support the downstream tasks in practice. Therefore, the pixel-tracking methods [38, 11], which iteratively estimate the per-pixel long-range displacement, do not satisfy this requirement. To address the above issues, we propose a novel framework, named AccFlow, to estimate long-range optical flow by progressively backward accumulating local flows with effective corrections. More specifically, to alleviate the occlusion effect, we propose the _backward accumulation_, a new accumulation strategy distinct from the _forward accumulation_ pipeline, and elaborate a corresponding deep module, named AccPlus. More details about the difference between backward and forward accumulation can be found in Sections 3.1 and 3.2. The AccFlow framework consists of three components: an arbitrary optical flow estimator, the AccPlus module, and an adaptive blending module.
The arbitrary optical flow estimator is used to estimate local flows and long-range initial flow. The AccPlus performs the backward accumulation in feature domain. The adaptive blending module rectifies the accumulated error. Furthermore, to train and validate our AccFlow, we elaborately build a large-scale synthetic dataset, named CVO (cross-frame video optical flows). Different from other synthetic flow datasets [5, 6], the CVO includes _comprehensive_ cross-frame bidirectional flow annotations. The CVO also includes more challenging cases that have large pixel displacement and severe occlusion. The contributions of this paper can be summarized as follows: \(\bullet\) We propose a novel **backward accumulation** strategy to alleviate the long-range occlusion. \(\bullet\) We build the CVO, a new large-scale synthetic dataset with comprehensive **cross-frame** optical flow annotations. \(\bullet\) We propose the **AccFlow framework** which is simple yet effective to predict the long-range optical flow and achieves the state-of-the-art results on several benchmarks. ## 2 Related Works ### Adjacent Frame Optical Flow Estimation Optical flow methods can be categorized into two-frame and multi-frame methods according to the number of input frames. For two-frame methods, traditional algorithms [4, 45, 36] obtain optical flow by minimizing well-designed energy functions based on the brightness constancy assumption. By training a convolutional network on a synthetic dataset, FlowNet [6] first established a deep learning approach for optical flow estimation. After that, the performance of optical flow estimation is gradually improved by various works, such as FlowNet2 [16], PWCNet [47], and IRR-PWC [15]. Recently, RAFT [48] proposed a new paradigm to estimate optical flow by introducing 4D correlation volume and recurrent network. Following RAFT, graph reasoning [30], global motion aggregation [19], kernel patch attention [29], and cross-attention transformer [44] are further proposed to improve the accuracy and efficiency. The purpose of multi-frame optical flow estimation is to estimate the optical flow of adjacent frames by utilizing the temporal information of multiple video frames. Traditional methods achieve multi-frame optical flow estimation by phase-based representations of local image structure [7, 12], spatial-temporal regularization term [18, 52, 43], constant velocity prior [18, 49, 37, 46, 50], constant acceleration assumption [2, 20], and directional prior [32]. Recently, deep-based multi-frame methods are proposed to fuse flow prediction [35] or feature [34, 9] from previous frame pair into the current estimation process. Although these optical flow methods have achieved remarkable performance, they mainly focus on estimating optical flow of two adjacent frames, leaving the long-range optical flow of non-adjacent frames rarely being explored. ### Non-adjacent Frame Optical Flow Estimation Lim _et al_. [23] proposed the early work to obtain the cross-frame optical flow, where the Lucas-Kanade method [28] is used to produce optical flow at a high frame rate and the accumulation strategy is designed to generate optical flow at a standard frame rate. After that, this accumulation method is improved by accumulation error modeling and correction [21, 22, 18]. Janai _et al_. [18] cast this task as an energy minimization problem, and opt for a data-driven hypothesis generation strategy for optimization. Recently, Harley _et al_. 
[11] proposed a deep CNN network, PIPs, to estimate cross-frame sparse optical flow from the perspective of per-pixel tracking over the video sequence. Although PIPs has achieved state-of-the-art performance for video pixel tracking, it is difficult to obtain long-range dense optical flow due to the lack of spatial coherence information. In this paper, we deeply analyze the drawbacks of existing accumulation strategies and propose a new accumulation framework for obtaining long-range dense optical flow. ## 3 Methods Let \(\mathcal{I}=\{\mathbf{I}_{1},\ldots,\mathbf{I}_{N}\}\) denote a video sequence with \(N\) image frames \(\mathbf{I}_{t}\in\mathbb{R}^{w\times h\times 3}\) of size \(w\times h\) and \(3\) color channels. Let \(\mathbf{F}_{i,j}\in\mathbb{R}^{w\times h\times 2}\) denote the optical flow field from the reference image \(\mathbf{I}_{i}\) to the target image \(\mathbf{I}_{j}\). Specifically, for each pixel \(\mathbf{x}\in\Omega_{i}=\{1,\ldots,w\}\times\{1,\ldots,h\}\) in reference image \(\mathbf{I}_{i}\), \(\mathbf{F}_{i,j}(\mathbf{x})\in\mathbb{R}^{2}\) describes the apparent motion from frame \(I_{i}\) to \(I_{j}\). Our goal is to estimate the long-range optical flow field \(\mathbf{F}_{1,N}\) by accumulating all intermediate local flow fields \(\{\mathbf{F}_{1,2},\ldots,\mathbf{F}_{N-1,N}\}\). To achieve this, Lim _et al_. [22] and Janai _et al_. [18] formulate it as a dense pixel tracking task and obtain the long-range flow by tracking through pixel trajectories. In this paper, we refer to these approaches as the _forward accumulation_. In Section 3.1, we revisit the forward accumulation process and provide a formalization of it. The essential problem inherent in this process is analyzed, and a solution referred to as _backward accumulation_ is proposed in Section 3.2. Subsequently, we introduce in Section 3.3 the proposed AccFlow framework that accomplishes the aforementioned backward accumulation to mitigate the occlusion effect and rectify the accumulated error. Additionally, we introduce the proposed CVO dataset which provides synthesized video with ground-truth long-range optical flow between distant frames in Section 3.4. ### Revisiting the Forward Accumulation Generally, the accumulation process is a recursive procedure to fuse all intermediate local flows together. For brevity, we define the fusion of two adjacent optical flows \(\mathbf{F}_{i,k}\) and \(\mathbf{F}_{k,j}\) as \(\oplus\), and we present the fused flow \(\mathbf{F}_{i,j}\) as: \[\mathbf{F}_{i,j}=\mathbf{F}_{i,k}\oplus\mathbf{F}_{k,j} \tag{1}\] where \(i,k,j\in[1,N]\) denote three time stamps satisfying \(i<k<j\). Since the adjacent flows \(\mathbf{F}_{i,k}\) and \(\mathbf{F}_{k,j}\) start at different frames (_i.e_., frame \(\mathbf{I}_{i}\) and \(\mathbf{I}_{k}\)), in order to obtain the target flow \(\mathbf{F}_{i,j}\) which starts at frame \(\mathbf{I}_{i}\), we need to warp the start point of each motion vector in \(\mathbf{F}_{k,j}\) to align them with \(\mathbf{F}_{i,k}\), and then add the two flows pixel-wise. Let \(\widetilde{\mathbf{F}}_{k,j}^{i}\) denote the warped \(\mathbf{F}_{k,j}\) starting at frame \(\mathbf{I}_{i}\), we have: \[\widetilde{\mathbf{F}}_{k,j}^{i}(\mathbf{x})=\mathbf{F}_{k,j}(\mathbf{x}+ \mathbf{F}_{i,k}(\mathbf{x})) \tag{2}\] for each pixel \(\mathbf{x}\) in reference image \(\mathbf{I}_{i}\). Then we obtain the target flow \(\mathbf{F}_{i,j}\) by: \[\mathbf{F}_{i,j}(\mathbf{x})=\mathbf{F}_{i,k}(\mathbf{x})+\widetilde{\mathbf{ F}}_{k,j}^{i}(\mathbf{x}). 
\tag{3}\] However, as Janai _et al_. [18] revealed, the reference pixel \(\mathbf{x}\in\Omega_{i}\) can be forward occluded in frame \(\mathbf{I}_{k}\), which leads to wrong warping results in Equation (1)-(3). Therefore, researchers usually speculate on the occlusion mask and solve the occluded regions by estimation. For brevity, we define the binary occlusion mask \(\mathbf{O}_{i,k}\), where \(\mathbf{O}_{i,k}(\mathbf{x})\in\{0,1\}\) specifies whether pixel \(\mathbf{x}\in\Omega_{i}\) is forward occluded from frame \(\mathbf{I}_{i}\) to \(\mathbf{I}_{k}\). Equation (1)-(3) valid only when pixel \(\mathbf{x}\in\Omega_{i}\) is not occluded in frame \(I_{k}\) (_i.e_., \(\mathbf{O}_{i,k}(\mathbf{x})=0\)). As for occluded pixels (_i.e_., \(\mathbf{O}_{i,k}(\mathbf{x})=1\)), its optical flow has to be estimated by some carefully designed occlusion solvers. For easy notation, function \(solveOcc\) denotes occlusion solvers in general, and \(\mathbf{P}_{i,j}\in\mathbb{R}^{w\times h\times 2}\) denote the estimated flows in occluded region, where \[\mathbf{P}_{i,j}=solveOcc(\mathbf{F}_{i,k},\mathbf{F}_{k,j},\mathbf{O}_{i,k}). \tag{4}\] Therefore, Equation (3) can be re-formulated as: \[\mathbf{F}_{i,j}(\mathbf{x})=\begin{cases}\mathbf{F}_{i,k}(\mathbf{x})+ \widetilde{\mathbf{F}}_{k,j}^{i}(\mathbf{x})&\text{if }\mathbf{O}_{i,k}(\mathbf{x})=0,\\ \mathbf{P}_{i,j}(\mathbf{x})&\text{if }\mathbf{O}_{i,k}(\mathbf{x})=1.\end{cases} \tag{5}\] The forward accumulation process recursively performs the above operations. Specifically, with the time index \(t\) increases from \(2\) to \(N-1\), we recursively produce \(\mathbf{F}_{1,t+1}\) by fusing the pre-obtained flow \(\mathbf{F}_{1,t}\) and the local flow \(\mathbf{F}_{t,t+1}\) as follows: \[\mathbf{F}_{1,t+1}=\mathbf{F}_{1,t}\oplus\mathbf{F}_{t,t+1}, \tag{6}\] where for each pixel \(\mathbf{x}\in\Omega_{1}\) in reference image \(\mathbf{I}_{1}\), we have \[\mathbf{F}_{1,t+1}(\mathbf{x})=\begin{cases}\mathbf{F}_{1,t}(\mathbf{x})+ \widetilde{\mathbf{F}}_{t,t+1}^{1}(\mathbf{x})&\text{if }\mathbf{O}_{1,t}(\mathbf{x})=0,\\ \mathbf{P}_{1,t+1}(\mathbf{x})&\text{if }\mathbf{O}_{1,t}(\mathbf{x})=1, \end{cases} \tag{7}\] where the occlusion mask \(\mathbf{O}_{1,t}\) is usually estimated as well. We denote the occlusion reasoning methods as \(getOcc\) in general: \[\mathbf{O}_{1,t}=getOcc(\mathbf{F}_{1,t},\mathbf{F}_{t,t+1}). \tag{8}\] For clarity, we present the pseudocode of the forward accumulation process in Algorithm 1. ``` Input:\(\{\mathbf{F}_{t,t+1}\mid t\in[1,N-1]\}\) Output:\(\mathbf{F}_{1,N}\) for\(t\gets 2\)to\(N-1\) : \(\mathbf{O}_{1,t}\leftarrow\)getOcc\((\mathbf{F}_{1,t},\mathbf{F}_{t,t+1})\) \(\mathbf{P}_{1,t+1}\leftarrow\)solveOcc\((\mathbf{F}_{1,t},\mathbf{F}_{t,t+1},\mathbf{O}_{1,t})\) for\(\mathbf{x}\in\Omega_{1}\) : \(\widetilde{\mathbf{F}}_{t,t+1}^{1}(\mathbf{x})\leftarrow\mathbf{F}_{t,t+1}( \mathbf{x}+\mathbf{F}_{1,t}(\mathbf{x}))\) if\(\mathbf{O}_{1,t}(\mathbf{x})=0\) : \(\mathbf{F}_{1,t+1}(\mathbf{x})\leftarrow\mathbf{F}_{1,t}(\mathbf{x})+ \widetilde{\mathbf{F}}_{t,t+1}^{1}(\mathbf{x})\) elif\(\mathbf{O}_{1,t}(\mathbf{x})=1\) : \(\mathbf{F}_{1,t+1}(\mathbf{x})\leftarrow\mathbf{P}_{1,t+1}(\mathbf{x})\) ``` **Algorithm 1**The Forward Accumulation ### Backward Accumulation Previous research [18] has shown that the forward accumulation can generate high quality motion hypotheses for visible regions, but the occluded regions limit its performance. 
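For concreteness, the following is a minimal NumPy sketch of a single fusion step \(\mathbf{F}_{i,j}=\mathbf{F}_{i,k}\oplus\mathbf{F}_{k,j}\) as written in Equations (1)-(5) and Algorithm 1, using nearest-neighbour warping and an externally supplied fill-in for occluded pixels; the helper names `warp_flow` and `fuse` are illustrative, and the actual occlusion reasoning (_getOcc_) and solver (_solveOcc_) used in practice are more elaborate.

```python
import numpy as np

def warp_flow(flow_kj, flow_ik):
    """Warp F_{k,j} to the reference frame I_i, Eq. (2):
    F~^i_{k,j}(x) = F_{k,j}(x + F_{i,k}(x)).
    Nearest-neighbour sampling; flows are stored as (dx, dy) per pixel and
    out-of-bounds targets are clipped to the image border."""
    h, w, _ = flow_kj.shape
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.rint(xs + flow_ik[..., 0]), 0, w - 1).astype(int)
    ty = np.clip(np.rint(ys + flow_ik[..., 1]), 0, h - 1).astype(int)
    return flow_kj[ty, tx]

def fuse(flow_ik, flow_kj, occ_ik, occ_fill):
    """One accumulation step, Eq. (5): warp-and-add where visible,
    fall back to an externally supplied estimate where occluded."""
    fused = flow_ik + warp_flow(flow_kj, flow_ik)
    return np.where(occ_ik[..., None] > 0, occ_fill, fused)

# Toy example: uniform rightward motion of 2 px/frame and no occlusion.
f12 = np.zeros((4, 6, 2)); f12[..., 0] = 2.0
f23 = f12.copy()
occ = np.zeros((4, 6))
f13 = fuse(f12, f23, occ, occ_fill=np.zeros_like(f12))
print(f13[0, 0])   # -> [4. 0.], i.e. F_{1,3} = F_{1,2} + F_{2,3} here
```

A practical implementation would use bilinear sampling and a validity mask instead of the nearest-neighbour lookup.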
In this subsection, we first analyze the occlusion area in the forward accumulation process, then propose a new solution to alleviate the occlusion effect. Let \(\Delta=|k-i|\geq 1\) denote the time interval, we define the proportion of occluded area of \(\mathbf{O}_{i,k}\) as: \[\alpha_{\Delta}^{i}=\frac{\sum_{\mathbf{x}\in\Omega_{i}}\mathbf{O}_{i,k}( \mathbf{x})}{h\times w}, \tag{9}\] where \(\alpha_{\Delta}^{i}\in[0,1]\). We begin by analyzing the case of a one-dimensional object moving with constant velocity, assuming that the object is of length \(\delta w\) pixels, the canvas length is \(M\gg\delta w\), the velocity of the object is \(v\) pixels per frame, and the background is fixed. From time \(t=1\) to \(t=k\), the proportion of forward occluded area is calculated as: \[\alpha_{|k-1|}^{1}=\frac{\min\{v\times|k-1|,\delta w\}}{M}, \tag{10}\] which is positively correlated with the time interval \(|k-1|\). Similar conclusions can be extended to two-dimensional cases. Thus, the inequality \[\alpha_{\Delta+1}^{i}\geq\alpha_{\Delta}^{i}, \tag{11}\] holds for linear motion. While the assumption of linear motion may not always hold in practical scenarios, our experiments show that Equation (11) remains valid when a significant number of samples are tested. The statistical results over 5000 samples are provided in terms of box-plot in Figure 2, which demonstrates that the \(\alpha_{\Delta}^{i}\) is positively correlated with \(\Delta\) as Equation (10) indicates. This conclusion is important for the following analysis. Algorithm 1 shows that the occlusion proportion \(\alpha_{t-1}^{1}\) of \(\mathbf{O}_{1,t}\) increases progressively with \(t\) increases, which significantly burdens the occlusion solver. Although existing techniques [19, 48] can powerfully solve occlusion with deep neural networks (DNN), the constant increment of the occlusion proportion is still a challenge that might consume substantial computational resources. To address the above critical issue, we propose a simple solution, named the _backward accumulation_, where we reverse the accumulation order without extra computational complexity involved. As analyzed in Equation (3)-(4), the alignment operation introduces errors in the forward occluded regions, and as revealed in Equation (11), they are proportionally correlated with the time interval. In each step of accumulation process, we can simplify the problem as the alignment of two optical flows, one of which has a larger magnitude (pre-obtained from the last step) and another one has a smaller magnitude (the local flow). The forward accumulation chooses to align two flows along the larger one, which essentially leads to a larger occlusion area. Therefore, we propose to align the two flows along the smaller one. 
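The effect can be illustrated with the same 1D constant-velocity toy model as Equation (10): the occlusion proportion that forward accumulation must resolve at step \(t\) grows with the interval, while the per-step proportion seen by the backward order stays at its minimum. The numbers below are illustrative only.

```python
# 1D toy model of Eq. (10): an object of length dw on a canvas of length M,
# moving v pixels per frame over a static background (illustrative numbers).
dw, M, v = 40, 1000, 5

def occ_fraction(interval):
    return min(v * interval, dw) / M

for step in range(1, 7):
    fwd = occ_fraction(step)   # forward order: O_{1,t}, interval keeps growing
    bwd = occ_fraction(1)      # backward order: O_{t-1,t}, interval is always 1
    print(f"step {step}: forward alpha = {fwd:.3f}, backward alpha = {bwd:.3f}")
```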
Specifically, with time variable \(t\) decreases from \(N-1\) to \(2\), we recursively produce the long-range flow \(\mathbf{F}_{t-1,N}\) by fusing the pre-obtained flow \(\mathbf{F}_{t,N}\) and the local flow \(\mathbf{F}_{t-1,t}\) as follows: \[\mathbf{F}_{t-1,N}=\mathbf{F}_{t-1,t}\oplus\mathbf{F}_{t,N}, \tag{12}\] where for each pixel \(\mathbf{x}\in\Omega_{t-1}\) in reference image \(\mathbf{I}_{t-1}\), we have \[\mathbf{F}_{t-1,N}(\mathbf{x})=\begin{cases}\mathbf{F}_{t-1,t}(\mathbf{x})+ \widetilde{\mathbf{F}}_{t,N}^{t-1}(\mathbf{x})&\text{if }\mathbf{O}_{t-1,t}(\mathbf{x})=0,\\ \mathbf{P}_{t-1,N}(\mathbf{x})&\text{if }\mathbf{O}_{t-1,t}(\mathbf{x})=1, \end{cases} \tag{13}\] and the occlusion mask is obtained by: \[\mathbf{O}_{t-1,t}=\textit{getOcc}(\mathbf{F}_{t-1,t},\mathbf{F}_{t,N}). \tag{14}\] By doing this, we form the backward accumulation process presented in Algorithm 2. As evident from the recursive process, the occluded regions are pixels with \(\mathbf{O}_{t-1,t}(\mathbf{x})=1,\mathbf{x}\in\Omega_{t-1}\), at each step. The occlusion proportion defined in Equation (9) is \(\alpha_{1}^{t-1}\) here. During the backward accumulation, although the reference image undergoes changes, the occluded region remains at a minimum level, particularly when compared to the forward accumulation method where the occluded region progressively increases. We visualize this observation in Figure 3. The reduced occluded area enables the occlusion solver to handle the occlusion more efficiently. ### AccFlow Framework In this section, we present AccFlow, a deep framework that employs the backward accumulation to estimate accurate long-range optical flow. The framework consists of three components, an arbitrary optical flow estimator OFNet (_e.g._, RAFT, GMA, _etc._), the AccPlus module, and the adaptive fusion module. Initially, local flows \(\{\mathbf{F}_{t,t+1}\mid t\in[1,N-1]\}\) are obtained from the pretrained OFNet as inputs of AccFlow. The AccFlow recursively produces the long-range flow \(\mathbf{F}_{t-1,N}\) with time \(t\) decreases from \(N-1\) to \(2\) and the recurrent structure is shown in Figure 3(a). **The AccPlus Module.** Following the Algorithm 2, we implement the backward accumulation in the AccPlus module to perform flow fusion in feature domain as shown in Figure 3(b). At each stage, given the local flow \(\mathbf{F}_{t-1,t}\) and pre-obtained flow \(\mathbf{F}_{t,N}\), we encode them into motion features \(f_{t-1,t}\) and \(f_{t,N}\) with a motion encoder. The motion encoder spatially downscales features by \(1/4\) times. The occlusion mask \(\mathbf{O}_{t-1,t}\) is determined by \(\textit{getOcc}\) which is a simple warping operation in this paper. More details about the encoder and \(\textit{getOcc}\) are provided in appendix. Afterwards, we warp the motion features \(f_{t,N}\) to align them with \(f_{t-1,t}\) by deformable convolution and produce \(\widetilde{f}_{t,N}\). In the AccPlus, we implement \(\textit{solveOcc}\) in Algorithm 2 by a set of convolutional layers. Specifically, we concatenate \(\widetilde{f}_{t,N}\) and \(f_{t-1,t}\) along the channel dimensional, where \(f_{t-1,t}\) provides the spatial coherence information for handling occlusion. The concatenated feature is then processed by multiple convolutional layers. The resulting output features, denoted as \(p_{t-1,N}\), are then merged with \(\widetilde{f}_{t,N}\) and \(f_{t-1,t}\) to produce the final target motion feature \(f_{t-1,N}\). 
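At the flow level, the recursion of Equations (12)-(14) (Algorithm 2) can be sketched as follows; this is only a simplified illustration, since AccPlus itself performs the accumulation on encoded motion features with deformable alignment, and the occlusion fill-in below is a generic stand-in for the learned _solveOcc_ and the adaptive blending prior.

```python
def backward_accumulate(local_flows, fuse_fn, long_flow_estimator, get_occ):
    """Flow-level sketch of the backward recursion, Eq. (12)-(13) / Algorithm 2.

    local_flows:        [F_{1,2}, ..., F_{N-1,N}], each an (h, w, 2) array.
    fuse_fn(f_small, f_large, occ, fill): one accumulation step "(+)", e.g. a
        warp-and-add helper as sketched after Algorithm 1.
    long_flow_estimator(t): rough direct estimate of F_{t,N}, used here as a
        stand-in for solveOcc / the adaptive blending prior in occluded areas.
    get_occ(f_small, f_large): binary occlusion mask O_{t-1,t}, Eq. (14).
    """
    n_frames = len(local_flows) + 1
    flow_tN = local_flows[-1]                 # start from F_{N-1,N}
    for t in range(n_frames - 1, 1, -1):      # t = N-1, ..., 2
        f_local = local_flows[t - 2]          # F_{t-1,t}
        occ = get_occ(f_local, flow_tN)
        fill = long_flow_estimator(t - 1)
        flow_tN = fuse_fn(f_local, flow_tN, occ, fill)   # now F_{t-1,N}
    return flow_tN                            # F_{1,N}
```

Note that the alignment is always driven by the small local flow \(\mathbf{F}_{t-1,t}\), which is what keeps the per-step occluded area at its minimum.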
**The adaptive blending module.** Directly decoding the output features \(f_{t-1,N}\) of AccPlus and passing them to next stage may result in the accumulation error. To mitigate this issue, an adaptive blending module is added to suppress the accumulation error by using the directly estimated long-range flow as prior information. Specifically, we first establish an initial long-range optical flow \(\mathbf{F}_{t-1,N}^{ini}\) with the pretrained OFNet, and then encode it into a motion feature \(f_{t-1,N}^{ini}\) with the motion encoder (share parameters with the one in AccPlus). Subsequently, the adaptive blending module takes the two motion features (_i.e._, \(f_{t-1,N}^{ini}\) and \(f_{t-1,N}\)) and corresponding video frames as inputs to calculate an adaptive confidence mask. The confidence mask is then used to fuse them with attention mechanism, and the output motion features are decoded into the optical flow \(\mathbf{F}_{t-1,N}\) with a motion decoder. Details of the motion decoder are provided in appendix. ### CVO Dataset Existing optical flow datasets only provide the local optical flow annotations. In order to provide the ground-truth Figure 3: Visualization of occlusion masks during accumulation. White regions denote occluded area. Figure 2: Box-plot of the occlusion proportion \(\alpha_{\Delta}^{1}\) over 5000 samples, the occlusion proportion (Y-axis) increases with the time interval \(\Delta\) (X-axis) increases. (GT) long-range optical flows, we construct a cross-frame video optical flow dataset (CVO), consisting of 12K synthetic video sequences and GT optical flow labels across different frame intervals. This dataset is essential for the research on long-range optical flow estimation and other related tasks. **Dataset Collection** We generate the CVO dataset using Kubric [10], which is a data generation pipeline for creating semi-realistic synthetic multi-object videos. We first simulate the movement of multiple objects, and then render frames along with optical flow annotations. For each video sequence, we render 7 frames of size \(512\times 512\) at 60 FPS (frame per second) in conjunction with the bidirectional optical flow of adjacent frames. In addition, we provide cross-frame bidirectional optical flows across different frame intervals. All the cross-frame flows take the first frame as reference. We further render the RGB video frames with and without random motion blur, which is denoted as _Clean_ and _Final_ sets. We partition all video sequences into two subsets, 11K sequences and 500 sequences, which serve as the training and validation splits, respectively. **Comparisons with Existing Datasets** The CVO dataset contains richer annotations compared with existing optical flow datasets [5, 6] since it provides cross-frame bidirectional optical flow annotations. Moreover, the CVO contains more challenging samples with large motion and complex occlusion. We compare the flow magnitudes among different datasets by plotting the statistical histograms in Figure 5. Even though the FlyingThings3D [6] has similar flow magnitude distribution compared to CVO, the CVO contains more extreme large motions (flow magnitude \(\geq 125\) pixels). According to our experiments, the proposed CVO is sufficient to support researches on long-range optical flow estimation and other related tasks. ## 4 Experiments ### Validation Benchmarks **CVO:** We adopt the CVO testing set, which consists of _Clean_ and _Final_ splits, as one of our validation benchmarks. 
Each split contains \(500\) sequences for evaluation. In each sequence, there are \(7\) frames of size \(512\times 512\), and the default GT optical flow \(\mathbf{F}_{1,7}^{gt}\). If experiments on other frame intervals are desired, we provide the corresponding GT flow \(\mathbf{F}_{1,i}^{gt},i\in[2,6]\) (denoted as CVO-\(i\)). **HS-Sintel:** MPI Sintel [5] is a commonly used optical flow benchmark generated from the realistic animated film. However, it only provides GT flows at 24 FPS. Therefore, we use the High-Speed Sintel videos [18], namely HS-Sintel, as an alternative. Specifically, Janai [18] selected a subset of 19 sequences from the MPI Sintel training set (clean pass) and re-rendered them 24 FPS to 1008 FPS with 4\(\times\) resolution. Unfortunately, the GT flows at other frame rates of HS-Sintel are not publicly available. Therefore, we use the GT flows at 24 FPS of MPI Sintel as labels to evaluate the estimates from video sequences at 1008 FPS of HS-Sintel. ### Implementation Details **Loss function:** During the recurrent process to obtain the target flow \(\mathbf{F}_{1,N}\), the AccFlow also produces intermediate Figure 4: Illustration of the network structure. (a) The AccFlow framework. Time \(t\) decreases from \(N-1\) to \(2\) to obtain long-range flow \(\mathbf{F}_{1,N}\). OFNet is an arbitrary flow estimator. (b) The AccPlus module, an efficient module that implements the backward accumulation in feature domain. The red arrows signify the encoding of images into context features by a context encoder, which adheres to the structure outlined in [48]. Figure 5: The histogram comparisons of the flow magnitude between the training set of CVO and public datasets, such as MPI Sintel [5] and FlyingThings3D [33]. flows \(\mathbf{F}_{t,N},t\in[1,N-2]\). Therefore, we train the network by supervising all the flow outputs with L1 loss: \[\mathcal{L}=\frac{1}{N-2}\sum_{i=1}^{N-2}\left\|\mathbf{F}_{i,N}-\mathbf{F}_{i, N}^{gt}\right\|_{1}\!. \tag{15}\] **Training details:** We train the AccFlow with the mixture of 'clean' and 'final' pass of CVO training set. We augment the training data by randomly cropping the input frames into patches of size \(256\times 256\). Other training hyperparameters (, learning rate and batch size) follow the default settings from [48]. By replacing the OFNet with different existing optical flow estimators, we train four models for comparison. Specifically, we embed the officially pretrained RAFT [48] and GMA [19] in AccFlow framework, respectively. On the one hand, we fix the parameter of OFNet and train other parameters from scratch, and produce Acc+RAFT and Acc+GMA, respectively. On the other hand, we fine-tune the parameter of OFNet and produce Acc+RAFT\({}^{*}\) and Acc+GMA\({}^{*}\), respectively. ### Alternative Approaches Previously, several works [22, 18] have been focused on optical flow accumulation. Therefore, for more comprehensive comparisons, we consider some other alternative approaches to estimate long-range optical flow. **Direct estimation.** One of the naive methods is to directly estimate long-range flow with two distant reference images. Other than RAFT and GMA, we also compare the GMFlow [53] which formulates the optical flow as a global matching problem to solve large motion. For fair comparisons, we also fine-tune the RAFT and GMA with training set of CVO, denoted as RAFT\({}^{*}\) and GMA\({}^{*}\), respectively. 
**Pixel tracking.** Another intuitive way is to use pixel tracking method to iteratively estimate the per-pixel long-range displacement. We use the SOTA pixel tracking method PIPs [11] to achieve this. Such process is time-consuming so we only test this method on CVO testing set. **Warm start.** Zachary _et al_. [48] propose to estimate optical flow with warm start. This method can also be applied in flow accumulation, that is, we use the pre-obtained \(\mathbf{F}_{1,t}\) as an initialized flow input to estimate \(\mathbf{F}_{1,t+1}\). This procedure is essentially an implicit forward accumulation process, thus we include it into comparisons. ### Comparisons with Existing Methods We compare the existing methods in terms of the average End-Point-Error (EPE) applied to all pixels (ALL) and occlusion regions (OCC). In Table 1, we compare our AccFlow with previous methods on two benchmarks, and our AccFlow outperforms all the previous methods by a large margin especially for occluded regions. Specifically, we notice that it is challenge for direct methods (the 1,5,9,13,18-_th_ rows in Table 1) to produce long-range optical flow due to the extreme large motion and occlusion problems. For forward accumulation, the explicit methods (, [22] and [18]) fail to handle the constantly increased occlusion which result in inferior performance. PIPs can accurately estimate sparse motion but suffers from the lack of spatial coherence information for dense flow estimation. Moreover, the im \begin{table} \begin{tabular}{l|c c c|c c c|c c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{HS-Sintel} & \multicolumn{3}{c|}{CVO (_Clean_)} & \multicolumn{3}{c|}{CVO (_Final_)} & \multicolumn{1}{c}{Inference} \\ & ALL & NOC & OCC & ALL & NOC & OCC & ALL & NOC & OCC & time (s) \\ \hline RAFT & 2.141 & 1.124 & 7.169 & 5.687 & 2.798 & 13.233 & 6.653 & 3.812 & 13.891 & 0.129 \\ RAFT-_Lim_ & 3.868 & 1.845 & 12.63 & 11.96 & 6.573 & 31.10 & 12.34 & 6.938 & 31.45 & 0.956 \\ RAFT-_w_ & 1.921 & 1.004 & 6.623 & 5.259 & 2.274 & 12.59 & 5.508 & 2.493 & 12.90 & 0.525 \\ Acc+RAFT (ours) & 1.709 & 1.163 & 5.639 & 3.170 & 1.623 & 8.113 & 3.283 & 1.714 & 8.261 & 0.813 \\ \hline GMA & 2.291 & 1.330 & 7.139 & 5.757 & 2.775 & 13.58 & 6.265 & 3.530 & 13.71 & 0.234 \\ GMA-_Lim_ & 3.871 & 1.764 & 12.79 & 12.22 & 6.708 & 31.40 & 12.42 & 7.038 & 31.61 & 2.159 \\ GMA-_w_ & 1.924 & 1.043 & 6.458 & 5.136 & 2.137 & 12.49 & 5.515 & 2.502 & 12.81 & 1.167 \\ Acc+GMA (ours) & 1.568 & 1.091 & 5.003 & 3.583 & 1.807 & 8.868 & 3.752 & 1.979 & 9.030 & 1.499 \\ \hline RAF & 2.567 & 1.426 & 7.717 & 4.445 & 1.948 & 11.73 & 4.537 & 2.003 & 11.70 & 0.129 \\ RAFT\({}^{*}\)-_Lim_ & 3.657 & 1.611 & 12.36 & 23.34 & 6.543 & 32.90 & 13.02 & 7.033 & 33.82 & 0.956 \\ RAFT\({}^{*}\)-_w_ & 2.139 & 1.059 & 6.963 & 3.738 & 1.052 & 10.41 & 3.808 & 1.162 & 10.14 & 0.525 \\ Acc+RAFT\({}^{*}\) (ours) & 1.383 & 0.930 & 4.546 & 2.634 & 1.155 & 7.302 & 2.707 & 1.249 & 7.295 & 0.813 \\ \hline GMA\({}^{*}\) & 2.520 & 1.469 & 7.600 & 4.638 & 2.342 & 11.33 & 4.633 & 2.114 & 11.36 & 0.234 \\ GMA\({}^{*}\)-_Lim_ & 3.306 & 1.381 & 11.70 & 11.39 & 5.833 & 31.28 & 11.68 & 6.130 & 31.35 & 2.159 \\ GMA\({}^{*}\)-_w_ & 1.888 & 0.946 & 6.516 & 3.832 & 1.082 & 10.38 & 3.807 & 1.159 & 10.10 & 1.167 \\ Acc+GMA\({}^{*}\) (ours) & 1.434 & 0.950 & 4.770 & 2.732 & 1.181 & 7.438 & 2.808 & 1.261 & 7.495 & 1.499 \\ \hline SlowFlow & 2.58\({}^{\dagger}\) & 0.87\({}^{\dagger}\) & 9.45\({}^{\dagger}\) & - & - & - & - & - & - & \(\geq 500\) \\ PIPs & - & - & - & 8.568 & 6.351 & 21.55 & 8.954 & 
6.718 & 22.06 & \(\geq 500\) \\ GMFlow & 2.055 & 1.024 & 7.132 & 5.801 & 2.680 & 13.521 & 6.506 & 3.402 & 14.21 & 0.341 \\ \hline \end{tabular} \end{table} Table 1: Comparisons of AccPlus framework with other methods on two benchmarks in terms of EPE \(\downarrow\) on all regions (ALL) and occluded regions (OCC). The best and the second-best results are marked in red and blue, respectively. ‘_-Lim_’ denotes the flow accumulation method in [22]. ‘-\(w\)’ denotes the warm-start method (details in Section 4.3). For the SlowFlow [18], we refer to data in their paper (denoted with \({}^{\dagger}\)). We report the inference time of 7 frames of size \(512\times 512\) per sample on an NVIDIA GTX3090 GPU. plicit forward accumulation method (_i.e_., warm start) is not specially designed for this task and falls short in tackling the occlusion problem, but it still brings a certain performance gain compared with direct methods. Compared to all these methods, the AccFlow framework decreases the average EPE by a large margin, which justifies the effectiveness of our framework for occlusion correction and non-occlusion correspondence enhancement. Moreover, qualitative comparisons are shown in Figure 6, where two small objects with large motion are annotated in red boxes. It can be seen that our AccFlow produces accurate optical flows, while the compared methods suffer from significant errors, especially in occluded areas. ### Ablation Study Backward vs. Forward accumulation: In Section 3.2, we demonstrate that the backward accumulation is less susceptible to the occlusion effect than the forward one. In order to fairly compare the two methods, we design a modified AccPlus module which implements the forward accumulation in Table 2 (denoted as 'F.'). It is worth noting that the modification only changes the inputs of the network, and no additional computational complexity is introduced. The detailed structure of the forward version of AccPlus is provided in the appendix. In Table 2, we compare the backward accumulation with the forward one in terms of EPE under the same experimental settings. We find that the backward version deals with the occluded area more effectively than the forward version, by a large margin. This is because the backward accumulation maintains a stable and minimal occlusion proportion at each step of the iteration. Adaptive blending module: In Section 3.3, we design the AccFlow framework not only to address the occlusion problem but also to suppress the accumulation error. Specifically, the adaptive blending module takes a directly estimated long-range flow as a prior to rectify the accumulated flow. To evaluate this, we train networks with and without the adaptive blending module (denoted as 'AB') in Table 2. The EPE is reduced by a large margin, especially in non-occluded areas (NOC), which demonstrates the necessity of the adaptive blending module for mitigating the accumulated error. Accumulation for different frame ranges: In Figure 7, we show the results of long-range optical flow estimation for different estimation ranges. When the range increases, the EPE of the flows from our proposed AccFlow (Acc+GMA\({}^{*}\)) increases more slowly than that of direct estimation and the warm-start methods. This observation shows the robustness of our proposed framework across different estimation ranges. ## 5 Conclusion We propose the backward accumulation strategy for improved long-range optical flow estimation, surpassing prior methods. AccFlow performs backward accumulation in the feature domain together with DNN-based error correction.
Experimental results show that AccFlow effectively handles both occlusion and accumulation errors, notably reducing the EPE on several benchmarks. Ablation studies confirm the superiority of backward over forward accumulation and the necessity of the adaptive blending module. In conclusion, AccFlow offers a simple, effective, and scalable solution for flow accumulation. ## 6 Acknowledgment The work was supported in part by the Shanghai Pujiang Program under Grant 22PJ1406800, in part by the Guangdong Basic and Applied Basic Research Foundation under Project (No.2023A1515010644), and in part by Sichuan Provincial Key Laboratory of Intelligent Terminals under Grant SCITLAB-20016. \begin{table} \begin{tabular}{c c|c|c c c|c c c} \hline \multicolumn{2}{c|}{Acc+RAFT} & \multirow{2}{*}{AB} & \multicolumn{3}{c|}{HS-Sintel} & \multicolumn{3}{c}{CVO (Final)} \\ \cline{2-9} \multicolumn{1}{c|}{F.} & \multicolumn{1}{c|}{B.} & & \multicolumn{1}{c|}{ALL} & \multicolumn{1}{c|}{NOC} & \multicolumn{1}{c|}{OCC} & \multicolumn{1}{c}{ALL} & \multicolumn{1}{c}{NOC} & \multicolumn{1}{c}{OCC} \\ \hline ✓ & & & 2.238 & 5.785 & 5.788 & 3.283 & 1.914 & 7.716 \\ & ✓ & & 1.740 & 1.303 & 4.711 & 2.709 & 1.252 & 7.299 \\ ✓ & & ✓ & 1.716 & 0.936 & 5.895 & 3.229 & 0.873 & 8.823 \\ & ✓ & ✓ & 1.383 & 0.930 & 4.546 & 2.707 & 1.249 & 7.295 \\ \hline \end{tabular} \end{table} Table 2: Ablation study of the AccFlow framework (reported in EPE \(\downarrow\)). ‘F.’ denotes a modified AccPlus that accumulates the local flows in a forward manner, ‘B.’ is the proposed AccPlus with backward accumulation, and ‘AB’ denotes the adaptive blending module. Figure 6: Visual quality comparisons on the CVO dataset. Two small objects with large motions are emphasized with red boxes. More results can be found in the supplementary. Figure 7: Average EPE \(\downarrow\) (ALL) of long-range flows from the compared methods in different estimation ranges.
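For reference, a small sketch of the evaluation metric reported above: the average end-point error (EPE) over all pixels (ALL) and restricted to non-occluded (NOC) or occluded (OCC) regions, assuming a binary ground-truth occlusion mask is available; the helper name `epe` is illustrative.

```python
import numpy as np

def epe(flow_pred, flow_gt, region_mask=None):
    """Average end-point error; optionally restricted to a binary region mask."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)   # per-pixel L2 error
    if region_mask is None:
        return err.mean()                                # ALL
    return err[region_mask > 0].mean()                   # NOC or OCC region

# Usage, assuming `occ` is a 0/1 ground-truth occlusion mask:
# epe_all = epe(pred, gt)
# epe_noc = epe(pred, gt, region_mask=1 - occ)
# epe_occ = epe(pred, gt, region_mask=occ)
```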
2310.06012
Holographic description of Narain CFTs and their code-based ensembles
We provide a precise relation between an ensemble of Narain conformal field theories (CFTs) with central charge $c=n$, and a sum of $(U(1) \times U(1))^n$ Chern-Simons theories on different handlebody topologies. We begin by reviewing the general relation of additive codes to Narain CFTs. Then we describe a holographic duality between any given Narain theory and a pure Chern-Simons theory on a handlebody manifold. We proceed to consider an ensemble of Narain theories, defined in terms of an ensemble of codes of length $n$ over ${\mathbb Z}_k \times {\mathbb Z}_k$ for prime $k$. We show that averaging over this ensemble is holographically dual to a level-$k$ $(U(1) \times U(1))^n$ Chern-Simons theory, summed over a finite number of inequivalent classes of handlebody topologies. In the limit of large $k$ the ensemble approaches the ensemble of all Narain theories, and its bulk dual becomes equivalent to "U(1)-gravity" - the sum of the perturbative part of the Chern-Simons wavefunction over all possible handlebodies - providing a bulk microscopic definition for this theory. Finally, we reformulate the sum over handlebodies in terms of Hecke operators, paving the way for generalizations.
Ofer Aharony, Anatoly Dymarsky, Alfred D. Shapere
2023-10-09T18:00:00Z
http://arxiv.org/abs/2310.06012v1
# Holographic description of Narain CFTs and their code-based ensembles ###### Abstract We provide a precise relation between an ensemble of Narain conformal field theories (CFTs) with central charge \(c=n\), and a sum of \((U(1)\times U(1))^{n}\) Chern-Simons theories on different handlebody topologies. We begin by reviewing the general relation of additive codes to Narain CFTs. Then we describe a holographic duality between any given Narain theory and a pure Chern-Simons theory on a handlebody manifold. We proceed to consider an ensemble of Narain theories, defined in terms of an ensemble of codes of length \(n\) over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) for prime \(k\). We show that averaging over this ensemble is holographically dual to a level-\(k\)\((U(1)\times U(1))^{n}\) Chern-Simons theory, summed over a finite number of inequivalent classes of handlebody topologies. In the limit of large \(k\) the ensemble approaches the ensemble of all Narain theories, and its bulk dual becomes equivalent to "U(1)-gravity" - the sum of the pertubative part of the Chern-Simons wavefunction over all possible handlebodies - providing a bulk microscopic definition for this theory. Finally, we reformulate the sum over handlebodies in terms of Hecke operators, paving the way for generalizations. ###### Contents * 1 Introduction * 2 Additive codes and Narain CFTs * 2.1 Codes over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) * 2.2 General case * 3 \((U(1)\times U(1))^{n}\) Chern-Simons theories on a solid torus * 3.1 A review of Abelian Chern-Simons theories on handlebodies * 3.2 The wavefunction of \(U(1)_{k}\) theory on a torus * 3.3 Wavefunction of the \((U(1)\times U(1))_{k}\) theory * 3.4 General case * 4 Holographic description of the ensemble of code CFTs * 4.1 Level \(k=1\) CS theories and conventional holographic correspondence * 4.2 Averaging over Narain CFTs * 4.3 Level \(k>1\) CS theory and ensemble averaging * 4.4 Holographic correspondence in the \(k\to\infty\) limit * 4.5 Ensembles of \(n=1\) and \(n=2\) theories in the large \(p\) limit * 4.6 Extensions and generalizations * 5 Ensemble averaging, typicality and holography * 6 Discussion * A The compact scalar CFT * B Chern-Simons theory: technical details * C Narain \(c=2\) theories * C.1 All even self-dual \(n=2\) codes over \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) * C.2 Hecke operators and triality * C.3 Averaging over the moduli space * C.4 Large-\(p\) limit Introduction In recent years, it has become evident that certain gravitational theories in anti-de Sitter (AdS) space are dual to ensemble averages, rather than to individual quantum field theories. A general argument for requiring an ensemble to describe gravitational bulk theories is based on the presence of bulk geometries with several disconnected boundaries [1], known as "wormholes". If the gravitational action of such configurations is non-trivial, then the dual field theory will not factorize on disconnected manifolds, necessitating an ensemble interpretation. The first explicit example of such a duality arose in 2d JT gravity, which was found to be dual to an average over an ensemble of quantum-mechanical systems [2]. In one dimension higher, an intriguing example that motivated our study is provided by a theory called "U(1) gravity", which is formulated as a sum over handlebody geometries in the bulk, and is dual to an average over the moduli space of Narain CFTs [3; 4] (see also [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17] for further developments). 
Yet the original example of a holographic correspondence, between the \(\mathcal{N}=4\) supersymmetric Yang-Mills theory and type IIB string theory on \(AdS_{5}\times S^{5}\)[18], has so far evaded an ensemble interpretation. This raises the question: when does an ensemble of field theories admit a holographic interpretation? In particular, can a finite ensemble have a gravitational dual, and which bulk geometries need to be summed over in this case? In this paper, we address the latter question by studying finite ensembles of Narain theories composed of "code CFTs", which were introduced in [19; 20; 21]. We find that they are dual to a \((U(1)\times U(1))^{n}\) Chern-Simons theory of finite level, summed over a finite number of inequivalent handlebody topologies. The relation between error-correcting codes and CFTs goes back all the way to the Golay code, which is associated with the Leech lattice, and which led to the discovery of the Monster CFT [22], followed by other developments connecting codes and chiral CFTs [23; 24; 25; 26; 27; 28]. Motivated by these developments, as well as by the emergence of quantum error correction in the context of bulk reconstruction [29], two of us proposed a connection between quantum codes and non-chiral CFTs in [19]. Our work led to further activity connecting codes and CFTs, with applications to the modular bootstrap program and beyond [6; 20; 21; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. Ensembles of code CFTs were found in [6; 21] to be self-averaging and to exhibit a large spectral gap, suggesting a possible holographic interpretation and motivating the current study. We first consider the holographic description of an individual Narain CFT with \(c=n\) on a Riemann surface \(\Sigma\). By explicitly evaluating the partition functions on both sides of the duality for \(\Sigma\) of genus one, we show that it is dual to a pure level-1 \((U(1)\times U(1))^{n}\) "\(AB\)" Chern-Simons (CS) theory on a 3-manifold \(\mathcal{M}\) with boundary \(\partial\mathcal{M}=\Sigma\) (any such 3-manifold can be chosen and gives the same results, there is no sum over 3-manifolds), and we establish the precise holographic dictionary. We note that the two \(U(1)^{n}\) gauge fields are coupled at the level of large gauge transformations, and their boundary conditions determine the moduli of the Narain theory. The level \(k=1\) Chern-Simons theory avoids the factorization puzzle because it is trivial in the bulk - it has a unique wavefunction on any \(\Sigma\), and in particular the partition function on a "wormhole" geometry connecting two disjoint boundaries is the same as that on the disconnected product of manifolds with the same boundaries. For a \((U(1)\times U(1))^{n}\) CS theory of level \(k>1\), the field theory dual is no longer an individual Narain CFT. Rather, we find it to be dual to an ensemble average over a finite set of \(c=n\) Narain CFTs based on the set of all even self-dual codes of length \(n\) over the "alphabet" \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\). In this case the CS wavefunction depends on the topology of the bulk manifold \(\mathcal{M}\), and we find that the averaged CFT partition function is precisely reproduced by the corresponding Chern-Simons wavefunction, summed over a finite number of equivalence classes of handlebody topologies. The boundary conditions of the CS theory map to parameters of the Narain theories in the ensemble. 
Our main identity (4.17), valid for any fixed \(n\) and prime \(k=p\), gives an explicit relation between the average over the code-based ensemble and the "Poincare series" representing a (finite) sum over bulk geometries. The \(k=1\) duality of the previous paragraph may be viewed as a special case of this, where the ensemble contains just a single CFT. As \(p\to\infty\), for \(n>2\) we will argue that the ensemble of code theories densely covers the whole of Narain moduli space with the canonical measure. We show explicitly how a similar limit works in the case of \(n=2\), by expressing the average in terms of Hecke operators, and applying a theorem [44] on the equidistribution of Hecke points. In the bulk, for \(n>2\) in the \(p\to\infty\) limit we recover the full Poincare sum over all handlebody topologies, reproducing the "U(1)-gravity" of [3, 4]. Thus, our construction provides a microscopic bulk definition for the latter, as a limit of CS theories. Arguments of typicality suggest that for large ensembles of CFTs that are self-averaging and possess a holographic description as a sum over geometries, random individual theories should also admit an approximate holographic description as a sum over geometries. Motivated by this, we propose a sum-over-geometries description for any individual Narain theory with \(n>2\), that in general is non-local in the bulk, but that becomes approximately local for typical (random) theories as \(n\to\infty\) (which is the limit in which the ensemble becomes self-averaging). The plan of the paper is as follows. In Section 2, we briefly review the relation between additive codes, lattices, and Narain CFTs. In the course of this discussion, we generalize previous constructions by introducing arbitrary additive codes in Section 2.2. Section 3 reviews \((U(1)\times U(1))^{n}\) Chern-Simons theories on handlebody geometries, and constructs their wavefunctions for general boundary conditions. In Section 4 we discuss the holographic interpretation of Narain theories and their ensembles. First, in section 4.1 we show that the wavefunction of level-1 \((U(1)\times U(1))^{n}\) Chern-Simons theory, evaluated with given boundary conditions, is equal to the partition function of a Narain CFT. The point in the Narain moduli space is specified by the boundary conditions of the CS theory, establishing an explicit holographic dictionary. We briefly discuss the idea of averaging over these boundary conditions in section 4.2, and proceed to discuss level \(k>1\) CS theories summed over geometries in section 4.3. This section establishes our main technical result, equation (4.17). We discuss the \(k\to\infty\) limit and the emergence of "U(1)-gravity" in section 4.4. Section 4.5 is devoted to a detailed analysis of the \(n=1\) and \(n=2\) cases. It also establishes connections with Hecke operators, which we further discuss, together with related mathematical observations, in section 4.6. Section 5 explores the holographic description of an individual Narain theory as a sum over geometries. We conclude with a discussion in Section 6. Several appendices contain technical details. ## 2 Additive codes and Narain CFTs ### Codes over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) A classical additive code over an Abelian group \(F\) is a collection of \(F\)-valued strings (codewords) of length \(n\) closed under addition within \(F\). Additive codes are naturally related to lattices [45], and thus to lattice-based chiral CFTs [23]. 
Recently, codes of more general type have been shown to be related to Narain CFTs, their orbifolds, and Abelian fermionic theories [30, 31, 35, 37, 38, 39, 40, 41, 42, 43, 19, 43, 20, 44, 31]. As an illustrative example, we briefly review the relation between additive even self-dual codes over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) and Narain theories [20, 21]. A code \(\mathcal{C}\) over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) can be thought of as a linear space within \(\mathbb{Z}_{k}^{2n}\) where all algebra is defined mod \(k\). The space is equipped with the indefinite scalar product \[\eta=\left(\begin{array}{cc}0&\mathbb{I}_{n\times n}\\ \mathbb{I}_{n\times n}&0\end{array}\right) \tag{2.1}\] with respect to which all code vectors are "even", \(c^{T}\eta\,c=0\,(\text{mod }2k)\) for all \(c\in\mathcal{C}\). Furthermore, self-duality implies that any vector \(c^{\prime}\in\mathbb{Z}^{2n}\) orthogonal to \(\mathcal{C}\), in the sense that \(c^{T}\eta\,c^{\prime}=0\,(\text{mod }k)\) for any \(c\in\mathcal{C}\), also belongs to \(\mathcal{C}\). There are \[\mathcal{N}=\prod_{i=0}^{n-1}(p^{i}+1) \tag{2.2}\] distinct codes of this type when \(k=p\) is prime (the expression for composite \(k\) is more involved). Starting from an even self-dual code \(\mathcal{C}\), we can define an even self-dual lattice \(\Lambda_{\mathcal{C}}\subset\mathbb{R}^{n,n}\) as follows: \[\Lambda_{\mathcal{C}}\equiv\left\{v/\sqrt{k}\,|\,v\in\mathbb{Z}^{2n},\,v\,{ \rm mod}\,k\in\mathcal{C}\subset\mathbb{Z}_{k}^{2n}\right\}. \tag{3}\] A lattice \(\Lambda_{\mathcal{C}}\) defines a Narain CFT. When \(k=p\) is prime, the CFT can be described, via a \(T\)-duality transformation1, as a compactification on an \(n\)-dimensional torus with metric \(\gamma=I/\sqrt{p}\) and with \(B\)-field given by an antisymmetric integer-valued matrix \({\rm B}_{ij}\in\mathbb{Z}\), such that \(G=(I,{\rm B}^{T})\) with \({\rm B}=B\,{\rm mod}\,p\) is the generator matrix of the code \(\mathcal{C}\) brought into canonical form,2 Footnote 1: T-duality is commonly understood as the action of \(O(n,n,\mathbb{Z})\) on \(\gamma\) and \(B\). From the lattice generator matrix point of view, it is the action of \(O(n,n,\mathbb{Z})\) from the right, amended by the action of \(O(n,\mathbb{R})\times O(n,\mathbb{R})\) from the left to preserve the “left bottom block equal zero” structure as in (4). From the Narain lattice point of view action of \(O(n,n,\mathbb{Z})\) from the right is trivial. Hence, in the context of Narain lattices, by T-duality we mean the action of \(O(n,\mathbb{R})\times O(n,\mathbb{R})\). Footnote 2: The generator matrix \(G\) is a \(n\times 2n\) matrix such that all codewords are given by \(c=G^{T}q\,{\rm mod}\,k\), \(q\in\mathbb{Z}_{k}^{n}\). The form of \(G=(I,{\rm B}^{T})\) with some antisymmetric \({\rm B}_{ij}\in\mathbb{Z}_{k}\) is called canonical. When \(k\) is prime, one can always bring the generator matrix to the canonical form using so-called code equivalence transformations [21]. \[\Lambda_{\mathcal{C}}=O_{T}\left(\begin{array}{cc}\gamma^{*}&\gamma^{*}B\\ 0&\gamma\end{array}\right),\qquad\gamma^{*}\equiv(\gamma^{\mathsf{T}})^{-1}, \qquad O_{T}\in O(n,\mathbb{R})\times O(n,\mathbb{R}). \tag{4}\] Here we use \(\Lambda_{\mathcal{C}}\) to denote both the lattice and the lattice-generating matrix. An important object characterizing a code is the complete enumerator polynomial \(W_{\mathcal{C}}\). It counts the number of codewords of a code, that include a given "letter" with the specified multiplicity. 
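As a quick numerical sanity check of the construction in (2.1), (3) and (4): for a code in canonical form with generator \(G=(I,{\rm B}^{T})\) and antisymmetric integer \({\rm B}\), one convenient \(\mathbb{Z}\)-basis of \(\Lambda_{\mathcal{C}}\) consists of the rows of \(G/\sqrt{p}\) together with the vectors \((0,\sqrt{p}\,e_{j})\); this particular basis is our choice and generates the same lattice as (3). The sketch below verifies that the resulting Gram matrix with respect to \(\eta\) is integral with even diagonal and unit determinant, i.e. that \(\Lambda_{\mathcal{C}}\) is even and self-dual.

```python
import numpy as np

p = 3                                     # prime alphabet size
B = np.array([[0, 1], [-1, 0]])           # antisymmetric integer B-field, n = 2
n = B.shape[0]

eta = np.block([[np.zeros((n, n)), np.eye(n)],
                [np.eye(n), np.zeros((n, n))]])

# Z-basis for Lambda_C of Eq. (3): rows of the canonical generator G = (I, B^T)
# rescaled by 1/sqrt(p), together with the vectors (0, sqrt(p) e_j).
G = np.hstack([np.eye(n), B.T])
basis = np.vstack([G / np.sqrt(p),
                   np.hstack([np.zeros((n, n)), np.sqrt(p) * np.eye(n)])])

gram = basis @ eta @ basis.T
assert np.allclose(gram, np.round(gram))              # integral pairings
assert np.allclose(np.round(np.diag(gram)) % 2, 0)    # even lattice
assert np.isclose(abs(np.linalg.det(gram)), 1)        # unimodular: self-dual
print(np.round(gram).astype(int))                     # here equals eta itself
```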
In the present case, with the "alphabet" \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\), we regard a codeword \(c=(a_{1},\ldots,a_{n},b_{1},\ldots,b_{n})\) as being composed of letters \((a_{i},b_{i})\in\mathbb{Z}_{k}\times\mathbb{Z}_{k}\). Introducing \(k^{2}\) formal variables \(X_{ab}\) with \(0\leq a,b<k\) to represent the letters, one defines the complete enumerator polynomial \[W_{\mathcal{C}}(X)=\sum_{(\vec{a},\vec{b})\in\mathcal{C}}\prod_{i=1}^{n}X_{a_ {i}b_{i}}. \tag{5}\] For self-dual \(\mathcal{C}\), \(W_{\mathcal{C}}\) satisfies the so-called Mac-Williams identity \[W_{\mathcal{C}}(X)=W_{\mathcal{C}}(X^{\prime}),\quad\text{where}\ \ X^{\prime}_{ab}\equiv\frac{1}{k}\sum_{a^{\prime},b^{\prime}}X_{a^{\prime}b^{ \prime}}e^{-2\pi i(a^{\prime}b+a\,b^{\prime})/k}. \tag{6}\] To better illustrate the notation, we consider a simple example - length \(n=1\) even self-dual codes over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\). When \(k=1\) there is a unique code consisting of only one codeword \((0,0)\in{\cal C}\). When \(k=p\) is prime, there are two codes, one with codewords of the form \((a,0)\in{\cal C}_{1}\) and the other with \((0,b)\in{\cal C}_{2}\), with arbitrary \(0\leq a,b<p\). Their enumerator polynomials are \[W_{{\cal C}_{1}}(X)=\sum_{a=0}^{p-1}X_{a0},\qquad W_{{\cal C}_{2}}(X)=\sum_{b=0} ^{p-1}X_{0b}. \tag{7}\] When \(k>1\) is not prime, there are more codes. All length \(n=2\) codes for prime \(k=p\) are listed in Appendix C.1. A defining feature of the code construction of Narain theories is that for a Narain theory defined with the help of a code-based lattice (3), its torus partition function can be concisely written in terms of \(W_{\cal C}\). Indeed, the torus partition function of a Narain theory is defined in terms of a Siegel theta series that sums over all lattice points. For \(\Lambda_{\cal C}\) as in (4), we can readily see that the lattice points organize into sets, each associated with a given codeword \((\vec{a},\vec{b})\in{\cal C}\): \[S_{\vec{a},\vec{b}}=\{v/\sqrt{k}\in\Lambda_{\cal C}\,|\,v=(\vec{a},\vec{b})\, \,\text{mod}\,\,k\}. \tag{8}\] We can sum over these sets individually, yielding \[Z(\tau) = W_{\cal C}(\Psi),\quad\Psi_{ab}=\frac{\Theta_{ab}}{|\eta(\tau)|^ {2}},\] \[\Theta_{ab} \equiv \sum_{n,m}e^{i\pi\tau p_{L}^{2}-i\pi\bar{\tau}p_{R}^{2}},\quad p_{ L,R}=\sqrt{\frac{k}{2}}((n+a/k)\pm(m+b/k)),\quad n,m\in\mathbb{Z}. \tag{9}\] It can be readily seen that by virtue of \({\cal C}\) being even, each combination \(\prod_{i=1}^{n}\Psi_{a_{i}b_{i}}\) in \(W_{\cal C}\) associated with an individual codeword \((\vec{a},\vec{b})\in{\cal C}\) will be invariant under \(\tau\to\tau+1\), although individual factors \[\Psi_{ab}(\tau+1)=\Psi_{ab}(\tau)\,e^{2\pi iab/k} \tag{10}\] are not. Furthermore, \(Z\) will be invariant under \(\tau\to-1/\tau\) due to the Mac-Williams identity (6) and the fact that \(\Psi_{ab}(-1/\tau)=\Psi^{\prime}_{ab}(\tau)\), where \(\Psi^{\prime}\) are defined as \(X^{\prime}\) in (6). The relation between the code's enumerator polynomial and the associated CFT partition function can be extended to higher genus [33; 35]. The relation between codes and CFTs at the level of the partition function has proved to be a useful tool, which among other things provides an efficient way to solve modular bootstrap constraints, construct inequivalent isospectral CFTs [30] and modular invariant \(Z(\tau)\) which are "fake" (i.e., not associated with any bona fide CFT) [36], construct "optimal" CFTs maximizing the value of the spectral gap [21; 31], etc. 
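The Mac-Williams identity (6) is straightforward to check numerically in small examples. The sketch below evaluates both sides of (6) on random complex values of the formal variables \(X_{ab}\) for the two length-one codes in (7); the helper names are illustrative.

```python
import numpy as np

def mw_transform(X, k):
    """X'_{ab} = (1/k) sum_{a',b'} X_{a'b'} exp(-2*pi*i*(a'b + a b')/k), Eq. (6)."""
    a = np.arange(k)
    Q = np.exp(-2j * np.pi * np.outer(a, a) / k)   # Q[a, b] = e^{-2 pi i a b / k}
    return Q @ X.T @ Q / k

def enum_poly(code, X):
    """Complete enumerator polynomial of a length-one code, cf. Eq. (5)."""
    return sum(X[a, b] for (a, b) in code)

k = 5
C1 = [(a, 0) for a in range(k)]            # the code with codewords (a, 0)
C2 = [(0, b) for b in range(k)]            # the code with codewords (0, b)

rng = np.random.default_rng(0)
X = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
Xp = mw_transform(X, k)

for C in (C1, C2):
    print(np.isclose(enum_poly(C, X), enum_poly(C, Xp)))   # True for both
```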
One recent application was the calculation of the spectral gap of \(U(1)^{n}\times U(1)^{n}\) primaries for a typical code theory when \(k\to\infty\) while \(n\) is kept fixed [21]. The resulting gap, \(\Delta=n/(2\pi e)\), matches the value of the spectral gap in "\(U(1)\)-gravity" [3; 6], a result we return to in section 5. The results mentioned above mostly rely on a "rigid embedding" (3) or its analogues, in which a code, understood as a subset of \(\mathbb{Z}_{k}^{2n}\), is mapped to a lattice \(\Lambda_{\mathcal{C}}\), which is a sublattice of a cubic lattice of spacing \(1/\sqrt{k}\), \((\sqrt{k}\,\mathbb{Z})^{2n}\subset\Lambda_{\mathcal{C}}\subset(\mathbb{Z}/ \sqrt{k})^{2n}\subset\mathbb{R}^{n,n}\). This rigidity, which allows only very special Narain lattices to be obtained from codes, suggests a picture in which codes are related to a set of very special Narain theories, dubbed code CFTs. In this picture, there is a close relation between the underlying code and the algebraic properties of the CFT [32]. However, as we will see momentarily, these maps from codes to CFTs are a particular instance of a much more general relation. ### General case Reference [21] provides a general construction of codes over an Abelian group \(\mathsf{G}\) defined as the quotient group of a self-orthogonal even "glue lattice" \(\mathsf{\Lambda}\), \[\mathsf{G}=\mathsf{\Lambda}^{*}/\mathsf{\Lambda},\qquad\mathsf{\Lambda}^{*} \equiv\eta(\mathsf{\Lambda}^{\mathsf{T}})^{-1}. \tag{11}\] In [21] the focus was on \(\mathsf{\Lambda}\subset\mathbb{R}^{1,1}\) and all such lattices were classified there. They are defined by \[\mathsf{\Lambda}^{T}\eta\mathsf{\Lambda}=g_{\mathsf{\Lambda}}=\left(\begin{array} []{cc}2\,\mathsf{n}&\mathsf{k}\\ \mathsf{k}&2\,\mathsf{m}\end{array}\right),\quad\mathsf{n},\mathsf{m},\mathsf{ k}\in\mathbb{Z},\quad\mathsf{k}^{2}-4\mathsf{nm}>0, \tag{12}\] with an arbitrary \(O_{+}(1,1)\) transformation acting on \(\mathsf{\Lambda}\). In particular, the case of \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) codes discussed above corresponds to the glue matrix \[\mathsf{\Lambda}=\left(\begin{array}{cc}1/r&0\\ 0&r\end{array}\right)\sqrt{k} \tag{13}\] with \(r=1\). A nontrivial "embedding" \(r\) in (13) changes \(p_{L,R}\) in (9) to \[p_{L,R}=\sqrt{\frac{k}{2}}((n+a/k)/r\pm(m+b/k)r), \tag{14}\] while changing neither the relation between \(Z(\tau)\) and \(W_{\mathcal{C}}\), nor the way in which \(\Psi_{ab}(\tau)\) changes under modular transformations of \(\tau\). The group \(\mathsf{G}\) is an "alphabet", while codes are collections of \(\mathsf{G}\)-valued strings of length \(n\) closed under addition and equipped with the scalar product inherited from \(\eta\). Then, even self-dual codes \(\mathcal{C}\) over \(\mathsf{G}\) define even self-dual (Narain) lattices \(\Lambda_{\mathcal{C}}\) in \(\mathbb{R}^{n,n}\) via a straightforward generalization of (3), \[\Lambda\subset\,\Lambda_{\mathcal{C}}\,\subset\Lambda^{*}\subset\mathbb{R}^{n,n },\qquad\Lambda=\mathsf{\Lambda}\oplus\cdots\oplus\mathsf{\Lambda}. \tag{15}\] If all \(n\) glue lattices \(\mathsf{\Lambda}\) in (15) are the same, then the permutation of letters within the codeword - a code equivalence - is also a symmetry (an element of the T-duality group) at the level of the code CFT. 
But one can also choose \(n\) different parameters \(r_{i}\), in which case to preserve the relation between \(Z\) and \(W_{\mathcal{C}}\), the enumerator polynomial should depend on \(n\,k^{2}\) distinct auxiliary variables \(X^{i}_{ab}\), \(W_{\mathcal{C}}=\sum_{(\vec{a},\vec{b})\in\mathcal{C}}\prod_{i}X^{i}_{a_{i}b_ {i}}\). More generally, one can consider \(O(n,n,\mathbb{R})\) transformations acting and mixing several or all \(\mathsf{\Lambda}\)'s within \(\Lambda\), or combinations of completely different even self-orthogonal matrices \(\mathsf{\Lambda}\) of different dimensions (leading to codes where different letters belong to different alphabets). Thus, most generally one can consider a lattice \(\Lambda\subset\mathbb{R}^{n,n}\), even and self-orthogonal with respect to the \(2n\)-dimensional scalar product (1) within \(\mathbb{R}^{n,n}\), with the "codewords" being elements of the Abelian quotient group \(c\in\mathcal{C}\subset\Lambda^{*}/\Lambda=\mathsf{G}_{\Lambda}\). This group defines a "dictionary," a set of all possible "words". The "dictionary group" inherits the scalar product from (1). An even self-dual code would additionally satisfy \[c^{T}\,\eta\,c\in 2\mathbb{Z}\,\text{ for any }c\in\mathcal{C}, \tag{16}\] \[c^{T}\,\eta\,c^{\prime}\in\mathbb{Z}\,\text{ for all }c,c^{\prime}\in\mathcal{C}, \tag{17}\] while if \(c^{\prime}\notin\mathcal{C}\) then \(c^{T}\,\eta\,c^{\prime}\) is not an integer for some \(c\in\mathcal{C}\). Any even self-dual code then defines a Narain lattice, generalizing (3), \[\Lambda_{\mathcal{C}}=\{v\,|\,v\in\Lambda^{*},\,(v\,\operatorname{mod}\Lambda) \in\mathcal{C}\}\,,\quad\Lambda\subset\Lambda_{\mathcal{C}}\subset\Lambda^{*} \subset\mathbb{R}^{n,n}. \tag{18}\] Here we denote by \((v\,\operatorname{mod}\Lambda)\) the equivalence class of \(v\) within \(\Lambda^{*}/\Lambda\). In general, the relation between the associated CFT partition function and the code enumerator polynomial remains essentially the same. The complete enumerator polynomial \[W_{\mathcal{C}}(X)=\sum_{c\in\mathcal{C}}X_{c}, \tag{19}\] is defined in term of formal auxiliary variables \(X_{c}\) for \(c\in\mathsf{G}_{\Lambda}\), which are then promoted to "(code)word blocks" \(\Psi_{c}\) with modular parameter \(\tau\) and arbitrary fugacities \(\xi,\bar{\xi}\in\mathbb{R}^{n}\) \[\Psi_{c}(\tau,\xi,\bar{\xi})=\frac{\Theta_{c}}{|\eta(\tau)|^{2n}}, \quad\Theta_{c}=\sum_{\ell}e^{i\pi\tau p_{L}^{2}-i\pi\bar{\tau}p_{R}^{2}+2\pi i (p_{L}\cdot\xi-p_{R}\cdot\bar{\xi})+\frac{\pi}{2\tau_{2}}(\xi^{2}+\bar{\xi}^{ 2})},\] \[\begin{pmatrix}p_{L}+p_{R}\\ p_{L}-p_{R}\end{pmatrix}=\sqrt{2}(\Lambda\,\vec{\ell}+\vec{c}),\quad\vec{\ell} \in\mathbb{Z}^{2n},\quad\vec{c}\in\mathcal{C}\subset\mathsf{G}_{\Lambda} \equiv\Lambda^{*}/\Lambda. \tag{20}\] We emphasize that \(\Psi_{c}\) are defined for all \(c\in{\sf G}_{\Lambda}\), and not all of them are even. The _path integral_ of the CFT is then given by \[Z_{BPI}=W_{\cal C}(\Psi), \tag{21}\] where "BPI" stands for bulk path integral. This name is justified in Section 3. \(Z_{BPI}\) is equal to the CFT partition function up to a theory-independent factor, as explained in Appendix A; see also [49]. The functions \(\Psi_{c}\) change covariantly under modular transformations \[\Psi_{c}(\tau+1,\xi,\bar{\xi}) = \Psi_{c}(\tau,\xi,\bar{\xi})e^{\pi ic^{T}\eta c},\] \[\Psi_{c}(-1/\tau,\xi/\tau,\bar{\xi}/\bar{\tau}) = \frac{1}{|{\sf G}_{\Lambda}|^{1/2}}\sum_{c^{\prime}\in{\sf G}_{ \Lambda}}\Psi_{c^{\prime}}(\tau,\xi,\bar{\xi})e^{-2\pi ic^{T}\eta c^{\prime}}. 
\tag{22}\] Modular invariance of \(Z_{BPI}\) follows from (22) and the algebraic properties of \(W_{\cal C}\) due to the evenness and self-duality (MacWilliams identity) of the underlying code. The transformations (22) are defined solely in terms of the code and therefore can be defined already at the level of the formal variables \(X_{c}\). The same functions \(\Psi_{c}\), with \(\xi=\bar{\xi}=0\), have been discussed in [12], where they appeared in a different context - as the partition functions of non-modular-invariant CFTs. There, an ensemble of such CFTs generated by the action of \(O(n,n,\mathbb{R})\) on a given \(\Lambda\) was discussed, together with its holographic interpretation. The focus in this paper is different: we sum over \(\Psi_{c}\) for all \(c\) belonging to a suitable even self-dual code such that the resulting partition function corresponds to a modular-invariant Narain CFT. We would like to emphasize that the action of \(O(n,n,\mathbb{R})\) on \(\Lambda\) does not affect the code, its enumerator polynomial, nor the transformation laws (22). It changes the "embedding" that maps the codes associated with a given \(\Lambda\) into the space of Narain CFTs. Explicitly, this means we define \(\Psi_{c}\) exactly as in (20), but can introduce an arbitrary \({\cal O}\in O(n,n,\mathbb{R})\), \[\begin{pmatrix}p_{L}+p_{R}\\ p_{L}-p_{R}\end{pmatrix}={\cal O}\sqrt{2}(\Lambda\,\vec{\ell}+\vec{c}),\quad \vec{\ell}\in\mathbb{Z}^{2n},\quad\vec{c}\in{\cal C}\subset{\sf G}_{\Lambda} \equiv\Lambda^{*}/\Lambda, \tag{23}\] where the change of notation can be absorbed into the definition of \(\Lambda\). Choosing different "embeddings" \({\cal O}\) will change code theories, but the relation (21) between \(Z\) and \(W_{\cal C}\) will remain the same. Thus, starting from any \(\Lambda\), e.g. as given by (15) and (13) with \(r=1\), and applying an appropriate \({\cal O}\), we can represent _any_ Narain lattice as a code lattice \(\Lambda_{\cal C}\) associated with _any_ \({\cal C}\) over any alphabet. This, first of all, makes obsolete the notion that only certain Narain CFTs are associated with codes: any Narain theory can be thought of as a code CFT associated with any even self-dual code of any type, i.e. with any \(\mathsf{G}\), \(\mathcal{C}\subset\mathsf{G}^{n}\), or more generally with any \(\mathsf{G}_{\Lambda}\supset\mathcal{C}\). One can even associate several arbitrary Narain CFTs with several codes simultaneously, by making use of the \(n(2n-1)\) parameters of \(O(n,n,\mathbb{R})\). Yet the notion of a code CFT ensemble is still relevant, since as \(n\) increases there are generally many more codes of a given type, see e.g. (2), than the number of adjustable parameters. In the case of \(n=1\) codes with prime \(k=p\) discussed above, the construction based on (13) gives two possible codes; the corresponding CFTs are compact scalar theories with radii \(R=r\sqrt{2p}\) and \(R=r\sqrt{2/p}\), respectively. Obviously, by taking different values of \(r\), each code covers the full space of \(c=1\) Narain CFTs. Another way to think about the relation of codes to CFTs is that codes provide a simple tool to represent the modular invariant partition function \(Z(\tau)\) of any given Narain theory as a sum of "codeword blocks" \(\Psi_{c}\) transforming in a particular representation of the modular group specified by \(\mathsf{G}\) (or more generally \(\mathsf{G}_{\Lambda}\)) equipped with the scalar product.
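As an elementary illustration of this statement (a sketch of ours, for \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) codes with \(n=1\); the particular code and the random values are our choices), one can realize the \(T\) and \(S\) actions directly on the formal variables \(X_{ab}\) and check numerically that the enumerator polynomial of an even self-dual code is invariant, which is the code-level content of modular invariance of \(Z_{BPI}\):

```python
import numpy as np

p = 5
code = [(a, 0) for a in range(p)]            # even self-dual: a*b = 0 mod p for every codeword

rng = np.random.default_rng(0)
X = rng.normal(size=(p, p)) + 1j * rng.normal(size=(p, p))   # arbitrary values of X_{ab}

def W(X):                                    # enumerator polynomial evaluated on X
    return sum(X[a, b] for (a, b) in code)

def T(X):                                    # T: X_{ab} -> X_{ab} exp(2 pi i a b / p)
    a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
    return X * np.exp(2j * np.pi * a * b / p)

def S(X):                                    # S: discrete Fourier transform, phase exp(-2 pi i (a b' + a' b)/p)
    out = np.zeros_like(X)
    aprime = np.arange(p)[:, None]           # axis 0 of X
    bprime = np.arange(p)[None, :]           # axis 1 of X
    for a in range(p):
        for b in range(p):
            out[a, b] = (X * np.exp(-2j * np.pi * (a * bprime + aprime * b) / p)).sum() / p
    return out

print(abs(W(T(X)) - W(X)), abs(W(S(X)) - W(X)))   # both vanish up to rounding
```

Replacing the code by a subgroup that is not even self-dual spoils the second equality, as it should.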
At a more technical level, the code - a collection of individual codewords - provides a division of a code-based Narain lattice into subsets \(S_{c}\). The sum over such a subset is \(\Psi_{c}\), which exhibits modular properties (22). Since all Narain lattices are related by the orthogonal group \(O(n,n,\mathbb{R})\), any code can be used to decompose any Narain lattice into subsets, such that the partial sums over the subsets will form a code-specified representation of the modular group. ## 3 \((U(1)\times U(1))^{n}\) Chern-Simons theories on a solid torus ### A review of Abelian Chern-Simons theories on handlebodies In this paper we discuss Abelian Chern-Simons theories on handlebodies, starting from a single Abelian factor, and then generalizing to many. Handlebodies are smooth 3-manifolds \(\mathcal{M}\) whose boundary \(\partial\mathcal{M}\) is a genus-\(g\) Riemann surface \(\Sigma\). Topologically, they are characterized by the set of one-cycles of \(\Sigma\) that are contractible in \(\mathcal{M}\), which form a Lagrangian sublattice of \(H^{1}(\mathcal{M},\mathbb{Z})\) with respect to the intersection form. We will focus on the \(g=1\) case, for which two important examples of handlebodies are thermal global anti-de Sitter space and (related to it by a modular transformation of the torus) the BTZ black hole. Since Chern-Simons theories are topological, the bulk metric of \(\mathcal{M}\) will not play any role, but its topology and the metric on the boundary \(\partial\mathcal{M}\) are important. We begin with a \(U(1)_{k}\) theory, whose Euclidean action is \[S=\frac{ik}{4\pi}\int_{\mathcal{M}}A\wedge dA \tag{14}\] for integer \(k\). A natural way to study this theory on a handlebody is using radial quantization, where we view the handlebody \({\cal M}\) as a fibration of the Riemann surface \(\Sigma\) over a "radial" direction (which can be viewed as Euclidean time) running from the interior (where the Riemann surface shrinks to zero volume) to the boundary. In the above examples of asymptotically \(AdS_{3}\) spaces, the radial direction coincides with the usual radial coordinate of \(AdS_{3}\). From this point of view, the quantum theory has a Hilbert space which is that of Chern-Simons theory on the Riemann surface, and the path integral over the handlebody evaluates Euclidean time evolution starting from some initial state \(|\Psi_{\rm interior}\rangle\) determined by boundary conditions in the interior, to some final state \(|\Psi_{\rm boundary}\rangle\) corresponding to the boundary conditions on \(\partial{\cal M}\). This Hilbert space contains \(|k|^{g}\) states of zero energy, and any given state is a linear combination of these. The Euclidean time evolution is trivial, and the path integral on \({\cal M}\) is simply the overlap between the initial and final states inside this finite-dimensional Hilbert space, \(\langle\Psi_{\rm boundary}|\Psi_{\rm interior}\rangle\). Note that for \(|k|=1\) the Hilbert space is one-dimensional, so all handlebodies give rise to the same interior wavefunction. In this radial quantization picture the two "spatial" components of \(A\) along the Riemann surface are canonically conjugate variables, so they cannot both be diagonalized at the same time. One can choose to write the wavefunctions as functions of one or the other of these variables. 
More precisely, we will express the wavefunctions in terms of holonomies of the gauge fields along \(g\) nonintersecting one-cycles of the Riemann surface \(\oint_{\gamma}A\); \(W_{\gamma}=\exp(i\oint_{\gamma}A)\) is gauge-invariant, so the holonomies can be viewed as gauge-invariant coordinates up to shifts by \(2\pi\) (which arise from large gauge transformations that preserve the wavefunction). The holonomies for a basis of dual cycles of \(\Sigma\) form a set of canonically conjugate variables. For any cycle \(\gamma\) that shrinks in the interior of the handlebody, the interior wavefunction must obey \(W_{\gamma}|\Psi_{\rm interior}\rangle=|\Psi_{\rm interior}\rangle\) (while the Wilson lines on the conjugate cycles are completely smeared). In the presence of a boundary, having a consistent variational principle for the action (10) requires that \[\int_{\partial{\cal M}}A\wedge\delta A=0 \tag{11}\] (involving both components of the gauge field along the boundary, and their variations). Equation (11) can be satisfied by setting one of the components of the gauge field to zero at the boundary, or by setting to zero a complex combination of the two components, \(A_{z}\) or \(A_{\bar{z}}\), defined using an appropriate complex structure on \(\partial{\cal M}\). Setting a field to zero at the boundary automatically sets its variation to be zero. In order to obtain more interesting possibilities for boundary conditions, one can add extra terms on the boundary. With an appropriate choice of a boundary term quadratic in \(A\), one can cancel the terms in (14) that involve a given component of \(\delta A\) (either a spatial component or a complex combination), and then boundary conditions that set the other component of \(A\) to any fixed value are also allowed. In particular, if (with an appropriate choice of boundary terms) \(A_{\bar{z}}\) is frozen to a specific value at the boundary while \(A_{z}\) on the boundary is allowed to fluctuate, then the Chern-Simons theory behaves as a chiral block of a 2d \(U(1)_{k}\) CFT, with \(A_{\bar{z}}\) interpreted as a source for a chiral \(U(1)\) current \(J(z)\) at level \(k\)[46]. ### The wavefunction of \(U(1)_{k}\) theory on a torus As a warm-up example we construct the wavefunctions of level-\(k\)\(U(1)\) Chern-Simons theory on a torus, following the classic work of Bos and Nair [47]. Additional technical details can be found in Appendix B. We consider the CS theory (13) on a three-dimensional manifold \(\mathcal{M}\) with boundary \(\partial\mathcal{M}\), which in our case will be a torus with modular parameter \(\tau\). We parametrize the boundary torus by the coordinate \(z\), with identifications \(z\sim z+1,z+\tau\). We choose a gauge where the radial component of the gauge field vanishes, and its equation of motion imposes \(F_{z\bar{z}}=0\). We can then further choose a gauge where the spatial components of the gauge fields, \(A_{z}\) and \(A_{\bar{z}}\), are constant on the torus. Following [47; 48] we will consider a holomorphic representation of the wavefunction on the torus. This representation arises naturally if we deform the action (13) by adding the boundary term \[S^{\prime}=S-\frac{k}{2\pi}\int_{\partial M}d^{2}z|A|^{2},\qquad|A|^{2}\equiv A _{z}A_{\bar{z}},\qquad k>0, \tag{15}\] so that the equation of motion \(\delta S^{\prime}/\delta A_{z}=0\) is trivially satisfied at the boundary. 
Then the path integral can be evaluated with boundary conditions of fixed \(A_{\bar{z}}\) and arbitrary \(A_{z}\).3 The full path integral on the handlebody, including the boundary term, with a fixed value of \(A_{\bar{z}}\) at the boundary (which is equivalent to the overlap with a wavefunction that is a delta function imposing the boundary value of \(A_{\bar{z}}\)) is then Footnote 3: Adding the term (15) with the opposite sign, which is natural for \(k<0\), allows one to fix \(A_{z}\) instead. \[\Psi_{\text{interior}}(A_{\bar{z}})=\int\limits_{A_{\bar{z}}|_{\partial \mathcal{M}\text{ fixed}}}\mathcal{D}A\ e^{-S^{\prime}}. \tag{16}\] This is a holomorphic function of \(A_{\bar{z}}\). Because of the extra factor in the path integral, the overlap between two wavefunctions in the holomorphic representation is given by \[\langle\Psi_{1}|\Psi_{2}\rangle=\int d^{2}A_{\bar{z}}\,(\Psi_{1}(A_{\bar{z}}) )^{*}\Psi_{2}(A_{\bar{z}})e^{-\frac{k}{\pi}\int d^{2}z|A_{\bar{z}}|^{2}}. \tag{17}\] This expression is schematic, since as we will discuss below one needs to remove degeneracies due to large gauge transformations; see Appendix B for details. The extra exponential factor in the overlap (3.5) can also be understood in the following way. Understood as quantum operators in radial quantization, the gauge field components on the torus with action (3.1) obey the commutation relation \([A_{z},A_{\bar{z}}]=\frac{\pi}{k\tau_{2}}\),4 so if we choose the wavefunctions to be functions of only \(A_{\bar{z}}\), \(A_{z}\) acts on them by \(\frac{\pi}{k\tau_{2}}\frac{\partial}{\partial A_{\bar{z}}}\). If we insert \(A_{\bar{z}}\) into the overlap (3.5), then on one hand it can act on \(\Psi_{2}(A_{\bar{z}})\) by just multiplying it by \(A_{\bar{z}}\), but on the other hand by integration by parts it can act on \((\Psi_{1}(A_{\bar{z}}))^{*}\), which is a function of \(A_{z}\), as \(\frac{\pi}{k\tau_{2}}\frac{\partial}{\partial A_{z}}\), as expected from the canonical commutation relations. Footnote 4: Note that this is not the same commutation relation that one obtains by starting at high energies in a Maxwell-Chern-Simons theory [50], for which the phase space is labelled by \(A_{z}\), \(A_{\bar{z}}\) and their (independent) conjugate momenta. Here we describe wavefunctions on a different phase space, which is labeled only by \(A_{z}\) and its canonical conjugate \(A_{\bar{z}}\). See also [51; 52]. We will parameterize the value of \(A_{\bar{z}}\) at the boundary by \[A_{\bar{z}}=\frac{i\pi}{\tau_{2}}\xi, \tag{3.6}\] where \(\xi\) is a complex number. As described above, we can write the wavefunctions on the torus, in particular \(\Psi_{\rm interior}\), as holomorphic functions of \(\xi\). The normalization in (3.6) has been chosen so that large gauge transformations in the bulk \(A\to A+\omega\), which are characterized by integer winding numbers \(n,m\) around the two basis cycles of the torus and preserve the gauge of \(A\) being constant on the torus, shift \(\xi\) by \[\xi\to\xi+n+m\tau. \tag{3.7}\] Any holomorphic wavefunction of \(A_{\bar{z}}\) gives a ground state of the Hamiltonian, so the only constraint on the wavefunctions comes from their required covariance under large gauge transformations. 
Under these transformations, the interior wavefunction \(\Psi_{\rm interior}\) should change as follows \[\Psi_{\rm interior}\to\Psi_{\rm interior}\,e^{\frac{ik}{4\pi}\int\limits_{ \partial{\cal M}}\omega\wedge A}e^{\frac{k\tau_{2}}{2\pi}(A_{z}\omega_{\bar{z} }+A_{\bar{z}}\omega_{z}+|\omega_{z}|^{2})}e^{i\varphi(\omega)}, \tag{3.8}\] where we have introduced an additional cocycle \(\varphi\) to assure associativity of large gauge transformations \(\Psi_{\rm interior}(A+(\omega+\omega^{\prime}))=\Psi_{\rm interior}((A+ \omega)+\omega^{\prime})\). This condition requires \[\varphi(\omega+\omega^{\prime})=\varphi(\omega)+\varphi(\omega^{\prime})-\frac {k}{4\pi}\int\limits_{\partial{\cal M}}\omega\wedge\omega^{\prime}, \tag{3.9}\] understood mod \(2\pi\). Note that the \(A_{z}\)-dependent terms in (3.8) cancel, consistent with \(\Psi\) being holomorphic in \(A_{\bar{z}}\). Written explicitly in terms of \(\xi\), see Appendix B for details, \[\Psi(\xi+n+m\tau)=\Psi(\xi)\,e^{\frac{k\pi}{\tau_{2}}(n+m\bar{\tau})\xi+\frac{k\pi }{2\tau_{2}}|n+m\tau|^{2}+i\varphi},\quad\varphi=\pi knm+n\phi_{1}+m\phi_{2}, \tag{3.10}\] which is consistent with the combination \(|\Psi_{\rm interior}|^{2}e^{-\frac{k\pi}{\tau_{2}}|\xi|^{2}}\) being invariant under large gauge transformations (3.8), as is expected from (3.5). For even \(k\) the CS theory does not require a spin structure, and we have \(\phi_{1}=\phi_{2}=0\). For odd \(k\) the definition of the theory requires a spin structure, and on the torus there are four possible spin structures, which give rise to the options \(\phi_{1,2}=0,\pi\). This statement can be justified by considering transformations of \(\Psi\) under modular transformations of \(\tau\). For any choice of spin structure there are \(k\) distinct solutions for \(\Psi_{r}(\xi)\) (labeled by \(r=0,\cdots,k-1\)) since the space of level-\(k\)\(U(1)\) Chern-Simons wavefunctions on a torus is \(k\)-dimensional. They can be written explicitly as \[\Psi_{r}(\xi) = \frac{1}{\eta(\tau)}\sum_{n}e^{i\pi\tau p^{2}+2\pi ipu+\frac{\pi u ^{2}}{2\tau_{2}}+\frac{\xi(\phi_{2}-\tau\phi_{1})}{2\tau_{2}}-\frac{|\phi_{2} -\tau\phi_{1}|^{2}}{8\pi k\tau_{2}}}, \tag{3.11}\] \[p = \sqrt{k}\left(n+\frac{r}{k}\right),\qquad u=\sqrt{k}\left(\xi+ \frac{\tau\phi_{1}-\phi_{2}}{2\pi k}\right).\] Here the appearance of \(\eta(\tau)\) in the denominator is due to small (topologically trivial) fluctuations of the gauge field in the bulk [53]. It can be checked straightforwardly that \(\Psi_{r}\) satisfies (3.10) and is canonically normalized, see Appendix B. In the holomorphic representation, the Wilson loop operator \(W(p,q)=\exp{(i\oint_{p,q}A)}\) defined along the cycle \(p+q\tau\) acts on \(\Psi_{r}(\xi)\) as follows \[\oint_{p,q}A = (p+q\tau)\frac{-i}{k}\frac{\partial}{\partial\xi}+(p+q\bar{\tau} )\frac{i\pi}{\tau_{2}}\xi, \tag{3.12}\] \[W(p,q)\,\Psi_{r}(\xi) = e^{i(p\phi_{1}+q\phi_{2})/k+2\pi ipr/k+i\pi pq/k}\Psi_{r+q}(\xi). \tag{3.13}\] Note that the spin structure with \(\phi_{1}=\phi_{2}=\pi\) is one where the spinors are periodic along both basic cycles of the torus; this odd spin structure is modular invariant by itself, but it does not allow any of the cycles to shrink in the interior (so it does not appear for handlebodies). This is consistent with the fact that for \(k=1\) with this choice, the unique wavefunction \(\Psi_{0}(\xi)\) has eigenvalues \(W(1,0)=W(0,1)=-1\). 
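The finite-dimensional action (3.13) is just the clock-and-shift (Heisenberg) algebra on the \(k\) wavefunctions; a small sketch of ours (with the trivial spin structure \(\phi_{1}=\phi_{2}=0\), and reading the index \(r+q\) mod \(k\)) makes this explicit:

```python
import numpy as np

k = 5
phi1 = phi2 = 0.0                    # trivial spin structure

def W(p, q):
    # matrix of the Wilson loop W(p, q) in the basis Psi_r, r = 0..k-1, following (3.13)
    M = np.zeros((k, k), dtype=complex)
    for r in range(k):
        phase = np.exp(1j * (p * phi1 + q * phi2) / k
                       + 2j * np.pi * p * r / k + 1j * np.pi * p * q / k)
        M[(r + q) % k, r] = phase    # W(p, q) Psi_r = phase * Psi_{r+q}
    return M

clock, shift = W(1, 0), W(0, 1)
omega = np.exp(2j * np.pi / k)
print(np.allclose(clock @ shift, omega * shift @ clock))             # True
print(np.allclose(W(1, 1), np.exp(1j * np.pi / k) * shift @ clock))  # True
```

The first check says that Wilson loops along the two basic cycles commute only up to the root of unity \(e^{2\pi i/k}\); the second shows how the \(e^{i\pi pq/k}\) cocycle phase in (3.13) accounts for composing the two elementary loops into \(W(1,1)\).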
### Wavefunction of the \((U(1)\times U(1))_{k}\) theory Our next step is to study the "AB" theory with the action \[S=\frac{ik}{4\pi}\int(A\wedge dB+B\wedge dA), \tag{3.14}\] with invariance under all gauge transformations of \(A\) and \(B\), that include the large gauge transformations \[A\to A+\omega_{A},\quad B\to B+\omega_{B}. \tag{3.15}\] This defines the theory in the bulk, which we will denote by \((U(1)\times U(1))_{k}\) to emphasize that this is not a direct product of two \(U(1)_{k}\) Chern-Simons theories. As above, to describe the theory on a handlebody we also need to choose boundary terms and boundary conditions. Unlike the case of a single \(U(1)\) field, the \(U(1)\times U(1)\) theory has a continuous family of choices of which variables can be kept fixed at the boundary. For any \(r\) we can define gauge fields \(A_{\pm}=(A/r\pm B\,r)\sqrt{k/2}\) such that the action becomes \[S=\frac{i}{4\pi}\int(A_{+}\wedge dA_{+}-A_{-}\wedge dA_{-}). \tag{3.16}\] This now becomes the action of two decoupled \(U(1)\) theories at levels \(1\) and \((-1)\), but the dynamical fields are connected at the level of the large gauge transformations (3.15), so the theory is not equivalent to a product of two decoupled theories. In any case, since \(A_{+}\) has positive level and \(A_{-}\) has negative level, we can now choose boundary terms and boundary conditions as in the previous subsection such that we fix \((A_{+})_{\bar{z}}\) and \((A_{-})_{z}\) at the boundary to arbitrary values. Analogously to (3.6), we introduce two independent holomorphic coordinates \[\xi=\frac{\tau_{2}}{i\pi}(A_{+})_{\bar{z}},\quad\bar{\xi}=-\frac{\tau_{2}}{i \pi}(A_{-})_{z}, \tag{3.17}\] and deform the action by the boundary term \[S^{\prime}=S-\frac{1}{2\pi}\int_{\partial M}d^{2}z\left(|A_{+}|^{2}+|A_{-}|^{2 }\right), \tag{3.18}\] such that the equations of motion \(\delta S/\delta(A_{+})_{z}=\delta S/\delta(A_{-})_{\bar{z}}=0\) are trivially satisfied at the boundary. The wavefunction \(\Psi(\xi,\bar{\xi})\) associated with this action is holomorphic in \(\xi\), and separately in \(\bar{\xi}\). Next, we demand that under large gauge transformations with parameters \((n,m)\) for \(A\), and \((p,q)\) for \(B\), which take \[\xi \rightarrow \xi+\delta\xi,\quad\delta\xi=\sqrt{\frac{k}{2}}\left((n+m\tau)r^ {-1}+(p+q\tau)r\right), \tag{3.19}\] \[\bar{\xi} \rightarrow \bar{\xi}+\delta\bar{\xi},\quad\delta\bar{\xi}=\sqrt{\frac{k}{2} }\left((n+m\bar{\tau})r^{-1}-(p+q\bar{\tau})r\right),\] \(\Psi\) should change by \[\Psi\rightarrow\Psi\,e^{\frac{\pi}{2\tau_{2}}(2\xi\delta\xi^{*}+|\delta\xi|^ {2}+2\bar{\xi}\delta\bar{\xi}^{*}+|\delta\bar{\xi}|^{2})+i\pi k(mp-nq)}. \tag{3.20}\] In this case the cocycle factor is simply \(\varphi=\pi k(mp-nq)\), so there is no need to introduce nontrivial phases \(\phi_{i}\). There are \(k^{2}\) wavefunctions given explicitly by \[\Psi_{a,b}(\xi,\bar{\xi},\tau) = \frac{1}{|\eta(\tau)|^{2}}\sum_{n,m}e^{i\pi\tau p_{L}^{2}-i\pi\bar{ \tau}p_{R}^{2}+2\pi i(p_{L}\xi-p_{R}\bar{\xi})+\frac{\pi}{2\tau_{2}}(\xi^{2}+ \bar{\xi}^{2})},\quad 0\leq a,b<k, \tag{3.21}\] \[p_{L,R} = \sqrt{\frac{k}{2}}\left((n+a/k)r^{-1}\pm(m+b/k)r\right),\quad n,m \in\mathbb{Z}.\] We would like to emphasize that (3.21) for different \(r\) are different representations of the same \(k^{2}\) bulk wavefunctions, expressed as functions of different variables (corresponding to the specific choice of boundary conditions we made). 
The result (3.21) resembles the partition function of a free scalar CFT, and we will discuss the precise relation in the next section. Wilson loops of \(A\) along the \(n+m\tau\) cycle, \(W_{A}(n,m)=\exp\left(i\oint_{n,m}A\right)\), with a similar definition for Wilson loops of \(B\), act on (3.21) as follows \[W_{A}(n,m)\Psi_{a,b}=\Psi_{a,b+m}\,e^{2\pi ian/k}, \tag{3.22}\] \[W_{B}(n,m)\Psi_{a,b}=\Psi_{a+m,b}\,e^{2\pi ibn/k}. \tag{3.23}\] In particular, Wilson lines of both \(A\) and \(B\) along the 1 cycle act on \(\Psi_{0,0}\) trivially, so \(\Psi_{0,0}\) is a consistent wavefunction on thermal AdS - the handlebody where this cycle shrinks in the interior. ### General case Before we proceed with the general \((U(1)\times U(1))^{n}\) theory we would like to revisit the \(U(1)\times U(1)\) case, but instead of starting with the "AB" theory (3.14), we can start with (3.16) and (3.17) and just impose large gauge transformations generalizing (3.19) \[\delta\left(\begin{array}{c}\xi+\bar{\xi}^{*}\\ \xi-\bar{\xi}^{*}\end{array}\right)=\sqrt{2}\Lambda(\vec{n}+\vec{m}\tau),\quad \vec{n},\vec{m}\in\mathbb{Z}^{2}. \tag{3.24}\] Here \(\Lambda\) defines an even self-orthogonal lattice in \(\mathbb{R}^{1,1}\) as in (2.12). The holomorphic functions of \(\xi\) and \(\bar{\xi}\) \[\Psi_{\bar{c}}(\xi,\bar{\xi},\tau)=\frac{1}{|\eta(\tau)|^{2}}\sum _{\vec{\ell}}e^{i\pi\tau p_{L}^{2}-i\pi\bar{\tau}p_{R}^{2}+2\pi i(p_{L}\xi-p_{ R}\bar{\xi})+\frac{\pi}{2\tau_{2}}(\xi^{2}+\bar{\xi}^{2})}, \tag{3.25}\] \[\left(\begin{array}{c}p_{L}+p_{R}\\ p_{L}-p_{R}\end{array}\right)=\sqrt{2}\Lambda(\vec{\ell}+g_{\Lambda}^{-1}\vec{ c}),\quad\vec{\ell}\in\mathbb{Z}^{2}\] are parametrized by elements of the Abelian group \(\vec{c}\in\mathbb{Z}^{2}/g_{\Lambda}=\Lambda^{*}/\Lambda=\mathsf{G}\), and under large gauge transformations (3.24) they change as follows : \[\Psi_{\vec{c}}(\xi+\delta\xi,\bar{\xi}+\delta\bar{\xi})=\Psi_{\vec{c}}(\xi,\bar{ \xi})\,e^{\frac{\pi}{2\tau_{2}}(2\xi\delta\xi^{*}+|\delta\xi|^{2}+2\bar{\xi} \delta\bar{\xi}^{*}+|\delta\bar{\xi}|^{2})}\,e^{i\pi n^{T}g_{\Lambda}m}. \tag{3.26}\] The generalization to the case of \((U(1)\times U(1))^{n}\) is now straightforward. The main ingredient is the even self-orthogonal lattice \(\Lambda\in\mathbb{R}^{n,n}\), which defines large gauge transformations of \(U(1)^{n}\)-valued gauge fields \(A_{\pm}\) as follows, \[\left(\begin{array}{c}\xi+\bar{\xi}^{*}\\ \xi-\bar{\xi}^{*}\end{array}\right)\to\left(\begin{array}{c}\xi+\bar{\xi}^{ *}\\ \xi-\bar{\xi}^{*}\end{array}\right)+\sqrt{2}\Lambda(\vec{n}+\vec{m}\tau),\quad \vec{n},\vec{m}\in\mathbb{Z}^{2n}, \tag{3.27}\] while the relation between \(\xi,\bar{\xi}\) and \(A_{\pm}\) is as in (3.17), generalized to vector-valued quantities. The resulting wavefunction is parametrized by an element of the Abelian group \(c\in\mathsf{G}_{\Lambda}=\Lambda^{*}/\Lambda\), \[\Psi_{c}=\frac{\Theta_{c}}{|\eta(\tau)|^{2n}},\] \[\Theta_{c}(\xi,\bar{\xi},\tau)=\sum_{\vec{\ell}}e^{i\pi\tau p_{L}^ {2}-i\pi\bar{\tau}p_{R}^{2}+2\pi i(p_{L}\cdot\xi-p_{R}\bar{\xi})+\frac{\pi}{2 \tau_{2}}(\xi^{2}+\bar{\xi}^{2})}, \tag{3.28}\] \[\left(\begin{array}{c}p_{L}+p_{R}\\ p_{L}-p_{R}\end{array}\right)=\sqrt{2}\Lambda(\vec{\ell}+g_{\Lambda}^{-1}\vec{ c}),\quad\vec{\ell}\in\mathbb{Z}^{2n},\quad g_{\Lambda}=\Lambda^{T}\eta\Lambda.\] The wavefunction \(\Psi_{c}\) coincides exactly with the "codeword blocks" (2.20). We will explore the holographic interpretation of this result in section 4. 
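The Wilson loop action (3.22), (3.23) determines which (combinations of) wavefunctions can be produced by the path integral on a solid torus with a prescribed shrinkable cycle, a point used repeatedly in the next section. The following sketch (ours, for a single \(U(1)\times U(1)\) factor; we write the cycle as \(w_{1}+w_{2}\tau\) to avoid clashing with the code length \(n\)) finds the invariant states directly:

```python
import numpy as np

k = 3
idx = lambda a, b: a * k + b        # basis label for Psi_{a,b}

def WA(w1, w2):                     # (3.22): W_A(w1,w2) Psi_{a,b} = exp(2 pi i a w1/k) Psi_{a, b+w2}
    M = np.zeros((k * k, k * k), dtype=complex)
    for a in range(k):
        for b in range(k):
            M[idx(a, (b + w2) % k), idx(a, b)] = np.exp(2j * np.pi * a * w1 / k)
    return M

def WB(w1, w2):                     # (3.23): W_B(w1,w2) Psi_{a,b} = exp(2 pi i b w1/k) Psi_{a+w2, b}
    M = np.zeros((k * k, k * k), dtype=complex)
    for a in range(k):
        for b in range(k):
            M[idx((a + w2) % k, b), idx(a, b)] = np.exp(2j * np.pi * b * w1 / k)
    return M

def invariant_states(w1, w2):
    # basis of simultaneous eigenvalue-1 eigenvectors of both Wilson loops along the shrinkable cycle
    P = np.vstack([WA(w1, w2) - np.eye(k * k), WB(w1, w2) - np.eye(k * k)])
    _, sing, Vh = np.linalg.svd(P)
    return Vh[np.isclose(sing, 0)].conj()

print(invariant_states(1, 0).shape)   # (1, 9): only Psi_{0,0} survives (thermal AdS)
print(invariant_states(0, 1).shape)   # (1, 9): the uniform sum over all Psi_{a,b}
```

In both cases the invariant subspace is one-dimensional, consistent with the discussion of handlebody path integrals in section 4: a single solid-torus geometry produces a single combination of the \(\Psi_{a,b}\).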
## 4 Holographic description of the ensemble of code CFTs ### Level \(k=1\) CS theories and conventional holographic correspondence As discussed above, for \(k=1\) the \(U(1)\times U(1)\) CS theory has a unique wavefunction, which can be written as \(\Psi_{00}(\tau,\xi,\bar{\xi})\). It is given by the CS path integral on any handlebody \(\mathcal{M}\) with the appropriate boundary conditions on \(\partial M=\Sigma\) of genus one. Our starting point is the observation that this unique wavefunction (3.21) with \(k=1\) is the same as the _path integral_ of the two-dimensional CFT, the compact scalar of radius \(R=\sqrt{2}r\), coupled to an external complex gauge field \(\mathcal{A}\) parametrized by \(\xi,\bar{\xi}\) \[Z_{BPI}(\tau,\xi,\bar{\xi})=\Psi_{00}(\tau,\xi,\bar{\xi}). \tag{4.1}\] We discuss the compact scalar in detail in Appendix A. From the bulk point of view \(\xi,\bar{\xi}\) parametrize certain components of the fields \(A_{+},A_{-}\) on the boundary of \(\mathcal{M}\), and the holographic dictionary relates them to sources in the CFT by \[\frac{i\pi}{\tau_{2}}\xi=\mathcal{A}_{\bar{z}}=(A_{+})_{\bar{z}}, \qquad-\frac{i\pi}{\tau_{2}}\bar{\xi}=\mathcal{A}_{z}=(A_{-})_{z}, \tag{4.2}\] such that \(Z_{BPI}\) is the CFT path integral with these sources (as discussed in Appendix A). In the CFT the complex field \(\mathcal{A}\) is a combination of two real fields \(A\) and \(B\) coupled to the two conserved \(U(1)\) currents, see (A.19), and we have chosen notation such that the CFT source fields \(A,B\) are exactly the boundary values of the bulk gauge fields \(A,B\) introduced in subsection 3.3. At the same time we emphasize that the Chern-Simons theory is quantized with the boundary condition that fixes the fields \((A_{+})_{\bar{z}},(A_{-})_{z}\) at the boundary, while \((A_{+})_{z},(A_{-})_{\bar{z}}\) vary freely. This condition looks cumbersome when expressed in terms of \(A\) and \(B\). While preserving the same boundary conditions, we can add an additional boundary term \(\frac{\pi\xi\bar{\xi}}{\tau_{2}}\) to the bulk action (3.18) and obtain, after an integration by parts, \[S_{1}=\frac{i}{2\pi}\int_{\mathcal{M}}B\wedge dA-\frac{r^{2}}{ \pi}\int_{\partial\mathcal{M}}d^{2}z\,|B|^{2}. \tag{4.3}\] The new action still satisfies \(\delta S_{1}/\delta(A_{+})_{z}=\delta S_{1}/\delta(A_{-})_{\bar{z}}=0\) at the boundary, because the added boundary term does not include fluctuating fields. It leads to a holomorphic bulk wavefunction which equals the first path integral introduced in Appendix A \[Z_{\text{PI}}(\tau,\xi,\bar{\xi})=\int\mathcal{D}A\,\mathcal{D}B \,e^{-S_{1}}. \tag{4.4}\] Note that unlike the bulk path integral discussed in the previous section (related to it by \(Z_{PI}(\tau,\xi,\bar{\xi})=Z_{BPI}(\tau,\xi,\bar{\xi})e^{-\frac{\pi}{\tau_{2}} \xi\bar{\xi}}\)), \(Z_{\text{PI}}\) is manifestly invariant under large gauge transformations of \(A\). Similarly, subtracting \(\frac{\pi\xi\bar{\xi}}{\tau_{2}}\) from the bulk action leads to \[S_{2}=\frac{i}{2\pi}\int_{\mathcal{M}}A\wedge dB-\frac{r^{-2}}{ \pi}\int_{\partial\mathcal{M}}d^{2}z\,|A|^{2}, \tag{4.5}\] and \[Z^{\prime}_{\text{PI}}(\tau,\xi,\bar{\xi})=\int\mathcal{D}A\, \mathcal{D}B\,e^{-S_{2}}, \tag{4.6}\] which is manifestly invariant under large gauge transformations of \(B\). To be precise, these bulk theories are actually complexifications of the path integrals considered in Appendix A, as \(\xi,\bar{\xi}\) are treated here as two independent complex variables. 
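As an aside, the identification of \(\Psi_{00}\) with a CFT partition function can be sanity-checked numerically: for \(k=1\) the lattice sum (3.21) at \(\xi=\bar{\xi}=0\) is modular invariant on its own, as it must be if it equals a compact-scalar path integral. The sketch below is ours (the truncations, the sample point \(\tau\), and the value of \(r\) are arbitrary choices):

```python
import mpmath as mp

def eta(tau, N=400):                          # Dedekind eta via its product formula
    q = mp.exp(2j * mp.pi * tau)
    val = mp.mpc(1)
    for n in range(1, N):
        val *= 1 - q**n
    return mp.exp(1j * mp.pi * tau / 12) * val

def Psi00(tau, r=1.3, M=25):
    # k = 1, n = 1 wavefunction (3.21) at xi = xibar = 0, summing over |n|, |m| <= M
    s = mp.mpc(0)
    for n in range(-M, M + 1):
        for m in range(-M, M + 1):
            pL = (n / r + m * r) / mp.sqrt(2)
            pR = (n / r - m * r) / mp.sqrt(2)
            s += mp.exp(1j * mp.pi * (tau * pL**2 - mp.conj(tau) * pR**2))
    return s / abs(eta(tau))**2

tau = mp.mpc(0.31, 0.73)
print(Psi00(tau))          # the three values agree to high accuracy,
print(Psi00(tau + 1))      # illustrating invariance under T
print(Psi00(-1 / tau))     # and under S
```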
To reduce precisely to the CFTs described by the actions (A.7) and (A.14), one would need to impose additional restrictions on the boundary conditions in (4.4) and (4.6), that respectively set \(B=0\) and \(A=0\) at the boundary, by choosing \(\xi^{*}=\pm\bar{\xi}\). It is important to point out that the parameter \(r\) from the bulk point of view is a parameter changing the representation of the Chern-Simons wavefunction \(|\Psi\rangle\). It defines the boundary conditions, but it does not affect the action in the bulk. Hence, for all \(r\) the quantum state \(|\Psi\rangle\) remains the same, which is already clear from the fact that the Hilbert space of level-1 Chern-Simons theory is 1-dimensional. From the boundary point of view, the parameter \(r\) - the radius of the compact circle - changes the CFT. This situation is not conceptually different from more conventional instances of the holographic correspondence, such as gauge/gravity duality, in which the path integral of the same bulk action with different boundary conditions describes different field theories. For example, exactly marginal deformations of a CFT correspond to changing the boundary conditions for massless scalars in anti-de Sitter space. Another even more direct analogy is with the dS/CFT correspondence [54], where the same quantum Hartle-Hawking wavefunction in the bulk is dual to different field theories, depending on the overlap of the unique bulk wavefunction with the wavefunction at the boundary. The generalization of the above considerations to \((U(1)\times U(1))^{n}\) is straightforward. The general level \(k=1\) theory is \((U(1)\times U(1))^{n}\) Chern-Simons theory, quantized with large gauge transformations corresponding to points in an even self-dual lattice \(\Lambda\) (3.27). In this case \(\mathsf{G}_{\Lambda}=\Lambda^{*}/\Lambda\) consists of a single element, and the unique wavefunction (3.28) is identified with the path integral of the Narain theory associated with \(\Lambda\) \[Z_{BPI}(\tau,\xi,\bar{\xi},\Lambda)=\Psi_{\vec{0}}(\tau,\xi,\bar{\xi}). \tag{4.7}\] Similarly to the \(n=1\) case, there are many possible definitions of path integrals, generalizing (4.4) and (4.6); here we use the \(T\)-duality invariant definition of (3.28). One can rewrite the \((U(1)\times U(1))^{n}\) fields \(A_{\pm}\) in terms of the gauge fields \(A\), \(B\) and the lattice \(\Lambda\), which we parametrize by \(\gamma\) and \(B\) as in (2.4) with trivial \(O_{T}\), \[A_{\pm}=\frac{(\gamma\pm\gamma^{*}B)A\pm\gamma^{*}B}{\sqrt{2}}. \tag{4.8}\] In terms of these fields, the action of large gauge transformations are canonical, \(A\to A+\omega_{A}\), \(B\to B+\omega_{B}\), but the boundary conditions in the path integral become \(\Lambda\)-dependent. This description provides the holographic dictionary for a general Narain CFT: the path integral of a Narain theory is equivalent to the path integral of level-1 \((U(1)\times U(1))^{n}\) Chern-Simons theory with boundary conditions (wavefunction representation) specified by \(\Lambda\). The construction above is an explicit realization of the AdS\({}_{3}\)/CFT\({}_{2}\) correspondence for Narain theories in terms of pure Chern-Simons theory in the bulk. The original treatment in [50], recently revisited in [12], was in terms of Maxwell-Chern-Simons theory in the limit of infinite Maxwell coupling. 
We have shown here that the inclusion of the Maxwell term is not necessary, have provided a more explicit treatment by evaluating the path integral on both sides of the duality, and have established a holographic dictionary. Our approach is also related to the recent work [16], which constructs a bulk description for a Narain theory with decomposable \(\Lambda=\Lambda_{L}\oplus\Lambda_{R}\) in terms of level \(k=1\) CS theory, obtained from \(k>1\) Chern-Simons theory by gauging all discrete symmetries. It is important to note that the holographic description above does not include a sum over bulk geometries or topologies. Rather, all handlebodies yield the same wavefunction, obeying (4.7). This is analogous to the case of AdS\({}_{3}\) with \(k=1\) units of NS-NS flux (the "tensionless string") [55; 56], where fluctuations in the bulk on a fixed background geometry are believed to account for the full partition function. Codes, and code ensembles, play a rather trivial role in the holographic description of Narain CFTs in terms of level \(k=1\) Chern-Simons theory. Indeed, the \(k=1\) \(U(1)\times U(1)\) theory is associated with the unique length \(n=1\) code over the alphabet \(\mathbb{Z}_{1}\times\mathbb{Z}_{1}\), consisting of the unique codeword \(c=(0,0)\). The parameter \(r\), the radius of the compact scalar on the CFT side, does not affect the code, but controls the embedding of the code into the space of \(n=1\) Narain CFTs. Similarly, whenever the lattice \(\Lambda\) is self-dual, the group \(\mathsf{G}_{\Lambda}\) is trivial, consisting of a single element, and there is a unique code consisting of a single codeword \(c=\vec{0}\). As we saw above, this trivial code can be mapped to an arbitrary Narain theory by choosing an appropriate embedding. To summarize, we see that the conventional holographic correspondence emerges when the code ensemble consists of only one element, a unique code associated with a given Narain CFT. ### Averaging over Narain CFTs The holographic duality described above maps Narain theories to three-dimensional bulk theories which are non-gravitational Chern-Simons theories living on a fixed spacetime, with no sum over geometries. This is consistent with the fact that in these CFTs the energy-momentum tensor is a product of \(U(1)\) currents, so we do not expect an independent graviton field in the bulk. Motivated by [3; 4], we will now consider an average over a finite ensemble of Narain theories, or over the whole moduli space of Narain theories with some \(c=n\). _A priori_, it is not clear if such an ensemble average would have a simple description in the bulk. The duality between Narain theories and level-1 CS theories on a fixed handlebody provides one way to evaluate it - by averaging over all possible boundary terms and boundary conditions, corresponding to all Narain CFTs. For \(n=1\) this is just an average over the values of \(r\). One way to implement this is to write down the boundary terms as a function of \(\gamma\) and \(B\) using (4.8), and then to make these variables dynamical and to integrate over them with an \(O(n,n,\mathbb{R})\)-invariant measure. These variables live just on the boundary, but since they are constant on \(\partial\mathcal{M}\), integrating over them gives a non-local theory. This non-local theory (on a given handlebody) is, by construction, equivalent to the ensemble average over Narain theories, but this is not very satisfactory, and certainly more complicated than the dual description suggested in [3; 4].
In the rest of this section we explore an alternative way to obtain the ensemble average over Narain theories, which will lead to bulk sums over geometries similar to those of [3; 4], but described by a fully consistent Chern-Simons theory with compact gauge group. "U(1)-gravity" theory will then emerge as a limit. ### Level \(k>1\) CS theory and ensemble averaging Our next step is to consider codes over alphabets with more than one element. As we have seen in full generality in section 3.4, the "codeword blocks" \(\Psi_{c}\) (2.20) appearing in the context of codes have a simple bulk interpretation. They are precisely the same as the wavefunctions of the \((U(1)\times U(1))^{n}\) Chern-Simons theory on a spatial torus. Indeed, the theory quantized with large gauge transformations specified by a lattice \(\Lambda\) has exactly \(G_{\Lambda}=\Lambda^{*}/\Lambda\) independent wavefunctions, in one to one correspondence with the codewords. As was emphasized in section 2, any given code \(\mathcal{C}\) of length \(n\) can be associated with any Narain CFT of central charge \(n\), by choosing an appropriate embedding. As a result we have the following expression for the CFT path integral \[Z_{\mathcal{C}}=W_{\mathcal{C}}(\Psi)=\sum_{c\in\mathcal{C}}\Psi_{c}. \tag{4.9}\] This expression, though suggestive, has no apparent holographic interpretation. Indeed, the sum of wavefunctions on the right-hand side of (4.9) does not allow for a simple interpretation as the bulk path integral evaluated on a simple 3d geometry, or as a sum of such path integrals. This is because in general, path integrals on simple geometries such as solid tori with different shrinking cycles would lead to a subclass of \(\Psi_{c}\) not nearly exhausting all possibilities. This is easiest to see in the case of codes over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) and \((U(1)\times U(1))_{k}^{n}\) Chern-Simons theory. The path integral over the solid torus with a shrinkable \(n+m\tau\) cycle will lead to the unique combination of \(\Psi_{a_{1},b_{1}}\ldots\Psi_{a_{n}b_{n}}\) invariant under (3.22),(3.23) for those values of \(n\) and \(m\). For \(k>1\), combinations of these with integer coefficients can not in general lead to (4.9). Although associating individual CFTs with codes does not lead to a simple holographic interpretation, we note that codes - and hence also the associated CFTs - naturally appear in the context of ensembles. There is always an ensemble of all codes (CFTs) of a particular type (e.g. over a particular alphabet) and of given length (corresponding to CFT central charge \(n\)). It was initially suggested in [19] that such an ensemble of code CFTs should admit a holographic interpretation. A crucial observation building towards such an interpretation was made recently in [34]. There the authors considered an ensemble consisting of all length-\(n\)\(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) codes with the glue matrix \[\mathsf{\Lambda}=\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right) \tag{4.10}\] in the notation of section 2 (the original paper used a different but equivalent description). 
It was conjectured, and then verified explicitly for small \(n\), that the enumerator polynomial, averaged over the ensemble of all such codes, is proportional to the "Poincare series" of all possible modular transformations acting on the auxiliary variable associated with the trivial codeword, \[\overline{W}(\{X\})\equiv\frac{1}{\mathcal{N}}\sum_{\mathcal{C}}W_{\mathcal{ C}}(\{X\})\,\propto\sum_{g\in\Gamma^{*}\backslash SL(2,\mathbb{Z})}g(X_{00}^{n}). \tag{4.11}\] We emphasize that the action of the modular group on the variables \(X_{ab}\), generated by (2.22), along with the equality (4.11), are defined and satisfied at the level of codes, before the map to CFTs and Chern-Simons theory. The variables \(X_{ab}\) provide (for \(k=2\)) a four-dimensional representation of the modular group \(SL(2,\mathbb{Z})\); hence the sum on the right-hand side of (4.11) over \(\Gamma^{*}\backslash SL(2,\mathbb{Z})\), where \(\Gamma^{*}\) is the stabilizer of \(X_{00}\), includes a finite number of terms. Alternatively, one can sum over the whole modular group, with the infinite size of \(\Gamma^{*}\) absorbed into the overall proportionality coefficient. While we leave a systematic justification of (4.11) and its generalizations to future work [57], we point out that the equality between the weighted average over codes and the Poincare sum over the modular group will apply to other code constructions as well, and will extend to higher genus boundaries. To be explicit, in what follows we focus our attention on \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) codes with the glue matrix (2.13) and prime \(k=p\), for which we establish the analogue of (4.11) for arbitrary \(n\). We consider codes of length \(n\), which define an ensemble of size (2.2). For prime \(p\) all codes should be averaged with equal weight. In this case, the average is straightforward - see footnote on page 31. Next we consider the sum over the modular group in (4.11). We introduce \(p^{2}\) variables \(X_{ab}\) forming a representation of the modular group generated by (2.10) and by obtaining \(X^{\prime}\) of (2.6) when taking \(\tau\to-1/\tau\). The full set of \((p^{2})^{n}\) variables \(X_{\vec{a},\vec{b}}\) associated with codes of length \(n\) transform in the tensor product of \(n\) such representations. The explicit action of the \(T\) and \(S\) generators of \(SL(2,\mathbb{Z})\) is, using (2.22), \[T(X_{\vec{a},\vec{b}}) =X_{\vec{a},\vec{b}}\,e^{2\pi i\frac{\vec{a}.\vec{b}}{p}}, \tag{4.12}\] \[S(X_{\vec{a},\vec{b}}) =\frac{1}{p^{n}}\sum_{\vec{a}^{\prime},\vec{b}^{\prime}}X_{\vec{a }^{\prime},\vec{b}^{\prime}}\,e^{-2\pi i\frac{\vec{a}.\vec{b}^{\prime}+\vec{a} ^{\prime}.\vec{b}}{p}}.\] Our goal is to sum over \(\Gamma^{*}\backslash SL(2,\mathbb{Z})\), where \(\Gamma^{*}\) is the stabilizer group of \(X_{\vec{0},\vec{0}}\). The sum can be performed in two steps: we first define a subgroup \(\Gamma\subset SL(2,\mathbb{Z})\) which leaves all \(X_{\vec{a},\vec{b}}\) invariant, and then additionally factorize over the stabilizer of \(X_{\vec{0},\vec{0}}\) within \(\Gamma\backslash SL(2,\mathbb{Z})\). The group \(\Gamma\) in the general case is known to be a congruence subgroup of \(SL(2,\mathbb{Z})\)[58; 12]; for prime \(p\) it is the principal congruence subgroup \(\Gamma=\Gamma(p)\). 
The stabilizer of \(X_{\vec{0},\vec{0}}\) in \(\Gamma\backslash SL(2,\mathbb{Z})\) is the cyclic group \(\mathbb{Z}_{2}\times\mathbb{Z}_{p}\) generated by \(S^{2}\) (which takes \(X_{a,b}\) to \(X_{-a,-b}\)) and by powers of \(T\), so \[\Gamma^{*}\backslash SL(2,\mathbb{Z})=(\mathbb{Z}_{2}\times\mathbb{Z}_{p})\backslash SL(2,\mathbb{Z})/\Gamma(p). \tag{4.13}\] This quotient consists of \((p^{2}-1)/2\) elements, which can be parametrized by integer pairs \((c,d)\sim(-c,-d)\), corresponding to the modular transformation \[g=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in SL(2,\mathbb{Z}_{p})\cong\Gamma_{1}(p)\backslash PSL(2,\mathbb{Z}), \tag{4.14}\] which has the following action on \(X_{\vec{0},\vec{0}}\): \[g(X_{\vec{0},\vec{0}})=\frac{1}{p^{n}}\sum_{\vec{a},\vec{b}}X_{\vec{a},\vec{b}}\,e^{-2\pi i\frac{\vec{a}.\vec{b}}{p}r},\qquad r=d/c\bmod p. \tag{4.15}\] The equation above applies when \(c\neq 0\); otherwise \(g(X_{\vec{0},\vec{0}})=X_{\vec{0},\vec{0}}\). One can readily see that the \((p^{2}-1)/2\) terms in the Poincare sum split into \(p+1\) terms labeled by elements of \(\Gamma_{0}(p)\backslash SL(2,\mathbb{Z})=\{1,ST^{l}\}\) with \(0\leq l<p\), each appearing \((p-1)/2\) times. Combining everything, we find the averaged enumerator polynomial for codes over \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\), \[\overline{W}(X_{\vec{a},\vec{b}})=\frac{1}{\mathcal{N}}\sum_{\mathcal{C}}W_{\mathcal{C}}(X_{\vec{a},\vec{b}})=\frac{\sum_{g\in\Gamma_{0}(p)\backslash SL(2,\mathbb{Z})}g(X_{\vec{0},\vec{0}})}{1+p^{1-n}}=\frac{X_{\vec{0},\vec{0}}+\frac{1}{p^{n}}\sum_{r=0}^{p-1}\sum_{\vec{a},\vec{b}}X_{\vec{a},\vec{b}}\,e^{-2\pi i\frac{\vec{a}.\vec{b}}{p}r}}{1+p^{1-n}}. \tag{4.16}\] Here \(\mathcal{N}\) is given by (2.2), and the coefficient \(1+p^{1-n}\) in the denominator of the right-hand side is chosen such that the coefficient in front of \(X_{\vec{0},\vec{0}}\) associated with the trivial codeword is equal to one. The identity (4.16) for code CFTs acquires a straightforward holographic interpretation which has been envisioned in [34]. We consider \((U(1)\times U(1))_{k}^{n}\) Chern-Simons theory, placed on an arbitrary handlebody geometry, and identify \(\Psi_{\vec{0},\vec{0}}(\tau,\xi,\bar{\xi})\) to be the wavefunction of this theory on thermal AdS, the solid torus with shrinkable \(a\)-cycle. Indeed, Wilson loops of both \(A\) and \(B\) fields over this shrinkable cycle should act on the boundary wavefunction trivially, which singles out \(\Psi_{\vec{0},\vec{0}}\), as follows from (3.22), (3.23). Hence, the sum in (4.16) can be interpreted as a sum over all possible handlebody topologies, or more accurately, as a sum over equivalence classes of topologies yielding the same boundary wavefunction, \[\overline{Z}_{\,BPI}(\tau,\xi,\bar{\xi}) = \frac{1}{1+p^{1-n}}\sum_{g\in\Gamma_{0}(p)\backslash SL(2,\mathbb{Z})}\Psi_{\vec{0},\vec{0}}(g\,\tau,g\,\xi,g\,\bar{\xi})=\] \[\frac{1}{1+p^{1-n}}\left(\Psi_{\vec{0},\vec{0}}(\tau,\xi,\bar{\xi})+p^{-n}\sum_{r=0}^{p-1}\sum_{\vec{a},\vec{b}\in\mathbb{Z}_{p}^{n}}\Psi_{\vec{a},\vec{b}}(\tau,\xi,\bar{\xi})\,e^{-2\pi i\frac{\vec{a}\cdot\vec{b}}{p}r}\right). \tag{4.17}\] This equality between ensemble averaging over code CFTs on the field theory side, and summing over topologies on the bulk side, is preserved under the action of \(O(n,n,\mathbb{R})\), which changes the embedding of the codes in the space of Narain CFTs and the representation of the wavefunction of the dual Chern-Simons theory.
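For \(n=1\) the identity (4.16) can be verified by brute force in a few lines. The sketch below (ours; the enumeration strategy is specific to \(n=1\) and odd prime \(p\)) lists the order-\(p\) subgroups of \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\), keeps the even self-dual ones, and compares the averaged enumerator with the right-hand side of (4.16) at random values of the \(X_{ab}\):

```python
import numpy as np

p, n = 7, 1
rng = np.random.default_rng(1)
X = rng.normal(size=(p, p)) + 1j * rng.normal(size=(p, p))     # random values of X_{ab}

# all order-p subgroups of Z_p x Z_p; evenness (a b = 0 mod p) implies self-duality here
codes = []
for gen in [(1, t) for t in range(p)] + [(0, 1)]:
    C = {((s * gen[0]) % p, (s * gen[1]) % p) for s in range(p)}
    if all((a * b) % p == 0 for (a, b) in C):
        codes.append(C)

lhs = np.mean([sum(X[a, b] for (a, b) in C) for C in codes])   # averaged enumerator

rhs = X[0, 0] + sum(X[a, b] * np.exp(-2j * np.pi * a * b * r / p)
                    for r in range(p) for a in range(p) for b in range(p)) / p**n
rhs /= 1 + p**(1 - n)

print(len(codes), abs(lhs - rhs))     # 2 codes, and the difference vanishes up to rounding
```

The two surviving subgroups are exactly the codes \(\{(a,0)\}\) and \(\{(0,b)\}\) mentioned in section 2, and the match with the Poincare-type sum is exact.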
We expect an equality analogous to (4.17) to apply more broadly, to codes defined as even self-dual linear subspaces of the group \(\mathsf{G}_{\Lambda}\) defined by an even self-orthogonal lattice \(\Lambda\) and corresponding code CFTs/ Chern-Simons theories, although the averaging weights and the details of the "Poincare sum" will be different. The equality will also extend to higher genus boundary geometries. To summarize, we have obtained an infinite series of explicit examples of "holographic duality," in which an ensemble of CFTs is dual to a Chern-Simons theory coupled to topological gravity, and the bulk partition function is given by a sum over topologies. This sum is akin to a sum over saddle point configurations in conventional gravity, implementing the ideas of [59; 60; 61]. In spirit, our examples are similar to the original work [62] representing the Ising model partition function as a sum "over geometries," but crucially we explicitly outline the dual theory in the bulk. Our code-based construction allows for many generalizations, to additive codes of other types, and potentially going beyond additive codes and Abelian CFTs. We expect that this approach may lead to many more explicit examples, potentially reformulating the results of [10; 11; 15] in terms of codes. Although in this paper we only consider the CFTs living on a torus, we expect that the holographic duality will generalize to higher genera [57]. The ensembles we consider contain a finite number of CFTs, hence the ensemble is parameterized by a finite number of moments. This will imply that the dual Chern-Simons theory on \({\cal M}\), with \(\partial{\cal M}\) being a Riemann surface of sufficiently high genus, would be completely determined in terms of path integrals over lower genus boundary surfaces. This will require various factorization rules in the bulk, which deserves a better understanding. ### Holographic correspondence in the \(k\to\infty\) limit In the previous section we saw that the size of the ensemble is related to the number of classes of topologies appearing in the bulk sum. Bigger ensembles correspond to "more gravitational" theories in the bulk, distinguishing more topological features and thus leading to a sum over more classes of topologies. It is thus natural to ask, what would happen with the duality as the size of the code CFT ensemble becomes infinitely large. In what follows we take \(k=p\) to be prime for simplicity. To evaluate the right-hand side of (4.16) in the \(p\to\infty\) limit we go back to definition of \(\Psi_{\vec{0},\vec{0}}\) (3.28), \[\Psi_{\vec{0},\vec{0}}(\tau^{\prime},\xi^{\prime},\bar{\xi}^{ \prime}) =\frac{\Theta_{\vec{0},\vec{0}}(\tau^{\prime},\xi^{\prime},\bar{ \xi}^{\prime})}{|c\tau+d|^{n}|\eta(\tau)|^{2n}},\qquad g=\left(\begin{array}{ cc}a&b\\ c&d\end{array}\right)\in SL(2,\mathbb{Z}), \tag{4.18}\] \[\tau^{\prime}=\frac{a\tau+b}{c\tau+d},\qquad\xi^{\prime}=\frac{ \xi}{c\tau+d},\qquad\bar{\xi}^{\prime}=\frac{\bar{\xi}}{c\bar{\tau}+d}.\] From (4.15) we know that we do not need to consider all possible co-prime pairs \(c,d\), but only \(c=0,d=1\), and \(p\) additional pairs yielding all possible values for \(dc^{-1}\) mod \(p\). A crucial observation is that one can always pick a set of such pairs with positive \(c\) satisfying \(c,|d|\leq\sqrt{p}\). While we could not prove this in full generality, we have numerically checked it for the first hundred primes. We first consider the case when \(c,|d|\ll\sqrt{p}\) with \(p\gg 1\). 
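The numerical check mentioned above is a short computation; a sketch along these lines (ours, with sympy used only to generate primes) runs over the first hundred primes and confirms that every coset with \(c\neq 0\), labeled by \(r=d\,c^{-1}\bmod p\) as in (4.15), has a representative with \(0<c\leq\sqrt{p}\) and \(|d|\leq\sqrt{p}\) (the remaining coset is simply \(c=0\), \(d=1\)):

```python
from math import gcd, isqrt
from sympy import prime

def residues_covered(p):
    # values of r = d c^{-1} mod p realized by coprime pairs with 0 < c <= sqrt(p), |d| <= sqrt(p)
    B, reached = isqrt(p), set()
    for c in range(1, B + 1):
        cinv = pow(c, -1, p)
        for d in range(-B, B + 1):
            if gcd(c, d) == 1:
                reached.add((d * cinv) % p)
    return reached

for i in range(1, 101):
    p = prime(i)                      # i-th prime
    assert len(residues_covered(p)) == p, p
print("all p cosets with c != 0 have small representatives, for the first 100 primes")
```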
In this regime all vectors \(\vec{p}_{L},\vec{p}_{R}\) summed over in \(\Theta_{\vec{0},\vec{0}}\), except for \(\vec{p}_{L},\vec{p}_{R}=0\), are of order \(|\vec{p}_{L}|,|\vec{p}_{R}|\sim O(p^{1/2})\). This is in fact a general result for any embedding, in the limit where the embedding is fixed while \(p\to\infty\). So for \(c,|d|\ll\sqrt{p}\), the factor \[e^{-\pi\tau_{2}^{\prime}(|\vec{p}_{L}|^{2}+|\vec{p}_{R}|^{2})} \tag{4.19}\] in (3.28) suppresses all other terms, and hence the only contribution is from \(\vec{p}_{L},\vec{p}_{R}=0\), \[\Theta_{\vec{0},\vec{0}}(\tau^{\prime},\xi^{\prime},\bar{\xi}^{ \prime})=e^{\frac{\pi}{2\tau_{2}}(\xi^{\prime 2}+\bar{\xi}^{\prime 2})}+e^{-O(p)}. \tag{4.20}\] Outside of the \(c,|d|\ll\sqrt{p}\) regime, but provided \(c,|d|\leq\sqrt{p}\) is still satisfied, we notice that the combination \(\tau_{2}^{\prime}(|\vec{p}_{L}|^{2}+|\vec{p}_{R}|^{2})\) is at least of order one, and \(|p_{L}\cdot\xi|\), \(|p_{R}\cdot\bar{\xi}|\) are at most of order one, \(O(p^{0})\). This means that \(\Theta_{\vec{0},\vec{0}}(\tau^{\prime},\xi^{\prime},\bar{\xi}^{\prime})\lesssim O (p^{0})\) for \(p\gg 1\). Now, going back to the sum over \(p+1\) pairs \(c,d\), we split the sum into two groups, for \(c,|d|\) satisfying \(c,|d|\leq p^{\alpha}\) for any \(1/3<\alpha<1/2\), and the rest. The first group will yield \[\overline{Z}_{BPI}\approx\frac{1}{1+p^{1-n}}\sum_{\begin{subarray}{c}(c,d)=1,\\ c,|d|\leq p^{\alpha}\end{subarray}}\frac{e^{\frac{\pi\xi^{2}}{2\tau_{2}}\frac {c\tau+d}{\varepsilon\tau+d}+\frac{\bar{\tau}\bar{\xi}^{2}}{2\tau_{2}}\frac{c \tau+d}{c\tau+d}}}{|c\tau+d|^{n}|\eta(\tau)|^{2n}},\qquad p\gg 1, \tag{109}\] while the second group, which has at most \(p\) terms, will give a contribution bounded by \[\sum_{|c|+|d|\geq p^{\alpha}}\frac{\Theta_{\vec{0},\vec{0}}}{|c\tau+d|^{n}|\eta (\tau)|^{2n}}\lesssim O(p^{1-n\alpha}). \tag{110}\] For \(n\,\alpha>1\) this second term is negligible in the limit \(p\to\infty\). To conclude, for \(n\geq 3\), in the \(p\to\infty\) limit we recover the following expression for the averaged partition function \[\overline{Z}(\tau,\xi,\bar{\xi})=\frac{1}{|\eta(\tau)|^{2n}}\sum_{(c,d)=1} \frac{e^{-i\pi\frac{c\xi^{2}}{\varepsilon\tau+d}+i\pi\frac{c\xi^{2}}{c\tau+d}} }{|c\tau+d|^{n}}, \tag{111}\] matching the result of [7], which reduces to the partition function of [3; 4] for \(\xi=\bar{\xi}=0\). The special cases of \(n=1,2\) are considered below. Our final expression (111) is manifestly independent of the embedding. From [3; 4; 7] we know this expression is equal to the Narain CFT path integral averaged with the Haar measure over the whole Narain moduli space. It is thus natural to speculate that for \(n>2\) in the \(p\to\infty\) limit, independently of the embedding, the ensemble of code CFTs densely covers the whole moduli space with the canonical measure. We will first discuss how this works in the \(n=2\) case in next section, and then provide additional arguments and formulate an underlying hypothesis in section 5. The original works [3; 4] identified the sum in the right-hand-side of (111) as a sum over handlebody topologies of the "perturbative sector of Chern-Simons," an Abelian Chern-Simons theory with only small (topologically trivial) fluctuations of gauge fields contributing to the path integral. This theory was dubbed "U(1)-gravity" [3], but apparently it has no well-defined microscopic description. 
As was pointed out in [4], genuine Chern-Simons theories with either non-compact or compact gauge groups would lead to different results. We are now ready to clarify this point. U(1)-gravity does not have a proper microscopic description in the bulk because it emerges as a limit of well-defined theories, namely the \(k\to\infty\) limit of level-\(k\) (\(U(1)\times U(1)\))\({}^{n}\) Chern-Simons theory, coupled to topological gravity (to give the sum over handlebodies). ### Ensembles of \(n=1\) and \(n=2\) theories in the large \(p\) limit The cases \(n=1\) and \(n=2\) are special. As discussed above, for \(n=1\) and \(k>1\) the ensemble consists of just two codes, one with the codewords of the form \((a,0)\in\mathcal{C}_{1}\) and the other with \((0,b)\in\mathcal{C}_{2}\), with arbitrary \(0\leq a,b<p\). When translated to CFTs, each of them maps to a single compact scalar, with radii \(R_{\pm}=\sqrt{2}\,r\,p^{\pm 1/2}\), respectively. The relation between the ensemble-averaged enumerator polynomial and the Poincare series (4.16) is valid for all \(n\); hence this ensemble can be represented "holographically" as follows \[Z_{BPI}(\tau,\xi,\bar{\xi},R_{+})+Z_{BPI}(\tau,\xi,\bar{\xi},R_{-})=\sum_{g\in \Gamma_{0}(p)\backslash SL(2,\mathbb{Z})}\Psi_{00}(g\,\tau,g\,\xi,g\,\bar{\xi},r), \tag{4.24}\] where the sum is over \(p+1\) classes of three-dimensional topologies and \(\Psi_{00}\) is given by (3.21). We have explicitly specified the embedding parameter \(r\), which is arbitrary and can scale with \(p\). This relation holds for any prime \(p\), but in the limit \(p\to\infty\) it diverges. The sum in the right-hand side of (4.24) becomes the divergent real Eisenstein series of weight 1. The left-hand side of (4.24) also diverges, as at least one of the scalars decompactifies. Using the representation (A.10) of the partition function and T-duality, we find the limit for fixed \(r\) to be (for simplicity we consider vanishing fugacities \(\xi=\bar{\xi}=0\)) \[Z_{R_{+}}+Z_{R_{-}}=\frac{p\,(r+r^{-1})}{\sqrt{\tau_{2}}|\eta(\tau)|^{2}}+e^{- O(p)},\qquad p\to\infty. \tag{4.25}\] One can also consider the \(p\to\infty\) limit when \(R_{-}=R\) remains fixed, while \(R_{+}\) scales with \(p\), in which case \[Z_{R}+Z_{Rp}=\frac{(p+1)R}{\sqrt{2\tau_{2}}|\eta(\tau)|^{2}}+\sum_{(c,d)=1}R \frac{\theta_{3}\left(\frac{iR^{2}}{2\tau_{2}^{\prime}}\right)-1}{\sqrt{2\tau _{2}}|\eta(\tau)|^{2}}+e^{-O(p)},\quad\tau_{2}^{\prime}=\frac{\tau_{2}}{|c\tau +d|^{2}}. \tag{4.26}\] This is essentially the "modular sum" representation for the given compact scalar partition function \(Z_{R}\), except for the constant term \(R/\sqrt{2\tau_{2}}|\eta(\tau)|^{2}\). In this way it is similar to the representation (3.14) of [9]. In any case, the interpretation of the divergent equations that arise in these cases is not clear. The case of \(n=2\) codes is similar but much richer. Narain lattices in \(\mathbb{R}^{2,2}\) are conventionally parametrized by two modular parameters \(t=t_{1}+it_{2}\) and \(b=b_{1}+ib_{2}\), related to \(\gamma\) and the \(B\)-field by \[\gamma=\sqrt{\frac{b_{2}}{t_{2}}}\left(\begin{array}{cc}1&t_{1}\\ 0&t_{2}\end{array}\right),\qquad B=b_{1}\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right). \tag{4.27}\] T-duality acts on \((t,b)\) as \(SL(2,\mathbb{Z})\times SL(2,\mathbb{Z})\) with an additional \(\mathbb{Z}_{2}\) exchanging \(t\leftrightarrow b\). We denote the partition function of a \(c=2\) Narain theory by \(Z_{c=2}(\tau,t,b)\). It is modular invariant with respect to all three variables. 
One can also introduce the partition function of primaries, \(\hat{Z}(\tau,t,b)=\tau_{2}|\eta(\tau)|^{4}Z_{c=2}(\tau,t,b)\), where here and in what follows we assume the fugacities vanish \(\xi=\bar{\xi}=0\). The partition function of primaries remains modular invariant under \(\tau\), and exhibits triality - full symmetry under permutation of its three arguments [9; 63]. There are \(2(p+1)\)\(n=2\) codes, see Appendix C.1 for a detailed discussion. If we choose an embedding, an orthogonal matrix from \(O(2,2,\mathbb{R})\) introduced in the end of section 2, in the form \[\left(\begin{array}{cc}\gamma^{*}&\gamma^{*}B\\ 0&\gamma\end{array}\right)\in O(2,2,\mathbb{R}), \tag{4.28}\] parametrized by two modular parameters \(t=-1/t_{0},b=b_{0}\), then the \(2(p+1)\) self-dual lattices of the code theories (after appropriate T-dualities) are explicitly given by \[\{t=\frac{k+t_{0}}{p}\quad\text{or}\quad t=p\,t_{0}\}\quad\text{ with}\quad b=b_{0}, \tag{4.29}\] and \[\{b=\frac{k+b_{0}}{p}\quad\text{or}\quad b=p\,b_{0}\}\quad\text{ with}\quad t=t_{0}, \tag{4.30}\] where \(0\leq k<p\). It is convenient to represent the average over code theories in terms of Hecke operators \(T_{p}\), defined as follows [64]. For a modular form \(f(\tau)\) of weight \(k\) and prime \(p\) \[T_{p}f(\tau)\equiv p^{k-1}f(p\tau)+\frac{1}{p}\sum_{r=0}^{p-1}f \left(\frac{\tau+r}{p}\right). \tag{4.31}\] Then, the average over codes is simply \[\frac{p}{2(p+1)}\left(T_{p}^{t}\,Z_{c=2}(\tau,t,b)+T_{p}^{b}\,Z_{ c=2}(\tau,t,b)\right), \tag{4.32}\] where we introduced an upper index to indicate the variable that each Hecke operator is acting on. The "sum over topologies" in the right-hand side of (4.16) \[\frac{\Psi_{\vec{0},\vec{0}}(\tau)+p^{-n}\sum_{r=0}^{p-1}\sum_{a,b}\Psi_{\vec{a},\vec{b}}(\tau)\,e^{2\pi ir\frac{q\cdot\vec{b}}{p}}}{1+p^{1-n}} \tag{4.33}\] can also be simplified for general \(n\). Starting from an \(O(n,n,\mathbb{R})\) matrix \(\mathcal{O}\) specifying a Narain lattice, the partition function of the corresponding Narain theory can be written as (2.20) \[Z_{\mathcal{O}}=\frac{\Theta(\tau)}{|\eta(\tau)|^{2n}},\quad\Theta (\tau)=\sum_{\vec{\ell}}e^{i\pi\tau p_{L}^{2}-i\pi\bar{\tau}p_{R}^{2}}, \tag{4.34}\] \[\begin{pmatrix}p_{L}+p_{R}\\ p_{L}-p_{R}\end{pmatrix}=\mathcal{O}\sqrt{2}\,\vec{\ell},\quad\vec{\ell}\in \mathbb{Z}^{2n}.\] Now, going back to (4.33), we notice that it can be rewritten as \[\frac{\Theta(p\tau)+p^{-n}\sum_{r=0}^{p-1}\Theta((\tau+r)/p)}{(1+p^{1-n})|\eta (\tau)|^{2n}}=\frac{T_{p}\,\Theta}{(p^{n-1}+1)|\eta(\tau)|^{2n}}, \tag{4.35}\] where \(\Theta\) is defined with the same \(\mathcal{O}\) as the embedding matrix of codes, introduced at the end of section 2. The last step in (4.35) is justified because \(\Theta(\tau)\) is a modular form of weight \(n\). Using the definition (4.31) we can express \(T_{p}\) acting on a modular form \(f\) of weight \(n\) in terms of its action on the modular invariant \(\tau_{2}^{n/2}f\), \[\tau_{2}^{n/2}\,T_{p}\,f=p^{n/2}\,T_{p}(\tau_{2}^{n/2}f). \tag{4.36}\] Going back to the \(n=2\) case and noting that \(\tau_{2}\,\Theta=\hat{Z}(\tau,t,b)\) is exactly the partition function of primaries, we can now rewrite the identity (4.17) as follows: \[\frac{p\,(T_{p}^{t}\,\hat{Z}+T_{p}^{b}\,\hat{Z})}{2(p+1)\tau_{2}|\eta(\tau)|^{ 4}}=\frac{p\,T_{p}^{\tau}\,\hat{Z}}{(p+1)\tau_{2}|\eta(\tau)|^{4}}. 
\tag{4.37}\] In fact a stronger identity holds for any prime \(p\), see Appendix C, \[T_{p}^{\tau}\,\hat{Z}=T_{p}^{t}\,\hat{Z}=T_{p}^{b}\,\hat{Z}, \tag{4.38}\] which extends the triality - permutation symmetry of \(\hat{Z}\) with respect to its arguments. In the limit \(p\to\infty\), the points \(t=\frac{k+t_{0}}{p}\) form a dense line close to \(t_{2}=0\), that crosses infinitely many copies of the fundamental domain. Once these \(p\) points are mapped back to the standard keyhole domain of \(SL(2,\mathbb{Z})\), they will cover it densely with the standard covariant measure \(d^{2}t/t_{2}^{2}\). The contribution of the point \(t=p\,t_{0}\) in the full average will be \(1/p\) suppressed, and can be neglected. Thus, the average over code theories when \(p\to\infty\), at least at leading order, would plausibly approach the average over the fundamental domain of \(t\) with the \(SL(2,\mathbb{Z})\) covariant measure, plus the same average over \(b\) (note that this is not the same as averaging over all Narain theories). Similarly, thanks to (4.35), the "bulk" sum in the \(p\to\infty\) limit will be proportional to the average of \(\hat{Z}\) over the fundamental domain of \(\tau\) with the measure \(d^{2}\tau/\tau_{2}^{2}\). The same conclusion is supported by the general result of [65, 44], that in the limit \(p\to\infty\), for any square-integrable modular function \(f\), \(T_{p}(f)\) approaches the integral of \(f\) over the fundamental domain \(\mathcal{F}\) with the canonical measure \[\left|\left|T_{p}(f)-\int_{\mathcal{F}}f\,d\mu\right|\right|<||f||\,O(p^{-9/28+ \epsilon})\to 0, \tag{111}\] for large \(p\), where \(\epsilon\) is any real number \(>0\), and \(||f||\) is the Weil-Petersson norm of \(f\)[65]. The caveat here is that in our case \(\hat{Z}\) is not square-integrable, and the integral over the fundamental domain diverges. We therefore _conjecture_ that in the \(p\to\infty\) limit, \(T_{p}^{x}(\hat{Z})\) for \(x=\tau\), \(t\) or \(b\) would be given by the regularized average over the fundamental domain of \(x\), _plus_ an \(x\)-dependent term, which will _not_ be dependent on other variables. The regularized average of \(\hat{Z}\) over \(\tau\) has been carried out in [66]. We discuss averaging over \(t\), related to it by triality, in Appendix C. Using this result we conjecture \[T_{p}^{\tau}(\hat{Z}(\tau))=\frac{3}{\pi}\ln(p/p_{0})-\frac{3}{\pi}\ln(t_{2}| \eta(t)|^{4})-\frac{3}{\pi}\ln(b_{2}|\eta(b)|^{4})+f(\tau)+O(1/p) \tag{112}\] for some unknown \(f(\tau)\). Since both (109) and (110) hold for any finite prime \(p\), to preserve triality we must conclude \(f(\tau)=-\frac{3}{\pi}\ln(\tau_{2}|\eta(\tau)|^{4})\) and \[\overline{Z}=\frac{3}{\pi}\left.\frac{\ln(p/p_{0})-\ln(\tau_{2}|\eta(\tau)|^{ 4})-\ln(t_{2}|\eta(t)|^{4})-\ln(b_{2}|\eta(b)|^{4})}{\tau_{2}|\eta(\tau)|^{4} }\right|_{t=t_{0},\,b=b_{0}}+O(1/p),\] where \(p_{0}\) is some constant. ### Extensions and generalizations The equality between averaging over length-\(n\)\(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) codes (with \(n\geq 3\)) and "Poincare series," which can be understood as sums over handlebody topologies, begs a deeper understanding. 
First, we expect it to hold for arbitrary genus \(\mathsf{g}\), \[\frac{1}{\mathcal{N}}\sum_{\mathcal{C}}W_{\mathcal{C}}(\Psi_{(c_{1},\ldots,c_{\mathsf{g}})}(\Omega))\propto\sum_{g\in Sp(2\mathsf{g},\mathbb{Z})}\Psi_{(0,\ldots,0)}(g\,\Omega), \tag{4.41}\] \[W_{\mathcal{C}}(\{X_{(c_{1},\ldots,c_{\mathsf{g}})}\})=\sum_{c_{1},\ldots,c_{\mathsf{g}}\in\mathcal{C}}X_{(c_{1},\ldots,c_{\mathsf{g}})}, \tag{4.42}\] where \(X_{c_{1},\ldots,c_{\mathsf{g}}}\) are formal variables associated with the \(\mathsf{g}\)-tuples of codewords [33, 35, 67]. We promote them to wavefunctions \(\Psi_{c_{1},\ldots,c_{\mathsf{g}}}\) of Chern-Simons theory on a genus-\(\mathsf{g}\) Riemann surface, hence their dependence on the period matrix \(\Omega\). \(\Psi_{(0,\ldots,0)}\), associated with the zero codeword taken \(\mathsf{g}\) times, is the wavefunction of the Chern-Simons theory computed on a 3d manifold \(\mathcal{M}\) with all \(a\)-cycles of \(\partial\mathcal{M}\) contractible in the interior, which is an analog of thermal AdS. As in section 4.3, the Poincare sum in (4.41) can be reformulated as a sum over a coset \(\Gamma^{*}\backslash Sp(2\mathsf{g},\mathbb{Z})\), where \(\Gamma^{*}\) is a congruence subgroup of the modular group \(Sp(2\mathsf{g},\mathbb{Z})\) leaving \(\Psi_{(0,\ldots,0)}(\Omega)\) invariant. Extending the result of the \(\mathsf{g}=1\) case, we conjecture this subgroup to be \(\Gamma_{0}^{Sp(2\mathsf{g})}(p)\subset Sp(2\mathsf{g},\mathbb{Z})\), which we define to be the group of matrices \[\left(\begin{array}{cc}A&B\\ C&D\end{array}\right)\in Sp(2\mathsf{g},\mathbb{Z}),\qquad C=0\ \mathrm{mod}\,p. \tag{4.43}\] For prime \(p\) the coset \(\Gamma_{0}^{Sp(2\mathsf{g})}(p)\backslash Sp(2\mathsf{g},\mathbb{Z})\) consists of \[\mathcal{N}_{sp}(\mathsf{g},p)=\prod_{j=1}^{\mathsf{g}}(p^{j}+1) \tag{4.44}\] elements (we obtained this expression by generalizing [68; 69]), which matches the result \(\mathcal{N}_{Sp}(2,2)=15\) found in [34]. In a somewhat similar fashion, the sum over codes on the left-hand-side of (4.41) can also be represented as a coset. We recall that all even self-dual codes over \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) of the kind we are considering can be understood as a set of even self-dual lattices \(\Lambda_{\mathcal{C}}\) satisfying, see (2.15), \[(\sqrt{k}\mathbb{Z})^{2n}\subset\Lambda_{\mathcal{C}}\subset(\mathbb{Z}/\sqrt{k})^{2n}. \tag{4.45}\] This defines the action of \(O(n,n,\mathbb{Z})\subset O(n,n,\mathbb{R})\) on codes. For prime \(k=p\) this action is transitive.5 Indeed, one can bring any code to canonical form with a generator matrix of the form \((I,\mathrm{B})\), where \(\mathrm{B}\) is antisymmetric mod \(p\) [21], and then use \(O(n,n,\mathbb{Z})\) to make \(\mathrm{B}\) vanish. In other words, for prime \(p\) the set of all codes can be described as a coset Footnote 5: The action of \(O(n,n,\mathbb{Z})\) is also transitive on all even non-zero codewords. This is sufficient to obtain the averaged enumerator polynomial \(\overline{W}\) for arbitrary \(n,p\) for genus 1, thus completing the mathematical proof of (4.16). \[\frac{O(n,n,\mathbb{Z})}{\Gamma_{0}^{O(n,n)}(p)}, \tag{4.46}\] where \(\Gamma_{0}^{O(n,n)}(p)\), the subgroup of \(O(n,n,\mathbb{Z})\) which leaves the code with \(\mathrm{B}=0\) invariant, is defined to be the group of matrices \[\left(\begin{array}{cc}A&B\\ C&D\end{array}\right)\in O(n,n,\mathbb{Z}),\qquad C=0\ \mathrm{mod}\,p.
\tag{4.47}\] The coset description (4.46) is a generalization to arbitrary prime \(p\) of the coset construction for \(p=2\) outlined in [19]. The size of the coset is given by (2.2). To summarize, the equality (4.41) between the average over codes and the Poincare series over topologies can be rewritten as a sum over similar cosets \[\sum_{\mathcal{C}\in\Gamma_{0}^{O(n,n)}(p)\backslash O(n,n,\mathbb{Z})}W_{ \mathcal{C}}(\{\Psi_{(c_{1},\ldots,c_{\mathfrak{g}})}\})\propto\sum_{g\in \Gamma_{0}^{Sp(2\mathfrak{g})}(p)\backslash Sp(2\mathfrak{g},\mathbb{Z})}g( \Psi_{(0,\ldots,0)}). \tag{4.48}\] The number of terms on both sides, \(\mathcal{N}(n,p)\) (2.2) and \(\mathcal{N}_{Sp}(\mathbf{g},p)\) (4.44), and the overall similarity of the cosets, can be seen as an extension of the worldsheet/target space duality of the \(c=2\) case [63]. We have seen in the previous section that the Poincare sum for genus one can be represented in terms of a Hecke operator. In general the Hecke operator \(T_{k}\) is defined to act on functions \(f(\Lambda)\) on lattices \(\Lambda\). Then \((T_{k}\,f)(\Lambda)\) is a sum \(f(\Lambda^{\prime})\) over all sublattices \(\Lambda^{\prime}\subset\Lambda\) of index \(k\). A modular form \(f(\tau)\) can be understood as a function on two-dimensional lattices generated by \(1\) and \(\tau\). Then \(T_{k}\) can be written as a sum over equivalence classes of \(2\times 2\) integer matrices of determinant \(k\), \[\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in M_{k},\quad a,b,c,d\in\mathbb{Z},\quad ad-bc=k, \tag{4.49}\] modulo right multiplication by any element of \(SL(2,\mathbb{Z})\). For prime \(k=p\) this sum includes \(p+1\) terms and \(T_{p}\) is given by (4.31). The equivalence of \(\Gamma_{0}(p)\backslash SL(2,\mathbb{Z})\) and \(M_{p}/SL(2,\mathbb{Z})\), together with the relation between the Poincare series and the Hecke operator representation [64], leads to equality (4.35), which we rewrite as \[\frac{1}{(1+p^{1-n})\tau_{2}^{n/2}|\eta(\tau)|^{2n}}\sum_{g\in \Gamma_{0}(p)\backslash SL(2,\mathbb{Z})}\hat{Z}_{\vec{0},\vec{0}}(g\,\tau)= \frac{p^{-n/2}}{(1+p^{1-n})\tau_{2}^{n/2}|\eta(\tau)|^{2n}}\sum_{g\in M_{p}/SL (2,\mathbb{Z})}\hat{Z}(g\,\tau). \tag{4.50}\] Here we introduced the modular invariant partition function of primaries \(\hat{Z}(\tau)=\tau_{2}^{n/2}\Theta(\tau)\), which is related to \(\hat{Z}_{\vec{0},\vec{0}}(\tau)=\tau_{2}^{n/2}\Theta_{\vec{0},\vec{0}}(\tau)\) as follows \[\hat{Z}_{\vec{0},\vec{0}}(\tau)=p^{-n/2}\hat{Z}(p\,\tau). \tag{4.51}\] It is tempting to speculate that an analogous representation is also possible for higher-genus Poincare series, in which the Hecke operator would be defined to act on modular forms \(f(\Omega)\) of \(Sp(2\mathfrak{g},\mathbb{Z})\). The left-hand-side of (4.48), the sum over codes, is also very reminiscent of Hecke operators. While the standard Hecke operator includes the sum over all sublattices of index \(p\), the sum over even self-dual codes can be readily rewritten as a sum over all even sublattices of \((\mathbb{Z}/\sqrt{p})^{2n}\) of index \(p^{n}\). The Hecke form of code averaging, through a suitable generalization of (114), could potentially lead to a more straightforward and general proof that in the limit that the size of the code ensemble becomes infinite, the code average computes the average over the whole Narain moduli space with the Haar measure. 
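Since the manipulations above and below repeatedly use the Hecke operator (4.31), it may be useful to check the definition on a standard example: the normalized Eisenstein series \(E_{4}(\tau)=1+240\sum_{n\geq 1}\sigma_{3}(n)q^{n}\) is a Hecke eigenform with eigenvalue \(1+p^{3}\). The short numerical sketch below (the truncation orders and the sample point \(\tau\) are arbitrary choices) verifies this directly from (4.31).

```python
# Sanity check of the Hecke operator (4.31) on a textbook example:
# E_4(tau) = 1 + 240 sum_n sigma_3(n) q^n is a T_p eigenform with eigenvalue 1 + p^3.
import cmath

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def E4(tau, nmax=60):
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, nmax + 1))

def hecke(f, p, k, tau):
    """T_p of (4.31) acting on a weight-k form f, evaluated at the point tau."""
    return p ** (k - 1) * f(p * tau) + sum(f((tau + r) / p) for r in range(p)) / p

tau = complex(0.11, 1.3)
for p in (2, 3, 5):
    lhs = hecke(E4, p, 4, tau)
    rhs = (1 + p ** 3) * E4(tau)
    print(p, abs(lhs - rhs) / abs(rhs))   # relative difference: ~ 0 up to truncation
```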
We should note that formally the same logic can be applied to the Poincare series, which would naively suggest that when \(p\to\infty\), the right-hand-side of (113) and hence (112) would be given by an integral over the fundamental domain of \(\tau\). This is not so because the corresponding modular form \(\hat{Z}\) is not square-integrable on the fundamental domain. As a result, in order to apply (114) the integral has to be covariantly regularized, eventually leading to the conclusion of section 4.4: in the \(p\to\infty\) limit the Poincare series in (112) approaches the real Eisenstein series of weight \(n\). ## 5 Ensemble averaging, typicality and holography In section 4.4 we saw that averaging partition functions over the ensemble of code CFTs in the \(k\to\infty\) limit leads (for \(n>2\)) to "\(U(1)\)-gravity," the sum over CS theories on all handlebody topologies. In particular, the answer does not depend on the embedding of the codes, and is equal to the average of the whole Narain moduli space with the Haar measure, as was outlined in [3; 4]. This suggests that code CFTs in the \(k\to\infty\) limit, when the ensemble becomes infinitely large, densely cover the entire Narain moduli space with the canonical measure. This is in agreement with an earlier observation that the averaged code theory, in the \(k\to\infty\) limit, has the same spectral gap as the averaged Narain theory [6]. If we additionally take the large central charge limit, \(c\gg 1\), then averaging over the whole moduli space would be well approximated by a random Narain theory, because the ensemble of all Narain theories, as well as the ensemble of code CFTs in the \(c\to\infty\) limit, are self-averaging at large \(c\), namely the variance is exponentially small \(e^{-O(c)}\)[6; 13; 34]. To support the conclusion that the \(k\to\infty\) ensemble densely covers the entire moduli space, we first note that there are two code ensembles, but for large \(k\) they are similar. The first ensemble, which we used in our discussions above, is the ensemble of all \(\mathcal{N}=\prod_{i=0}^{n-1}(p^{i}+1)\) codes of length \(n\) (here we assume \(k=p\) is prime). The second ensemble is the ensemble of all \(\mathcal{N}^{\prime}=p^{n(n-1)/2}\) codes in the canonical form, also called the B-form [19]. Each code in the canonical form is parametrized by an antisymmetric matrix B defined mod \(p\), which can be interpreted as an adjacency matrix of a graph with edges carrying an element of \(\mathbb{Z}_{p}\). Every code from the first ensemble has an equivalent code in the canonical form, in the sense of code equivalences. It is a non trivial question to determine the number of codes equivalent to the canonical one with a given B (noting that certain canonical form codes are equivalent to each other). At the level of CFTs, code equivalence is the same as T-duality only for the most symmetric "rigid" embedding, when the code with the matrix B is associated with the Narain lattice specified by \(\gamma=I/\sqrt{p}\) and \(B=\text{B}/p\). When \(p\to\infty\) we expect averaging over both ensembles with each code entering with equal weight to be physically equivalent, which is reflected by \(\mathcal{N}^{\prime}/\mathcal{N}\to 1\) and by the equivalence of the resulting Gilbert-Varshamov bounds (averaged spectral gap). 
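For orientation, the sizes of the ensembles and cosets quoted above are easy to tabulate. The sketch below simply evaluates \(\mathcal{N}=\prod_{i=0}^{n-1}(p^{i}+1)\), \(\mathcal{N}^{\prime}=p^{n(n-1)/2}\) and the symplectic coset count (4.44), and checks them against the special cases mentioned in the text (two codes for \(n=1\), \(2(p+1)\) codes for \(n=2\), and \(\mathcal{N}_{Sp}(2,2)=15\)); it adds nothing beyond bookkeeping.

```python
# Bookkeeping for the ensemble sizes quoted in the text.
from math import prod

def n_codes(n, p):
    """Number of even self-dual codes of length n over Z_p x Z_p (the count (2.2))."""
    return prod(p ** i + 1 for i in range(n))

def n_bform(n, p):
    """Number of codes in canonical (B-)form: antisymmetric n x n matrices mod p."""
    return p ** (n * (n - 1) // 2)

def n_sp_coset(g, p):
    """Size of the coset Gamma_0^{Sp(2g)}(p) \\ Sp(2g, Z) for prime p, eq. (4.44)."""
    return prod(p ** j + 1 for j in range(1, g + 1))

assert n_codes(1, 7) == 2                 # the two n = 1 codes of section 4.5
assert n_codes(2, 7) == 2 * (7 + 1)       # the 2(p+1) codes with n = 2
assert n_sp_coset(1, 7) == 7 + 1          # genus 1: p + 1 classes of handlebodies
assert n_sp_coset(2, 2) == 15             # the genus-2, p = 2 count quoted from [34]

for p in (2, 3, 5, 7, 11):
    print(p, n_codes(4, p), n_bform(4, p))  # both ensembles grow rapidly with p
```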
The ensemble of all codes in the canonical form, independently of the embedding, leads to the ensemble of CFTs with \(\gamma\to 0\) and with the \(B\)-field homogeneously covering the "T-duality cube" \(B_{ij}\sim B_{ij}+1\). We _conjecture_ that the region in the moduli space \[\gamma\to 0,\qquad 0\leq B_{ij}<1, \tag{109}\] with the conventional flat measure for \(dB_{ij}\) on the cube, covers (via T-duality) the whole Narain moduli space, with the canonical \(O(n,n,\mathbb{R})\)-invariant measure. By \(\gamma\to 0\) we mean that all singular values of \(\gamma\) approach zero. This is analogous to the observation in section 4.5 that the \(t_{2}\to 0\), \(0\leq t_{1}<1\) region is T-dual to the entire fundamental domain of \(t\) with the canonical measure. Similarly here, it is straightforward to see that starting from an arbitrary pair \((\gamma,B)\), via T-duality one can move it into the region (109). A non-trivial point, which we leave for the future, is whether the Haar measure on the Narain moduli space indeed results in the homogeneous measure for \(B_{ij}\). With this assumption, we would find that the ensemble of all canonical codes densely covers the entire Narain moduli space with the Haar measure. In particular, this would explain why the averaged spectral gap matches the one of the whole Narain ensemble [6]. The representation of the Narain moduli space via (109) provides a new easy way to obtain the original result of [3; 4] and [7]. Starting from the conventional representation of the CFT path integral (3.28) with self-dual \(\Lambda\) parametrized by \(\gamma,B\), and by performing Poisson resummation over half of the variables, we obtain \[Z^{\gamma,B}_{BPI}(\tau,\xi,\bar{\xi}) = \frac{\det{(\gamma)}}{\tau_{2}^{n/2}|\eta(\tau)|^{2n}}\sum_{\vec{ n},\vec{m}\in\mathbb{Z}^{n}}e^{-\frac{\pi}{\tau_{2}}|\vec{v}|^{2}-2\pi i\,m^{T} Bn-\frac{2\pi}{\sqrt{2}\tau_{2}}(\xi\cdot v^{*}-\bar{\xi}\cdot v)+\frac{\pi}{\tau_{2}} \xi\bar{\xi}}, \tag{110}\] \[\vec{v} = \gamma(\vec{n}\tau+\vec{m}).\] Now we are ready to average \(Z_{\gamma,B}\) over the region (109). Integration over \(B_{ij}\) forces the vectors \(\vec{n},\vec{m}\) to be collinear. We thus can parametrize \(\vec{n}=c\,\vec{\ell},\,\vec{m}=d\,\vec{\ell}\) with \(\vec{\ell}\in\mathbb{Z}^{n}\) and with a co-prime pair \((c,d)=1\). Using the explicit modular invariance of (5.2), \[\tau\to\tau^{\prime}=\frac{a\tau+b}{c\tau+d},\quad\xi\to\xi^{ \prime}=\frac{\xi}{c\tau+d},\quad\bar{\xi}\to\bar{\xi}^{\prime}=\frac{\bar{\xi}} {c\bar{\tau}+d}, \tag{5.3}\] \[(\vec{n}\ \vec{m})\to(\vec{n}\ \vec{m})\left(\begin{array}{c}a \ b\\ c\ d\end{array}\right),\] we find \[\overline{Z}_{BPI}=\sum_{(c,d)=1}\frac{\det{(\gamma)}}{\tau_{2}^{n/2}|\eta( \tau)|^{2n}}\sum_{\vec{\ell}\in\mathbb{Z}^{n}}e^{-\frac{\pi}{\tau_{2}^{2}}| \vec{v}|^{2}-\frac{2\pi}{\sqrt{2}\tau_{2}^{\prime}}(\xi^{\prime}-\bar{\xi}^{ \prime})\cdot\vec{v}+\frac{\pi}{\tau_{2}}\xi\bar{\xi}},\qquad\vec{v}=\gamma \vec{\ell}. \tag{5.4}\] In the limit \(\gamma\to 0\), the summation over \(\ell\) can be replaced by an integration, giving \[\overline{Z}_{BPI}=\sum_{(c,d)=1}\frac{1}{|\eta(\tau^{\prime})|^{2n}}e^{\frac {\pi}{2\tau_{2}^{2}}(\xi^{\prime 2}+\bar{\xi}^{\prime 2})}, \tag{5.5}\] matching (4.23). The calculation above hints towards a possible "holographic dual" for an individual Narain theory, understood as a "Poincare sum" over all co-prime pairs \((c,d)=1\), enumerating all handlebodies. 
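The Poisson resummation over half of the variables used to pass from (3.28) to (5.2) can be illustrated in the simplest, \(n=1\), case: at vanishing fugacities the momentum/winding sum (A.2) must equal the winding-sector representation (A.10). The following sketch checks this numerically; the sample values of \(\tau\) and \(R\) and the truncations are arbitrary choices.

```python
# Numerical check of the n = 1 analogue of the Poisson resummation used above:
# the momentum/winding sum (A.2) equals the winding-sector representation (A.10)
# at vanishing fugacities.
import cmath, math

def hamiltonian_sum(tau, R, nmax=30):
    """Sum over (n, m) of exp(i pi tau p_L^2 - i pi taubar p_R^2), without 1/|eta|^2."""
    s = 0.0
    for n in range(-nmax, nmax + 1):
        for m in range(-nmax, nmax + 1):
            pL, pR = n / R + m * R / 2, n / R - m * R / 2
            s += cmath.exp(1j * cmath.pi * tau * pL ** 2
                           - 1j * cmath.pi * tau.conjugate() * pR ** 2)
    return s

def instanton_sum(tau, R, nmax=30):
    """R/sqrt(2 tau_2) times the winding sum of (A.10), without 1/|eta|^2."""
    t2 = tau.imag
    s = sum(math.exp(-math.pi * R ** 2 * abs(n1 + n2 * tau) ** 2 / (2 * t2))
            for n1 in range(-nmax, nmax + 1) for n2 in range(-nmax, nmax + 1))
    return R / math.sqrt(2 * t2) * s

tau = complex(0.4, 0.9)
for R in (1.0, 1.6, 2.3):
    a, b = hamiltonian_sum(tau, R), instanton_sum(tau, R)
    print(R, abs(a - b) / abs(b))   # relative difference: ~ machine precision
```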
The representation (5.2), valid for any \(\gamma,B\), geometrizes the action of the modular group \(SL(2,\mathbb{Z})\) as an action on a lattice of vectors \((\vec{n},\vec{m})\in\mathbb{Z}^{2n}\). There is a trivial orbit of \(SL(2,\mathbb{Z})\), consisting of the origin \(\vec{n}=\vec{m}=0\), with a non-trivial action on all other elements (trivial stabilizer). Thus, only the contribution of the origin does not admit the Poincare sum form, but we can make it as small as we want by using chains of T-dualities to render \(\det(\gamma)\) arbitrarily small. The remaining contributions of \(\mathbb{Z}^{2n}\backslash\vec{0}\) may be conveniently split into "one-vector orbits", with collinear \(\vec{n}=c\,\vec{\ell}\), \(\vec{m}=d\,\vec{\ell}\), and "two-vector orbits", when \(\vec{n},\vec{m}\) are not collinear. The contribution of the one-vector orbits leads naturally to a sum over co-prime pairs \((c,d)\), as in (5.4). We can choose a representative in each orbit with \(\vec{n}=0\), \(\vec{m}=\vec{\ell}\), leading to a concise expression for \(\vec{v}=\gamma\vec{\ell}\). Averaging over \(\gamma\), even without assuming \(\gamma\to 0\), would lead to "\(U(1)\)-gravity" - the real Eisenstein series.6 Footnote 6: As was shown in [3], averaging over \(\gamma\) with \(\det{(\gamma)}\) fixed is equivalent to replacing the sum over \(\vec{\ell}\in\mathbb{Z}^{n}\) by an integral over \(\mathbb{R}^{n}\). The contribution of the two-vector orbits can also be represented as a sum over co-prime \((c,d)=1\), but here the choice of "gauge" - the choice of a representative in the orbit of \(SL(2,\mathbb{Z})\) - is less clear. There is no obvious choice admitting an apparent bulk interpretation. For a typical Narain theory with large central charge, when \(B\) can be considered random, the contribution of two-vector orbits will be exponentially small: for small \(\gamma\) the term \(-\pi|v|^{2}/\tau_{2}\) in the exponent can be neglected, while many different pairs \((n,m)\) will lead to random phases \(e^{-2\pi im^{T}Bn}\) canceling each other. To summarize, we outlined a possible holographic "Poincare sum" representation for an individual Narain theory, which fits the picture proposed in [6]. A typical theory (when \(c\gg 1\)) will be described by \(U(1)\)-gravity with exponentially small corrections. There is a natural ambiguity of assigning these corrections to individual handlebodies, rooted in the ambiguity of choosing representatives among the two-vector orbits. This, together with the need to consider a limit of T-duality transformations yielding \(\det(\gamma)\to 0\), precludes a simple microscopic local bulk interpretation. The resulting picture is qualitatively similar to the one in JT gravity [70; 71]. An attempt to extend a holographic duality based on a sum over topologies to describe an individual Narain theory leads to potentially non-local interactions in the bulk, responsible for "half-wormholes." After averaging over all theories these interactions vanish. The crucial difference with the JT gravity case is that in our case there is also a bona fide holographic description for an individual theory, described in section 4.1 above, though it does not involve a sum over topologies. It would be very interesting to make an explicit link between the two holographic descriptions for a given theory, by starting from CS theory on a given handlebody and deforming it into a sum over topologies with some non-local action. 
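The random-phase argument for the two-vector orbits can be illustrated numerically: for a generic \(B\) drawn from the cube (5.1), the phases \(e^{-2\pi i\,m^{T}Bn}\) attached to non-collinear pairs \((\vec{n},\vec{m})\) largely cancel. The sketch below is only an illustration; the value \(n=8\), the sampling range of the integer vectors and the sample size are arbitrary choices, not tied to any particular theory.

```python
# Illustration of the random-phase suppression of two-vector orbits:
# for a generic B in the cube (5.1), the phases exp(-2 pi i m^T B n) attached to
# non-collinear integer pairs (n, m) largely cancel.
import cmath
import random

random.seed(0)
n_dim = 8   # illustrative value of n; any moderately large n behaves similarly

# random antisymmetric B with entries in [0, 1)
B = [[0.0] * n_dim for _ in range(n_dim)]
for i in range(n_dim):
    for j in range(i + 1, n_dim):
        B[i][j] = random.random()
        B[j][i] = -B[i][j]

def phase(nvec, mvec):
    mbn = sum(mvec[i] * B[i][j] * nvec[j]
              for i in range(n_dim) for j in range(n_dim))
    return cmath.exp(-2j * cmath.pi * mbn)

def collinear(nvec, mvec):
    return all(nvec[i] * mvec[j] == nvec[j] * mvec[i]
               for i in range(n_dim) for j in range(n_dim))

pairs = []
while len(pairs) < 4000:
    nvec = [random.randint(-2, 2) for _ in range(n_dim)]
    mvec = [random.randint(-2, 2) for _ in range(n_dim)]
    if not collinear(nvec, mvec):
        pairs.append((nvec, mvec))

total = sum(phase(nv, mv) for nv, mv in pairs)
print(abs(total), "out of", len(pairs))  # |sum| is far smaller than the number of terms
```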
## 6 Discussion In this paper we considered Narain CFTs on \(\Sigma\), and found that they are described by pure level-1 \((U(1)\times U(1))^{n}\) Chern-Simons theory on a 3d manifold \(\mathcal{M}\) with \(\partial\mathcal{M}=\Sigma\). The details of \(\mathcal{M}\) do not matter; all manifolds with \(\partial\mathcal{M}=\Sigma\) lead to the same partition function because the Hilbert space of \(k=1\) CS theory is one-dimensional. The two \(U(1)^{n}\) gauge fields are linked at the level of large gauge transformations. The choice of large gauge transformations, or, equivalently, the choice of boundary conditions changing the representation of the unique CS wavefunction, specifies the dual Narain theory. This provides a holographic duality, with the holographic dictionary as outlined in section 4.1. Our considerations were limited to genus one \(\Sigma\), but it should be straightforward to extend the duality to arbitrary genus. We then proceeded to consider an ensemble of Narain CFTs defined in terms of an ensemble of codes. We considered an ensemble of all even self-dual codes of length \(n\) over \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) for prime \(p\), and then embedded (mapped) these codes into the \(c=n\) Narain moduli space. The embedding is specified by an arbitrary \(\mathcal{O}\in O(n,n,\mathbb{R})\), thus any given code can be mapped to any given Narain theory. As \(n\) or \(p\) grows, the size of the ensemble given by (2.2) grows much faster than the dimension of \(O(n,n,\mathbb{R})\). Hypothetically, in the \(p\to\infty\) limit, the ensemble of code theories densely covers the whole Narain moduli space with the canonical measure. For fixed \(n\) and \(p\) we find that the CFT partition function averaged over this ensemble is given by the level-\(p\)\((U(1)\times U(1))^{n}\) Chern-Simons theory summed over all classes of handlebody topologies that are distinguished by that theory. The main identity (4.17), valid for any fixed \(n,p\) and fixed embedding, establishes an explicit relation between averaging over the code-based ensemble and the "Poincare series" representing the sum over topologies. Again, our explicit consideration was focused on genus one. One of the questions our construction answers is why the "U(1)-gravity" of [3; 4], though suggestive, has no well-defined microscopic bulk description. In section 4.4 we found that U(1)-gravity emerges as the \(p\to\infty\) limit of our construction, hence it is an infinite limit of a family of level-\(p\) pure Chern-Simons theories, which are all well-defined in the bulk. In our formalism the sum over bulk manifolds originates from a sum over \(SL(2,\mathbb{Z})\) transformations of a specific solid torus, and it is thus natural that we get a sum over just handlebodies and not other manifolds. Taking a leap to the holographic CFTs of [72], presumably dual to 3d quantum gravity with additional light matter, and the failure to find a dual to pure gravity due to intrinsic inconsistencies [60; 61; 73; 74; 75; 76], we can speculate that pure 3d quantum gravity might not be well defined by itself, but could emerge as an infinite limit of a family of well-defined theories. From the mathematical point of view, our main technical result, equation (4.16), deserves a better understanding. It would be interesting to extend it to higher genus [57] and to disconnected manifolds. 
More generally, rewriting this equation in terms of sums over cosets or in terms of Hecke operators as was done in section 4.6 hints at a deeper mathematical structure. Beyond the \(\mathbb{Z}_{k}\times\mathbb{Z}_{k}\) codes considered in this paper, the general code construction of [21] described in section 2.2 opens up possibilities for considering other types of code ensembles. Consideration of a variety of ensemble types could help answer a crucial question: when is an ensemble holographic, in the sense of admitting a bulk description in terms of a sum over geometries. When the central charge is large \(c\gg 1\), ensembles of code CFTs or the ensemble of all Narain theories are self-averaging: a random theory faithfully captures the ensemble average up to exponentially small (in \(c\)) corrections. This suggests that individual theories, at least the sufficiently typical ones, should admit a bulk description in terms of a sum over topologies. We outline such a description in section 5, but notice that it suffers from ambiguities and possibly non-local interactions in the bulk. It would be very interesting to explicitly relate this bulk description, which includes the sum over topologies, to the conventional holographic description in terms of level-1 CS theory on a fixed topology, discussed in section 4.1. Our work clarifies the role codes play in relation to CFTs and their holographic duals. We saw that all possible "words" label all possible wavefunctions in the bulk. We also saw that an ensemble of codes plays a crucial role in holography, although the reason why remains obscure. We emphasize that this is only one aspect of a more comprehensive story. We recall that the theory dual to the \(c=1\) compact scalar, the "AB" Chern-Simons theory, also emerges as a low energy limit of a 2+1 dimensional system describing Kitaev's toric code [77]. Is there a relation between the codes of this paper and the quantum codes underlying the "AB" theory? A first step connecting these two pictures was taken in [32] for the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) case, where classical additive codes can be understood as quantum codes. More progress followed recently, relating quantum codes (connected to \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) classical codes in our nomenclature) to CFTs and Chern-Simons theories [37, 39, 43], but a complete picture is yet to emerge. The codes considered in this work are of additive type; consequently the corresponding CFTs are Abelian. There is a natural generalization of our story to non-Abelian codes, WZW theories and dual non-Abelian Chern-Simons theory [78]. Going in the direction of gradually generalizing the type of CFTs under consideration, one hopes to eventually arrive at codes associated with the conventional "Virasoro" CFTs, dual to quantum gravity. We can only speculate that at this point a direct link may emerge between the code structure on the CFT side and the holographic codes responsible for the locality in the bulk [29, 79]. ###### Acknowledgments. We thank Ahmed Barbar, Nathan Benjamin, Debarghya Chakraborty, Mathew Dodelson, Daniel Jafferis, Johan Henriksson, Brian McPeak, Adam Schwimmer and Edward Witten for discussions. The work of OA was supported in part by an Israel Science Foundation (ISF) center for excellence grant (grant number 2289/18), by ISF grant no. 2159/22, by Simons Foundation grant 994296 (Simons Collaboration on Confinement and QCD Strings), by grant no. 
2018068 from the United States-Israel Binational Science Foundation (BSF), by the Minerva foundation with funding from the Federal German Ministry for Education and Research, by the German Research Foundation through a German-Israeli Project Cooperation (DIP) grant "Holography and the Swampland", and by a research grant from Martin Eisenstein. OA is the Samuel Sebba Professorial Chair of Pure and Applied Physics. A.D. is grateful to Weizmann Institute of Science for hospitality and acknowledges sabbatical support of the Schwartz/Reisman Institute for Theoretical Physics, and support by the NSF under grants PHY-2013812 and 2310426. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452. A.S. thanks the Insti tute for Advanced Study for hospitality and sabbatical support, and the Simons Center for Geometry and Physics for hospitality during the 2023 Simons Summer Workshop. ## Appendix A The compact scalar CFT The compact scalar CFT of radius \(R\) is a two-dimensional theory of a real scalar field \(X\), subject to the identification \(X\sim X+2\pi R\), coupled to external gauge fields. The free scalar theory has a \(U(1)_{L}\times U(1)_{R}\) global symmetry, and we call the corresponding charges \(Q\) and \(\bar{Q}\). We consider the Euclidean theory placed on a spacetime torus with modular parameter \(\tau\). The CFT partition function with fugacities (background gauge fields) \(\xi\) and \(\bar{\xi}\) is defined as a sum over the Hilbert space of the theory on a circle \[Z=\mbox{Tr}\,\left[q^{L_{0}-1/24}\bar{q}^{\bar{L}_{0}-1/24}e^{2 \pi i(\xi Q-\bar{\xi}\bar{Q})}\right],\qquad q=e^{2\pi i\tau}.\] (A.1) It can be readily evaluated [80] \[Z(\tau,\xi,\bar{\xi},R)=\frac{\sum_{n,m}e^{i\pi\tau p_{L}^{2}-i \pi\bar{\tau}p_{R}^{2}+2\pi i(p_{L}\xi-p_{R}\bar{\xi})}}{|\eta(\tau)|^{2}}, \quad p_{L,R}=\frac{n}{R}\pm\frac{mR}{2}.\] (A.2) The simultaneous reflection \(p_{R}\to-p_{R}\), \(\bar{\xi}\to-\bar{\xi}\) is a symmetry of \(Z\), which is the T-duality exchanging \(R\) and \(2/R\), \[Z(\tau,\xi,\bar{\xi},R)=Z(\tau,\xi,-\bar{\xi},2/R).\] (A.3) We would like to obtain the partition function (A.2) from the path integral formulation. We parametrize the spacetime torus by a complex coordinate \(z\), \[z\sim z+1,\qquad z\sim z+\tau,\qquad\tau=\tau_{1}+i\tau_{2},\] (A.4) with the notation \[\int d^{2}z=\tau_{2},\qquad\int dz\wedge d\bar{z}=-2i\tau,\] (A.5) where the integrals are over the torus. The scalar field \(X(z,\bar{z})\) is periodic up to identifications, \[X(z+1)=X(z)-2\pi Rn_{2},\quad X(z+\tau)=X(z)+2\pi Rn_{1}.\] (A.6) The sign of \(n_{2}\) is chosen for convenience. One way of coupling the theory to a background gauge field is by the action \[S[X,A]=\frac{1}{2\pi}\int d^{2}z\,\left|\partial_{z}X\right|^{2} -\frac{i}{2\pi R}\int dX\wedge A,\] (A.7) where \(A\) is an external \(U(1)\) gauge field coupling to a specific combination of the global symmetries and satisfying \(dA=0\). Using the background field gauge freedom we can choose \[\xi=\frac{\tau_{2}}{i\pi R}A_{\bar{z}},\qquad\bar{\xi}=-\frac{\tau_{2}}{i\pi R}A _{z} \tag{111}\] to be constant on the torus. For a real background field \(A\), \(\xi\) and \(\bar{\xi}\) are complex conjugate to each other, \(\xi^{*}=\bar{\xi}\). The theory is free (quadratic), so the partition function can be computed straightforwardly. 
We should sum over on-shell configurations satisfying (110), \[X=2\pi R\frac{(n_{1}+n_{2}\bar{\tau})z-(n_{1}+n_{2}\tau)\bar{z}} {2i\tau_{2}}, \tag{112}\] and the small fluctuations around the classical solutions contribute a multiplicative factor, that includes the Dedekind eta-function [80]. The full expression for the path integral is then \[Z_{\rm PI}(\tau,R,\xi,\bar{\xi})=\int\mathcal{D}X\,e^{-S[X,A]}=\] \[\frac{R}{\sqrt{2\tau_{2}}|\eta(\tau)|^{2}}\sum_{n_{1},n_{2}}e^{- \frac{\pi R^{2}}{2\tau_{2}}|n_{1}+n_{2}\tau|^{2}-\frac{\pi R}{\tau_{2}}\big{(} \xi(n_{1}+n_{2}\bar{\tau})-\bar{\xi}(n_{1}+n_{2}\tau)\big{)}}. \tag{113}\] Under large background gauge transformations \(A\to A+d\phi_{A}\), where \(\phi_{A}=\pi\frac{(n+m\bar{\tau})z-(n+m\tau)\bar{z}}{i\tau_{2}}\), we have from (111) \[\xi\to\xi+\frac{n+m\tau}{R},\qquad\bar{\xi}\to\xi+\frac{n+m\bar{\tau}}{R}, \tag{114}\] and the action (110) is shifted by an integer multiplied by \(2\pi i\). Hence, the Euclidean path integral is invariant under (114), which can be verified explicitly from (113). Similarly, \(Z_{\rm PI}\) is invariant under modular transformations generated by the two transformations \[\tau\to\tau+1,\quad\xi\to\xi,\qquad\bar{\xi}\to\bar{\xi}, \tag{115}\] \[\tau\to-1/\tau,\quad\xi\to\xi/\tau,\quad\bar{\xi}\to\bar{\xi}/ \bar{\tau},\] which is just the relabeling of spacetime coordinates, amended by a dilatation (which acts trivially since the theory is conformal). To find the relation between the path integral (113) and the partition function (109), we perform a Poisson resummation in (108) over \(n\), which readily yields \[Z=Z_{\rm PI}e^{-\frac{\pi}{2\tau_{2}}(\xi-\bar{\xi})^{2}}. \tag{116}\] Alternatively we can couple a background gauge field \(B\) to a different combination of the \(U(1)\) global symmetries by \[S^{\prime}[X,B]=\frac{1}{2\pi}\int d^{2}z\,|(\partial_{z}+R\,B_{z})X|^{2}. \tag{111}\] We assume \(dB=0\) and use the background gauge symmetry to parametrize the background gauge field by \[\xi=\frac{\tau_{2}}{i\pi}\frac{R}{2}B_{\bar{z}},\qquad\bar{\xi}=\frac{\tau_{2} }{i\pi}\frac{R}{2}B_{z}. \tag{112}\] In this case for real \(B\) we have \(\xi^{*}=-\bar{\xi}\), and large gauge transformations take \(B\to B+d\phi_{B}\), where \(\phi_{B}=\pi\frac{(p+q\bar{\tau})z-(p+q\tau)\bar{z}}{i\tau_{2}}\), and act as \[\xi\to\xi+\frac{(p+q\tau)R}{2},\qquad\bar{\xi}\to\bar{\xi}-\frac{(p+q\bar{\tau })R}{2}. \tag{113}\] Clearly, the path integral \[Z^{\prime}_{\rm PI}(\tau,R,\xi,\bar{\xi})=\int{\cal D}X\,e^{-S^{ \prime}[X,B]}=\] \[\frac{R}{\sqrt{2\tau_{2}}|\eta(\tau)|^{2}}\sum_{n_{1},n_{2}}e^{- \frac{\pi R^{2}}{2\tau_{2}}|n_{1}+n_{2}\tau|^{2}-\frac{\pi R}{\tau_{2}}\left( \xi(n_{1}+n_{2}\bar{\tau})-\bar{\xi}(n_{1}+n_{2}\tau)\right)+\frac{2\pi\xi\bar {\xi}}{\tau_{2}}} \tag{114}\] is invariant under large gauge transformations (113), and is also modular invariant. A comparison with \(Z_{\rm PI}\) yields \(Z_{\rm PI}=Z_{\rm PI^{\prime}}\,e^{\frac{2\pi}{\tau_{2}}\bar{\xi}\bar{\xi}}\), and \[Z=Z_{\rm PI^{\prime}}e^{-\frac{\pi}{2\tau_{2}}(\xi+\bar{\xi})^{2}}, \tag{115}\] in agreement with Appendix A of [49]. 
The two path integrals above are particular sections \(\xi^{*}=\pm\bar{\xi}\) of a more general theory coupled to two gauge fields, \(A\) and \(B\), combined into one complex combination \[S=\frac{1}{2\pi}\int d^{2}z|\partial X|^{2}-\frac{i}{2\pi}\int dX \wedge{\cal A}+\frac{\kappa}{\pi}\int d^{2}z\,{\cal A}^{2}, \tag{116}\] \[{\cal A}=\frac{A}{R}+i*B\frac{R}{2},\qquad{\cal A}^{2}\equiv{ \cal A}_{z}{\cal A}_{\bar{z}}.\] Taking \(\kappa=0\) or \(2\) gives complexifications of the two path integrals with the actions (112) and (111) above. As follows from (106) and (110),(115) these two values are T-dual to each other \[Z_{\rm PI}(\tau,\xi,\bar{\xi},R)=Z^{\prime}_{\rm PI}(\tau,\xi,-\bar{\xi},2/R), \tag{117}\] and each is invariant under one group of large gauge transformations, (A.11) or (A.16). The "symmetric" choice \(\kappa=1\) corresponds to the _bulk path integral_ discussed in the bulk of the paper \[Z_{BPI}(\tau,\xi,\bar{\xi})=Z(\tau,\xi,\bar{\xi})\,e^{\frac{\pi}{2 \tau_{2}}(\xi^{2}+\bar{\xi}^{2})}.\] (A.21) It is both modular-invariant and T-duality-invariant, and it changes covariantly (but is not invariant) under large gauge transformations of the form \[\xi=\frac{\tau_{2}}{i\pi}\left(\frac{A_{\bar{z}}}{R}+\frac{B_{ \bar{z}}R}{2}\right), \bar{\xi}=-\frac{\tau_{2}}{i\pi}\left(\frac{A_{z}}{R}-\frac{B_{z }R}{2}\right),\] (A.22) \[\xi\to\xi+\frac{n+m\tau}{R}+\frac{(p+q\tau)R}{2}, \bar{\xi}\to\xi+\frac{n+m\bar{\tau}}{R}-\frac{(p+q\bar{\tau})R}{2}.\] (A.23) ## Appendix B Chern-Simons theory: technical details In this section we provide additional details accompanying section 3. Our starting point is the \(U(1)\) gauge field \(A\) living on a three-manifold \(\mathcal{M}\) as in subsection 3.2. We focus on the case when \(\partial\mathcal{M}\) is a two-dimensional torus, with the same notation as in Appendix A above. The bulk theory is invariant under large gauge transformations \(A\to A+\omega\) in (3.8) when \(\omega\) is a canonically normalized cohomology on \(\partial\mathcal{M}\), namely \(\omega=d\phi\) where \(\phi\) is a multi-valued function winding along the cycles of \(\partial\mathcal{M}\). When \(\partial\mathcal{M}\) is a two-dimensional torus as above, we have explicitly \(\phi=2\pi\frac{(n+m\bar{\tau})z-(n+m\tau)\bar{z}}{2i\tau_{2}}\), from where (3.7) follows. Taking \(A_{\bar{z}}=0\) at the boundary for simplicity, two consecutive large gauge transformations \(\omega\) and \(\omega^{\prime}\) change the Chern-Simons action (3.1) by \[-\frac{ik}{4\pi}\int\limits_{\partial\mathcal{M}}\omega\wedge \omega^{\prime}=-i\pi k(nm^{\prime}-n^{\prime}m).\] (B.1) Thus the bulk theory is gauge-invariant for even \(k\), while pure sign phase factors appear for odd \(k\), related to the need to choose a spin structure. In the \(U(1)\times U(1)\) case of subsection 3.3, there are two gauge fields \(A,B\) subject to large gauge transformations \(A\to A+\omega_{A}\), \(B\to B+\omega_{B}\), where \(\omega_{A,B}=d\phi_{A,B}\) are defined in Appendix A above. One can imagine splitting \(\mathcal{M}\) into two parts by a hypersurface. Imposing boundary conditions on the surface, and then integrating over them, should remove the split. 
This leads to the scalar product \(\langle\Psi|\Psi^{\prime}\rangle\) discussed in the main text [48], and the wave functions (3.11) discussed there form an orthogonal basis, \[\langle\Psi_{r}|\Psi_{r^{\prime}}\rangle=\int\frac{d^{2}\xi}{ \tau_{2}}\,(\Psi_{r}(\xi))^{*}e^{-\frac{k\pi}{\tau_{2}}|\xi|^{2}}\Psi_{r^{ \prime}}(\xi)=\sqrt{\frac{1}{2k\tau_{2}}}\,\frac{\delta_{r,r^{\prime}}}{|\eta (\tau)|^{2}}.\] (B.2) The integral here is over the torus of possible boundary conditions, defined by the large gauge transformations (3.7), \(\xi\sim\xi+n+m\tau\). In the case of \((U(1)\times U(1))^{n}\) discussed in section 3.4 above, the wavefunctions (3.28) also satisfy an orthogonality condition \[\int\frac{d^{2n}\xi\,d^{2n}\bar{\xi}}{\tau_{2}^{2n}}(\Psi_{c}(\xi,\bar{\xi}))^{ *}e^{-\frac{\pi}{\tau_{2}}(|\xi|^{2}+|\bar{\xi}|^{2})}\Psi_{c^{\prime}}(\xi, \bar{\xi})=\delta_{c,c^{\prime}}\frac{1}{(2|\mathsf{G}_{\Lambda}|^{1/2})^{n}} \frac{1}{\tau_{2}^{n}|\eta(\tau)|^{4n}},\] (B.3) where the integral is over the torus in the space of \(\xi,\bar{\xi}\) variables defined by (3.27). To obtain the explicit form of the Wilson loop operators acting on the wavefunction in the holomorphic representation (3.12), we take into account that \(A_{\bar{z}}\) (understood as a quantum operator) acts on ket vectors by multiplication, \(A_{\bar{z}}|\Psi\rangle=\frac{i\pi\xi}{\tau_{2}}\Psi(\xi)\), and hence \(A_{z}\) acts on bra vectors analogously \(\langle\Psi|A_{z}=(\Psi(\xi)\frac{i\pi}{\tau_{2}}\xi)^{*}\). From here and by integrating by parts in (B.2) we find \[A_{z}|\Psi\rangle=\frac{-i}{k}\frac{\partial}{\partial\xi}\Psi(\xi),\] (B.4) which is used in (3.12). As we explained in the main text, this is in agreement with \(A_{z}\) being canonically conjugate to \(A_{\bar{z}}\), as follows from the Chern-Simons equations of motion [47]. ## Appendix C Narain \(c=2\) theories The partition function \(Z_{c=2}(\tau,t,b)\) of a central charge \(c=2\) Narain theory depends on three modular parameters, \(\tau\) and \(t,b\) introduced in (4.27). It can be written explicitly using the representation (5.2): \[Z_{c=2}(\tau,t,b)=\frac{b_{2}}{\tau_{2}|\eta(\tau)|^{4}}\sum_{\vec{n},\vec{m} \in\mathbb{Z}^{2}}e^{-\frac{\pi}{\tau_{2}}|\gamma(\vec{n}\tau+\vec{m})|^{2}-2 \pi i\,b_{1}\vec{n}\wedge\vec{m}}.\] (C.1) The moduli space of \(c=2\) Narain theories is a product of two fundamental domains of \(t\) and \(b\), with the canonical \(SL(2,\mathbb{Z})\)-invariant measure, modulo \(\mathbb{Z}_{2}\) exchange symmetry. It is convenient to introduce \[\Theta_{c=2}(\tau,t,b)=|\eta(\tau)|^{4}Z_{c=2}(\tau,t,b),\] (C.2) which is modular invariant under \(t,b\) and is a weight 2 modular form with respect to \(\tau\). The modular invariant combination \(\hat{Z}=\tau_{2}\Theta_{c=2}\) is the partition function of primaries; it exhibits triality - full permutation symmetry under its arguments \(\tau,t,b\)[9] - which is not manifest in the representation (C.1). ### All even self-dual \(n=2\) codes over \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) There are \(2(p+1)\) even self-dual codes over \(\mathbb{Z}_{p}\times\mathbb{Z}_{p}\) with prime \(p\), which can be split into 2 families. The first \(p\) codes are generated through \[\mathcal{C}\ni c=(\vec{a},\vec{b})=G^{T}q,\quad q\in\mathbb{Z}_{p}^{2}, \tag{109}\] by the following matrix \[G=(I,\mathrm{B}^{T}),\qquad\mathrm{B}=r\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\,\mathrm{mod}\,p,\qquad 0\leq r<p. \tag{110}\] One more code is generated by the matrix \(G=(0,I)\). 
Another \(p+1\) codes are obtained from the previous ones by exchanging \(a_{2}\) and \(b_{2}\). ### Hecke operators and triality The \(2(p+1)\) codes described above, once promoted to code CFTs, can be described as two families of \(p+1\) theories, specified by modular parameters \(t=\frac{r+t_{0}}{p}\) and \(t=p\,t_{0}\) with fixed \(b=b_{0}\), and \(b=\frac{r+b_{0}}{p}\) and \(b=p\,b_{0}\) with fixed \(t=t_{0}\), where \(0\leq r<p\). The sum over the latter (fixed \(t=t_{0}\)) series is easy to reformulate using the representation of the partition function (106). We start with \[\frac{1}{p}\sum_{k=0}^{p-1}\Theta_{c=2}\left(\tau,t_{0},\frac{b_{0}+k}{p}\right) \tag{111}\] and immediately conclude from (106) that the role of the sum over \(k\) is to impose \(n\wedge m=0\,\mathrm{mod}\,p\). This constraint means that the vectors \(n\,\mathrm{mod}\,p\) and \(m\,\mathrm{mod}\,p\), understood as vectors in \(\mathbb{Z}_{p}^{2}\), are collinear. Thus, when \(p\) is prime, we can write \[m=r\,n+p\tilde{m},\quad\tilde{m}\in\mathbb{Z}^{2}, \tag{112}\] for some integer \(0\leq r<p\), unless \(n=0\,\mathrm{mod}\,p\), in which case \(n=p\,\tilde{n}\) for arbitrary \(\tilde{n},m\in\mathbb{Z}^{2}\). First we consider the latter case, and find \[\frac{b_{2}}{p\,\tau_{2}}\sum_{\tilde{n},m\in\mathbb{Z}^{2}}e^{-\frac{\pi}{ \tau_{2}}|p^{-1/2}\gamma(p\tilde{n}\tau+m)|^{2}-2\pi i\,\frac{b_{1}}{p}p\, \tilde{n}\wedge m}=\Theta_{c=2}(p\,\tau,t,b). \tag{113}\] Next, we consider the case (112), \[\sum_{r=0}^{p-1}\frac{b_{2}}{p\,\tau_{2}}\left(\sum_{n,\tilde{m}\in\mathbb{Z} ^{2}}e^{-\frac{\pi}{\tau_{2}}|p^{-1/2}\gamma(n\tau+rn+p\tilde{m})|^{2}-2\pi i \,\frac{b_{1}}{p}p\,n\wedge\tilde{m}}-\sum_{\tilde{n},\tilde{m}\in\mathbb{Z} ^{2}}e^{-\frac{\pi}{\tau_{2}}|p^{1/2}\gamma(\tilde{n}\tau+rn)|^{2}-2\pi i\, \frac{b_{1}}{p}p^{2}\,\tilde{n}\wedge\tilde{m}}\right),\] where we explicitly subtracted the terms when both \(n,m=0\,{\rm mod}\,p\), which were already covered in (107). We can easily recognize these terms to be \[\sum_{r=0}^{p-1}\frac{1}{p^{2}}\Theta_{c=2}\left(\frac{\tau+r}{p},t,b\right)- \frac{1}{p}\Theta_{c=2}(\tau,t,p\,b). \tag{108}\] Combining (105),(107),(108) we find, in terms of the Hecke operator (101), \[\frac{1}{p}T_{p}^{\tau}\,\Theta_{c=2}=T_{p}^{b}\,\Theta_{c=2}. \tag{109}\] We label each Hecke operator by the variable it acts on; since \(\Theta_{c=2}\) is a modular form of weight \(2\) with respect to \(\tau\) and weight \(0\) with respect to \(b\), \(T_{p}^{\tau}\) and \(T_{p}^{b}\) act differently. The left-hand-side of (109) is manifestly invariant under the exchange of \(t\) and \(b\), thus \(T_{p}^{\tau}\,\Theta_{c=2}=p\,T_{p}^{t}\,\Theta_{c=2}=p\,T_{p}^{b}\,\Theta_{c=2}\). This relation implies that the Hecke operators for \(\tau\), \(b\), and \(t\) act on \(\hat{Z}\) in a triality-symmetric way \[T_{p}^{\tau}\,\hat{Z}=T_{p}^{t}\,\hat{Z}=T_{p}^{b}\,\hat{Z}. \tag{110}\] This identity also follows directly from the representation in equation (104) of [9], which makes the triality explicit, if one takes into account that the Eisenstein series \(E_{k}\) and the Maas cusp forms are eigenfunctions of \(T_{p}\), and \(T_{p}\) acts on the pseudo-modular form \(E_{2}\) by shifting it by a constant, \(T_{p}E_{2}=(p+1)E_{2}+{\rm const}\). 
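The \(\mathbb{Z}_{2}\) of triality that exchanges \(t\) and \(b\), and that relates the two families of codes above, can be tested numerically from the lattice representation (C.1), with \(\gamma\) built from \((t,b)\) as in (4.27). The sketch below evaluates the (finitely truncated) sum at an arbitrary point \((\tau,t,b)\) and at the exchanged point \((\tau,b,t)\); the common factor \(1/|\eta(\tau)|^{4}\) is dropped since it cancels in the comparison, the chosen moduli and truncation are arbitrary, and the agreement holds only up to the truncation error.

```python
# Numerical test of the Z_2 of triality exchanging t and b, using the lattice
# representation (C.1).  The common 1/|eta(tau)|^4 factor is dropped since it
# cancels in the comparison.
import cmath, math

def narain_c2(tau, t, b, nmax=8):
    t1, t2 = t.real, t.imag
    b1, b2 = b.real, b.imag
    tau2 = tau.imag
    scale = math.sqrt(b2 / t2)
    s = 0.0
    for n1 in range(-nmax, nmax + 1):
        for n2 in range(-nmax, nmax + 1):
            for m1 in range(-nmax, nmax + 1):
                for m2 in range(-nmax, nmax + 1):
                    v1 = n1 * tau + m1
                    v2 = n2 * tau + m2
                    # gamma acts as (v1, v2) -> sqrt(b2/t2) (v1 + t1 v2, t2 v2)
                    w1 = scale * (v1 + t1 * v2)
                    w2 = scale * t2 * v2
                    norm2 = abs(w1) ** 2 + abs(w2) ** 2
                    wedge = n1 * m2 - n2 * m1
                    s += cmath.exp(-math.pi * norm2 / tau2
                                   - 2j * math.pi * b1 * wedge)
    return b2 / tau2 * s

tau = complex(0.2, 1.1)
t = complex(0.37, 1.45)
b = complex(-0.28, 0.81)
Z_tb = narain_c2(tau, t, b)
Z_bt = narain_c2(tau, b, t)
print(abs(Z_tb - Z_bt) / abs(Z_tb))   # expected small: the t <-> b exchange symmetry
```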
### Averaging over the moduli space The average of \(Z_{c=2}(\tau,t,b)\) over the moduli space was considered in [66], where the integral over the fundamental domain of \(\tau\) was regularized and evaluated to be \[\langle\hat{Z}\rangle_{\tau}=\frac{3}{\pi}\int\frac{d^{2}\tau}{\tau_{2}^{2}}(\tau_{2}\Theta_{c=2})=\frac{3}{\pi}\left(\ln(\frac{N_{\tau}}{N_{0}})-\ln(t_{2}|\eta(t)|^{4})-\ln(b_{2}|\eta(b)|^{4})\right), \tag{C.11}\] where \(N_{\tau}\to\infty\) is a regulator and \(N_{0}\) is some constant. Here the integral over \(\tau\) is over the "keyhole" fundamental domain, which has volume \(\pi/3\). To compare with the code ensemble in the \(p\to\infty\) limit, we are interested in a different average, over the fundamental domains of \(t\) or \(b\), \[\langle Z_{c=2}\rangle_{t}=\frac{3}{\pi}\int\frac{d^{2}t}{t_{2}^{2}}Z_{c=2}(\tau,t,b). \tag{C.12}\] It is in principle related to (C.11) by triality. Since the latter is not manifest in (C.11), we perform this calculation below. Many of the technical steps will mirror similar steps in [66], in particular the splitting of the sum in (C.1) into three different contributions coming from the origin \(\vec{n}=\vec{m}=0\), the "single-vector" points \(\vec{n}\parallel\vec{m}\), and the "two-vector" points \(\vec{n}\nparallel\vec{m}\). #### The origin The contribution of the origin is simply \[\frac{b_{2}}{\tau_{2}|\eta(\tau)|^{4}}.\] (C.13) It is \(t\)-independent. Obviously, it remains the same after averaging over \(t\). #### Contribution of the single vector orbits The starting point is to parametrize collinear \(\vec{n}=c\,\vec{\ell}\) and \(\vec{m}=d\,\vec{\ell}\) using a co-prime pair \((c,d)=1\) and an arbitrary non-zero vector \(\vec{\ell}\in\mathbb{Z}^{2}\). The resulting sum is \[\sum_{(c,d)=1}\frac{b_{2}}{\tau_{2}|\eta(\tau)|^{4}}\sum_{\vec{\ell}\in\mathbb{Z}^{2}}e^{-\frac{\pi}{\tau_{2}^{\prime}}|\gamma\vec{\ell}|^{2}},\qquad\tau_{2}^{\prime}(c,d)=\frac{\tau_{2}}{|c\tau+d|^{2}}.\] (C.14) Next, we parametrize a non-zero \(\vec{\ell}=(\tilde{d},\tilde{c})k\) with co-prime \((\tilde{c},\tilde{d})=1\) and an arbitrary non-zero integer \(k\) to find \[\sum_{\vec{\ell}\in\mathbb{Z}^{2}}e^{-\frac{\pi}{\tau_{2}^{\prime}}|\gamma\vec{\ell}|^{2}}=\sum_{(\tilde{c},\tilde{d})=1}\sum_{k\neq 0}e^{-\frac{\pi b_{2}k^{2}}{\tau_{2}^{\prime}t_{2}}|\tilde{c}t+\tilde{d}|^{2}}.\] (C.15) Here we readily recognize \(t_{2}^{\prime}(\tilde{c},\tilde{d})=t_{2}/|\tilde{c}t+\tilde{d}|^{2}\) as being generated by a modular transformation of \(t\). Though originally \(t\) belonged to the fundamental "keyhole domain", the sum over co-prime pairs \((\tilde{c},\tilde{d})=1\) extends the range of \(t^{\prime}\) to the entire strip \(|t_{1}|\leq 1/2\), \(t_{2}\geq 0\). Averaging over \(t\) thus gives \[\left\langle\sum_{\vec{\ell}\in\mathbb{Z}^{2}}e^{-\frac{\pi}{\tau_{2}^{\prime}}|\gamma\vec{\ell}|^{2}}\right\rangle_{t}=\frac{3}{\pi}\int_{-1/2}^{1/2}dt_{1}^{\prime}\int_{0}^{\infty}\frac{dt_{2}^{\prime}}{(t_{2}^{\prime})^{2}}\sum_{k\neq 0}e^{-\frac{\pi b_{2}}{\tau_{2}^{\prime}t_{2}^{\prime}}k^{2}}=\frac{\tau_{2}^{\prime}}{b_{2}}.\] (C.16) Going back to (C.14), we find the single-vector contribution, averaged over the fundamental domain of \(t\), to be \[\sum_{(c,d)=1}\frac{1}{|\eta(\tau)|^{4}|c\tau+d|^{2}}.\] (C.17) Of course this is merely a formal expression as it is divergent. Following [66] we can regularize it by multiplying (C.15) by \((1-e^{-N_{t}/t_{2}^{\prime}})\) where \(N_{t}\to\infty\) is a regulator.
As a result we have instead of (C.17) \[\frac{3}{\pi^{2}|\eta(\tau)|^{4}}\sum_{(c,d)=1}\sum_{k\neq 0} \left(\frac{1}{k^{2}|c\tau+d|^{2}}-\frac{1}{k^{2}|c\tau+d|^{2}+ \frac{N_{t}\tau_{2}}{\pi b_{2}}}\right)\] \[=\frac{3}{\pi\tau_{2}|\eta(\tau)|^{4}}\left(-\ln(\tau_{2}|\eta( \tau)|^{4})-\ln(b_{2})+2\gamma+\ln(\frac{N_{t}}{4\pi})\right).\] It is interesting to note that the finite part in (C.18), which is essentially the Eisenstein series (C.17) regularized with help of a Pauli-Villars-like approach, matches with the one obtained from the Kronecker limit formula, \[\sum_{(c,d)\neq(0,0)}\frac{\tau_{2}^{s}}{|c\tau+d|^{2s}}=\frac{\pi}{s-1}+2\pi( \gamma-\ln(2))-\pi\ln(\tau_{2}|\eta(\tau)|^{4})+o(s-1),\] (C.19) which is akin to dimensional regularization. #### Contribution of the two vector orbits Our final step is the two-vector contribution with non-collinear \(\vec{n}\) and \(\vec{m}\). We can start with the same parametrization as above, \(\vec{n}=(\tilde{d},\tilde{c})k\) with co-prime \((\tilde{c},\tilde{d})=1\) and nonzero \(k\). Then by applying a (non-unique) \(SL(2,\mathbb{Z})\) transformation parametrized by \((\tilde{c},\tilde{d})\) acting on the vectors \(\vec{n}\) and \(\vec{m}\) we can bring the first vector to the form \(\vec{n}=(k,0)\): \[\sum_{(\tilde{c},\tilde{d})=1}\sum_{k\neq 0}\sum_{\vec{m}}e^{-\frac{\pi}{\tau_{ 2}}|\gamma^{\prime}(\vec{n}\tau+\vec{m})|^{2}-2\pi ib_{1}n\wedge m}.\] (C.20) Here the matrix \(\gamma^{\prime}\) is defined the same way as in (4.27), but with \(t\) transformed by an \(SL(2,\mathbb{Z})\) matrix parametrized by \((\tilde{c},\tilde{d})\). As in the previous subsection, the sum over \((\tilde{c},\tilde{d})\) extends the domain of \(t\) from the "keyhole" region to the strip \(|t_{1}|\leq 1/2\), \(t_{2}>0\). Let us now write \(\vec{m}=(d^{\prime},c^{\prime})\). Then the two-vector contribution averaged over the fundamental region of \(t\) is \[\frac{3}{\pi}\int_{-1/2}^{1/2}dt^{\prime}_{1}\int_{0}^{\infty}\frac{dt^{\prime }_{2}}{(t^{\prime}_{2})^{2}}\frac{b_{2}}{\tau_{2}|\eta(\tau)|^{4}}\sum_{k\neq 0} \sum_{c^{\prime}\neq 0,\,d^{\prime}}e^{-\frac{\pi b_{2}}{\tau_{2}t^{\prime}_{2}}(|k \tau+c^{\prime}t^{\prime}_{1}+d^{\prime}|^{2}+(c^{\prime}t^{\prime}_{2})^{2} )-2\pi i\,b_{1}kc^{\prime}}.\] (C.21) In the sum above we must keep \(c^{\prime}\neq 0\) lest the vectors \(\vec{n},\vec{m}\) become collinear. The sum over \(d^{\prime}\) is not restricted. We can represent it as \(d^{\prime}=c^{\prime}r+d^{\prime\prime}\) where \(r\in\mathbb{Z}\) and \(d^{\prime\prime}\) is an integer between \(0\) and \(c^{\prime}-1\). We can now combine \[c^{\prime}t^{\prime}_{1}+d^{\prime}=c^{\prime}(t^{\prime}_{1}+r)+d^{\prime \prime},\] (C.22) and the sum over \(r\) plus the integral over the strip \(|t_{1}|\leq 1/2\), \(t_{2}>0\) become an integral over the whole upper half-plane of \(t^{\prime}\). The dependence on \(d^{\prime\prime}\) disappears and the sum over \(d^{\prime\prime}\) simply gives a factor of \(|c^{\prime}|\), \[\frac{3}{\pi}\int_{0}^{\infty}\frac{dt^{\prime}_{2}}{(t^{\prime}_{2})^{2}} \frac{\sqrt{b_{2}t^{\prime}_{2}}}{\sqrt{\tau_{2}}|\eta(\tau)|^{4}}\sum_{k\neq 0 }\sum_{c^{\prime}\neq 0}e^{-\frac{\pi b_{2}}{\tau_{2}t^{\prime}_{2}}((k\tau_{2})^{2} +(c^{\prime}t^{\prime}_{2})^{2})-2\pi i\,b_{1}kc^{\prime}}.\] (C.23) At this point we can integrate over \(t_{2}\), \[\frac{3}{\pi}\frac{1}{\tau_{2}|\eta(\tau)|^{4}}\sum_{k\neq 0}\sum_{c^{\prime} \neq 0}e^{-2\pi b_{2}|kc^{\prime}|-2\pi i\,b_{1}kc^{\prime}}\frac{1}{|k|}. 
\tag{C.24}\] There are four "branches" with positive and negative \(k\) and \(c^{\prime}\), which we combine into a sum of the form \[\frac{6}{\pi\,\tau_{2}|\eta(\tau)|^{4}}\sum_{k>0}\sum_{c^{\prime}>0}\frac{e^{2\pi i\,b\,kc^{\prime}}+e^{2\pi i\,\tilde{b}\,kc^{\prime}}}{k}. \tag{C.25}\] Now we introduce \(q_{b}=e^{2\pi ib}\) and sum over \(k\) using \[\sum_{c^{\prime},k>0}\frac{q_{b}^{kc^{\prime}}}{k}+\text{c.c}=-\ln\prod_{c^{\prime}=1}^{\infty}(1-q_{b}^{c^{\prime}})+\text{c.c}=\frac{i\pi}{12}b-\ln(\eta(b))+\text{c.c} \tag{C.26}\] Finally, we find for the two-vector contribution, averaged over \(t\), \[\frac{1}{\tau_{2}|\eta(\tau)|^{4}}\left(-b_{2}-\frac{3}{\pi}\ln|\eta(b)|^{4}\right). \tag{C.27}\] Combining everything together, we find that the first term in (C.27) exactly cancels the "origin" contribution (C.13). Hence, \(Z_{c=2}\) averaged over the modular parameter \(t\) and covariantly regularized is \[\langle Z_{c=2}(\tau,t,b)\rangle_{t}=\frac{3}{\pi\tau_{2}|\eta(\tau)|^{4}}\left(\ln(\frac{N_{t}}{N_{0}})-\ln(\tau_{2}|\eta(\tau)|^{4})-\ln(b_{2}|\eta(b)|^{4})\right), \tag{C.28}\] where \(N_{t}\to\infty\) is a regulator and \(N_{0}=4\pi e^{-2\gamma}\). The final expression is in full agreement with (C.11). ### Large-\(p\) limit To evaluate the large-\(p\) limit of \(T_{p}\hat{Z}\) we first approximate it by the regularized integral over the fundamental domain \[T_{p}^{\tau}(\hat{Z})\approx\frac{3}{\pi}\int_{\mathcal{F}}\frac{d^{2}\tau^{\prime}}{(\tau_{2}^{\prime})^{2}}\hat{Z}(\tau^{\prime})\left(1-e^{-N/\tau_{2}^{\prime}}\right). \tag{C.29}\] The value of \(N\) can be fixed as follows. Modular transformations mapping \((\tau+k)/p\) back to the fundamental keyhole domain \(\mathcal{F}\) will be more dense in the region of small \(\tau_{2}^{\prime}\), with only one point reaching the maximal value of \(\tau_{2}^{\prime}=p/\tau_{2}\). Thus we can take \(N\propto p/\tau_{2}\), leading to, cf. (C.11), \[\frac{3}{\pi}\left(\ln(p/p_{0})-\ln(\tau_{2})-\ln(t_{2}|\eta(t)|^{4})-\ln(b_{2}|\eta(b)|^{4})\right)+\ldots \tag{C.30}\] This expression is not modular invariant with respect to \(\tau\), although the left-hand side of (C.29) is, which suggests there might be additional \(\tau\)-dependent finite terms. We therefore conjecture \[T_{p}^{\tau}(\hat{Z}(\tau))=\frac{3}{\pi}\ln(p/p_{0})-\frac{3}{\pi}\ln(t_{2}|\eta(t)|^{4})-\frac{3}{\pi}\ln(b_{2}|\eta(b)|^{4})+f(\tau)+O(1/p), \tag{C.31}\] where the crucial assumption is that \(f(\tau)\) does not depend on \(t\) and \(b\). The rest follows from the extension of triality (110), \[f(\tau)=-\frac{3}{\pi}\ln(\tau_{2}|\eta(\tau)|^{4}). \tag{C.32}\]
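The approximation used in this subsection rests on the equidistribution of the \(p+1\) Hecke points \(\{p\tau,(\tau+r)/p\}\) over the fundamental domain. For a bounded, square-integrable invariant such as \(f(\tau)=\tau_{2}|\eta(\tau)|^{4}\) this can be seen directly: the plain average of \(f\) over the Hecke points becomes (slowly) independent of the base point \(\tau\) as \(p\) grows, approaching the normalized integral of \(f\) over \(\mathcal{F}\). The sketch below is illustrative only; the test function, base points and primes are arbitrary choices, and the convergence is limited by the \(p^{-9/28+\epsilon}\) bound quoted earlier.

```python
# Illustration of Hecke-point equidistribution underlying the large-p estimate:
# the plain average of a bounded invariant f over the p+1 points {p tau, (tau+r)/p}
# becomes approximately independent of tau as p grows.
import cmath

def eta(tau, nmax=60):
    q = cmath.exp(2j * cmath.pi * tau)
    out = cmath.exp(2j * cmath.pi * tau / 24)
    for n in range(1, nmax + 1):
        out *= 1 - q ** n
    return out

def reduce_to_keyhole(tau):
    """Map tau into the standard keyhole domain with T: tau -> tau+1, S: tau -> -1/tau."""
    for _ in range(10000):
        tau = complex(tau.real - round(tau.real), tau.imag)
        if abs(tau) < 1:
            tau = -1 / tau
        else:
            return tau
    return tau

def f(tau):
    """Bounded, square-integrable modular invariant: tau_2 |eta(tau)|^4."""
    tau = reduce_to_keyhole(tau)
    return tau.imag * abs(eta(tau)) ** 4

def hecke_point_average(tau, p):
    vals = [f(p * tau)] + [f((tau + r) / p) for r in range(p)]
    return sum(vals) / (p + 1)

for tau in (complex(0.3, 0.8), complex(-0.1, 2.0)):
    for p in (11, 101, 1009):
        print(tau, p, hecke_point_average(tau, p))
# For increasing p the printed averages should slowly approach a common,
# tau-independent value (the convergence rate is limited by the bound above).
```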
2305.14878
Leveraging GPT-4 for Automatic Translation Post-Editing
While Neural Machine Translation (NMT) represents the leading approach to Machine Translation (MT), the outputs of NMT models still require translation post-editing to rectify errors and enhance quality under critical settings. In this work, we formalize the task of direct translation post-editing with Large Language Models (LLMs) and explore the use of GPT-4 to automatically post-edit NMT outputs across several language pairs. Our results demonstrate that GPT-4 is adept at translation post-editing, producing meaningful and trustworthy edits to translations that help improve its general quality as well as remove different classes of major errors in translations. In particular, human evaluations on assessing edit trustworthiness show that GPT-4 exhibits a large improvement over the prior state-of-the-art LLM. Notably, we improve upon state-of-the-art performance on WMT-22 English-Chinese, English-German, Chinese-English and German-English language pairs using GPT-4 based post-editing, as evaluated by state-of-the-art MT quality metrics. However, we also show that GPT-4 could produce hallucinated edits, thereby urging caution in its use as an expert translation post-editor.
Vikas Raunak, Amr Sharaf, Yiren Wang, Hany Hassan Awadallah, Arul Menezes
2023-05-24T08:30:05Z
http://arxiv.org/abs/2305.14878v2
# Leveraging GPT-4 for Automatic Translation Post-Editing ###### Abstract While Neural Machine Translation (NMT) represents the leading approach to Machine Translation (MT), the outputs of NMT models still require translation post-editing to rectify errors and enhance quality, particularly under critical settings. In this work, we formalize the task of translation post-editing with Large Language Models (LLMs) and explore the use of GPT-4 to automatically post-edit NMT outputs across several language pairs. Our results demonstrate that GPT-4 is adept at translation post-editing and produces meaningful edits even when the target language is not English. Notably, we achieve state-of-the-art performance on WMT-22 English-Chinese, English-German, Chinese-English and German-English language pairs using GPT-4 based post-editing, as evaluated by state-of-the-art MT quality metrics. ## 1 Introduction State of the art Neural Machine Translation (NMT) models, trained on web-mined parallel corpora suffer from reliability problems even for high resource language pairs, despite high average case performance [14, 15, 23, 26, 17, 16]. Thereby, post-editing neural machine translations remains an important exercise for their use in critical settings. As such, a relevant question to ask is whether Large Language Models (LLMs) such as GPT-3, GPT-4 and PaLM, PaLM2 [15, 23, 16], which have demonstrated a wide-range of general purpose reasoning as well as knowledge-based capabilities, could be leveraged for the task of translation post-editing. Post-editing of translations obtained from MT models is a staple task across the translation and localization industry, with higher quality translations obtained from NMT models leading to reduced post-editing time [23]. However, a number of prior works have demonstrated that the parallel data and model training artifacts in NMT could manifest in terms of catastrophic outputs in rare cases, and the detection of such egregious model behaviors remains a challenging task [14, 13, 15, 16, 17, 18, 19]. LLM based automatic translation post-editing could aid in detecting and fixing such errors to ensure greater reliability of NMT outputs. Besides alleviating reliability problems in NMT, there are a couple of reasons as to why leveraging LLMs for post-editing could be opportune, namely, the advanced multi-lingual understanding capabilities of latest LLMs [15] and potentially, their ability to apply desirable knowledge-based or culture-specific customization to translations [1]. In this work, we explore the efficacy of state-of-the-art LLMs such as GPT-4 on the task of translation post-editing in a _natural_ setting i.e., without any quality-estimation or error detection step applied to the translations. We formalize the task of translation post-editing with LLMs and posit a set of research questions to quantify their utility for the goal of improving translations obtained from NMT models. We also demonstrate gains on translation quality across a number of language pairs on the WMT-22 benchmark [12], achieving state-of-the-art translation performance on WMT-22 English-Chinese, English-German, Chinese-English and German-English language pairs using GPT-4 based post-editing, as evaluated by state-of-the-art MT quality metrics. 
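As a concrete point of reference for the task set-up formalized in the next section, the snippet below sketches one way such a post-editing query could be issued. It is a hypothetical illustration only: the paper does not reproduce its exact prompt, so the system/user wording, the "Improved translation:" delimiter used to separate the proposed edits from the final translation, the decoding settings, and the use of the 2023-era openai ChatCompletion interface are all assumptions.

```python
# Hypothetical sketch of a zero-shot CoT post-editing query in the spirit of
# Sections 2-3.  The prompt wording, the delimiter, the decoding settings and the
# openai ChatCompletion interface are illustrative assumptions, not the exact
# setup used in the paper.
import openai

def post_edit(source, translation, src_lang, tgt_lang, use_cot=True, model="gpt-4"):
    system = "You are an expert translation post-editor."
    header = (f"Source ({src_lang}): {source}\n"
              f"Translation ({tgt_lang}): {translation}\n")
    if use_cot:
        instruction = header + (
            "First list the improvements you propose to the translation, then "
            "output the post-edited translation on a final line starting with "
            "'Improved translation:'.")
    else:
        instruction = header + "Output only the post-edited translation."
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": instruction}],
        temperature=0,
    )
    text = response["choices"][0]["message"]["content"]
    # Under the CoT setting the reply contains E followed by T'; keep only T'.
    return text.split("Improved translation:")[-1].strip()
```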
## 2 The Translation Post-editing Task Task Definition: We formalize the Post-editing Task in a generative setting as follows: Given a Source (\(S\)) and a Translation (\(T\)), propose improvements over the translation (\(E\)) and generate the translation with the proposed improvements (\(T^{\prime}\)), i.e.: \[(S,T)\to E+T^{\prime}\] Under this task setting, \(E\) represents the improvements or edits that are verbalized by an LLM. Note that in the absence of \(E\), the task is to simply generate the improved translation without any intermediate reasoning chain or _Chain of Thought_ (CoT) Wei et al. (2022); Kojima et al. (2022). Throughout this work, we refer to the post-editing task in the above _zero-shot_ CoT setting as post-editing with CoT and to the setting without \(E\) as post-editing without CoT. Table 1 shows an input-output example for the post-editing task under the CoT setting. Additionally, throughout this work, we refer to \(Z\) as the zero-shot translation of a given source obtained from the LLM that is employed for the post-editing task. Further, through this formalization, we explore the following research questions: General Quality Improvements: Do LLMs lead to general quality improvements as measured by state-of-the-art MT quality metrics? An affirmative answer to this question would enable the use of LLMs as a way to detect reliability issues in existing translations. Another related question is whether the post-editing chain of thought is helpful towards translation quality improvements. Even though zero-shot chain of thought has been demonstrated to be effective across reasoning tasks, the translation post-editing task might not require the same degree of variable computation that makes it effective. Editing Human Annotated Error Spans: Do LLMs modify human annotated translation error spans during the post-editing task? A high frequency of modifications made to the human annotated error spans would signify a greater correlation with human judgement in evaluating translation quality. Fidelity of Proposed Edits: Do the proposed improvements actually appear in the improved translation produced by LLMs? It is quite conceivable that LLMs might make edit proposals or produce a chain of thought that is not realized in the final post-edited translation produced by the same model Ye and Durrett (2022). If the post-editing explanation is a desideratum of the translation post-editing process, then it becomes critical to examine the fidelity of the proposed edits in addition to the final translation quality. Through the above questions, we study the efficacy of the translation post-editing capabilities of LLMs. In the next section, we describe our experimental settings. ## 3 Experimental Settings Datasets: We experiment with WMT-22 News translation task datasets Kocmi et al. (2022) as well as with WMT-20 and WMT-21 News translation task submissions annotated with MQM errors Freitag et al. (2021). For the post-editing experiments pertaining to the MQM annotated WMT-20 and WMT-21 system outputs, we experiment with samples that have a Major error as an annotation, whereas we experiment with the full WMT-22 datasets throughout. We use the latest WMT-22 test sets for the majority of our experiments, the curation of which falls beyond the training cut-off date for GPT-4 and other LLMs under investigation1. Footnote 1: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models) Large Language Models: We experiment with gpt-3.5-turbo and GPT-4.
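For concreteness, the task setting above can be wired up as a small script. The following sketch is illustrative only: the prompt wording and the `call_llm` helper are placeholders (the paper only states that the system role describes a translation post-editor), and the parsing of the CoT reply assumes an output marker that the actual prompt may not use.

```python
# Illustrative sketch of the (S, T) -> E + T' post-editing task setup.
# `call_llm` stands in for whatever chat-completion client is used and is
# injected by the caller; the prompt strings are hypothetical.

SYSTEM_ROLE = "You are a translation post-editor."

def build_messages(source: str, translation: str, with_cot: bool):
    """Build a chat prompt for post-editing a given (S, T) pair."""
    if with_cot:
        # CoT setting: first verbalize edit proposals E, then produce T'.
        user = (
            f"Source: {source}\nTranslation: {translation}\n"
            "First list proposed improvements to the translation, then "
            "output the post-edited translation after the marker "
            "'Post-edited translation:'."
        )
    else:
        # Direct setting: produce T' without an intermediate edit list.
        user = (
            f"Source: {source}\nTranslation: {translation}\n"
            "Output only the post-edited translation."
        )
    return [{"role": "system", "content": SYSTEM_ROLE},
            {"role": "user", "content": user}]

def post_edit(source, translation, call_llm, with_cot=True):
    reply = call_llm(build_messages(source, translation, with_cot))
    if with_cot:
        # Split the verbalized edits E from the final translation T';
        # the marker is an assumption of this sketch.
        edits, _, improved = reply.partition("Post-edited translation:")
        return edits.strip(), improved.strip()
    return None, reply.strip()
```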
gpt-3.5-turbo and GPT-4 represent the most capable publicly available LLMs Liang et al. (2022). We use a prompt that describes the system role as a translation post-editor and, under the CoT setting, instruct the LLM to propose improvements to the provided translation (\(T\)) of a given source (\(S\)) before producing the final post-edited translation (\(T^{\prime}\)). Metrics and Evaluation: For each of the four research questions posed, we use the metrics highlighted in Table 2. We explain the justification of these measurements in the relevant following sections. For general quality measurements, we use four COMET Rei et al. (2020) models: the reference-free COMET-QE (wmt20-comet-qe-da) and COMET-KIWI (wmt-22-cometkiwi-da) Quality Estimation models, and the reference-based COMET-20 (wmt20-comet-da) and COMET-22 (wmt22-comet-da) models. We use the Translation Edit Rate (TER) implementation from Post (2018). ## 4 Results and Measurements To answer the above questions, we experiment under two settings: for WMT-20 and WMT-21 systems, we take the translations provided by the different NMT systems as the initial translation upon which the post-editing step is applied. For WMT-22, we use the translations provided by Microsoft-Translator as the initial translation upon which the post-editing step is applied. ### Nature of Post-Edited Translations To measure whether the translations leverage the generation capabilities of LLMs for producing the final translations or adhere to editing the initial translations provided, we compute the Translation Edit Rate (TER) Snover et al. (2006) of the post-edited translation against the zero-shot translations obtained using the same LLM, and compare it with the TER of the post-edited translation against the initial translation. A higher value of the difference between the two measurements would imply that the translation is closer to the initial translation and that the LLM adheres to the task of editing the translation. To quantify this, we experiment on 10 different English to German NMT systems from the WMT-20 and WMT-21 Shared Tasks on MT. The Direct Assessment and MQM-based evaluation of these systems is described in Freitag et al. (2021). WMT-20 Systems (En-De): Table 3 describes our results on five WMT-20 systems. We find that the post-edited translations (in the default CoT setting) are closer to the initial translations than to the zero-shot translations obtained from the same LLM (gpt-3.5-turbo). WMT-21 Systems (En-De): Table 4 describes our results on five WMT-21 systems. Here again, we find that the post-edited translations (\(T^{\prime}\)) (in the default CoT setting) are closer to the initial translations (\(T\)) than to the zero-shot translations (\(Z\)) obtained from the same LLM (gpt-3.5-turbo). Impact of CoT: Table 5 describes our results on WMT-22 En-Zh and Table 6 describes our results on Zh-En with post-editing using GPT-4. We find that CoT constrains the final translations to be closer to the initial translation. In the direct post-editing setting, the final translation is closer to the zero-shot translation, even though the TER difference in the direct setting is much smaller than the difference in the CoT setting. Discussion: We find that the above results hold true across different metrics such as edit distance, BLEU Post (2018) or ChrF Popovic (2015).
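A minimal sketch of this adherence measurement, assuming parallel lists of post-edited outputs (\(T^{\prime}\)), zero-shot translations (\(Z\)), and initial translations (\(T\)) for one system, could use the TER implementation shipped with sacrebleu:

```python
# Sketch of the TER-based "adherence" comparison described above, using
# sacrebleu's TER metric (Post, 2018). `post_edited`, `zero_shot`, and
# `initial` are assumed to be parallel lists of segment strings.
from sacrebleu.metrics import TER

def adherence_gap(post_edited, zero_shot, initial):
    ter = TER()
    # TER(T', Z): post-edited output scored against the LLM's zero-shot
    # translations used as the "reference".
    ter_vs_zero_shot = ter.corpus_score(post_edited, [zero_shot]).score
    # TER(T', T): post-edited output scored against the initial NMT output.
    ter_vs_initial = ter.corpus_score(post_edited, [initial]).score
    # A positive gap means T' is closer to the initial translation T,
    # i.e. the LLM behaves as an editor rather than re-translating.
    return ter_vs_zero_shot - ter_vs_initial
```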
Further, our results also show a peculiar side-effect of the post-editing task under the CoT setting - that post-editing a system translation might end up leading to a lower quality final translation if the initial translation is lower in quality than the zero shot translation quality of the LLM under consideration. In the next section, we evaluate GPT-4 both under direct and CoT post-editing settings in terms of general quality improvements. ### General Quality Improvements We compare the translation quality of the post-edited translation with the initial translation using both reference-free and reference-based state-of-the-art neural quality metrics for MT Rei et al. (2020). Results:Tables 7, 8, 10 and 11 provide the results of the experiments done on the WMT-22 test sets. Throughout, we find that post-editing under both CoT and direct settings lead to improvements over high-quality initial translations obtained through MS-Translator. Further, direct post-editing of MS-Translator outputs with GPT-4 \begin{table} \begin{tabular}{l|c|c} \hline \hline PE Setting & TER (\(T^{\prime}\), \(Z\)) & TER (\(T^{\prime}\), \(T\)) \\ \hline With CoT & 42.9 & **22.0** \\ Without CoT & **38.1** & 34.9 \\ \hline \hline \end{tabular} \end{table} Table 6: **WMT-22 Zh-En**: The post-edited translations (\(T^{\prime}\)) are closer to the initial translations (\(T\)) than the zero-shot translations (\(Z\)) in the CoT setting, however, in the direct setting the opposite holds true, albeit with a smaller magnitude. \begin{table} \begin{tabular}{l|c|c} \hline \hline System Name & TER (\(T^{\prime}\), \(Z\)) & TER (\(T^{\prime}\), \(T\)) \\ \hline VolcTrans-GLAT & 40.3 & **34.7** \\ Facebook-AI & 39.7 & **25.8** \\ HuaweiTSC & 44.5 & **34.0** \\ UEdin & 40.7 & **35.7** \\ eTranslation & 39.1 & **33.3** \\ \hline \hline \end{tabular} \end{table} Table 4: **WMT-21 Systems (En-De)**: The post-edited translations (\(T^{\prime}\)) are closer to the initial translations (\(T\)) than the zero-shot translations (\(Z\)) obtained through gpt-3.5-turbo. \begin{table} \begin{tabular}{l|c|c} \hline \hline System Name & TER (\(T^{\prime}\), \(Z\)) & TER (\(T^{\prime}\), \(T\)) \\ \hline Tohuku & 38.2 & **32.8** \\ OPPO & 39.2 & **35.7** \\ eTranslation & 38.3 & **37.1** \\ Tencent & 37.6 & **31.5** \\ Huoshan & 37.2 & **35.1** \\ \hline \hline \end{tabular} \end{table} Table 3: **WMT-20 Systems**: The post-edited translations (\(T^{\prime}\)) are closer to the initial translations (\(T\)) than the zero-shot translations (\(Z\)) obtained through gpt-3.5-turbo. \begin{table} \begin{tabular}{l|c|c} \hline \hline System Name & TER (\(T^{\prime}\), \(Z\)) & TER (\(T^{\prime}\), \(T\)) \\ \hline Tohuku & 38.2 & **32.8** \\ OPPO & 39.2 & **35.7** \\ eTranslation & 38.3 & **37.1** \\ Tencent & 37.6 & **31.5** \\ Huoshan & 37.2 & **35.1** \\ \hline \hline \end{tabular} \end{table} Table 5: **WMT-22 En-Zh**: The post-edited translations (\(T^{\prime}\)) are closer to the initial translations (\(T\)) than the zero-shot translations (\(Z\)) obtained through gpt-3.5-turbo. consistently surpasses the WMT-22-Best translation system quality. Gains vs Initial System QualityWe observe two different trends in the translation quality: for the more recent systems (outputs on WMT-22), only GPT-4 leads to quality gains in the CoT based post-editing setting whereas for older systems (e.g., outputs on WMT-20, Table 9), both gpt-3.5-turbo and GPT-4 lead to quality improvements. 
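As an illustration of how such corpus-level quality scores could be produced, the sketch below uses the `unbabel-comet` package with the model names quoted above; this is not the authors' evaluation script, and some checkpoints (e.g. COMET-KIWI) are gated and may require authentication:

```python
# Sketch of corpus-level COMET scoring for initial vs post-edited outputs.
# Model identifiers follow those quoted in the text; availability and exact
# return types depend on the installed COMET version.
from comet import download_model, load_from_checkpoint

def comet_score(model_name, sources, hypotheses, references=None):
    model = load_from_checkpoint(download_model(model_name))
    data = []
    for i, (src, mt) in enumerate(zip(sources, hypotheses)):
        item = {"src": src, "mt": mt}
        if references is not None:      # reference-based models only
            item["ref"] = references[i]
        data.append(item)
    out = model.predict(data, batch_size=8, gpus=0)
    # COMET >= 2.0 returns an object with .system_score and .scores;
    # older versions return a (scores, system_score) tuple.
    return out.system_score

# Example comparison (hypothetical variable names):
# q_initial  = comet_score("Unbabel/wmt22-comet-da", srcs, initial, refs)
# q_postedit = comet_score("Unbabel/wmt22-comet-da", srcs, post_edited, refs)
```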
Analyzing the Distribution of Quality Gains In Figure 1, we compare the distribution of the gains in COMET-KIWI on the En-De WMT-22 dataset and find that GPT-4 is better at abstaining from proposing any edits if the initial translation is already fluent and adequate. ### Edits On Human Annotated Error Spans We use the the MQM error span annotated system outputs provided by Freitag et al. (2021) and measure whether the post edited translation modifies the translation error span as annotated by \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **System** & **COMET-KIWI** & **COMET-QE** & **COMET-22** & **COMET-20** \\ \hline WMT-Best & 81.38 & 39.96 & 85.04 & 56.60 \\ \hline MS Translator & 81.04 & 38.64 & 84.68 & 55.28 \\ **MS Translator + GPT-4** & **81.66** & **42.15** & **85.41** & **58.21** \\ MS Translator + GPT-4-CoT & 81.40 & 41.05 & 85.28 & 57.84 \\ \hline MS Translator + GPT-3.5-CoT & 79.32 & 41.56 & 82.71 & 44.82 \\ \hline GPT-4-Zero-Shot & 81.51 & 41.36 & 85.26 & 57.53 \\ \hline \hline \end{tabular} \end{table} Table 7: **General Quality Improvements on WMT-22 De-En:** The \(+\) sign reflects that the post-editing is applied on the initial translations produced by the given System. MS-Translator + GPT-4 shows better performance than GPT-4-Zero-Shot. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **System Name** & **QE-22** & **QE-20** \\ \hline eTranslation & 80.50 & 26.20 \\ eTranslation + GPT-4 & 83.74 & 30.56 \\ eTranslation + GPT-Turbo & 83.01 & 33.70 \\ \hline Tencent & 81.79 & 27.29 \\ Tencent + GPT-4 & 83.95 & 30.93 \\ Tencent + GPT-Turbo & 83.17 & 33.67 \\ \hline Tohoku & 81.09 & 26.44 \\ Tohoku + GPT-4 & 83.69 & 30.68 \\ Tohoku + GPT-Turbo & 83.35 & 34.57 \\ \hline OPPO & 81.03 & 26.41 \\ OPPO + GPT-4 & 83.54 & 30.33 \\ OPPO + GPT-Turbo & 83.16 & 33.54 \\ \hline \hline \end{tabular} \end{table} Table 8: **General Quality Improvements on WMT-22 Zh-En:** The \(+\) sign reflects that the post-editing is applied on the initial translations produced by the given System. MS-Translator + GPT-4 shows better performance than GPT-4-Zero-Shot. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **System** & **COMET-KIWI** & **COMET-QE** & **COMET-22** & **COMET-20** \\ \hline WMT-Best & 77.66 & 23.98 & 81.02 & 45.21 \\ \hline MS Translator & 77.58 & 23.97 & 80.35 & 40.40 \\ **MS Translator + GPT-4** & **79.75** & **31.84** & **82.79** & **53.42** \\ MS Translator + GPT-4-CoT & 79.02 & 28.96 & 82.20 & 50.77 \\ \hline MS Translator + GPT-3.5-CoT & 79.32 & 41.56 & 82.71 & 44.82 \\ \hline GPT-4-Zero-Shot & 79.29 & 30.13 & 82.49 & 51.78 \\ \hline \hline \end{tabular} \end{table} Table 9: **General Quality Improvements on WMT-20 En-De Systems:: On the WMT-22 test sets, we do not observe gains with gpt-3.5-turbo, however, for older NMT systems, even gpt-3.5-turbo leads to consistent quality improvements over initial translations.** human annotators. For each of the Major MQM error spans modified, we record a score or 1, else a score of 0. The final score reported, named Edit Efficacy over Erroneous Error Spans (E3S) is higher if more of the erroneous spans have been modified in the post edited translation. The E3S metric is reported as a percentage over the whole test set. **Results** Tables 12 and 13 report the results obtained on 10 NMT system outputs from WMT-20 and WMT-21. 
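For concreteness, E3S could be computed along the following lines; the (initial translation, post-edited translation, annotated span) triple format is an assumption made for illustration, and exact substring matching is a simplification of how span modification might be detected:

```python
# Minimal sketch of the E3S measurement defined above: the percentage of
# human-annotated Major MQM error spans that no longer appear verbatim in
# the post-edited translation. The data format is assumed for illustration.
def e3s(samples):
    modified = 0
    for initial, post_edited, error_span in samples:
        # The span was annotated on the initial translation; if it no
        # longer occurs in the post-edited output, count it as edited.
        if error_span in initial and error_span not in post_edited:
            modified += 1
    return 100.0 * modified / len(samples)
```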
We find that gpt-3.5-turbo produces high E3S rates with gains in general quality as well (measured through COMET-KIWI), signifying that it is able to remove the undesirable artifacts (spans) present in the translations. We find that GPT-4 obtains lower E3S values than gpt-3.5-turbo. ### Fidelity of the Proposed Edits In a practical setting, the edits (\(E\)) produced in the post-editing task might be useful to illustrate the changes made by the LLM in the post-edited translation. Therefore, the fidelity of the proposed edits is useful not only in helping the model leverage variable compute (Wei et al., 2022) prior to producing the final improved translation, but also in imparting more trust in the LLM-based post-editing process. Thus, the question of whether the proposed edits are present in the final improved translation or are hallucinated by the model is of significant practical interest. As such, we quantify this property using the Edit Realization Rate (ERR), which measures: of the edits (\(E\)) proposed by the LLM in the CoT post-editing setting, how many were actually realized in the improved translation? Since we do not have any ground truth data to quantify this, we use human evaluation for measuring this property and differentiating between different LLM variants. ERR Evaluation: We ask a human annotator (native in the target language) to label 50 post-editing samples for both En-De (from the OPPO WMT-20 system) and De-En (from the WMT-22 test set), generated by both gpt-3.5-turbo and GPT-4. The annotator is asked to identify if all the proposed edits were realized in the final translation or not. Hence, a binary score is produced for each sample. We do not observe a significant difference in the De-En setting, whereas for En-De we notice a gap of over 30 percent in terms of ERR, with GPT-4 producing edits with higher fidelity. Qualitatively, we find that a number of edits proposed by gpt-3.5-turbo for En-De could be deemed hallucinations, while in general, the edits proposed by GPT-4 do pertain to the edits made in the actual improved translations. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **System** & **COMET-KIWI** & **COMET-QE** & **COMET-22** & **COMET-20** \\ \hline WMT-Best & 83.56 & 44.67 & 87.21 & 62.35 \\ \hline MS Translator & 83.35 & 43.48 & 86.78 & 62.06 \\ **MS Translator + GPT-4** & **83.69** & **44.50** & **87.37** & **62.85** \\ MS Translator + GPT-4-CoT & 83.32 & 43.96 & 87.13 & 62.62 \\ \hline MS Translator + GPT-3.5-CoT & 81.36 & 43.12 & 84.55 & 50.52 \\ \hline GPT-4-Zero-Shot & 82.95 & 44.69 & 86.80 & 60.85 \\ \hline \hline \end{tabular} \end{table} Table 10: **General Quality Improvements on WMT-22 En-De: The \(+\) sign reflects that the post-editing is applied on the initial translations produced by the given System.** \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **System** & **COMET-KIWI** & **COMET-QE** & **COMET-22** & **COMET-20** \\ \hline WMT-Best & 82.04 & 32.11 & 86.69 & 61.04 \\ \hline MS Translator & 81.39 & 31.46 & 86.11 & 59.43 \\ **MS Translator + GPT-4** & **82.68** & **34.47** & **87.53** & **63.21** \\ MS Translator + GPT-4-CoT & 81.60 & 32.01 & 86.43 & 59.97 \\ \hline MS Translator + GPT-3.5-CoT & 79.32 & 41.56 & 82.71 & 44.82 \\ \hline GPT-4-Zero-Shot & 81.73 & 32.61 & 86.51 & 58.66 \\ \hline \hline \end{tabular} \end{table} Table 11: **General Quality Improvements on WMT-22 En-Zh: The \(+\) sign reflects that the post-editing is applied on the initial translations produced by the given System.**
An instance of this difference is illustrated in Table 14. However, currently, our ERR quantification is limited both in sample size (50) and in coverage (language pairs), and we leave further analysis to future work. Figure 1: Post-editing Quality Analysis: (Top) gpt-3.5-turbo and (Bottom) GPT-4. The X-axis represents the difference in the segment-level COMET-KIWI scores between the initial translation and the post-edited output. We find that GPT-4 shows a higher rate of abstention in the CoT post-editing setting, with 75 percent of the GPT-4 edits not leading to any degradation in segment-level COMET-KIWI scores. \begin{table} \begin{tabular}{l c c c} \hline \hline **System** & **Initial-QE** & **PE-QE** & **E3S** \\ \hline WMT-20 & & & \\ \hline Tohoku & 80.93 & 82.50 & 69.86 \\ OPPO & 79.25 & 81.59 & 73.49 \\ eTranslation & 78.82 & 82.03 & 73.45 \\ Tencent & 80.03 & 82.49 & 71.48 \\ Huoshan & 78.49 & 82.34 & 71.72 \\ \hline WMT-21 & & & \\ \hline VolcTrans-GLAT & 80.22 & 82.60 & 56.52 \\ Facebook-AI & 82.88 & 82.45 & 60.53 \\ HuaweiTSC & 80.98 & 82.64 & 70.71 \\ UEdin & 80.82 & 81.99 & 74.17 \\ eTranslation & 80.04 & 81.60 & 74.05 \\ \hline \hline \end{tabular} \end{table} Table 12: **Edit Efficacy over Erroneous Spans** with gpt-3.5-turbo: On both WMT-20 and WMT-21 Systems, post-editing with gpt-3.5-turbo modifies more than half of the erroneous spans. ## 5 Does GPT-4 Show Emergent Cross-Lingual Reasoning Capabilities? Our results on the WMT-22 test sets and ERR measurements show consistent gains in quality produced by GPT-4 over gpt-3.5-turbo. Therefore, a useful question to explore is whether GPT-4 shows emergent cross-lingual reasoning abilities. Recent debates on emergent abilities [13, 14] have posited that such abilities are an artifact of the metric against which model performance is evaluated, rather than a sharp gain in the underlying capabilities [15]. However, we find that on the post-editing task under investigation, under the discontinuous metric of ERR, GPT-4 does show an emergent capability on En-De when compared to the prior generation of GPT models. We further test this hypothesis and experiment on the multilingual Grade School Mathematics (MGSM) dataset [16]. Results on MGSM: We present the results on the MGSM benchmark in Table 15. The results show that GPT-4 exhibits significantly better performance on this reasoning task than the prior generations of GPT models. We find the gains obtained by GPT-4 on this task are consistent with our results on post-editing in the zero-shot CoT setting. However, note that the MGSM results are contaminated by the inclusion of the GSM-8K training set in the GPT-4 training corpus, hence the results should not be interpreted as results on a pure zero-shot task. We leave further analysis of our hypothesis to future work. ## 6 Discussion Quality Gains Across Language Pairs: We also report the GPT-4 post-editing quality gains, under the CoT setting over MS Translator, for several other language pairs in Table 16. The results show that post-editing leads to consistent gains in translation quality across language pairs. Utility of the Chain of Thought: From our results, the inclusion of the Edit Proposal or CoT step is detrimental to the quality of the post-edited translations, but useful in constraining the post-edited outputs to the initial translations.
Therefore, the necessity of variable computation leveraged by the zero-shot chain of thought step is questionable for the post-editing task, even though the edit artifacts produced by the GPT-4 might themselves be valuable for further research. ## 7 Related Work Automatic Post-editing of Translations:A long line of prior work has tried to build Neural models for the translation post-editing task Vu and Haffari (2018); Shterionov et al. (2020); Chatterjee (2019); Gois et al. (2020); Correia and Martins (2019); Voita et al. (2019); Chollampatt et al. (2020); do Carmo et al. (2021). Shterionov et al. (2020) presented a comprehensive road-map for APE, highlighting the challenges and potential directions for future research. Chatterjee (2019) explored the use of deep learning techniques for APE and proposed novel architectures to improve the quality of post-edited translations, while Gois et al. (2020) focused on learning strategies for APE and investigated the use of automatic orderings techniques to refine translations. Correia and Martins (2019) proposed a simple yet effective neural model for APE using transfer learning, demonstrating promising results. Voita et al. (2019) introduced a context-aware approach to APE, incorporating source context information into the neural model to generate more accurate post-edits. Chollampatt et al. (2020) explored the use of APE to improve the overall translation quality for NMT models. They investigate the effects of varying training data sizes, using artificial training data, and domain specificity for the APE task. In a comprehensive review, do Carmo et al. (2021) provided an overview of various techniques and approaches in the field of APE, covering both traditional and neural-based methods. Their work summarized the advancements made in the area, highlighting the strengths and limitations of different approaches. Overall, these studies, among others, have contributed significantly to the development of neural models for APE. They have explored different architectures, learning strategies, and contextual information integration to improve the quality of post-edited translations. However, to the best of our knowledge, we present the first work that investigates using GPT-4 for automatic post-editing of translations. Our work is also related to a number of works exploring the using of LLMs for translation Hendy et al. (2023); Gao et al. (2023); Lu et al. (2023); Vilar et al. (2022); Garcia et al. (2023). ## 8 Conclusion We demonstrated promising results on post-editing using GPT-4, achieving state-of-the-art translation performance on WMT-22 English-Chinese, English-German, Chinese-English and German-English language pairs using GPT-4 based post-editing. We formalized a clear framework to make post-editing using LLMs amenable to further study. We will be exploring the posited questions with additional experiments in the immediate future.
2301.00322
Encrypted Data-driven Predictive Cloud Control with Disturbance Observer
In data-driven predictive cloud control tasks, the privacy of data stored and used in cloud services could be leaked to malicious attackers or curious eavesdroppers. Homomorphic encryption techniques can be used to protect data privacy while allowing computation. However, extra errors are introduced by the homomorphic encryption extension to ensure its privacy-preserving properties, and real-number truncation also brings uncertainty. In addition, process and measurement noise in the system input and output may introduce disturbances. In this work, a data-driven predictive cloud controller is developed based on homomorphic encryption to protect the privacy of cloud data. Besides, a disturbance observer is introduced to estimate disturbances and compensate the encrypted control signal sequence computed in the cloud. Data privacy is guaranteed by encryption, and experimental results show the effectiveness of our cloud-edge cooperative design.
Qiwen Li, Runze Gao, Yuanqing Xia
2023-01-01T01:41:13Z
http://arxiv.org/abs/2301.00322v2
# Encrypted Data-driven Predictive Cloud Control with Disturbance Observer ###### Abstract In data-driven predictive cloud control tasks, the privacy of data stored and used in cloud services could be leaked to malicious attackers or curious eavesdroppers. Homomorphic encryption technique could be used to protect data privacy while allowing computation. However, extra errors are introduced by the homomorphic encryption extension to ensure the privacy-preserving properties, and the real number truncation also brings uncertainty. Also, process and measure noise existed in system input and output may bring disturbance. In this work, a data-driven predictive cloud controller is developed based on homomorphic encryption to protect the cloud data privacy. Besides, a disturbance observer is introduced to estimate and compensate the encrypted control signal sequence computed in the cloud. The privacy of data is guaranteed by encryption and experiment results show the effect of our cloud-edge cooperative design. Cloud Control Systems, Data-Driven Predictive Control, Disturbance Observer, Homomorphic Encryption. ## I Introduction Cloud computing provides enormous computing and storage resources for the implementation of control applications, which brings the concept of cloud control systems (CCSs) [1, 2, 3]. In CCSs, control algorithms are outsourced and executed on cloud platforms to offer control services for local plants. With the development of CCSs, there is an emerging requirement of cloud control for complex systems. However, the complexity and scale of control systems bring new difficulty in designing model-based cloud control laws, since system models are difficult to obtain. As a kind of model-free control approach, data-driven predictive control (DPC) [4] directly computes control sequences based on the input-output data of the system, which avoids the process of system modeling. Therefore, the combination of CCSs and DPC, i.e., data-driven predictive cloud control (DPCC) [5, 6, 7], takes advantage of data storage and computation in the cloud, as well as the model-free manner in control of complex systems, becoming a potential candidate in CCSs. However, in DPCC scenarios, the input-output data and control law of systems are stored and computed in the cloud with no data privacy protection, leading to the risk of privacy leakage. To be specific, an eavesdropper could get access to the private system data through communication channel, cloud storage and memory. The eavesdropper could consequently infer the state and model of the system for malicious purposes, such as advanced persistent threat (APT) design and system state tracking. Thus, the privacy issues in DPCC should be seriously considered. As a solution, we use homomorphic encryption (HE) approaches to protect data privacy while computing the DPCC control law, since HE schemes allow computations on encrypted data. Specifically, we use CKKS scheme [8], which is a RLWE-based HE protocol that ensures the privacy of the scheme through introducing errors to satisfy the hardness of the RLWE problem. In CKKS scheme, complex-number vectors are mapped to integer-coefficient polynomials through interpolation, amplification and truncation. Consequently, the addition and multiplication of ciphertext in polynomials are homomorphically equivalent to element-wise addition and multiplication of plaintext in vectors. 
In this work, we design a privacy-preserving DPCC controller based on CKKS scheme to compute control sequences while keeping system information invisible to potential attackers. When performing the privacy-preserving DPCC tasks described above, we should consider the effects on the control quality induced by system noise and uncertainty. Firstly, errors are introduced to the privacy-preserving DPCC procedure through HE scheme. To be specific, errors are introduced to public keys in CKKS scheme to protect the semantic security properties. Moreover, the amplification and truncation procedure bring noises into ciphertexts. Besides, measurement noise, process noise and system uncertainty are ubiquitous in control systems, which consequently influence the control effect of data-driven approaches. Hence, disturbance observer (DOB) [5, 9, 10] is used to guarantee the control accuracy under the uncertainty, including system noise and errors induced by HE scheme. The function of DOB is to estimate the effects performed on a system based on an auxiliary system. If estimated, the system uncertainty could be properly compensated with a suitable magnitude. Motivated by the above reasons, the main contributions of the privacy-preserving DPCC based on HE scheme are listed as follows: * We design a private DPCC protocol based on CKKS scheme, which preserves the privacy of sensitive system input-output data. * We apply the DOB technique to estimate and compensate for the uncertainty induced by the HE scheme and system noise under the privacy-preserving DPCC scenario. * A numerical example shows the effectiveness of privacy-preserving DPCC with DOB, compared to un-encrypted non-DOB and encrypted non-DOB condi tions. The remainder of this work is shown as follows. DPCC approaches and their privacy issues are briefly surveyed in Section II, based on which we develop a privacy-preserving data-driven control protocol in Section III. In Section IV, a disturbance observer is proposed to compensate for the error induced by encryption and data noise. In Section V a numerical example of our proposed method is shown to demonstrate its effectiveness. Section VI concludes the paper. ## II Related Works Showing potential in model-free control scenarios, DPC approaches compute the control input directly from the input-output data of the system, and have been widely used in extended situations. [11] propose a model-free approach for linear parameter-varying systems. A data-driven error model is learned with precollected data in [12] to achieve accurate position tracking with a robot arm. DPC approaches may require extensive data to estimate system models or generate control inputs, in which cases the computation time of system input may become the bottleneck of implementation. Thus cloud computing and distributed computing are gathering more and more attention in DPC tasks for the possibility of computation acceleration by properly utilizing elastic resources in the cloud. [6, 7] develop a cloud-edge-endpoint DPC prototype, showing the feasibility of cloud-based control systems. To optimize the effort of subspace identification task, which is the basis of data-driven control, [13] decomposes the identification algorithm to interconnected containerized tasks through parallel computing. A further implementation of cloud-edge cooperative DPCC [5] uses workflow-based parallel cloud control and edge compensation. 
The privacy of data and models could be leaked through outsourced tasks, since the communication channel and execution environment could be eavesdropped by untrusted third parties. Therefore, encrypted control approaches have been widely studied, since they can simultaneously allow the computation of control signals and the preservation of data privacy. Encrypted linear feedback controllers are realized in [14]. Moreover, encrypted realizations of more efficient and complex control schemes have been proposed to fit integrated cloud scenarios. In [15], a privacy-preserving subspace identification approach based on a partially HE scheme is proposed. Alexandru et al. [16] offer offline and online encrypted cloud control designs, both based on HE, to protect the input-output data of DPC based on a single cloud server. Subsequently, a privacy-preserving distributed alternating direction method of multipliers approach is designed to perform the system estimation process in ciphertexts [17]. ## III Preliminaries In this section, we sketch the preliminaries of DPC and RLWE-based HE. ### _Implementation of data-driven predictive control_ We consider a state-space expression of a discrete linear time-invariant (LTI) system: \[\begin{split} x(k+1)=& Ax(k)+Bu(k)+\epsilon_{p},\\ y(k)=& Cx(k)+\epsilon_{s},\end{split} \tag{1}\] where \(x(k)\in\mathbb{R}^{n},u(k)\in\mathbb{R}^{m},y(k)\in\mathbb{R}^{p}\) are the state, input and output vectors of the system, and \(\epsilon_{p}\), \(\epsilon_{s}\) are process noise and measurement noise of suitable shapes, respectively. In the following statements, vectors are all viewed as column vectors, unless otherwise specified. In DPC, we cannot access the specific parameters \(A\), \(B\) and \(C\) mentioned in (1). Therefore, data-driven approaches are used to infer the system information and perform the control task. Specifically, we have the input-output data series of the system through time: \[\{u(n),y(n),n=1,2,...,T\}.\] At every time step \(k\), we use some slices of the input-output data series as prior information of the system for identification, which are denoted as: \[u_{f}(k)=\left[\begin{array}{c}u(k)\\ u(k+1)\\ \vdots\\ u(k+N-1)\end{array}\right],\quad y_{f}(k)=\left[\begin{array}{c}y(k)\\ y(k+1)\\ \vdots\\ y(k+N-1)\end{array}\right],\quad u_{p}(k)=\left[\begin{array}{c}u(k-N)\\ u(k-N+1)\\ \vdots\\ u(k-1)\end{array}\right],\quad y_{p}(k)=\left[\begin{array}{c}y(k-N)\\ y(k-N+1)\\ \vdots\\ y(k-1)\end{array}\right], \tag{2}\] and \[v_{p}(k)=\left[\begin{array}{c}y_{p}(k)\\ u_{p}(k)\end{array}\right], \tag{3}\] where the subscripts "\(p\)" and "\(f\)" indicate "past" and "future", respectively. Based on the slices shown above, we can fit the implicit system expression with linear regression: \[y_{f}(k)=L_{v}v_{p}(k)+L_{u}u_{f}(k)+e(k), \tag{4}\] where \(L_{v}\) and \(L_{u}\) are coefficient matrices of appropriate shapes to be fitted, which contain the system information, and \(e(k)\) is a noise vector. Aiming at sufficiently utilizing prior information, we concatenate the slices of data into the form of Hankel matrices: \[U_{f}(k)=[u_{f}(N)\ u_{f}(N+1)\ \cdots\ u_{f}(N+j-1)], \tag{5}\] \[Y_{f}(k)=[y_{f}(N)\ y_{f}(N+1)\ \cdots\ y_{f}(N+j-1)], \tag{6}\] \[V_{p}(k)=[v_{p}(N)\ v_{p}(N+1)\ \cdots\ v_{p}(N+j-1)]. \tag{7}\] Thus the linear regression problem (4) can be viewed as: \[Y_{f}(k)=L_{v}V_{p}(k)+L_{u}U_{f}(k)+E(k). \tag{8}\] After solving this linear regression problem, i.e.
\(L_{v}\), \(L_{u}\) being obtained, we consider an optimal control problem with the loss function \[J=(r_{f}(k)-y_{f}(k))^{\top}\mathcal{Q}(r_{f}(k)-y_{f}(k))+u_{f}(k)^{\top}\mathcal{ R}u_{f}(k), \tag{9}\] where \(\mathcal{Q}\) and \(\mathcal{R}\) are positive-definite matrices of appropriate shapes, \(r_{f}\) is the reference signal. Problem (9) could be solved by taking derivative of \(J\) with respect to \(u_{f}\) after substituting (4) to (9): \[u_{f}(k)=(\mathcal{R}+L_{u}^{\top}\mathcal{Q}L_{u})^{-1}L_{u}^{\top}\mathcal{ Q}(r_{f}-L_{v}v_{p}(k)), \tag{10}\] where \(u_{f}(k)\) is a sequence of predicted control signals. ### _Lattice-based HE_ HE schemes enable addition and/or multiplication on encrypted data, which is ensured by a homomorphism between ciphertext space and plaintext space [18]. HE schemes can be divided into three categories [16]: partially HE, somewhat HE and fully HE. Partially HE schemes only support addition or multiplication. Levelled or somewhat HE schemes extend the functionality of partially HE and enable both addition and multiplication, with limited times of computation. Fully HE schemes allow infinite times of addition and multiplication, thus support evaluating arbitrary computable functions. Some levelled HE schemes could be converted to fully HE schemes with the use of a refresh algorithm called bootstrapping [19]. In this work, we use CKKS scheme [8, 19], a typical public key encryption scheme which is levelled homomorphic on complex vectors. CKKS scheme supports addition, finite times element-wise multiplication on real vectors, to protect the privacy of data-driven control. Besides, CKKS scheme utilizes key-switching technique to support advanced operation like element-wise vector rotation and relinearization after multiplication. Also, CKKS scheme supports ciphertext rescaling to control the noise expansion caused by specific operations. A brief description of CKKS scheme is shown in Fig. 1. Denote \(N\) be power of 2 and \(Q_{L}\) be a big modulus that equals to the product of a series of positive integers \(\{q_{0},q_{1},...,q_{L}\}\). In CKKS scheme, a complex vector \(m\) with at most \(N/2\) elements is interpolated into a polynomial. Then the embedded polynomial is multiplied by a large scaling factor \(\Delta\) and truncated to get plaintext \(p\), which is a polynomial in \(\mathcal{Z}_{Q_{L}}\left[X\right]/(X^{N}+1)\), for further encryption. The plaintext \(p\) will be encrypted into the form of ciphertext \(c=\left(c_{0},c_{1}\right)\) such that \(c_{0}+c_{1}s=p+e\ (mod\ Q_{l})\), where \(s\) is the secret key and \(e\) is the error. Here, ciphertext \(c\in\mathcal{Z}_{Q_{l}}^{2}\left[X\right]/(X^{N}+1)\) is denoted to be at level \(l\) with \(Q_{l}=\prod_{i=0}^{l}q_{i}\) for \(l=1,...,L+1\). The plaintext \(p\) could be encrypted both by the secret key \(s\) and the public key but could be only decrypted with the secret key. The security properties of CKKS scheme are ensured by the hardness of the RLWE problem [18]. Specifically, all the public keys are in the form of RLWE example \((-as+e,a)\), where random polynomial \(a\) and error \(e\) safely seal the secret key \(s\) according to the hardness of the RLWE problem. Besides, extra public keys in CKKS scheme are available to perform advanced operations like relinearization and rotation to support the design of elaborated computations. 
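Before moving on, the data-driven predictive control pipeline of Eqs. (2)-(10) can be summarized in a short numerical sketch. This is a minimal numpy illustration with toy single-input single-output dimensions, not the implementation used in this work:

```python
# Minimal numpy sketch of the DPC pipeline: build the data matrices of
# Eqs. (5)-(7), fit [L_v, L_u] by least squares (Eq. (8)), and compute the
# predicted input sequence of Eq. (10). SISO case for simplicity.
import numpy as np

def hankel_slices(u, y, N, j):
    """Stack past/future slices column-wise; u, y have length >= 2N + j."""
    Vp = np.column_stack([np.concatenate([y[k-N:k], u[k-N:k]])
                          for k in range(N, N + j)])   # v_p(k) = [y_p; u_p]
    Uf = np.column_stack([u[k:k+N] for k in range(N, N + j)])
    Yf = np.column_stack([y[k:k+N] for k in range(N, N + j)])
    return Vp, Uf, Yf

def fit_predictor(Vp, Uf, Yf):
    """Least-squares fit of Y_f = L_v V_p + L_u U_f."""
    X = np.vstack([Vp, Uf])             # stacked regressors
    L = Yf @ np.linalg.pinv(X)          # [L_v  L_u]
    n_v = Vp.shape[0]
    return L[:, :n_v], L[:, n_v:]       # L_v, L_u

def predictive_input(Lv, Lu, vp, rf, Q, R):
    """Unconstrained minimizer of Eq. (9), i.e. Eq. (10)."""
    M = np.linalg.solve(R + Lu.T @ Q @ Lu, Lu.T @ Q)
    return M @ (rf - Lv @ vp)
```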
The noise bound in ciphertexts explodes when performing multiple homomorphic multiplications since the noise is exponentially amplified by the extra scaling factor \(\Delta\). As shown in Fig. 2, the multiplication result \(c\) at level \(l\) could be rescaled by dividing \(q_{l}\), and the level is consequently reduced to \(l-1\). Therefore, the noise bound explosion could be reduced to linear expansion, which allows more multiplications to be performed. ## IV Privacy-Preserving Dpcc Design with Dob In DPCC scenarios, we assume that the public cloud environment and potential eavesdroppers are honest but curious, which means that they will perform the specified computation or communication correctly, but they want to access the system information to infer the current state and system dynamics. Therefore, the untrusted part placed in the cloud should be encrypted. In this process, the encryption module may introduce new uncertainty. Based on this consideration, the DOB-based privacy-preserving DPCC solution requires the cooperation of three general components: public cloud, trustable edge and plant, respectively. The system design is shown in Fig. 3. In the public cloud, an encrypted controller is deployed, maintaining some encrypted matrices to compute encrypted control input sequences. On the trustable edge platform, the HE module is equipped to encrypt and decrypt data, along with a DOB to perform control signal compensation. The plant feeds the modified control input to the system and returns the current output to the edge side. The encrypted data in the cloud controller could be periodically updated to fit the current system dynamics. Fig. 1: A brief description of CKKS scheme. Fig. 2: Illustrated procedure of the scale limitation in CKKS scheme. ### _Privacy-preserving DPC_ The privacy of the system behavior, including input-output data, should be protected. Similar to [16], an offline privacy-preserving DPC solution is introduced based on CKKS homomorphic encryption scheme. We could observe that the computation of (10) is realized by specified matrix-vector multiplications. In practice, denote matrix \(M_{r}:=(\mathcal{R}+L_{u}^{\top}\mathcal{Q}L_{u})^{-1}L_{u}^{\top}\mathcal{Q}\) and \(M_{v}:=(\mathcal{R}+L_{u}^{\top}\mathcal{Q}L_{u})^{-1}L_{u}^{\top}\mathcal{Q}L _{v}\), which are 2 terms in (10). Since we could compute \(L_{v}\) and \(L_{u}\) in advance, \(M_{r}\) and \(M_{v}\) could be consequently computed offline on a trustable platform, which could be encrypted and uploaded to the cloud, then updated periodically. Then, the cloud receives the ciphertexts of \(M_{r}\) and \(M_{v}\), and the control input could be consequently computed: \[u_{f}=M_{r}r_{f}-M_{v}v_{p}, \tag{11}\] where \(v_{p}\) is the same as in (3) and timestamp \(t\) is omitted for convenience. For the efficiency of computation, matrices \(M_{r}\) and \(M_{v}\) would be reused for a given interval and then updated, which is a trade-off in the computation overhead. Consequently, the computing procedure could be reduced to a matrix-vector multiplication in ciphertext space. Here, a diagonal computation method is utilized to perform the computation [19]. To implement the encrypted matrix-vector computation \(Mx\), the matrix \(M\in\mathbb{R}^{K\times L}\) and vector \(x\in\mathbb{R}^{L}\) should firstly be rewritten in an encryption-friendly way, which are illustrated in upper part of Fig. 4(a). 
The modified matrix \(M_{mod}\) of matrix \(M\) and repeated vector \(x_{dup}=\left[x^{\top}~{}x^{\top}~{}...~{}x^{\top}\right]^{\top}\) of \(x\) are provided, which are encrypted and sent to the cloud computing component. Denote the encrypted columns of matrix \(M_{mod}\in\mathbb{R}^{K\times L}\) as \(M_{mod}^{(i)}\), and we need to homomorphically compute matrix-vector multiplication \(y=Mx\) in the form of ciphertexts. The matrix-vector multiplication in ciphertext is shown as below: \[y=\sum_{i=0}^{L-1}M_{mod}^{(i)}*rot(x_{dup},i), \tag{12}\] where the function \(rot(x_{dup},i)\) is the rotation operation supported by the CKKS scheme, meaning that rotating vector \(x_{dup}~{}i\) steps to the left. The computation procedures are illustrated in Fig. 4(b). Based on above description, the whole encrypted matrix-vector computation procedure is described in Algorithm 1. ``` 0: Matrix \(M\in\mathcal{R}^{m\times n}\), vector \(x\in\mathcal{R}^{n}\). 0: Encrypted result of \(Mx\). 1: Initialization: build a full zero matrix \(M_{mod}\) with the same shape as \(M\). 2:for\(i:=0\) to \(n-1\)do 3:for\(j:=0\) to \(m-1\)do 4:\(M_{mod}[j][i]=M[j][(i+j)~{}~{}\mathrm{mod}~{}n]\). 5:endfor 6:endfor 7:\(x_{dup}\) := Encryption of \(\left[x^{\top}~{}x^{\top}~{}...~{}x^{\top}\right]^{\top}\). 8:\(M_{mod}^{(0)},~{}...~{}M_{mod}^{(n-1)}\) := Encryption of \(M_{mod}\)'s columns 9: Compute matrix-vector multiplication through (12). ``` **Algorithm 1** Encryption-friendly matrix-vector multiplication. ### _DOB and DOB-based cooperative control design_ As analyzed in III, CKKS scheme introduces error to protect its security, meanwhile the amplification and truncation procedures bring error to the system. Besides, the process and measurement noise may also impact the control effect. For reducing the uncertainty and disturbance existed in HE scheme and system dynamics, we adopt the solution in [5], which uses a cloud-edge cooperative control design with a data-driven DOB to estimate the uncertainty and disturbance brought by the cloud. The estimation result obtained by data-driven DOB could be added to the control input for compensation with a proper gain. Assume that only the first term in the decrypted \(u_{f}\) is fed to the system, which is denoted as \(u_{c}\), as the cloud control signal. We take the nominal input-output relationship into consideration without noise and disturbance: \[\hat{y}(k+1)= \sum_{i=1}^{N}\hat{g}_{i}y(k+i-N) \tag{13}\] \[+ \sum_{i=1}^{N}\hat{h}_{i}u(k+i-N)+\hat{b}(k)u_{c}(k+1),\] where \(\hat{g}_{i}\) and \(\hat{h}_{i}\)s form the first block row of \(\hat{L_{v}}\) and \(\hat{L_{u}}\), i.e. the disturbed term of \(L_{v}\) and \(L_{u}\), respectively. (13) is actually the first \(p\) rows of the HE implementation of (4). If uncertainty and disturbance are considered, the real system dynamics should be: \[y(k+1)= \sum_{i=1}^{N}\hat{g}_{i}y(k+i-N) \tag{14}\] \[+ \sum_{i=1}^{N}\hat{h}_{i}u(k+i-N)+\hat{b}u_{c}(k)+\hat{b}(k)d(k),\] where \(d(k)=\Delta u(k)\) is the input disturbance. Then, a DOB is introduced with the form \[\hat{d}(k)=P(k)+Ky(k), \tag{15}\] where the disturbance \(d(k)\) is estimated by \(\hat{d}(k)\), \(K\) is the observer amplification matrix to be designed, and \(P(k)\) is an Fig. 3: Design of privacy-preserving DPCC. 
auxiliary vector which is updated as below: \[\begin{split} P(k+1)=&-K(\sum_{i=1}^{N}\hat{g}_{i}(k)y (k+i-N)\\ &+\sum_{i=1}^{N}\hat{h}_{i}(k)u(k+i-N)\\ &+\hat{b}u_{c}(k)+\hat{b}\hat{d}(k)).\end{split} \tag{16}\] From (16), one can obtain \[\hat{d}(k+1)=K\hat{b}(d(k)-\hat{d}(k)). \tag{17}\] Now, define the estimation error as \(\Delta d(k)=d(k)-\hat{d}(k)\) and we have the residue system: \[\Delta d(k+1)=-K\hat{b}\Delta d(k)+d(k+1). \tag{18}\] In this system, the edge-compensated input \(u_{e}\) is added to the cloud control signal \(u_{c}\), i.e. \(u=u_{c}+u_{e}\), to get the DPCC cloud-edge co-design. Since the uncertainty caused by HE is viewed as a part of input disturbance, \(u_{e}\) is designed to be \[u_{e}(k)=-\hat{d}(k), \tag{19}\] and \[\begin{split}\hat{d}(k)=&\ K\Big{(}y(k)-\sum_{i=1 }^{N}\hat{g}_{i}(k-1)y(k-N+i-1)\\ &-\sum_{i=0}^{N-1}\hat{h}_{i}(k-1)u(k-N+i-1)\\ &-\hat{b}(k-1)u_{c}(k-1)\Big{)}\end{split} \tag{20}\] when \(k=N+1,N+2,...\). When \(k=1,2,...,N\), the DOB-based edge compensator do not have enough data in the DPC stage, and \(u_{e}\) could be set to 0 in this time interval, i.e. \(u=u_{c}\). ## V Numerical Examples We consider a typical 2-order discrete LTI system control problem with parameters \[A=\begin{bmatrix}2&-1\\ 1&0\end{bmatrix}, \tag{21}\] \[B=\begin{bmatrix}1\\ 0\end{bmatrix}, \tag{22}\] and \[C=\begin{bmatrix}0.00014&0.00014\end{bmatrix}. \tag{23}\] The control input \(u\) is clipped between -0.15 and 0.15, and the measure output \(y\) is clipped between 0 and 0.4. The system parameters are: \(N=20\), \(j=1000\), \(K=62\), \(\lambda=0.009\). The system state is initialized at \(\left[0\ 0\right]^{\top}\) and the whole control procedure is divided into 2 stages, i.e. data precollection stage and data-driven control stage. In the data precollection stage, the system is controlled through a PID controller with \(K_{p}=K_{d}=9\) and \(K_{i}=3\). The control reference is \(y_{r}=0.2\) in the first \(2N+j=1040\) steps. In the data-driven control stage, \(L_{v}\) and \(L_{u}\) are computed and updated periodically every 50 iterations based on newly collected data. In this stage, the control reference is set to 0.1. The whole experiment is realized in a standard Hyper Elastic Cloud Server (HECS) in Huawei Cloud with 2GB RAM and 1 CPU. We implement the private-preserving part of the whole algorithm using the RLWE-based HE library Microsoft SEAL [20]. The security parameter \(\lambda\) is chosen to be 128-bit, meaning an encryption scheme could be infiltrated with a probability of \(2^{-128}\). The ring dimension is chosen to be 4096, which controls the packing capability of vectors and multiplication depth. The truncation error, which is related to the scaling factor and modulus bits, influences the effect of control. The scaling factor determines the multiplication level, which is bounded by the 128-bit security requirement. The multiplication depth is chosen to be 2, since in this experiment only one multiplication depth is performed in each step. The scaling factor of CKKS scheme is chosen to be \(2^{22}\) and \(2^{25}\), based on which the influence of floating point number truncation is researched. The process noise and measurement noise are set to be Gaussian with the variance of \(0.0027\). The experiment is performed to show the control effect of the privacy-preserving DPCC with a DOB-based compensator in three circumstances for comparison, i.e. data-driven control in plaintext, data-driven control in ciphertext with and without DOB-based compensator. 
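As a complement to Algorithm 1, the rotation-based matrix-vector product of Eq. (12) can be simulated in plaintext as follows; `np.roll` stands in for the homomorphic rotation of a packed CKKS ciphertext, so this is only an illustration of the data layout and not the encrypted implementation (which would use a CKKS library such as Microsoft SEAL):

```python
# Plaintext numpy illustration of the diagonal method of Algorithm 1 /
# Eq. (12). In the real system, each column M_mod^(i) and the repeated
# vector x_dup are encrypted and the sum is computed homomorphically.
import numpy as np

def matvec_diagonal(M, x):
    m, n = M.shape
    # Encryption-friendly re-arrangement: M_mod[j][i] = M[j][(i + j) mod n]
    M_mod = np.empty_like(M)
    for i in range(n):
        for j in range(m):
            M_mod[j, i] = M[j, (i + j) % n]
    # Repeat x so that every left rotation stays aligned with the packing.
    reps = int(np.ceil((m + n) / n))
    x_dup = np.tile(x, reps)
    y = np.zeros(m)
    for i in range(n):
        rotated = np.roll(x_dup, -i)[:m]   # rot(x_dup, i): shift left by i
        y += M_mod[:, i] * rotated         # element-wise product, Eq. (12)
    return y

# Sanity check against the ordinary product:
M = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(matvec_diagonal(M, x), M @ x)
```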
The experimental results are illustrated in Fig. 5(a) and Fig. 5(b). As shown in these figures, the DOB-based compensator effectively removes the error induced by system uncertainty, encryption error and external noise. Specifically, in Fig. 5(a), the scaling factor is set to \(2^{22}\), i.e. about 4 million, which truncates so much information from the plaintext that it compromises the system performance. The system is out of control without compensation. In contrast, the DOB-based compensator successfully compensates for the uncertainty and disturbance, which improves the control quality. In Fig. 5(b), the scaling factor is 8 times larger than \(2^{22}\), reducing the truncation error by a factor of 8, which leads to a performance similar to the unencrypted and uncompensated benchmark. In this case, the uncertainty mainly stems from encryption and noise, which can be well estimated and compensated. Fig. 4: Encryption-friendly matrix-vector multiplication: an illustrative example. ## VI Conclusion In this work, we design a privacy-preserving DPCC solution. Based on HE, we implement a privacy-preserving cloud controller that ensures data privacy using the CKKS scheme. The uncertainty and disturbance in HE-based control systems are also considered, and a DOB-based compensator is designed on a trustable edge to estimate and compensate for them. A numerical example shows the effect of our proposed privacy-preserving DPCC design. In the future, the computation efficiency of privacy-preserving cloud control solutions will be studied.
2303.16339
High-energy dipole scattering amplitude from evolution of low-energy proton light-cone wave functions
The forward scattering amplitude of a small dipole at high energies is given in the mean field approximation by the Balitsky-Kovchegov (BK) evolution equation. It requires an initial condition $N(r; x_0)$ describing the scattering of a dipole with size $r$ off the target that is probed at momentum fraction $x_0$. Rather than using ad hoc parameterizations tuned to high-energy data at $x\ll x_0$, here we attempt to construct an initial scattering amplitude that is consistent with low-energy, large-$x$ properties of the proton. We start from a non-perturbative three quark light-cone model wave function from the literature. We add ${\cal O}(g)$ corrections due to the emission of a gluon, and ${\cal O}(g^2)$ virtual corrections due to the exchange of a gluon, computed in light-cone perturbation theory with exact kinematics. We provide numerical data as well as analytic parameterizations of the resulting $N(r; x_0)$ for $x_0=0.01 - 0.05$. Solving the BK equation in the leading logarithmic (LL) approximation towards lower $x$, we obtain a fair description of the charm cross section in deeply inelastic scattering measured at HERA by fitting one parameter, the coupling constant $\alpha_s\simeq 0.2$. However, without the option to tune the initial amplitude at $x_0$, the fit of the high precision data results in $\chi^2/N_\text{dof} = 2.3$ at $N_\text{dof} =38$, providing clear statistical evidence for the need of systematic improvement e.g. of the photon wave function, evolution equation, and initial condition.
Adrian Dumitru, Heikki Mäntysaari, Risto Paatelainen
2023-03-28T22:38:07Z
http://arxiv.org/abs/2303.16339v2
High-energy dipole scattering amplitude from evolution of low-energy proton light-cone wave functions ###### Abstract The forward scattering amplitude of a small dipole at high energies is given in the mean field approximation by the Balitsky-Kovchegov (BK) evolution equation. It requires an initial condition \(N(r;x_{0})\) describing the scattering of a dipole with size \(r\) off the target that is probed at momentum fraction \(x_{0}\). Rather than using ad hoc parameterizations tuned to high-energy data at \(x\ll x_{0}\), here we attempt to construct an initial scattering amplitude that is consistent with low-energy, large-\(x\) properties of the proton. We start from a non-perturbative three quark light-cone model wave function from the literature. We add \(\mathcal{O}(g)\) corrections due to the emission of a gluon, and \(\mathcal{O}(g^{2})\) virtual corrections due to the exchange of a gluon, computed in light-cone perturbation theory with exact kinematics. We provide numerical data as well as analytic parameterizations of the resulting \(N(r;x_{0})\) for \(x_{0}=0.01-0.05\). Solving the BK equation in the leading logarithmic (LL) approximation towards lower \(x\), we obtain a fair description of the charm cross section in deeply inelastic scattering measured at HERA by fitting one parameter, the coupling constant \(\alpha_{s}\simeq 0.2\). However, without the option to tune the initial amplitude at \(x_{0}\), the fit of the high precision data results in \(\chi^{2}/N_{\mathrm{dof}}=2.3\) at \(N_{\mathrm{dof}}=38\), providing clear statistical evidence for the need of _systematic_ improvement e.g. of the photon wave function, evolution equation, and initial condition. + Footnote †: preprint: HIP-2023-6/TH ## I Introduction In Deep Inelastic Scattering (DIS) a pointlike virtual photon probes the rich QCD dynamics taking place inside the proton or a nucleus. At high energies, where the small Bjorken-\(x\) part of the target wave function is probed, one observes very large gluon densities [1]. When the gluon densities become of the same order as inverse coupling, non-linear QCD dynamics start to dominate and multiple scattering effects are important [2]. In the high-energy limit, the scattering process is most conveniently described in the dipole picture in a frame where the virtual photon has a large momentum [3], and its partonic Fock states, such as \(|q\bar{q}\rangle\) at leading order (LO), have a long lifetime as they scatter from the color field of the target. Describing the QCD dynamics in this high-density domain is natural in the Color Glass Condensate [4] framework. Here the center-of-mass energy or Bjorken-\(x\) dependence of various observables (and as such the target structure) is described in the large-\(N_{\mathrm{c}}\) limit by the perturbative Balitsky-Kovchegov (BK) renormalization group equation [5; 6]. It describes how the dipole-target scattering amplitude, which contains information about the target structure, changes with increasing energy. The dipole amplitude (a correlator of two Wilson lines) is actually a convenient degree of freedom at high energies: all cross sections computed at high energy in the CGC framework are expressed in terms of the dipole amplitude or higher-point correlators which can be written, in a Gaussian approximation, in terms of the two-point dipole amplitude [7]. The initial condition for the dipole-proton scattering amplitude depends on non-perturbative properties of the proton. 
A typical approach in the field has been to assume an intuitive functional form at an initial \(x_{0}\ll 1\) and fit various unknown parameters to the HERA total cross section data; see, e.g., Refs. [8; 9; 10] where a very good description of small-\(x\) HERA data is obtained at leading order, resumming powers of \(\alpha_{s}\ln 1/x\) via BK evolution with running coupling corrections [11]. Recent developments to full NLO accuracy have also allowed for a simultaneous description of total and heavy quark production data [12; 13]. The drawback of this approach is that one is sensitive to the assumed functional form of the initial dipole amplitude and that the model parameters need to be re-fitted if the evolution is initialized at different \(x_{0}\). Furthermore, there is no relation to the low energy ( or "large-\(x\)") proton structure. In this work, we take a complementary approach aim ing to _compute_ the initial dipole-proton scattering amplitude at moderate \(x_{0}\). As we will discuss in more detail next, the necessary non-perturbative input consists in a proton valence quark wave function that is constrained by low-energy data. The \(x_{0}\)-dependent initial condition is then obtained by computing the dipole-target scattering amplitude including one perturbative gluon emission in the target, with the gluon longitudinal momentum fraction regulated by \(x_{0}\)[14; 15]. The advantages of this approach are that we do not assume an ad-hoc functional form of the scattering amplitude and that the initial condition can be computed and the BK evolution initialized at any (moderate) \(x_{0}\) without a need to perform new fits. Also, this approach largely eliminates the freedom of tuning initial conditions in order to optimally match the evolution equation to the small-\(x\) data. This may reveal quantitative evidence for the need for improvements beyond leading-log, or even running coupling BK evolution. ## II Dipole-proton scattering at moderate \(x\) We first provide an overview of our approach to the light-cone structure of the proton. We employ a truncated Fock space description which starts with a three quark state. The corresponding Fock space amplitude (wave function) \(\Psi_{\rm qqq}\) corresponds to a non-perturbative solution of the QCD light-front Hamiltonian. To date, exact solutions for the light-cone wave functions are not available. In the future, lattice gauge theory may provide numerical solutions for moderate parton momentum fractions \(x_{i}\) and transverse momenta \(\vec{k}_{i}\) via a large momentum expansion of equal-time Euclidean correlation functions in instant quantization [16; 17]; see ref. [18] for a recent lattice computation of the wave function of the leading \(q\overline{q}\) state of the pion. Also, the MAP collaboration [19] has recently extracted the wave functions of the first four Fock states of the pion from fits to its parton distribution functions and electromagnetic form factor. Here, we rely on solutions of effective light-cone Hamiltonians for guidance on the low energy and low virtuality \(Q^{2}\) structure of the proton. Specifically, we shall employ the HO wave function of Refs. [20; 21]. In these references, the authors fixed the parameters of the three quark wave function to the proton "radius", or Dirac form factor at \(Q^{2}\to 0\), to the anomalous magnetic moments of the proton and neutron, and to the axial vector coupling \(g_{A}\). 
The wave function also matches reasonably well the empirical knowledge of the longitudinal and transverse momentum distribution of single quarks in the valence quark regime. Finally, the wave function of Refs. [20; 21] also provide predictions for quark _momentum correlations_. At next-to-leading order (NLO) in the Fock expansion we add the three quarks and one gluon state with amplitude \(\Psi_{\rm qqqg}\), as well as the virtual corrections to \(\Psi_{\rm qqq}\) due to the exchange of a gluon by two quarks in the proton. These corrections are obtained via light-cone perturbation theory calculations [14; 15]. The presence or exchange of the gluon extends the range of parton light-cone momentum fractions to lower \(x\), and pushes their transverse momenta into the perturbative regime. It also affects their momentum correlations. The central element of our analysis is the (imaginary part of the) eikonal scattering amplitude \(N\) of a small dipole of transverse size \({\bf r}\). The real part of \(N\) corresponds to two-gluon exchange, \[N({\bf r},{\bf b})=-g^{4}C_{\rm F}\int\frac{{\rm d}^{2}{\bf K} \,{\rm d}^{2}{\bf q}}{(2\pi)^{4}}\frac{\cos{({\bf b}\cdot{\bf K})}}{({\bf q}- \frac{1}{2}{\bf K})^{2}\ ({\bf q}+\frac{1}{2}{\bf K})^{2}}\] \[\times\bigg{(}\cos({\bf r}\cdot{\bf q})-\cos\bigg{(}\frac{{\bf r }\cdot{\bf K}}{2}\bigg{)}\bigg{)}\,G_{2}\,\bigg{(}{\bf q}-\frac{1}{2}{\bf K}, -{\bf q}-\frac{1}{2}{\bf K}\bigg{)}\,. \tag{1}\] Here \({\bf K}\) is the momentum transfer which is Fourier conjugate to the impact parameter \({\bf b}\). As explained below, we will eventually average \(N({\bf r},{\bf b})\) over a suitable range of impact parameters. We emphasize that the expression above accounts only for a single, perturbative two-gluon exchange (see its derivation in Ref. [22]), it does not resum the Glauber-Mueller multiple scattering series. This restricts its applicability to the regime of weak scattering, \(N({\bf r},{\bf b})\ll 1\). Furthermore, \(N({\bf r},{\bf b})\) actually acquires an imaginary part due to the perturbative exchange of three gluons; its magnitude has been shown to be much smaller than its real part [23; 24], and in practice it is of interest only for processes involving \(C\)-conjugation odd exchanges [25]. For the present purposes, it can be neglected. The coupling of the two static gluons to the proton is described in terms of the color charge density correlator \[\langle\rho^{a}({\bf q}_{1})\,\rho^{b}({\bf q}_{2})\rangle\equiv\delta^{ab}\,g ^{2}G_{2}({\bf q}_{1},{\bf q}_{2}). \tag{2}\] The color charge density operator corresponds to the light-cone plus component of the color current, \(\rho^{a}({\bf q})\equiv J^{+a}({\bf q})\), when the proton carries positive \(P_{z}\). Dozens of diagrams contribute to this correlator at NLO, their explicit expressions are listed in Ref. [14]. We point out that \(G_{2}({\bf q}_{1},{\bf q}_{2})\) satisfies a Ward identity due to the color neutrality of the proton; it vanishes when either \(q_{1}^{2}\) or \(q_{2}^{2}\to 0\) so that \(N({\bf r},{\bf b})\) in Eq. (II) is free of IR divergences. However, \(G_{2}\) does exhibit a collinear singularity which is regularized by assigning a mass to the quarks in the light-cone energy denominators for the \(q\to qg\) and \(qg\to q\) vertices; see Ref. [14] for details. All the results presented here were obtained with \(m_{\rm coll}=0.2\,{\rm GeV}\). This is consistent with the quark mass and transverse momentum scales which appear in the non-perturbative three quark wave function of Refs. 
[20; 21]. The color charge correlator also exhibits a soft singularity when the light-cone momentum fraction \(x_{g}\) of the gluon goes to zero. This is regularized with a cutoff \(x\) on \(x_{g}\), and the resummation of yet softer gluons will be performed through the BK equation. Note that at \(x=0.1\) the NLO contribution to \(G_{2}\) truly is a reasonably small \(\mathcal{O}(g^{2})\) perturbative correction [15]. However, by \(x=0.01\) its magnitude grows to essentially \(\mathcal{O}(1)\), a leading-log correction. Hence, at such \(x\) resummation is required and it is justified to use the computed dipole as an initial condition for the leading order BK evolution. We recall, also, that at the given order ultraviolet divergences cancel [14], so that \(G_{2}\) is independent of the renormalization scale, and that the coupling does not run. Lastly, let us mention that the angular dependence of the correlator \(G_{2}\), as well as the dependence of its Fourier transform on impact parameter, has been analyzed numerically in detail in Ref. [15]. ## III Small-\(x\) evolution of the proton light-cone wave function In order to obtain an initial condition for \(\mathbf{b}\)-independent BK evolution1 we average the dipole-target scattering amplitude obtained from Eq. (1) over the impact parameter \(\mathbf{b}\), Footnote 1: We limit ourselves to the \(\mathbf{b}\)-independent evolution in order to avoid the need to effectively model confinement scale effects which has been attempted e.g. in Refs. [26; 27]. \[N(r,x_{0})=\frac{1}{S_{T}}\int^{b_{\mathrm{max}}}\mathrm{d}^{2}\mathbf{b}\,N( \mathbf{r},\mathbf{b},x_{0}). \tag{3}\] Throughout this work, we denote the magnitudes of the transverse vectors as \(b=|\mathbf{b}|\) and \(r=|\mathbf{r}|\). The resulting amplitude is dominated by perturbative contributions when the dipole size \(r\) is small. In this region there is a small \(\cos(2\phi)\) dependence on the angle \(\phi\) between \(\mathbf{r}\) and \(\mathbf{b}\)[15] which vanishes when we integrate over \(\mathbf{b}\). Here \(S_{T}\) is the proton transverse area. Inclusive cross sections considered in this work are not sensitive to the actual shape of the target but only to the total transverse size. The proton geometry is most directly probed in exclusive vector meson production process where the total momentum transfer \(\mathbf{K}\) which is Fourier conjugate to the impact parameter is measurable. Parametrizing the \(\mathrm{J}/\psi\) production cross section in HERA kinematics as \(e^{-B_{D}\mathbf{K}^{2}}\) one obtains \(B_{D}=4\,\mathrm{GeV}^{-2}\)[28]. Assuming a Gaussian impact parameter profile for the proton, this corresponds to a two-dimensional root-mean-square radius \(b_{\mathrm{Gaussian}}=\sqrt{2B}\approx 0.56\,\mathrm{fm}\) and a proton area \(S_{T}=2\pi B\). On the other hand, if we assume a step function (hard sphere) profile for the proton, the same diffractive slope is obtained when the proton radius is \(b_{\mathrm{Hard\,sphere}}=2\sqrt{B}\approx 0.79\,\mathrm{fm}\), which corresponds to \(S_{T}=4\pi B\). Although exclusive vector meson data favors the Gaussian density profile over the hard sphere one (see e.g. [29]), the current data does not constrain the proton shape precisely. We also note that if the \(\mathbf{b}\)-dependent dipole amplitude from Eq. (1) is directly used to compute exclusive \(\mathrm{J}/\psi\) production cross section, the resulting spectra differs from the Gaussian profile case only in the region where there are no experimental constraints [30]. 
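The geometric bookkeeping above is easy to restate numerically. The short sketch below is illustrative only, with the unit conversions \(\hbar c=0.1973\,\mathrm{GeV\,fm}\) and \(1\,\mathrm{GeV}^{-2}\simeq 0.389\,\mathrm{mb}\) supplied by hand; it reproduces the quoted radii and the transverse areas used in the next paragraph.

```python
import numpy as np

HBARC = 0.1973        # GeV*fm
GEV2_TO_MB = 0.3894   # 1 GeV^-2 expressed in millibarn

B = 4.0  # diffractive slope from exclusive J/psi production, GeV^-2

# Gaussian impact-parameter profile: 2D rms radius and transverse area
b_gaussian = np.sqrt(2.0 * B) * HBARC         # ~0.56 fm
S_T_gaussian = 2.0 * np.pi * B * GEV2_TO_MB   # ~9.8 mb

# Hard-sphere (step function) profile giving the same diffractive slope
b_hard_sphere = 2.0 * np.sqrt(B) * HBARC         # ~0.79 fm
S_T_hard_sphere = 4.0 * np.pi * B * GEV2_TO_MB   # ~19.6 mb

print(b_gaussian, S_T_gaussian)         # 0.558 fm, 9.8 mb
print(b_hard_sphere, S_T_hard_sphere)   # 0.789 fm, 19.6 mb
```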
In this work the results shown below by default correspond to the Gaussian density profile (with \(b_{\mathrm{max}}=b_{\mathrm{Gaussian}}\)) unless otherwise stated, but we also study the dependence on the \(b_{\mathrm{max}}\) cut by using a step function profile with \(b_{\mathrm{max}}=b_{\mathrm{Hard\,\,sphere}}=\sqrt{2}b_{\mathrm{Gaussian}}\). The proton transverse area \(S_{T}\) has also been extracted by fitting a parametrized initial condition for the BK evolution equation to the HERA structure function data. Leading order analyses [9; 10] typically obtain \(S_{T}\sim 16\,\mathrm{mb}\). In recent fits at NLO accuracy [12; 13] proton areas \(S_{T}\sim 10\ldots 20\,\mathrm{mb}\) were obtained depending on the details of the analysis setup. We test this uncertainty in the proton small-\(x\) transverse profile by showing some results for both the Gaussian and hard sphere profiles with transverse areas \(9.8\,\mathrm{mb}\) and \(19.6\,\mathrm{mb}\), respectively. Before performing the impact parameter average we first study the impact parameter profile from the NLO light-cone wave function seen by a perturbative probe: \[T(\mathbf{b})=C\int^{r_{\mathrm{max}}}\mathrm{d}^{2}\mathbf{r}\,N(\mathbf{r},\mathbf{b},x). \tag{4}\] The normalization condition \(\int^{b_{\mathrm{max}}}\mathrm{d}^{2}\mathbf{b}\,T(\mathbf{b})=1\) is used to fix the constant \(C\). We will refer to \(T(\mathbf{b})\) as the transverse "density" profile to match standard terminology from the literature. As the dipole amplitude is a rapidly increasing function of the dipole size \(r\), this integral is dominated by dipoles of size \(r\sim r_{\mathrm{max}}\), as long as \(r_{\mathrm{max}}\) is in the perturbative domain. The extracted density profiles up to \(b_{\mathrm{max}}=0.8\,\mathrm{fm}\) for different \(r_{\mathrm{max}}\) are shown in Fig. 1. For reference, a Gaussian profile, as used e.g. in the popular IPsat parametrization [31] for the dipole amplitude with the slope \(B=4\,\mathrm{GeV}^{-2}\), is also shown.

Figure 1: Effective normalized proton density profiles at \(x=0.01\) extracted from \(N(\mathbf{r},\mathbf{b})\) with \(b_{\mathrm{max}}=0.8\,\mathrm{fm}\) up to NLO in the Fock expansion of the light-cone wave function.

We observe a similar transverse profile except for very central \(b\lesssim 0.2\) fm where the computed profile is more steeply falling. This region can only be probed at high momentum transfer \(|t|\gtrsim 1\) GeV\({}^{2}\)[30], which is not covered in the currently available coherent vector meson production data. The high-\(b\) tails of \(T(\mathbf{b})\) resulting from the LCPT one gluon emission corrections are exponential rather than Gaussian. However, in all we conclude that for the present purposes the Gaussian profile used to match \(S_{T}\) to \(b_{\mathrm{max}}\) is a reasonable approximation. The \(\mathbf{b}\)-averaged dipole amplitudes (using a Gaussian profile) are shown in Fig. 2 (linear scale) and Fig. 3 (logarithmic scale) at \(x_{0}=0.05,x_{0}=0.025\) and \(x_{0}=0.01\). Here we also show the dependence on the diffractive slope \(B\): the bands correspond to varying \(B\) by \(\pm 10\%\) which changes both \(S_{T}\) and \(b_{\mathrm{max}}\). The results depend weakly on this cut especially in the perturbative small-\(r\) domain. The dipole amplitude increases with \(r\), approximately proportional to \(r^{2}\), as expected. 
For \(r\gtrsim 0.4\) fm the color neutrality of the proton, and the fact that the dipole scatters from a target of finite transverse extent, begin to slow the growth of \(N\). Finally, when the size of the dipole becomes comparable to that of the target the amplitude is found to decrease again (not shown) as the end points of the dipole essentially "miss" the target. However, we emphasize that this behavior occurs at large \(r\sim\mathrm{few\,fm}\) where in any case the perturbative calculation of the scattering amplitude is not valid. Figures 2 and 3 confirm that down to \(x=0.01\) scattering of small dipoles with \(r\) significantly less than 1 fm remains quite weak, at least for \(\alpha_{\mathrm{s}}=0.2\) which we determine below from a fit to the charm cross section in DIS. Therefore, it appears reasonable to start small-\(x\) evolution with this initial condition at \(x\) in the range \(0.01\ldots 0.05\). To obtain analytic parameterizations of the dipole amplitude we fit our numerical data for the \(\mathbf{b}\)-averaged scattering amplitude to the following expression which is inspired by the McLerran-Venugopalan (MV) model [32]: \[N(r)=1-\exp\left[-\frac{(r^{2}Q_{s,0}^{2})^{\gamma}}{4}\ln\left(\frac{1}{r \Lambda}+e_{c}\cdot e\right)\right], \tag{5}\] where \(\Lambda=0.241\,\mathrm{GeV}\) is a fixed infrared scale. Such a parameterization has been used previously e.g. in Refs. [9; 10; 12] to fit the initial condition for BK evolution to the HERA data. While our fit is restricted to \(r<0.5\) fm, the parameterization forces \(N(r)\to 1\) in the large-\(r\) region. Of course, the behavior at large \(r\) can not be trusted, and other extrapolations would be possible. It is important, however, that the large-\(r\) extrapolation is such that the Fourier transform of \(1-N(r)\) at high \(k\) (and, consequently, the forward particle production cross section, for example) will be insensitive to the assumed form. We will also demonstrate below that perturbative observables, in our case the charm production cross section, are only sensitive to the perturbative regime of small dipoles where our calculation should apply, and not to the extrapolation to large \(r\). The free parameters in Eq. (5), \(Q_{s,0}^{2},\gamma\) and \(e_{c}\), are fit to the calculated dipole amplitude in the region \(0.01\,\mathrm{fm}<r<0.5\,\mathrm{fm}\) (we actually fit the logarithm of the dipole in order to give equal weight to small and intermediate \(r\)). The upper limit restricts to the perturbative domain, and the lower limit is imposed in order to give some weight to the region of intermediate \(r\) as well. The resulting dipole amplitudes are shown in Figs. 2 and 3 as dotted lines. The fit parameters are listed in Table 1. In the fit we require that \(e_{c}>e^{-1}\) in order to enforce positivity of the logarithm in Eq. (5), and all fit results give \(e_{c}=e^{-1}\) within numerical accuracy, i.e. they require as Figure 3: Same as Fig. 2 but on a double logarithmic scale in order to better exhibit the behavior at small \(r\). Figure 2: Impact-parameter averaged dipole as a function of dipole size \(r\) at two different momentum fractions \(x\). The bands correspond to varying the proton shape parameter \(B\) by 10%. The dotted lines show best fits to the central values with the modified MV model parametrization of Eq. (5). small an infrared cutoff as allowed. 
The MV-model inspired parameterization is found to describe the dipole-proton scattering amplitude quite well, for all dipole sizes in the perturbative \(r\lesssim 0.5\,\)fm region. Here, of course, the linearized version of Eq. (5) is sufficient, as it should be: recall that Eq. (1) does not resum multiple scattering. The momentum scale \(Q_{s,0}^{2}\) remains non-perturbative down to \(x=0.01\); see below for an extraction of a "saturation scale" at lower \(x\). However, it increases approximately as \(Q_{s,0}^{2}\sim 1/x^{0.47}\). The "anomalous dimension" of the dipole amplitude is \(\gamma=1\) within the numerical accuracy. On the other hand, leading order fits to HERA total cross section data [9; 10] require \(\gamma\sim 1.1-1.2\) in order to obtain as slow a \(Q^{2}\) dependence of the cross section as required by the HERA data [1; 33]. Recent fits at next-to-leading order accuracy performed in Ref. [12; 13] also prefer \(\gamma\gtrsim 1\), although some fits with \(\gamma\sim 1\) are also possible if the heavy quark production data is not taken into account. A problem with \(\gamma>1\) is that it renders the (dipole) unintegrated gluon distribution function [7; 34; 35] and the forward particle production cross section negative [10; 36] in some range of transverse momentum \({\bf k}_{T}\). The dipole amplitude obtained here does not display this issue. Next we solve the leading order BK equation with fixed coupling, using the numerical data for \(N(r,x_{0})\) as an initial condition at \(x_{0}=0.01\). Note that at this order in \(\alpha_{\rm s}\) the coupling constant does not run in the LCPT calculation of the initial condition, and consequently we also limit ourselves to the fixed coupling case here. Evolution over 6 units of rapidity is shown in Fig. 4. For comparison, we also solve the BK equation using the modified MV-model initial condition with parameters as shown in Table 1. This parameterized initial condition has a completely different behavior in the infrared region with \(N(r)\to 1\) at large \(r\) whereas the numerical data gives a decreasing \(N(r)\) when \(r\) exceeds a few fm, as already mentioned above. However, as can be seen in Fig. 4 the resulting BK-evolved dipole amplitudes are basically identical in the perturbative \(r\lesssim 0.5\,\)fm domain. In fact, due to the approach to the fixed point of the BK equation [37; 38; 39], at high rapidity the difference between the scattering amplitudes evolved with the two initial conditions diminishes. This demonstrates that the BK-evolved amplitude at small \(r\) is not affected by the uncontrolled large-\(r\) extrapolation of the initial condition. One may define a saturation radius \(r_{s}\), and a corresponding saturation momentum \(Q_{s}=\sqrt{2}/r_{s}\) from the condition that \(N(r_{s})=1-\exp(-\frac{1}{2})\simeq 0.4\). For this to be a perturbative scale requires about 6 units of rapidity evolution, as can also be seen from Fig. 4. This corresponds to \(x\simeq 2.5\cdot 10^{-5}\), where \(r_{s}\simeq 0.3\) fm, and \(Q_{s}\simeq 1\) GeV. These values are not very far from the first "saturation model" fit to HERA DIS data by Golec-Biernat and Wusthoff [40] from 25 years ago. Many more recent fits mentioned above have since confirmed that reaching the strong scattering regime with a small dipole and a proton target requires deep evolution to rather small \(x\). Since scattering at \(x=0.01\) is fairly weak we have also evolved our initial condition with the linear BFKL equation [41; 42; 43], see Fig. 5. 
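As a concrete illustration of how weak the scattering is at the initial condition, the sketch below evaluates the fitted parameterization of Eq. (5) with the central Table 1 values (Gaussian profile) and recovers the quoted \(Q_{s,0}^{2}\sim 1/x^{0.47}\) growth. It is a numerical restatement of the fit results only, meaningful in the \(r\lesssim 0.5\) fm window where the fit was performed.

```python
import numpy as np

LAMBDA = 0.241  # GeV, fixed infrared scale in Eq. (5)
HBARC = 0.1973  # GeV*fm

# Central fit values from Table 1 (Gaussian profile): x -> Q_{s,0}^2 in GeV^2,
# with gamma = 1 and e_c = 1/e within numerical accuracy.
QS0_SQ = {0.01: 0.100, 0.025: 0.066, 0.05: 0.047}
GAMMA, E_C = 1.0, np.exp(-1.0)

def dipole_amplitude(r_fm, qs0_sq, gamma=GAMMA, e_c=E_C):
    """Modified MV-model parameterization, Eq. (5); r given in fm."""
    r = r_fm / HBARC  # convert to GeV^-1
    exponent = (r**2 * qs0_sq)**gamma / 4.0 * np.log(1.0 / (r * LAMBDA) + e_c * np.e)
    return 1.0 - np.exp(-exponent)

for r_fm in (0.1, 0.3, 0.5):
    print(r_fm, dipole_amplitude(r_fm, QS0_SQ[0.01]))  # stays well below 1

# Effective x-dependence of Q_{s,0}^2 between the two endpoints of Table 1
exponent_x = np.log(QS0_SQ[0.01] / QS0_SQ[0.05]) / np.log(0.05 / 0.01)
print(exponent_x)  # ~0.47
```

The amplitude remains far below unity throughout the perturbative window at \(x=0.01\), consistent with Figs. 2 and 3 and with starting the BK evolution there.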
After a few units of rapidity evolution, the linear equation begins to violate unitarity, \(N(r)\leq 1\), at large \(r\). However, this regime of large dipoles is not under control in any case. More importantly though, at \(y=2-4\) the absence of the non-linear correction begins to affect the solution significantly even at \(r\) substantially less than 1 fm. With BFKL we also noticed a greater difference between evolving the actual numerical data vs. the analytic modified MV-model parametrization (not shown), which differ in their large-\(r\) extrapolation. Therefore, for accurate results it appears to be rather important to evolve with the non-linear BK equation even if one restricts to \(r<1\) fm. Let us finally study how the \(x\) dependence obtained from the direct, fixed order NLO LCPT calculation differs to the one obtained by solving the BK equation. We note that in the LCPT calculation \(x\) is an explicit cutoff for the longitudinal momentum of the emitted gluon, and this gluon emission is calculated in exact kinematics. On Figure 4: Leading-log BK evolution at \(\alpha_{\rm s}=0.2\) starting at \(x_{0}=0.01\). From bottom to top the lines correspond to evolution rapidity \(0,2,4\) and \(6\). The dashed lines are obtained with the fitted MV-model like parameterization from Table 1 as an initial condition; the solid lines evolve the actual numerical data for \(N(|{\bf r}|,x_{0})\). \begin{table} \begin{tabular}{c|c|c|c} \(x\) & \(Q_{s,0}^{2}\) [GeV\({}^{2}\)] & \(\gamma\) & \(e_{c}\) \\ \hline 0.01 & \(0.100^{+0.004}_{-0.004}\) & \(1.001^{+0.001}_{-0.001}\) & \(e^{-1}\) \\ 0.025 & \(0.066^{+0.003}_{-0.003}\) & \(0.998^{+0.001}_{-0.001}\) & \(e^{-1}\) \\ 0.05 & \(0.047^{+0.002}_{-0.002}\) & \(0.997^{+0.001}_{-0.001}\) & \(e^{-1}\) \\ \end{tabular} \end{table} Table 1: Best fit parameters to the modified MV model parameterization, Eq. (5), for a fit over \(0.01\,\)fm \(<r<0.5\,\)fm. The upper and lower limits are obtained by varying the proton shape parameter \(B\) by \(\pm 10\%\). All fit results give \(e_{c}=e^{-1}\) within numerical accuracy. the other hand, in BK evolution multiple soft gluon emissions are resummed. This comparison is done by calculating the dipole amplitude at \(x=0.01\) directly from the LCPT using Eq. (1), and comparing that to the dipole amplitude obtained by solving the BK equation with the initial condition computed at \(x_{0}=0.05\). The resulting dipole amplitudes are shown in Fig. 6. The most significant difference between the fixed order \(\mathcal{O}(g^{2})\) LCPT amplitude and the BK evolved dipole is that the evolution equation decreases the anomalous dimension \(\gamma\) towards the asymptotic value \(\gamma\sim 0.6\). On the other hand, the emission of one gluon in the direct LCPT calculation does not modify the extracted anomalous dimension, as can also be seen from Table. 1. This is, of course, the expected behavior. As already mentioned above, DIS phenomenology does not appear to support \(\gamma<1\) at \(x=0.01\) or greater, so it seems reasonable to treat at least the emission of the first gluon with \(x_{g}>0.01\) in exact kinematics2. Footnote 2: Also, the emission of the first gluon actually increases the imaginary part due to \(C\)-odd three gluon exchange [24], which provides another indication that small-\(x\) evolution should not be started much before \(x_{0}\simeq 0.01\). ## IV Total cross section at small \(x\) Next, we consider the DIS structure functions at small Bjorken-\(x\). 
The overall normalization of the dipole amplitude depends on the strong coupling constant \(\alpha_{\rm s}=g^{2}/(4\pi)\), see Eq. (1). The same coupling constant also affects the Bjorken-\(x\) dependence of the dipole amplitude via the BK evolution. In this work, our strategy is to fix the value of \(\alpha_{\rm s}\) by calculating the total charm production cross section, and comparing it to the HERA reduced cross section data from Ref. [44]. We set the initial condition for the BK evolution at \(x_{0}=0.01\), and compare it to the HERA data in the region \(x<0.01,Q^{2}<100\,\mathrm{GeV}^{2}\) (note that the smallest \(Q^{2}\) bin in the data is \(Q^{2}=2.5\) GeV\({}^{2}\)). In this region, there are \(N=39\) data points. The experimental data is reported as reduced cross section \[\sigma_{r}(x,y,Q^{2})=F_{2}(x,Q^{2})-\frac{y^{2}}{1+(1-y)^{2}}F_{L}(x,Q^{2}). \tag{6}\] Here \(y=Q^{2}/(sx)\) is the inelasticity variable, not to be confused with the evolution rapidity. The proton structure functions \(F_{2}\) and \(F_{L}\) are expressed in terms of the total cross section for the virtual photon-proton cross section \(\sigma^{\gamma^{*}p}\): \[F_{2}(x,Q^{2}) =\frac{Q^{2}}{4\pi\alpha_{\rm em}}\left(\sigma_{T}^{\gamma^{*}A} +\sigma_{L}^{\gamma^{*}A}\right), \tag{7}\] \[F_{L}(x,Q^{2}) =\frac{Q^{2}}{4\pi\alpha_{\rm em}}\sigma_{L}^{\gamma^{*}A}. \tag{8}\] In the dipole picture, the total cross section for the virtual photon-proton scattering can be expressed as a convolution of the photon wave function and the dipole amplitude as [2] \[\sigma_{T,L}^{\gamma^{*}A}=2\sum_{f}\int\mathrm{d}^{2}\mathbf{b}\,\mathrm{d}^ {2}\mathbf{r}\,\mathrm{d}z\,\left|\Psi^{\gamma^{*}\to q\bar{q}}(\mathbf{r},z,Q ^{2})\right|^{2}\,N(\mathbf{r},\mathbf{b},\bar{x}). \tag{9}\] Here \(f\) is the quark flavor, \(Q^{2}\) the photon virtuality and \(\Psi^{\gamma^{*}\to q\bar{q}}\) is the leading order wave function for the \(q\bar{q}\) Fock state of the virtual photon. In this equation we replace \(N(\mathbf{r},\mathbf{b},\bar{x})\) by the impact parameter averaged dipole amplitude \(N(r,\bar{x})\), as described above, and \(\int\mathrm{d}^{2}\mathbf{b}\to S_{T}\) Figure 6: The dipole scattering amplitude at \(x=0.01\) with and without prior BK evolution. The parameter \(\gamma\) denotes the resulting anomalous dimension fitted in the region \(0.01\,\mathrm{fm}<r<0.5\,\mathrm{fm}\). Figure 5: Leading-log BK (solid lines) vs. BFKL (dotted lines) evolution starting at \(x_{0}=0.01\). We fix the mass of the \(c\) quark to 1.4 GeV. The dipole amplitude in Eq. (9) is evaluated at \(\bar{x}=x(1+4m_{q}^{2}/Q^{2})\), where \(m_{q}\) is the quark mass which enforces a smooth approach to the photoproduction limit [9; 40]. In order to confirm that the charm production cross section is not sensitive to non-perturbatively large dipoles we show in fig. 7 the fraction of the total cross section at \(x=0.0056,Q^{2}=10\,\mathrm{GeV}^{2}\) as a function of the upper limit for the \(r\) integral \(r_{\mathrm{max}}\) included in Eq. (9). It is evident that the charm cross section is saturated by small dipoles whereas the inclusive cross section (calculated using \(m_{q}=0.14\) GeV for the light quarks) at \(Q^{2}=10\) GeV\({}^{2}\) is sensitive to larger dipoles beyond sizes where we may trust our calculation. When using a modified MV-model parametrization as an initial condition for the evolution with different extrapolation in the infrared region, one needs to integrate up to even larger \(r\) in order to recover the full result for \(F_{2}\). 
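To make the kinematics entering Eqs. (6) and (9) concrete, a minimal sketch is given below. The structure-function values passed to it are placeholders; only the quark-mass shift \(\bar{x}=x(1+4m_{q}^{2}/Q^{2})\) and the inelasticity \(y=Q^{2}/(sx)\) are taken from the text.

```python
def xbar(x, Q2, m_q):
    """Momentum fraction at which the dipole amplitude is evaluated in Eq. (9)."""
    return x * (1.0 + 4.0 * m_q**2 / Q2)

def reduced_cross_section(F2, FL, x, Q2, sqrt_s=318.0):
    """Reduced cross section of Eq. (6); F2 and FL are illustrative inputs here."""
    y = Q2 / (sqrt_s**2 * x)  # inelasticity
    return F2 - y**2 / (1.0 + (1.0 - y)**2) * FL

# Charm (m_c = 1.4 GeV) at x = 0.0056, Q^2 = 10 GeV^2 probes xbar ~ 0.01,
# the point used in Fig. 7.
print(xbar(0.0056, 10.0, 1.4))                        # ~0.0100
print(reduced_cross_section(0.5, 0.1, 0.0056, 10.0))  # placeholder F2, FL values
```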
The charm production cross section is not affected by the different large-\(r\) extrapolation (not shown). Qualitatively similar results have been obtained with the commonly used IPsat parametrization for the dipole-proton amplitude in which case typically even larger dipole sizes contribute compared to the setup with factorized impact parameter dependence applied in this work [31; 45]. For these reasons we shall focus on charm production. Considering only the strong coupling constant \(\alpha_{\mathrm{s}}\) as free parameter we obtain a reasonably good description of the charm production data. The value of \(\chi^{2}/(N-1)\) as a function of \(\alpha_{\mathrm{s}}\) is shown in Fig. 8 using two different density profiles (Gaussian and hard sphere) for the proton. These two setups have different upper limits for the impact parameter \(b\) and correspondingly different transverse areas for the proton. The extracted optimal values for the strong coupling constant are \(\alpha_{\mathrm{s}}=0.200\) for the Gaussian proton and \(\alpha_{\mathrm{s}}=0.181\) for the hard sphere profile. These values are used throughout this paper. We note that fits of the total (rather than charm) cross section with tuned initial conditions [9; 10] and running coupling corrections to the BK equation have achieved lower \(\chi^{2}/N\approx 1\), without being able to simultaneously describe the charm data [9]. However, with our _calculated_ initial condition there is room for the expected improvements of the photon wave function, evolution equation, and, of course, of the initial condition. In this analysis, we have fixed the collinear regulator to \(m_{\mathrm{coll}}=0.2\,\mathrm{GeV}\) in the LCPT calculation of the initial condition. As the charm cross section is dominated by small dipoles, our results are not highly sensitive to the actual value of this regulator: changing \(m_{\mathrm{coll}}\) by a factor of 2 changes \(\chi^{2}/(N-1)\) to HERA data by only \(2\dots 5\%\) when using the optimal \(\alpha_{\mathrm{s}}\). We also keep the charm mass fixed to \(1.4\) GeV. The optimal value for \(\alpha_{\mathrm{s}}\) and the achieved \(\chi^{2}/(N-1)\) naturally will depend on this choice. We choose to work with fixed quark mass and collinear regulator and do not attempt to fit these simultaneously with \(\alpha_{\mathrm{s}}\), as the purpose of this work is to demonstrate the feasibility of _computing_ the initial condition for the BK equation, and we emphasize that numerically potentially important higher order effects such as running coupling are still missing from the setup. A comparison to the HERA charm production data in different Bjorken-\(x\) bins is shown in Fig. 9 as a function of the photon virtuality. We have checked that these results remain the same if we use the analytic parameterization (5), with parameters from table 1, as initial Figure 7: The fraction of the charm and inclusive DIS structure functions at \(x=0.0056,Q^{2}=10\) GeV\({}^{2}\) (corresponding to \(\bar{x}=0.01\) in the case of charm production) as function of the cutoff on the dipole size imposed in Eq. (9). The total cross section is the sum of light quark (mass \(0.14\,\mathrm{GeV}\)) and charm quark (mass \(1.4\,\mathrm{GeV}\)) production contributions. Figure 8: Determining the strong coupling constant by fitting the HERA charm production data in the region \(Q^{2}<100\,\mathrm{GeV}^{2},x<0.01\), with leading-log BK evolution started at \(x_{0}=0.01\). 
The solid lines are polynomial fits to the computed values used to extract the optimal values for \(\alpha_{\mathrm{s}}\). The optimal values are \(\alpha_{\mathrm{s}}=0.200,\chi^{2}/(N-1)=2.27\) for the Gaussian proton and \(\alpha_{\mathrm{s}}=0.181,\chi^{2}/(N-1)=2.28\) for the hard sphere proton. condition; this confirms the insensitivity of the charm cross section to the large-\(r\) extrapolation of the scattering amplitude. At \(x=0.008\) there is only very little (\(\leq 0.2\) units of rapidity when \(x_{0}=0.01\)) evolution, so the dipole amplitude is almost completely determined by our initial condition. On the other hand, we also show results at lower \(x\) which is dominated by the BK evolution. In addition to our standard setup where the initial condition for the BK evolution is set at \(x_{0}=0.01\), we also show results using an initial condition computed at larger \(x_{0}=0.05\). Note the weak dependence of this observable, at least, on where the "hand-off" from the \(x\)-dependent initial condition to the evolution equation occurs. In contrast, ad-hoc initial condition parametrizations have to be re-tuned when \(x_{0}\) is changed. Fig. 9 shows a fair agreement of the reduced cross section obtained from our light-cone wave function with the HERA charm data. Close to the initial condition we obtain a slightly slower \(Q^{2}\) dependence than seen in the data. As a result of the evolution this changes into faster virtuality dependence at very small \(x\). This is because the BK evolution at fixed coupling develops a small anomalous dimension \(\gamma\approx 0.6\) for the dipole amplitude and a smaller anomalous dimension results in faster \(Q^{2}\) dependence. Lastly, in Fig. 10 we study how sensitive the charm production cross section is on the chosen proton density profile, and as such on the maximum impact parameter \(b_{\rm max}\) used in Eq. (3). The cross section is calculated at \(x=0.008\) which is close to the initial condition for the BK evolution, again set at \(x_{0}=0.01\). In both cases, we use the optimal value for the strong coupling constant extracted above. The cross section increases only slightly when the hard sphere profile with larger \(b_{\rm max}\) is used, but the dependence on the virtuality is not affected. This weak dependence on the selected proton profile confirms that our results are not sensitive to non-perturbatively large impact parameters. ## V Summary and discussion The present work represents a first attempt at relating the short-distance structure of the proton at high energies to its low-energy properties, covering several orders of magnitude in energy. We start from an effective three quark light-cone wave function which models the non-perturbative longitudinal and transverse momentum distributions of quarks at \(x\gtrsim 0.1\) as well as some of their correlations. The next step involves the computation, using exact kinematics, in light-cone perturbation theory of the \({\cal O}(g)\) correction to the light-cone wave function due to the emission of a gluon, and the \({\cal O}(g^{2})\) virtual corrections due to the exchange of a gluon by two quarks. Optimistically, this should extend the validity of the resulting light-cone wave function into the regime of perturbative transverse momenta, and parton momentum fractions \(x\gtrsim 0.01\). The corresponding dipole scattering amplitude is then evolved to yet higher energies (lower \(x\)) by solving the BK equation, which resums emissions of additional soft gluons. 
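As a quick cross-check of the goodness-of-fit numbers quoted above, the fit quality can be converted into the tail probability of the \(\chi^{2}\) distribution discussed below; a minimal sketch, assuming SciPy is available:

```python
from scipy.stats import chi2

ndof = 38            # degrees of freedom of the charm fit
chi2_per_dof = 2.27  # Gaussian-profile fit result

p_value = chi2.sf(chi2_per_dof * ndof, df=ndof)
print(p_value)       # ~1.3e-5
```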
Figure 10: Reduced cross section at \(\sqrt{s}=318\,\)GeV calculated close to the initial condition using the Gaussian and Hard sphere density profiles, respectively. Figure 9: Charm production reduced cross section at \(\sqrt{s}=318\,\)GeV compared to the HERA data. Results are shown in the region where \(\bar{x}\leq 0.01\). The convolution of the LO photon light-cone wave function with the impact parameter averaged BK dipole scattering amplitude at leading logarithmic accuracy provides a fair description of the reduced DIS charm cross section measured at HERA, for \(\alpha_{\mathrm{s}}\simeq 0.2\). This value of the strong coupling was obtained from a fit to the charm cross section at \(Q^{2}<100\) GeV\({}^{2}\) and \(x<0.01\). None of the parameters of the low-energy three-quark model wave function were re-tuned to the high-energy data. Despite the fair description of the highly accurate data the resulting \(\chi^{2}/N_{\mathrm{dof}}\approx 2.27\) with \(N_{\mathrm{dof}}=38\) implies a very low statistical significance, i.e. a very low probability that the data represents statistical fluctuations about the model predictions: the integral over the \(\chi^{2}\) distribution from \(\chi^{2}=2.27\times 38\) to infinity, commonly denoted as the "p-value", is \(1.3\times 10^{-5}\). However, the very low statistical significance of the fit should not be confused with a need for large corrections, Fig. 9 shows that this is clearly not the case. This is entirely expected since there are multiple _known_ sources of corrections such as, for example, of the photon wave function [46; 47; 48], of the evolution equation [49; 50; 51; 52; 53], and, of course, of the initial condition for the evolution equation (our proton light-cone wave function) which, e.g., may be improved with running coupling corrections. The data requires fairly moderate but _systematic_ improvements of the model predictions across the relevant ranges of \(x\) and \(Q^{2}\). We have also provided analytic parameterizations of the impact parameter averaged dipole scattering amplitude for \(x=0.01\ldots 0.05\) which accurately fit the numerical data in the regime of perturbative dipoles, \(r\lesssim 0.5\) fm. These parameterizations can be used in practice to estimate the corrections predicted by more accurate evolution equations. In the supplementary material we also provide the tabulated numerical data for \(N(r,x)\). Their large-\(r\) extrapolation differs from that of the analytic parameterizations which allows for tests of the (in-)sensitivity to the uncontrolled non-perturbative regime of large dipoles. The quest for more accurate theoretical predictions at high energy for the upcoming EIC at BNL [54] and the proposed LHeC/FCC-he at CERN [55] requires initial conditions for the evolution equations which do not absorb theoretical improvements into a re-tune of their parameters. ###### Acknowledgements. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the Saturated Glue (SURGE) Topical Theory Collaboration. A.D. acknowledges support by the DOE Office of Nuclear Physics through Grant DE-SC0002307, and The City University of New York for PSC-CUNY Research grant 65079-00 53. This work was supported by the Academy of Finland, the Centre of Excellence in Quark Matter and projects 338263 and 346567 (H.M), and projects 347499 and 353772 (R.P). 
H.M is also supported under the European Union's Horizon 2020 research and innovation programme by the European Research Council (ERC, grant agreement No. ERC-2018-ADG-835105 YoctoLHC) and by the STRONG-2020 project (grant agreement No. 824093). The content of this article does not reflect the official opinion of the European Union and responsibility for the information and views expressed therein lies entirely with the authors. Computing resources from CSC - IT Center for Science in Espoo, Finland and from the Finnish Grid and Cloud Infrastructure (persistent identifier urn:nbn:fi:research-infras-2016072533) were used in this work.
2310.16201
A Convex Parameterization of Controllers Constrained to use only Relative Measurements
The optimal controller design problem for systems equipped with sensors that measure only relative, rather than absolute, quantities is considered. This relative measurement structure is formulated as a design constraint; it is demonstrated that the resulting constrained controller design problem can be written as a convex program. Certain additional network structural constraints can be incorporated into this formulation, making it especially useful in distributed or networked settings. An illustrative example highlights the advantage of the proposed methodology over the standard formulation of the output feedback controller design problem. A numerical example is provided.
Walden Marshall, Bassam Bamieh, Emily Jensen
2023-10-24T21:36:37Z
http://arxiv.org/abs/2310.16201v2
# A Convex Parameterization of Controllers Constrained to use only Relative Measurements ###### Abstract We consider the optimal controller design problem for distributed systems in which subsystems are equipped with sensors that measure only differences of quantities such as relative (rather than absolute) positions and velocities. While such problems can be set up as a standard problem of robust output feedback control, we illustrate with a counterexample that this may be undesirable and then propose an alternate equivalent formulation. In particular, we provide an example of sparsity constraints that are not quadratically-invariant with respect to a standard formulation of a given plant, but that can be written as quadratically-invariant constraints with respect to a transformed version of this problem. In effect, our transformation provides a path to convert the controller design problem to an equivalent convex program. This problem transformation relies on a novel parameterization of controllers with general relative measurement structures that we derive here. We further illustrate the usefulness of this novel parameterization through an example of optimal consensus design with prescribed communication delays within the controller. ## I Introduction The optimal distributed controller design problem is inherently more challenging than its centralized counterpart. In the distributed setting, each subcontroller component has access to only a subset of system measurements - those taken by its own local sensors or obtained through measurement sharing with other subcontrollers according to the underlying network structure. This limited measurement access across different subcontrollers is typically encoded through sparsity, delay, or other structural constraints on the controller design problem, and will be referred to as _network control structure_ throughout this paper. When these constraints are imposed, many controller design problems with well-known solutions (e.g., \(\mathcal{H}_{2}\) or \(\mathcal{H}_{\infty}\)) become non-convex without known tractable solution methods. Notable exceptions include the settings of quadratic invariance [1] and funnel causality [2]. In addition to this network control structure, some distributed systems are also limited by the type of sensor available. Specifically, we consider here the setting of systems that are equipped with sensors that measure only _relative_, rather than absolute, quantities. In some cases, it is not possible to obtain absolute measurements, e.g. in satellite formation problems where GPS or communication with a ground station is unavailable [3]. In other cases, relative measurement sensors are much easier to implement, e.g. the measurement of relative position between neighboring vehicles in a platoon [4] or relative voltage angles between generators in an AC power network [5], or are more accurate, e.g., in the case of vision sensors that measure relative bearings between subsystems [6, 7]. Relative measurements are also commonplace in a variety of synchronization and consensus problems [8, 9, 10]. Performance and control of systems with these relative sensing architectures have been studied in detail [8, 11, 12]. However, although network control structures are typically encoded as constraints on the controller design problem, an analogous general characterization of _relative measurement structures_ as a design constraint appears to be lacking. 
This work addresses this gap in the literature by providing _a convex parameterization of the set of all controllers that satisfy a given relative measurement architecture_. This relative measurement architecture describes which relative quantities are measured by any sensor in the system, and is independent of any network control structure. A framework for analysis of relative measurement structures as a controller design constraint may allow for a more systematic approach to quantifying achievable performance and fundamental limitations of systems that use relative sensing. Some of these limitations are already well-known, e.g., vehicular platoons with only relative measurement devices have been shown to have arbitrarily degrading best-achievable performance as the number of vehicles increases [4], and additional limitations based on the directionality of these measurements has been quantified [13]. The variance of noisy consensus-type problems with relative measurements has also been shown to scale unboundedly for certain network topologies [10]. The potential for the analysis of relative sensing as a design constraint to identify further fundamental system properties has been observed in special cases already. For example, a relative measurement structure was incorporated as a design constraint for a specific consensus problem in [14], and was shown to lead to infeasibility of the controller design problem when combined with additional closed-loop structural [15] constraints. This clearly illustrated the incompatibility of closed-loop structure and relative sensing in this problem. However, the approach taken in this work relied on the system dynamics being dependent only on relative states. Although this does occur, e.g., the swing equations for power networks depend only on relative generator voltage angles, there are clearly many settings where this property does not occur, e.g., a case is that of dynamics that are decoupled across subsystems in open-loop. The parameterization herein does not rely on any such structure of the open-loop dynamics. Additionally, the parameterization of [14] assumed that every possible relative measurement was available to the controller - in what follows, we consider the case that only certain relative quantities are sensed. Moreover, the parameterization presented here is easily compatible with well-studied controller design approaches such as the classical Youla-Kucera parameterization [16]. The rest of this paper is structured as follows. In Section II, we introduce a graph to formalize the notion of a relative measurement structure and review network control structures that introduce additional constraints in the distributed setting. In Section III, we provide an example for which the standard output feedback design formulation is not solvable by standard techniques, motivating the need for a new approach to relative measurement controller design. Our main results are presented in Section IV: a characterization of all controllers that conform to a given relative measurement structure is derived in Theorem 1, and this is utilized to provide an equivalent formulation of the optimal controller design problem in Corollary 1. In Section V, we illustrate the convexity of our new formulation and describe how to incorporate network control structural constraints. An example of consensus of first-order systems with communication delays is presented in Section VI. We conclude with a summary of future directions for this work. 
## II Problem Set Up We consider a generalized plant with linear time-invariant (LTI) dynamics \[\dot{x}(t) =Ax(t)+B_{1}w(t)+B_{2}u(t) \tag{1}\] \[z(t) =C_{1}x(t)+D_{12}u(t)\] \[y(t) =C_{2}x(t)\] where \(x(t)\in\mathbb{R}^{n},w(t)\in\mathbb{R}^{q},u(t)\in\mathbb{R}^{l},z(t)\in \mathbb{R}^{r}\) and \(y(t)\in\mathbb{R}^{p}\) are the internal state, exogenous disturbance, control signal, performance output and measurement available to the controller, each at time \(t\), respectively. We often omit the dependence on \(t\) when clear from context. Using bold face lowercase lettering to denote signals in the frequency domain, and bold face upper case lettering for linear time-invariant (LTI) systems, we write the transfer function representation of the system (1), \(\left[\begin{array}{c}\mathbf{z}\\ \mathbf{y}\end{array}\right]=\mathbf{P}\left[\begin{array}{c}\mathbf{w}\\ \mathbf{u}\end{array}\right],\) partitioned as \[\mathbf{P} =\left[\begin{array}{cc}\mathbf{P}_{zw}&\mathbf{P}_{zu}\\ \mathbf{P}_{yw}&\mathbf{P}_{yu}\end{array}\right] \tag{2}\] \[=\left[\begin{array}{cc}C_{1}(sI-A)^{-1}B_{1}&C_{1}(sI-A)^{-1}B _{2}+D_{12}\\ C_{2}(sI-A)^{-1}B_{1}&C_{2}(sI-A)^{-1}B_{2}\end{array}\right].\] The standard problem of robust/optimal design of an output feedback controller \(\mathbf{u}=\mathbf{K}\mathbf{y}\) takes the form \[\inf_{\mathbf{K}} \|\mathbf{P}_{zw}+\mathbf{P}_{zu}\mathbf{K}(I-\mathbf{P}_{yu}\mathbf{K})^{-1}\mathbf{P}_{ yw}\| \tag{3}\] \[\mathrm{s.t.} \mathbf{K}\ \mathrm{internally\ stabilizes}\ \mathbf{P},\] for \(\|\cdot\|\) an operator norm. For simplicity of exposition, we present our results in the continuous-time setting, noting that the discrete-time setting follows analogously. ### _Relative Measurement Structure_ We assume the output vector \(y(t)\) of system (1) contains all measurements available to the controller, and that each of these measurements is a _relative_ (rather than absolute) quantity. Specifically, we make the following assumption. **Assumption 1**.: \(C_{2}\) _is full row rank and each row of \(C_{2}\) contains exactly one entry of \(1\) and one entry of \(-1\)._ This assumption corresponds to each entry of \(y(t)\) being a difference of two entries of \(x(t)\) and prevents redundant measurements. This property appears in the transfer matrices \(\mathbf{P}_{yu}\) and \(\mathbf{P}_{yw}\) of the design problem (3). Consider, for example, \[y(t)=\left[\begin{array}{ccc}1&-1&0&0\\ 0&0&1&-1\end{array}\right]x(t)=\left[\begin{array}{c}x_{1}(t)-x_{2}(t)\\ x_{3}(t)-x_{4}(t)\end{array}\right], \tag{4}\] as illustrated below. For this example, we observe that the relative measurement structure (4) splits the set of states into two connected components. To capture the relative measurement architecture more generally, let each state \(x_{i},\ i=1,...,n\) of system (1) correspond to a node in a graph. Construct the adjacency matrix of the graph, denoted by \(\mathcal{A}_{\mathrm{meas}}\). according to \[\mathcal{A}_{\mathrm{meas}}.(i,j)=\begin{cases}1,\ \exists\ \ell\ \mathrm{s.t.}\ C_{2}(\ell,i)\neq 0 \ \mathrm{and}\ C_{2}(\ell,j)\neq 0\\ 0,\ \mathrm{else}.\end{cases} \tag{5}\] In effect, there is an edge in the graph between two nodes if the relative difference between the corresponding two states is measured. ### _Network Control Structure_ When the controller is composed of spatially distributed subcontroller units, each subcontroller typically has access to only a subset of system measurements and with limited or delayed communication between these subcontrollers. 
We describe the corresponding delay, sparsity and other constraints that arise from the this network control structure (independent of the relative measurement architecture) as a subspace constraint, \(\mathcal{S}\), on the LTI controller \(\mathbf{u}=\mathbf{K}\mathbf{y}\): \[\mathbf{K}\in\mathcal{S}. \tag{6}\] The controller design problem (3) subject to the network control structure constraints (6) is given by: \[\inf_{\mathbf{K}} \|\mathbf{P}_{zw}+\mathbf{P}_{zu}\mathbf{K}(I-\mathbf{P}_{yu}\mathbf{K})^{-1}\mathbf{P}_{ yw}\| \tag{7}\] \[\mathrm{s.t.} \mathbf{K}\ \mathrm{internally\ stabilizes}\ \mathbf{P}\] \[\mathbf{K}\in\mathcal{S}.\] In the next section, we present an example for which the Youla-Kucera parameterization [1, 16] can be easily utilized to rewrite the standard formulation (7) in the form of an affine objective function with convex constraints on a new decision variable when * The network control structure constraint \(\mathbf{K}\in\mathcal{S}\) is removed, or * The relative measurement structure is removed, by modifying \(\mathbf{P}_{yu}\) and \(\mathbf{P}_{yw}\). When both relative measurement structure and the network control structure constraint are present, standard approaches can not be utilized to obtain such a representation of (7). This motivates the derivation of an alternate approach to do so. This is the premise of our main result - a novel equivalent formulation of the controller design problem (7) in the relative measurement setting. This formulation can be written as an affine objective function with convex constraints on a new decision variable, in certain problem settings. Constraints on this new variable ensure that the corresponding solution to the original problem will conform to the given relative measurement structure. ## III Motivating Example Consider a system composed of four subsystems with dynamics of the form \[\left[\begin{array}{c}\dot{x}_{1}\\ \dot{x}_{2}\\ \dot{x}_{3}\\ \dot{x}_{4}\end{array}\right]=\underbrace{\left[\begin{array}{cccc}a_{11}&a_{ 12}&a_{13}&a_{14}\\ 0&a_{22}&a_{23}&a_{24}\\ 0&0&a_{33}&a_{34}\\ 0&0&0&a_{44}\end{array}\right]}_{=:A}\left[\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{array}\right]+\left[\begin{array}{c}u_{1}\\ u_{2}\\ u_{3}\\ u_{4}\end{array}\right]+\left[\begin{array}{c}w_{1}\\ w_{2}\\ w_{3}\\ w_{4}\end{array}\right], \tag{8}\] where each \(a_{ii}<0\) so that \(A\) is Hurwitz, \(x_{i}\) is the internal state of subsystem \(i\), and \(u_{i}\) and \(w_{i}\) are the control and disturbance at subsystem \(i\), respectively. We consider the controller design problem for this system when both relative measurement architecture and network control structure are present. **Relative measurement structure:** Sensors throughout the system measure \((x_{i}-x_{j})\) for all \(j>i\). The vector containing measurements taken by all sensors throughout the system is then given by \[y=\left[\begin{array}{c}y_{1}\\ y_{2}\\ y_{3}\\ y_{4}\\ y_{5}\\ y_{6}\end{array}\right]=\left[\begin{array}{cccc}1&-1&0&0\\ 1&0&-1&0\\ 1&0&0&-1\\ 0&1&-1&0\\ 0&0&1&-1\end{array}\right]\left[\begin{array}{c}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{array}\right]=:C_{2}x. \tag{9}\] Note that \(\mathcal{A}_{\mathrm{meas.}}\), as defined in (5), corresponds to a fully connected graph for this example. **Network control structure:** The controller to be designed is spatially distributed with one subcontroller at each subsystem. 
We assume that the sensor that measures \(x_{i}-x_{j}\) for \(j\geq i\) is located at subsystem \(i\) and that only subcontroller \(i\) will have access to this measurement, i.e. the control signal \(u_{i}\) applied to subsystem \(i\) is allowed to depend only on \((x_{i}-x_{j})\) for \(j\geq i\). Thus, the control law \(\mathbf{u}=\mathbf{K}\mathbf{y}\) must satisfy the subspace constraint \(\mathbf{K}\in\mathcal{S}\), where \(\mathcal{S}\) encodes the sparsity structure: \[\left[\begin{array}{c}\mathbf{u}_{1}\\ \mathbf{u}_{2}\\ \mathbf{u}_{3}\\ \mathbf{u}_{4}\end{array}\right]=\underbrace{\left[\begin{array}{cccccc}\mathbf{K}_ {12}&\mathbf{K}_{13}&\mathbf{K}_{14}&0&0&0\\ 0&0&0&\mathbf{K}_{23}&\mathbf{K}_{24}&0\\ 0&0&0&0&0&\mathbf{K}_{34}\\ 0&0&0&0&0&0\end{array}\right]}_{=:\mathbf{K}}\left[\begin{array}{c}\mathbf{y}_{1} \\ \mathbf{y}_{2}\\ \mathbf{y}_{3}\\ \mathbf{y}_{4}\\ \mathbf{y}_{5}\\ \mathbf{y}_{6}\end{array}\right], \tag{10}\] and \(y\) is defined in (9). Standard methods convert the controller design problem (7) to an equivalent convex formulation when the subspace \(\mathcal{S}\) is quadratically invariant [1] with respect to \(\mathbf{P}_{yu}=C_{2}(sI-A)^{-1}B_{2}\). A straightforward computation shows that this is not the case for this example. When the network control structure constraint \(K\in\mathcal{S}\) is removed, the controller design problem (7) for this example becomes convex in \(\mathbf{Q}:=\mathbf{K}(I-\mathbf{P}_{yu}\mathbf{K}^{-1})\) through the classical Youla-Kucera parameterization [16]. Alternatively, if the relative measurement architecture requirement is removed by allowing subcontrollers to access states rather than differences of states, but the network control structure remains so that subcontroller \(i\) has access only to the state \(x_{j}\) for all \(j\geq i\), the corresponding controller takes the form of the sparse state feedback policy: \[\left[\begin{array}{c}\mathbf{u}_{1}\\ \mathbf{u}_{2}\\ \mathbf{u}_{3}\\ \mathbf{u}_{4}\end{array}\right]=\left[\begin{array}{cccc}\mathbf{R}_{11}&\mathbf{R}_{ 12}&\mathbf{R}_{13}&\mathbf{R}_{14}\\ 0&\mathbf{R}_{22}&\mathbf{R}_{23}&\mathbf{R}_{24}\\ 0&0&\mathbf{R}_{33}&\mathbf{R}_{34}\\ 0&0&0&\mathbf{R}_{44}\end{array}\right]\left[\begin{array}{c}\mathbf{x}_{1}\\ \mathbf{x}_{2}\\ \mathbf{x}_{3}\\ \mathbf{x}_{4}\end{array}\right]. \tag{11}\] This upper triangular structure is indeed quadratically invariant with respect to the mapping \(\mathbf{P}_{xu}=(sI-A)^{-1}\) from control to state. That imposing the relative measurement architecture alone or the network control structure alone each lead to clear convex formulations motivates us to search for a convex form of (7) when both properties are simultaneously present. The main result of this work addresses this question. We derive an equivalent parameterization of control policies with a relative measurement architecture that allows for a convex formulation of (7) for the example presented in this section as well as for a larger class of distributed systems. ## IV A Relative Feedback Parameterization We begin by introducing the notion of a relative mapping, which will be utilized in our new parameterization. **Definition 1**.: _Let \(f\) be a linear map on \(\mathbb{R}^{n}\), i.e., \(f(x)=Fx\) for some matrix \(F\in\mathbb{R}^{m\times n}\). 
\(f\) is relative if there exists a set of vectors \(\{v_{ij}\}\subset\mathbb{R}^{m}\) such that_ \[f(x)=Fx=\sum_{1\leq i<j\leq n}v_{ij}(x_{i}-x_{j}), \tag{12}\] _for all \(x=\left[\begin{array}{cc}x_{1}&\cdots&x_{n}\end{array}\right]^{\top}\in \mathbb{R}^{n}\)._ Clearly the choice of \(v_{ij}\) is non-unique. For example \(f(x)=3x_{1}-x_{2}-2x_{3}\) can be written as \(3(x_{1}-x_{2})+2(x_{2}-x_{3})\) and can also be written as \((x_{1}-x_{2})+2(x_{1}-x_{3})\). This notion can be extended to LTI systems as follows. **Definition 2**.: _Consider an LTI system, \(\mathbf{H}\), that maps an input signal \(\nu(\cdot)\) to an output signal \(\eta(\cdot)\) described by the dynamics_ \[\dot{\xi}(t) =A_{H}\xi(t)+B_{H}\nu(t) \tag{13}\] \[\eta(t) =C_{H}\xi(t)+D_{H}\nu(t)\] _This system is relative w.r.t. the input \(\nu\) if \(B_{H}\) and \(D_{H}\) are relative mappings. When the input is clear from context, we often simply refer to the system as relative._ Equipped with the terminology of Definitions 1 and 2, we next derive an equivalent parameterization of the set of LTI control policies \[\mathbf{u}=\mathbf{K}\mathbf{y} \tag{14}\] for system (1) with relative measurement structure described by \(\mathcal{A}_{\mathrm{meas}}\).. First note that (14) can be written equivalently as \[\mathbf{u}=\mathbf{K}C_{2}\mathbf{x}=:\mathbf{R}\mathbf{x}. \tag{15}\] \(\mathbf{R}\) is clearly a relative mapping since \(C_{2}\) satisfies Assumption 1. The following theorem describes the converse, characterizing conditions under which a relative system \(\mathbf{R}\) allows for a control policy (14) to be recovered from the relation \(\mathbf{K}C_{2}=\mathbf{R}\). **Theorem 1**.: _Let \(\mathbf{R}\) be a relative LTI system. If \(\mathcal{A}_{\mathrm{meas}}\), corresponds to a connected graph, then there exists a controller \(\mathbf{u}=\mathbf{K}\mathbf{y}\) for which \(\mathbf{K}C_{2}=\mathbf{R}\). More generally, if \(\mathcal{A}_{\mathrm{meas}}\) corresponds to a graph with \(N\geq 1\) disjoint connected components, then there exists a controller \(\mathbf{u}=\mathbf{K}\mathbf{y}\) for which \(\mathbf{K}C_{2}=\mathbf{R}\) if and only if the relative LTI system \(\mathbf{R}\) can be decomposed as_ \[\mathbf{R}\mathbf{x}=\sum_{i=1}^{N}\mathbf{R}^{(i)}\mathbf{x}^{(i)} \tag{16}\] _where the vector \(\mathbf{x}^{(i)}\) contains the subset of states contained in the \(i^{\mathrm{th}}\) connected component of the graph defined by \(\mathcal{A}_{\mathrm{meas}}\) and each \(\mathbf{R}^{(i)}\) is a relative system._ A proof of this result is provided in the Appendix. Theorem 1 allows us to reformulate the optimal controller design problem (3) as stated in the following Corollary. 
**Corollary 1**.: _The solution \(\mathbf{K}\) to the optimal controller design problem (3) can be recovered from the solution \(\mathbf{R}\) to the constrained optimization problem_ \[\inf_{\mathbf{R}}\ \|\mathbf{P}_{zw}+\mathbf{P}_{zu}\mathbf{R}(I-\mathbf{P}_{xu}\mathbf{R})^{-1}\mathbf{P}_{xw}\|\tag{17a}\] \[\mathrm{s.t.}\quad\mathbf{R}\ \mathrm{internally\ stabilizes}\ \tilde{\mathbf{P}}\tag{17b}\] \[\qquad\ \ \mathbf{R}\ \mathrm{is\ relative\ and\ satisfies}\ \eqref{eq:decomposition}\,(16),\tag{17c}\] _where \(\mathbf{P}_{xu}:=(sI-A)^{-1}B_{2}\) and \(\mathbf{P}_{xw}:=(sI-A)^{-1}B_{1}\) denote the maps from control and disturbance to the state, and \(\tilde{\mathbf{P}}\) is the plant (2) with the measured output \(y\) replaced by the full state \(x\). Given such an \(\mathbf{R}\), a controller \(\mathbf{u}=\mathbf{K}\mathbf{y}\) is obtained from any \(\mathbf{K}\) satisfying \(\mathbf{K}C_{2}=\mathbf{R}\), which exists by Theorem 1._ 
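The recovery of \(\mathbf{K}\) from a relative \(\mathbf{R}\) asserted in Theorem 1 is easy to check numerically in the static-gain case. The sketch below is an illustration only, using a chain measurement graph that satisfies Assumption 1 rather than the construction used in the proof; it builds \(\mathcal{A}_{\mathrm{meas.}}\) as in (5) and recovers a gain \(K\) with \(KC_{2}=R\) from a matrix \(R\) whose rows sum to zero, i.e., a relative mapping in the sense of Definition 1.

```python
import numpy as np

# Relative measurement structure on a chain of four states: sensors measure
# x1 - x2, x2 - x3, x3 - x4 (full row rank, one +1 and one -1 per row).
C2 = np.array([[1., -1., 0., 0.],
               [0., 1., -1., 0.],
               [0., 0., 1., -1.]])
n = C2.shape[1]

# Adjacency matrix of Eq. (5): an edge wherever a relative difference is measured.
A_meas = np.zeros((n, n))
for row in C2:
    i, j = np.flatnonzero(row)
    A_meas[i, j] = A_meas[j, i] = 1.0

# A static relative mapping R: rows summing to zero is an equivalent way of
# saying each row is a combination of differences x_i - x_j (Definition 1).
rng = np.random.default_rng(0)
R = rng.standard_normal((2, n))
R -= R.mean(axis=1, keepdims=True)

# Since the measurement graph is connected, a gain K with K C2 = R exists;
# the pseudo-inverse choice realizes it.
K = R @ np.linalg.pinv(C2)
print(np.allclose(K @ C2, R))  # True
```

For measurement graphs with several connected components, the same check applies block-by-block, mirroring the decomposition (16).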
We note that the subspace description (23), which corresponds to \(\mathcal{A}_{\mathrm{meas.}}\) defining a connected graph, has appeared elsewhere, e.g., [10, 13, 14]. However, the incorporation of a more general relative measurement architecture according to \(\mathcal{A}_{\mathrm{meas.}}\) has not been previously considered; it is this structure that allows the optimal controller \(\mathbf{K}\) solving (3) to be recovered from the new formulation (17). ## V A Convex Formulation of Controller Design for Relative Measurement Systems In this section, we illustrate that the formulation (17) can be transformed to an equivalent convex program. This follows from the parameterization of controllers with a relative feedback structure presented in Theorem 1, and a convex description of this set as described by Proposition 1. The incorporation of network structure constraints will be addressed in Section II-B. **Theorem 2**.: _The formulation (17) is equivalent to the optimization problem_ \[\begin{split}\inf_{\mathbf{Q}}&\|\mathbf{T}_{1}+\mathbf{T}_{2}\mathbf{Q}\mathbf{T}_{3}\|\\ \mathrm{s.t.}&\mathbf{Q}\ \mathrm{stable}\\ &\mathbf{Q}(s)\cdot E^{(i)}=0,\ i=1,...,N\end{split} \tag{24}\] _where_ \[\begin{split}\mathbf{T}_{1}&=\mathbf{P}_{zw}+\mathbf{P}_{zu}\mathbf{R}_{\mathrm{nom.}}(I-\mathbf{P}_{xu}\mathbf{R}_{\mathrm{nom.}})^{-1}\mathbf{P}_{xw},\\ \mathbf{T}_{2}&=-\mathbf{P}_{zu}(I-\mathbf{R}_{\mathrm{nom.}}\mathbf{P}_{xu})^{-1},\\ \mathbf{T}_{3}&=(I-\mathbf{P}_{xu}\mathbf{R}_{\mathrm{nom.}})^{-1}\mathbf{P}_{xw},\end{split} \tag{25}\] _the vectors \(E^{(i)}\) are formed from (22) according to the graph structure of \(\mathcal{A}_{\mathrm{meas.}}\) and the system \(\mathbf{R}_{\mathrm{nom.}}\) is chosen to be stable, satisfy \(\mathbf{R}_{\mathrm{nom.}}\cdot E^{(i)}=0\) for all \(i\) and internally stabilize \(\tilde{\mathbf{P}}=\left[\begin{array}{cc}\mathbf{P}_{zw}&\mathbf{P}_{zu}\\ \mathbf{P}_{xw}&\mathbf{P}_{xu}\end{array}\right]\). The solution \(\mathbf{R}\) to (17) can be recovered from the solution \(\mathbf{Q}\) of (24) as_ \[\mathbf{R}=\mathbf{R}_{\mathrm{nom.}}-\mathcal{F}\left(\mathcal{F}\left(\mathbf{R}_{\mathrm{nom.}},\mathbf{P}_{xu}\right),\mathbf{Q}\right).
\tag{26}\] _where \(\mathcal{F}(\mathbf{G},\mathbf{H}):=\mathbf{H}(I-\mathbf{G}\mathbf{H})^{-1}\) denotes a linear fractional transformation._ To prove this result, we utilize the following fact, which is presented in e.g., [1, Thm. 17]. For \(\mathbf{R}_{\mathrm{nom.}}\) a stable system that internally stabilizes \(\tilde{\mathbf{P}}=\left[\begin{array}{cc}\mathbf{P}_{zw}&\mathbf{P}_{zu}\\ \mathbf{P}_{xw}&\mathbf{P}_{xu}\end{array}\right]\), the set of all systems \(\mathbf{R}\) that internally stabilize \(\tilde{\mathbf{P}}\) is given by \[\Big{\{}\mathbf{R}=\mathbf{R}_{\mathrm{nom.}}-\mathcal{F}\left(\mathcal{F}\left(\mathbf{R }_{\mathrm{nom.}},\mathbf{P}_{xu}\right),\mathbf{Q}\right)\Big{\}}. \tag{27}\] **Lemma 1**.: _Let the systems \(\mathbf{R}\) and \(\mathbf{Q}\) satisfy \(\mathbf{R}=\mathbf{R}_{\mathrm{nom.}}-\mathcal{F}\left(\mathcal{F}\left(\mathbf{R}_{ \mathrm{nom.}},\mathbf{P}_{xu}\right),\mathbf{Q}\right),\) and assume that \(\mathbf{R}_{\mathrm{nom}}(s)E^{(i)}=0\). Then \(\mathbf{R}(s)E^{(i)}=0\) if and only if \(\mathbf{Q}(s)\cdot E^{(i)}=0\)._ Proof.: Rearranging (27), we see that \(\mathbf{Q}\) can be recovered from \(\mathbf{R}\) as \[\mathbf{Q}=(I-(\mathbf{R}_{\mathrm{nom.}}-\mathbf{R})\mathbf{N})^{-1}(\mathbf{R}_{\mathrm{nom.}}- \mathbf{R}), \tag{28}\] where \(\mathbf{N}=\mathbf{P}_{xu}(I-\mathbf{R}_{\mathrm{nom.}}\mathbf{P}_{xu})^{-1}\). Then, since \(\mathbf{R}_{\mathrm{nom.}}E^{(i)}=0\), if \(\mathbf{R}E^{(i)}=0\) then \(QE^{(i)}=0\). Conversely, if \(\mathbf{Q}E^{(i)}=0\), then \[\begin{split}\mathbf{R}E^{(i)}&=\left(\mathbf{R}_{\mathrm{ nom.}}-\mathcal{F}\left(\mathbf{M},\mathbf{Q}\right)\right)E^{(i)}\\ &=\mathcal{F}\left(\mathbf{M},\mathbf{Q}\right)E^{(i)}\\ &=-\mathbf{Q}(I-\mathbf{M}\mathbf{Q})^{-1}E^{(i)}\\ &=-(I-\mathbf{Q}\mathbf{M})^{-1}\mathbf{Q}E^{(i)}=0,\end{split} \tag{29}\] where \(\mathbf{M}=\mathcal{F}\left(\mathbf{R}_{\mathrm{nom.}},\mathbf{P}_{xu}\right).\) Theorem 2 then follows immediately from Lemma 1 and the set parameterization (27). Note that in the case that the system (1) is stable, (27) simplifies by taking \(\mathbf{R}_{\mathrm{nom.}}\) to be zero: \[\big{\{}\mathbf{R}=(I+\mathbf{Q}\mathbf{P}_{xu})^{-1};\ \mathbf{Q}(s)\mathbb{1}=0\big{\}}. \tag{30}\] Theorem 2 simplifies accordingly, as stated in the following corollary. **Corollary 2**.: _Assume that \(A\) is Hurwitz so that (1) is stable. Then (17) is equivalent to the optimization problem_ \[\begin{split}\inf_{\mathbf{Q}}&\|\mathbf{P}_{zw}+\mathbf{P}_{zu} \mathbf{Q}\mathbf{P}_{xw}\|\\ \mathrm{s.t.}&\mathbf{Q}\ \mathrm{stable}\\ &\mathbf{Q}(s)\cdot E^{(i)}=0,\ i=1,...,N\end{split} \tag{31}\] _where the vectors \(E^{(i)}\) are formed from (22) according to the graph structure of \(\mathcal{A}_{\mathrm{meas.}}\). The solution \(\mathbf{R}\) to (17) can be recovered from the solution \(\mathbf{Q}\) of (31) as_ \[\mathbf{R}=(I+\mathbf{Q}\mathbf{P}_{xu})^{-1}. \tag{32}\] ### _Incorporating Network Structure_ Throughout this section, we have considered the relative measurement architecture as described by \(\mathcal{A}_{\mathrm{meas}}\). In what follows, we provide an example to illustrate how to incorporate network control structure constraints that may also be present. Recall that these network control structure constraints may take the form of sparsity or delay requirements on the controller and are captured by a subspace constraint \(\mathbf{K}\in\mathcal{S}\). We return to the example presented in Section III. 
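Before turning to the example, a small numerical sketch may help make the relative measurement constraint concrete. It is an illustration only: the edge list and the static gain matrix below are hypothetical, and only constant (static) gains are checked. It shows that any gain of the form \(R=MC_{2}\), where each row of \(C_{2}\) has the form \(e_{i}-e_{j}\) for a measured pair, annihilates the all-ones vector, which is exactly the relative constraint \(\mathbf{R}\mathbb{1}=0\) used in the sequel.

```python
import numpy as np

# Hypothetical measurement graph (an assumption for illustration): a path on 4 nodes.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
C2 = np.zeros((len(edges), n))
for row, (i, j) in enumerate(edges):
    C2[row, i], C2[row, j] = 1.0, -1.0          # each row of C2 is e_i - e_j

ones = np.ones(n)
print(C2 @ ones)                                # every relative measurement of the all-ones vector is zero

# Any static gain of the form R = M C2 therefore satisfies R 1 = 0, ...
rng = np.random.default_rng(0)
M = rng.standard_normal((n, len(edges)))        # arbitrary free parameter (assumption)
R = M @ C2
print(np.allclose(R @ ones, 0.0))               # True

# ... equivalently, R cannot distinguish a state from its shift by a constant vector.
x = rng.standard_normal(n)
print(np.allclose(R @ x, R @ (x + 3.7 * ones))) # True
```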
An additional example that incorporates a delay requirement will be presented in Section VI. **Example 2**.: _Recall that system (8) is assumed to be stable, and note that the relative measurement structure of (9) corresponds to a connected graph. We demonstrate that the framework developed will allow us to obtain the optimal solution of_ \[\begin{split}\inf_{\mathbf{K}}&\|\mathbf{P}_{zw}+\mathbf{P}_{zu} \mathbf{K}(I-\mathbf{P}_{yu}\mathbf{K})^{-1}\mathbf{P}_{yw}\|\\ \mathrm{s.t.}&\mathbf{K}\ \mathrm{stabilizing}\\ &\mathbf{K}\in\mathcal{S},\end{split} \tag{33}\] through an equivalent convex program. By Corollary 1, the solution \(\mathbf{K}\) of (33) can be recovered from the solution \(\mathbf{R}\) of_ \[\begin{split}\inf_{\mathbf{R}}&\|\mathbf{P}_{zw}+\mathbf{P}_{zu} \mathbf{R}(I-\mathbf{P}_{xu}\mathbf{R})^{-1}\mathbf{P}_{zw}\|\\ {\rm s.t.}&\mathbf{R}\ {\rm stabilizing\ for\ }\tilde{\mathbf{P}} \\ &\mathbf{R}(s)\mathbb{1}=0\\ &\mathbf{R}\in\mathcal{S}_{R}:=\{\mathbf{MC}_{2};\mathbf{M}\in\mathcal{S}\}.\end{split} \tag{34}\] _A straightforward computation shows that the set \(\{\mathbf{MC}_{2};\mathbf{M}\in\mathcal{S}\}\) is equivalent to the set of \(4\times 4\) transfer matrices with an upper triangular sparsity structure, i.e., \(\mathbf{R}\in\mathcal{S}_{R}\) if and only if \(\mathbf{R}\) is of the form_ \[\mathbf{R}=\left[\begin{array}{cccc}\mathbf{R}_{11}&\mathbf{R}_{12}&\mathbf{R}_{13}&\mathbf{R}_ {14}\\ 0&\mathbf{R}_{22}&\mathbf{R}_{23}&\mathbf{R}_{24}\\ 0&0&\mathbf{R}_{33}&\mathbf{R}_{34}\\ 0&0&0&\mathbf{R}_{44}\end{array}\right]. \tag{35}\] _The set \(\mathcal{S}_{R}\) is quadratically invariant with respect to \(\mathbf{P}_{zu}\). This fact, together with Corollary 2, allows us to rewrite (34) as the convex program_ \[\begin{split}\inf_{\mathbf{Q}}&\|\mathbf{P}_{zw}+\mathbf{P}_{zu}\mathbf{QP}_{ zw}\|\\ {\rm s.t.}&\mathbf{Q}\ {\rm stable}\\ &\mathbf{Q}(s)\mathbb{1}=0\\ &\mathbf{Q}\ {\rm upper\ triangular}.\end{split} \tag{36}\] We can view this reformulation as in effect, transforming from an output feedback to a state feedback problem. This preserves structure in the open-loop state dynamics that is lost through multiplication by \(C_{2}\) to form the relative output. Network control constraints may be preserved under linear fractional transformations for this _structured_ representation of the plant (through quadratic invariance). It is reasonable to expect this to occur in other problems, especially when the open-loop state dynamics are decoupled or dependent only on relative states, both of which occur frequently in distributed settings. ## VI Example: Consensus of First-Order Subsystems In this section, we apply our results to the problem of consensus of \(n\) identical first-order subsystems on a ring with communication delays. This example provides a case study that shows the discrete time setting of our formulation and illustrates how communication delay constraints can be incorporated. Moreover, in this case, we see that our formulation can be further reduced to a standard unconstrained model-matching problem [17] with \((n-1)\) transfer function parameters, further highlighting the usefulness of our methodology. #### Vi-1 System Model We consider a distributed system composed of \(n\) subsystems on a ring, each with a single control input, \(u_{i}\), process noise \(w_{i}\), and state, \(x_{i}\), with dynamics governed by \(\alpha x_{i}[t+1]=x_{i}[t]+u_{i}[t]+w_{i}[t]\), for \(i=0,...,(n-1)\). 
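A brief simulation sketch of this subsystem model may be useful before passing to the vector form below. It is an illustration only: the feedback used is the nominal relative gain \(u=-\frac{1}{n}L_{n}x\) that appears later in (46) rather than the optimal controller, the process noise is switched off so that the decay of the consensus error is visible, and the horizon and initial condition are arbitrary assumptions.

```python
import numpy as np

# Ring graph Laplacian for n first-order subsystems.
n = 8
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0

rng = np.random.default_rng(1)
x = rng.standard_normal(n)                      # arbitrary initial states (assumption)
consensus_error = lambda v: np.linalg.norm(v - v.mean())

for t in range(60):
    u = -(1.0 / n) * (L @ x)                    # nominal relative feedback, cf. (46)
    x = x + u                                   # x[t+1] = x[t] + u[t], with w[t] = 0 here
    if t % 20 == 0:
        print(f"t={t:2d}  consensus error = {consensus_error(x):.3e}")
print(f"final consensus error = {consensus_error(x):.3e}")
```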
In vector form, the system dynamics are given by \[x[t+1]=I_{n}x[t]+I_{n}u[t]+I_{n}w[t], \tag{37}\] where \(I_{n}\) denotes the identity matrix of size \(n\times n\). For controller design, we consider a regulated output that penalizes deviation from consensus and control effort: \[\mathbf{z}=(1-\gamma)\left[\begin{array}{c}\check{C}_{n}\\ 0_{n}\end{array}\right]\mathbf{x}+\gamma\left[\begin{array}{c}0_{n}\\ I_{n}\end{array}\right]\mathbf{u}, \tag{38}\] where \(\check{C}_{n}:=I_{n}-\frac{1}{n}\mathbb{1}_{n}\mathbb{1}_{n}^{T}\) is a matrix that maps \(\mathbf{x}\) to a vector of deviations from the average of \(\mathbf{x}\), and \(\gamma\in[0,1]\) quantifies a tradeoff between the importance of control effort and consensus. #### Vi-2 Network Control Structure The controller unit for subsystem \(i\) has access at time \((t+m)\) to (relative) measurements from subsystem \(j\) at time \(t\) only if \(l(i,j)\leq m\), where \(l(i,j)\) is the length of the shortest path between \(i\) and \(j\) in the ring graph defined by the adjacency matrix \[\mathcal{A}_{ij}=\begin{cases}1,&|i-j|=1\\ 0,&\text{else},\end{cases} \tag{39}\] where \(|i-j|\) is computed modulo \(n\). To rigorously characterize this property, define the \(m\)-step adjacency matrix \(\mathcal{A}^{m}\) as \[\mathcal{A}^{m}_{ij}=\begin{cases}1,&l(i,j)\leq m\\ 0,&\text{else},\end{cases} \tag{40}\] and let \(\mathrm{Sp}(\mathcal{A}^{m})\) be the subset of matrix-valued functions with the same sparsity pattern as \(\mathcal{A}^{m}\), i.e. \(H\in\mathrm{Sp}(\mathcal{A}^{m})\) if \(H_{ij}\equiv 0\) whenever \(\mathcal{A}^{m}_{ij}=0\). Let \(RL_{\infty}\) denote the set of rational, proper, matrix-valued functions of dimension \(n\times n\) and define \[z^{-m}RL_{\infty}=\{\mathbf{H}(z)\in RL_{\infty};\ z^{m}\mathbf{H}(z)\in RL_{\infty}\}. \tag{41}\] Then define the subset \(\mathcal{S}_{R}\) of \(RL_{\infty}\) by \[\mathcal{S}_{R}:=\sum_{m=0}^{p}S_{m}, \tag{42}\] where \(S_{m}=\mathrm{Sp}(\mathcal{A}^{m})\cap z^{-m}RL_{\infty}\) and \(p\) is the smallest integer for which \(\mathcal{A}^{m}\) has no zero entries. For example, for a ring graph of size \(n=3\), elements of \(\mathcal{S}_{R}\) take the form \[\mathbf{R}(z)=\left[\begin{array}{ccc}\mathbf{r}_{11}^{0}&0&0\\ 0&\mathbf{r}_{22}^{0}&0\\ 0&0&\mathbf{r}_{33}^{0}\end{array}\right]+\frac{1}{z}\left[\begin{array}{ccc}\mathbf{r}_{11}^{1}&\mathbf{r}_{12}^{1}&\mathbf{r}_{13}^{1}\\ \mathbf{r}_{21}^{1}&\mathbf{r}_{22}^{1}&\mathbf{r}_{23}^{1}\\ \mathbf{r}_{31}^{1}&\mathbf{r}_{32}^{1}&\mathbf{r}_{33}^{1}\end{array}\right] \tag{43}\] where \(\mathbf{r}_{ij}^{m}\in RL_{\infty}\). #### Vi-3 Controller Design To solve the controller design problem (7) for this example, we utilize Corollary 1 and Proposition 1 to formulate this design problem as the equivalent constrained optimization problem \[\inf_{\mathbf{R}}\ \|\mathbf{P}_{zw}+\mathbf{P}_{zu}\mathbf{R}(I-\mathbf{P}_{xu}\mathbf{R})^{-1}\mathbf{P}_{xw}\| \tag{44a}\] \[{\rm s.t.}\ \ \mathbf{R}\ {\rm internally\ stabilizes\ }\tilde{\mathbf{P}} \tag{44b}\] \[\mathbf{R}\mathbb{1}=0,\ \ \mathbf{R}\in\mathcal{S}_{R}.
\tag{44c}\] It is straightforward to show that \(\mathcal{S}_{R}\) is quadratically invariant with respect to \(\mathbf{P}_{zu}\)[1, 2], so that using Theorem 2 we can rewrite (44) as \[\inf_{\mathbf{Q}}\ \|\mathbf{T}_{1}+\mathbf{T}_{2}\mathbf{Q}\mathbf{T}_{3}\| \tag{45a}\] \[\mathrm{s.t.}\ \ \mathbf{Q}\in\mathcal{S}_{R},\ \mathbf{Q}\ \mathrm{stable},\ \mathbf{Q}(s)\mathbb{1}=0, \tag{45b}\] where \(\mathbf{T}_{1},\mathbf{T}_{2},\mathbf{T}_{3}\) are constructed according to Equation (25) with \[\mathbf{R}_{\mathrm{nom}}(s)=-\frac{1}{n}L_{n}, \tag{46}\] where \(L_{n}\) is the graph Laplacian associated with (39). Clearly (45) is convex. In what follows, we will demonstrate that it can be further reduced to a standard unconstrained model-matching problem [17]. There is no loss [18] in restricting our search for a solution \(\mathbf{R}\) of (44) to spatially invariant systems, so that \(\mathbf{R}\), and therefore \(\mathbf{Q}(z)\), will have a circulant structure. E.g., for \(n=3\), \(\mathbf{Q}(z)\in\mathcal{S}_{R}\) will have the form \[\mathbf{Q}(z)=\mathbf{q}_{0}(z)I_{3}+\frac{1}{z}\left[\begin{array}{ccc}0&\mathbf{q}_{1}(z)&\mathbf{q}_{2}(z)\\ \mathbf{q}_{2}(z)&0&\mathbf{q}_{1}(z)\\ \mathbf{q}_{1}(z)&\mathbf{q}_{2}(z)&0\end{array}\right]. \tag{47}\] Since circulant matrices commute under multiplication and are uniquely determined by a single column, the objective in (45) can be rewritten as \[\|\mathbf{T}_{1}-\mathbf{T}_{2}\mathbf{Q}\mathbf{T}_{3}\|_{\mathcal{H}_{2}}^{2}=n\left\|\mathbf{T}_{1}e_{1}-(\mathbf{T}_{2}\mathbf{Q}\mathbf{T}_{3})e_{1}\right\|_{\mathcal{H}_{2}}^{2}=n\left\|\mathbf{T}_{1}e_{1}-\mathbf{T}_{2}\mathbf{T}_{3}(\mathbf{Q}e_{1})\right\|_{\mathcal{H}_{2}}^{2} \tag{48}\] In this form, the vector \(\mathbf{Q}e_{1}\) is the only term in the objective that depends on the \(n\) free scalar transfer function parameters, \(\{\mathbf{q}_{0},\ldots,\mathbf{q}_{n-1}\}\). We impose the relative constraint \(\mathbf{Q}\mathbb{1}=\mathbf{0}\), writing \(\mathbf{q}_{0}\) in terms of the other free parameters. For example, for \(n=3\), \[\mathbf{q}_{0}=-\frac{1}{z}\mathbf{q}_{1}-\frac{1}{z}\mathbf{q}_{2}. \tag{49}\] Define \(\mathbf{q}=\left[\begin{array}{ccc}\mathbf{q}_{1}&\ldots&\mathbf{q}_{n-1}\end{array}\right]^{T}\) and let \(M\) be the matrix that satisfies \(\mathbf{Q}e_{1}=M\mathbf{q}\). For example, for \(n=3\), \[\mathbf{Q}e_{1}=\left[\begin{array}{c}-\frac{\mathbf{q}_{1}}{z}-\frac{\mathbf{q}_{2}}{z}\\ \frac{\mathbf{q}_{2}}{z}\\ \frac{\mathbf{q}_{1}}{z}\end{array}\right]=\underbrace{\left[\begin{array}{cc}-\frac{1}{z}&-\frac{1}{z}\\ 0&\frac{1}{z}\\ \frac{1}{z}&0\end{array}\right]}_{M}\underbrace{\left[\begin{array}{c}\mathbf{q}_{1}\\ \mathbf{q}_{2}\end{array}\right]}_{\mathbf{q}}. \tag{50}\] Thus, (45) reduces to the unconstrained model-matching form \[J_{n}=\inf_{\mathbf{q}\ \mathrm{stable}}\ n\|\mathbf{T}_{1}e_{1}-\mathbf{T}_{2}\mathbf{T}_{3}\mathbf{M}\mathbf{q}\|_{\mathcal{H}_{2}}^{2}, \tag{51}\] where the transfer matrix \(\mathbf{q}\) is of dimension \((n-1)\times 1\). The optimal norm \(J_{n}\) is computed for various values of \(n\) and \(\gamma\); these values, normalized by \(n\), are plotted in Figure 1. Analysis of the scaling of the optimal norm \(J_{n}\) in the number of subsystems \(n\) is the subject of ongoing work. ## VII Conclusion Relative measurement structures are common in distributed systems, but previously this structural requirement has not been considered as a design constraint.
We provided a characterization of relative measurement structures as such a design constraint, and this allowed us to convert the optimal controller design problem to an equivalent convex program. This formulation may aid in the understanding of fundamental properties of relative measurement systems such as best achievable performance. The ability to convert our formulation to a model-matching problem for the consensus example further suggests this is the case. Indeed, this form may admit analytic solutions that could provide insight to system properties. Ongoing work is focused on extensions of these results to allow for more general relative measurements, e.g. differences of outputs rather than differences of states. Another interesting line of questioning is to characterize which network control structures will allow for a convex formulation with our methodology - specifically focusing on the cases of decoupled subsystem dynamics or system dynamics that depend only on relative states.
2301.06753
Testing topological conjugacy of time series
This paper considers a problem of testing, from a finite sample, a topological conjugacy of two dynamical systems $(X,f)$ and $(Y,g)$. More precisely, given $x_1,\ldots, x_n \subset X$ and $y_1,\ldots,y_n \subset Y$ such that $x_{i+1} = f(x_i)$ and $y_{i+1} = g(y_i)$ as well as $h: X \rightarrow Y$, we deliver a number of tests to check if $f$ and $g$ are topologically conjugated via $h$. The values of the tests are close to zero for conjugated systems and large for systems that are not conjugated. Convergence of the test values, in the case when the sample size goes to infinity, is established. A number of numerical examples indicating scalability and robustness of the methods are given. In addition, we show how the presented method specializes to a test of sufficient embedding dimension in Takens' embedding theorem. Our methods also apply to the situation when we are given two observables of deterministic processes, in the form of one- or higher-dimensional time series. In this case, their similarity can be assessed by comparing the dynamics of their Takens' reconstructions.
Paweł Dłotko, Michał Lipiński, Justyna Signerska-Rynkowska
2023-01-17T08:30:25Z
http://arxiv.org/abs/2301.06753v3
# Testing topological conjugacy of time series from finite sample ###### Abstract This paper considers a problem of testing, from a finite sample, a topological conjugacy of two dynamical systems \((X,f)\) and \((Y,g)\). More precisely, given \(x_{1},\ldots,x_{n}\subset X\) and \(y_{1},\ldots,y_{n}\subset Y\) such that \(x_{i+1}=f(x_{i})\) and \(y_{i+1}=g(y_{i})\) as well as \(h:X\to Y\), we deliver a number of tests to check if \(f\) and \(g\) are topologically conjugated via \(h\). The values of the tests are close to zero for conjugated systems and large for systems that are not conjugated. Convergence of the test values, in the case when the sample size goes to infinity, is established. A number of numerical examples indicating scalability and robustness of the methods are given. In addition, we show how the presented method specializes to a test of sufficient embedding dimension in Takens' embedding theorem. Our methods also apply to the situation when we are given two observables of deterministic processes, in the form of one- or higher-dimensional time series. In this case, their similarity can be assessed by comparing the dynamics of their Takens' reconstructions. ## 1 Introduction Understanding sampled dynamics is of primal importance in multiple branches of science where there is a lack of solid theoretical models of the underlying phenomena [7, 8, 18, 24, 16]. It delivers a foundation for various equation-free models of observed dynamics and allows one to draw conclusions about the unknown observed processes. In the considered case we start with two, potentially different, phase spaces \(X\) and \(Y\) and a map \(h:X\to Y\). Given two sampled trajectories, referred to in this paper as _time series_, \(x_{1},\ldots,x_{n}\subset X\) and \(y_{1},\ldots,y_{n}\subset Y\), we assume that they are both generated by continuous maps \(f:X\to X\) and \(g:Y\to Y\)1 in a way that \(x_{i+1}=f(x_{i})\) and \(y_{i+1}=g(y_{i})\). In what follows, we build a number of tests that allow us to distinguish trajectories that are conjugated by the given map \(h\) from those that are not. The presented problem is practically important for the following reasons: Firstly, the proposed machinery allows testing for conjugacy in the case when the formulas that generate the underlying dynamics, such as \(f\) and \(g\) above, are not known explicitly, and the input data are based on observations of the considered system. Secondly, some of the presented methods apply in the case when the dynamics \(f\) and \(g\) on \(X\) and \(Y\) are explicitly known, but we want to test if a given map \(h:X\to Y\) between the phase spaces has a potential to be a topological conjugacy. It is important as the theoretical results on conjugacy are given only for a handful of systems and our methods give a tool for numerical hypothesis testing. Thirdly, those methods can be used to estimate the optimal parameters of the dynamics reconstruction. A basic way to achieve such a reconstruction is via time delay embedding, a technique that depends on parameters including the _embedding dimension_ and the _time lag_ (or _delay_). When the parameters of the method are appropriately set up and the assumptions of Takens' Embedding Theorem hold, a reconstruction is obtained, meaning that the reconstructed dynamics is _conjugate_ (dynamically equivalent) to the original (unknown) dynamics. However, without prior knowledge of the underlying dynamics, the values of those parameters have to be determined experimentally from the data.
It is typically achieved by implicitly testing for a conjugacy of the time delay embeddings to spaces of constitutive dimensions. Specifically, it is assumed that the optimal dimension of reconstruction \(d\) is achieved when there is no conjugacy of the reconstruction in \(d\) to the reconstruction in the dimension \(d^{\prime}<d\) while there is a conjugacy between reconstruction in dimension \(d\) and dimension \(d^{\prime\prime}>d\). Those conditions can be tested with methods presented in this paper. The main contributions of this paper includes: * We propose a generalization of the FNN (_False Nearest Neighbor_) method [11] so that it can be applied to test for topological conjugacy of time series2. Moreover, we present its further modification called KNN method. Footnote 2: Classical FNN method was used only to estimate the embedding dimension in a dynamics reconstruction using time delay embedding. * We propose two entirely new methods: ConjTest and ConjTest\({}^{+}\). Instead of providing an almost binary answer to a question if two sampled dynamical systems are conjugate (which happens for the generalized FNN and the KNN method), their result is a continuous variable that can serve as a scale of similarity of two dynamics. This property makes the two new methods appropriate for noisy data. * We present a number of benchmark experiments to test the presented methods. In particular we analyze how different methods are robust for the type of testing (e.g. noise, determinism, alignment of a time series). To the best of our knowledge there are no methods available to test conjugacy of dynamical systems given by their finite sample in a form of time series as proposed in this paper. A number of methods exist to estimate the parameters of a time delay embedding. They include, among others, mutual information ([10]), autocorrelation and higher order correlations ([2]), a curvature-based approach ([9]) or wavering product ([6]) for selecting the time-lag, selecting of embedding dimension based on GP algorithm ([1]) or the above mentioned FNN algorithm, as well as some methods allowing to choose the embedding dimension and the time lag simultaneously as, for example, C-C method based on correlation integral ([15]), methods based on symbolic analysis and entropy ([17]) or some rigorous statistical tests ([19]). Numerous methods providing some similarity measures between time series exist (see reviews [14]). However, we claim that those classical methods are not suitable for the problem we tackle in this paper. While those methods often look for an actual similarity of signals or correlation, we are more interested in the dynamical generators hiding behind the data. For instance, two time series sampled from the same chaotic system can be highly uncorrelated, yet we would like to recognize them as similar, because the dynamical system constituting them is the same. Moreover, methods introduced in this work are applicable for time series embedded in any metric space, while most of the methods are restricted to \(\mathbb{R}\), some of them are still useful in \(\mathbb{R}^{d}\). The paper consists of four parts: Section 2 introduces the basic concepts behind the proposed methods. Section 3 presents four methods designed for data-driven evaluation of conjugacy of two dynamical systems. Section 4 explores the features of the proposed methods using a number of numerical experiments. 
Lastly, in Section 5 we summarize most important observations and discuss their possible significance in real-world time series analysis. ## 2 Preliminaries ### Topological conjugacy We start with a pair of metric spaces \(X\) and \(Y\) and a pair of dynamical systems: \(\varphi:X\times\mathbb{T}\to X\) and \(\psi:Y\times\mathbb{T}\to Y\), where \(\mathbb{T}\in\{\mathbb{Z},\mathbb{R}\}\). Fixing \(t_{X},t_{Y}\in\mathbb{T}\) define \(f:X\ni x\to\varphi(x,t_{X})\) and \(g:Y\ni y\to\psi(y,t_{Y})\). We say that \(f\) and \(g\) are _topologically conjugate_ if there exists a homeomorphism \(h:X\to Y\) such that the diagram (1) commutes, i.e., \(h\circ f=g\circ h\). If the map \(h:X\to Y\) is not a homeomorphism but a continuous surjection then we say that \(g\) is _topologically semi-conjugate_ to \(f\). Let us consider as an example \(X\) being a unit circle, and \(f_{\alpha}\) a rotation of \(X\) by an angle \(\alpha\). In this case, two maps, \(f_{\alpha},f_{\beta}:X\to X\) are conjugate if and only if \(\alpha=\beta\) or \(\alpha=-\beta\). This known fact is verified in the benchmark test in Section 4.1. In our work we will consider finite time series \(\mathcal{A}=\{x_{i}\}_{i=1}^{n}\) and \(\mathcal{B}=\{y_{i}\}_{i=1}^{n}\) so that \(x_{i+1}=f^{i}(x_{1})\) and \(y_{i+1}=g^{i}(y_{1})\) for \(i\in\{1,2,\ldots,n-1\}\), \(x_{1}\in X\) and \(y_{1}\in Y\) and derived a criteria to test topological conjugacy of \(f\) and \(g\) based on those samples. In what follows, a Hausdorff distance between \(A,B\subset X\) will be used. It is defined as \[\mathrm{d}_{\mathrm{H}}(A,B)=\max\{\sup_{a\in A}d(a,B),\sup_{b\in B}d(b,A)\}\] where \(d\) is metric in \(X\) and \(d(x,A):=\inf_{a\in A}d(x,a)\). ### Takens' Embedding Theorem Our work is related to the problem of reconstruction of dynamics from one dimensional time series. For a fixed a map \(f:X\to X\) and \(x_{1}\in X\) take a time series \(\mathcal{A}=\{x_{i}=f^{i-1}(x_{1})\}_{i\geq 1}\) being a subset of an attractor \(\Omega\subset X\) of the (box-counting) dimension \(m\). Take \(s:X\to\mathbb{R}\), a generic measurement function of observable states of the system and the associated to \(\mathcal{A}\) one dimensional time series \(\mathcal{S}=\{s(x_{i})\}_{x_{i}\in\mathcal{A}}\). The celebrated Takens' Embedding Theorem [22] states that given \(\mathcal{S}\) it is possible to reconstruct the original system with delay vectors, for instance \((s(x_{i}),s(x_{i+1}),\ldots,s(x_{i+d-1}))\), for sufficiently large _embedding dimension_\(d\geq 2m+1\) (the bound is often not optimal). The Takens' theorem implies that, under certain generic assumptions, an embedding of the attractor \(\Omega\) into \(\mathbb{R}^{d}\) given by \[F_{s,f}:\Omega\ni x\mapsto\left(s(x),s(f(x)),\ldots,s(f^{d-1}(x))\right)\in \mathbb{R}^{d} \tag{2}\] establishes a _topological conjugacy_ between the original system \((\Omega,f)\) and \((F_{s,f}(\Omega),\sigma)\) with the dynamics on \(F_{s,f}(\Omega)\subset\mathbb{R}^{d}\) given by the shift \(\sigma\) on the sequence space. Hence, Takens' Embedding Theorem allows to reconstruct both the topology of the original attractor and the dynamics. The formula presented above is a special case of a reconstruction with a _lag_\(l\) given by \[\Pi(\mathcal{A},d,l):=\left\{(s(x_{i}),s(x_{i+l}),\ldots,s(x_{i+(d-1)l}))\mid i \in\{1,2,\ldots,n-d\,l\}\right\}.\] From the theoretical point of view, the Takens' theorem holds for an arbitrary lag. 
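A minimal numerical sketch of the reconstruction \(\Pi(\mathcal{A},d,l)\) is given below. It is an illustration only: the generating rotation, the observable \(s(x)=\cos(2\pi x)\) and all parameter values are assumptions made for the example, not choices prescribed by the theory above.

```python
import numpy as np

def delay_embedding(s_values, d, l):
    """Delay vectors (s_i, s_{i+l}, ..., s_{i+(d-1)l}) built from a scalar series."""
    n = len(s_values)
    m = n - (d - 1) * l                          # number of complete delay vectors
    return np.array([s_values[i:i + (d - 1) * l + 1:l] for i in range(m)])

# Illustrative data (an assumption): an irrational rotation observed through s(x) = cos(2*pi*x).
alpha = np.sqrt(2) - 1
x = np.mod(np.arange(5000) * alpha, 1.0)         # orbit x_{i+1} = x_i + alpha (mod 1)
s = np.cos(2 * np.pi * x)                        # scalar measurement of the orbit

emb = delay_embedding(s, d=2, l=1)
print(emb.shape)                                 # (4999, 2): points filling a closed curve in R^2
```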
However in practice a proper choice of \(l\) may strongly affect numerical reconstructions (see [12, Chapter 3]). The precise statements, interpretations and conclusions of the mentioned theorems can be found in [22], [20], [5] and references therein. ### Search for an optimal dimension for reconstruction In practice, the bound in Takens' theorem is often not sharp and an embedding dimension less than \(2m+1\) is already sufficient to reconstruct the original dynamics (see [3, 4]). Moreover, for time series encountered in practice, the attractor's dimension \(m\) is almost always unknown. To discover the sufficient dimension of reconstruction, the False Nearest Neighbor (FNN) method [11, 13], a heuristic technique for estimating the optimal dimension using a finite time series, is typically used. It is based on an idea to compare the embeddings of a time series into a couple of consecutive dimensions and to check if the introduction of an additional \(d+1\) dimension separates some points that were close in \(d\)-dimensional embedding. Hence, it tests whether \(d\)-dimensional neighbors are (false) neighbors just because of the tightness of the \(d\)-dimensional space. The dimension where the value of the test stabilizes and no more false neighbors can be detected is proclaimed to be the optimal embedding dimension. ### False Nearest Neighbor and beyond The False Nearest Neighbor method implicitly tests semi-conjugacy of \(d\) and \(d+1\) dimensional Takens' embedding by checking if the neighborhood of \(d\)-embedded points are preserved in \(d+1\) dimension. This technique was an inspiration for stating a more general question: given two time series, can we test if they were generated from conjugate dynamical systems? The positive answer could suggest that the two observed signals were actually generated by the same dynamics, but obtained by a different measurement function. In what follows, a number of tests inspired by these observations concerning False Nearest Neighbor method and Takens' Embedding Theorem, are presented. ## 3 Conjugacy testing methods In this section we introduce a number of new methods for quantifying the dynamical similarity of two time series. Before digging into them let us introduce some basic pieces of notation used throughout the section. From now we assume that \(X\) is a metric space. Let \(\mathcal{A}=\{x_{i}\}_{i=1}^{n}\) be a finite time series in space \(X\). By \(\kappa(x,k,\mathcal{A})\) we denote the set of \(k\)_-nearest neighbors_ of a point \(x\in X\) among points in \(\mathcal{A}\). Thus, the nearest neighbor of point can be denoted by \(\kappa(x,\mathcal{A}):=\kappa(x,1,\mathcal{A})\). If \(x\in\mathcal{A}\) then clearly \(\kappa(x,\mathcal{A})=x\). Hence, it will be handful to consider also \(\overline{\kappa}(x,k,\mathcal{A}):=\kappa(x,k,\mathcal{A}\setminus\{x\})\) and \(\overline{\kappa}(x,\mathcal{A}):=\kappa(x,1,\mathcal{A}\setminus\{x\})\). ### False Nearest Neighbor method The first proposed method is an extension of the already mentioned FNN technique for estimating the optimal embedding dimension of time series. The idea of the classical FNN method relies on counting of the number of so-called false nearest neighbors depending on the threshold parameter \(r\). 
This is based on the observation that if the two reconstructed points \[\mathbf{s}^{1}_{d}:=(s(x_{k_{1}}),s(x_{k_{1}+l}),\ldots,s(x_{k_{1}+(d-1)l}))\] and \[\mathbf{s}^{2}_{d}:=(s(x_{k_{2}}),s(x_{k_{2}+l}),\ldots,s(x_{k_{2}+(d-1)l}))\] are nearest neighbors in the \(d\)-dimensional embedding but the distance between their (d+1)-dimensional counterparts \[\mathbf{s}^{1}_{d+1}:=(s(x_{k_{1}}),\ldots,s(x_{k_{1}+(d-1)l}),s(x_{k_{1}+dl}))\] and \[\mathbf{s}^{2}_{d+1}:=(s(x_{k_{2}}),\ldots,s(x_{k_{2}+(d-1)l}),s(x_{k_{2}+dl}))\] in \((d+1)\)-dimensional embedding differs too much, then \(\mathbf{s}^{1}_{d}\) and \(\mathbf{s}^{2}_{d}\) were \(d\)-dimensional neighbors only due to folding of the space. In this case, we will refer to them as "false nearest neighbors". Precisely, the ordered pair \((\mathbf{s}^{1}_{d},\mathbf{s}^{2}_{d})\) of \(d\)-dimensional points is counted as false nearest neighbor, if the following conditions are satisfied: (I.) the point \(\mathbf{s}^{2}_{d}\) is the closest point to \(\mathbf{s}^{1}_{d}\) among all points in the \(d\)-dimensional embedding, (II.) the distance \(|\mathbf{s}^{1}_{d}-\mathbf{s}^{2}_{d}|\) between the points \(\mathbf{s}^{1}_{d}\) and \(\mathbf{s}^{2}_{d}\) is less than \(\sigma/r\), where \(\sigma\) is the standard deviation of \(d\)-dimensional points formed from delay-embedding of the time series and (III.) the ratio between the distance \(|\mathbf{s}^{1}_{d+1}-\mathbf{s}^{2}_{d+1}|\) of \(d+1\)-dimensional counterparts of these points, \(\mathbf{s}^{1}_{d+1}\) and \(\mathbf{s}^{2}_{d+1}\), and the distance \(|\mathbf{s}^{1}_{d}-\mathbf{s}^{2}_{d}|\) is greater than the threshold \(r\). The condition (III.) is motivated by the fact that under continuous evolution, even if the original dynamics is chaotic, the position of two close points should not deviate too much in the nearest future (we assume that the system is deterministic, even if subjected to some noise, which is the main assumption of all the nonlinear analysis time series methods). On the other hand, (II.) means that we consider only pairs of points which are originally not too far away since applying (III.) to points which are already outliers in \(d\) dimensions does not make sense. Next, the statistic \(\mathrm{FNN}(r)\) counts the relative number of such false nearest neighbors i.e. after normalizing with respect to the number of all the ordered pairs of points which satisfy (I.) and (II.). For discussion and some examples see e.g. [12]. We generalize the FNN method to operate in the case of two time series (not necessary created in a time-delay reconstruction) as follows. Let \(\mathcal{A}=\{a_{i}\}_{i=1}^{n}\subset X\) and \(\mathcal{B}=\{b_{i}\}_{i=1}^{n}\subset Y\) be two time series of the same length. Let \(\xi:\mathcal{A}\rightarrow\mathcal{B}\) be a bijection relating points with the same index, i.e., \(\xi(a_{i}):=b_{i}\). 
Then we define the directed FNN ratio between \(\mathcal{A}\) and \(\mathcal{B}\) as \[\mathrm{FNN}(\mathcal{A},\mathcal{B};r):=\frac{\sum_{i=1}^{n}\Theta\left(\frac{\mathbf{d}_{Y}(b_{i},\xi(\overline{\kappa}(a_{i},\mathcal{A})))}{\mathbf{d}_{X}(a_{i},\overline{\kappa}(a_{i},\mathcal{A}))}-r\right)\Theta\left(\frac{\sigma}{r}-\mathbf{d}_{X}(a_{i},\overline{\kappa}(a_{i},\mathcal{A}))\right)}{\sum_{i=1}^{n}\Theta\left(\frac{\sigma}{r}-\mathbf{d}_{X}(a_{i},\overline{\kappa}(a_{i},\mathcal{A}))\right)} \tag{3}\] where \(\mathbf{d}_{X}\) and \(\mathbf{d}_{Y}\) denote the distance functions in \(X\) and \(Y\), respectively, \(\sigma\) is the standard deviation of the data (i.e. the standard deviation of the elements of the sequence \(\mathcal{A}\)), \(r\) is the parameter of the method and \(\Theta\) is the usual Heaviside step function, i.e. \(\Theta(x)=1\) if \(x>0\) and \(0\) otherwise. Note that the distance \(\mathbf{d}\) (i.e. \(\mathbf{d}_{X}\) or \(\mathbf{d}_{Y}\)) might be defined in various ways; however, as elements of time series are usually elements of \(\mathbb{R}^{k}\) (for some \(k\)), \(\mathbf{d}(x,y)\) is often simply the Euclidean norm \(\|x-y\|\). In the original FNN procedure we compare embeddings of a 1-dimensional time series \(\mathcal{A}\) into \(d\)- versus \((d+1)\)-dimensional space for a sequence of values of \(d\) and \(r\). In particular, the following application of (3): \[\mathrm{FNN}(\mathcal{A};r,d):=\mathrm{FNN}(\Pi_{d}(\mathcal{A}),\Pi_{d+1}(\mathcal{A});r), \tag{4}\] coincides with the formula used in the standard form of the FNN technique (compare with [12]). For a fixed value of \(d\), if the values of FNN decline rapidly with the increase of \(r\), then we interpret that dimension \(d\) is large enough not to introduce any artificial neighbors. The heuristic says that the lowest \(d\) with that property is the optimal embedding dimension for time series \(\mathcal{A}\). ### K-Nearest Neighbors The key to the method presented in this section is an attempt to weaken and simplify the condition posed by FNN by considering a larger neighborhood of a point. As in the previous case, let \(\mathcal{A}=\{a_{i}\}_{i=1}^{n}\) and \(\mathcal{B}=\{b_{i}\}_{i=1}^{n}\) be two time series of the same length. Let \(\xi:\mathcal{A}\rightarrow\mathcal{B}\) be a bijection defined by \(\xi(a_{i}):=b_{i}\). The proposed statistic, taking into account the \(k\) nearest neighbors of each point, is given by the following formula: \[\text{KNN}(\mathcal{A},\mathcal{B};k):=\frac{\sum_{i=1}^{n}\min\left\{e\in\mathbb{N}\ \mid\ \xi\left(\overline{\kappa}(a_{i},k,\mathcal{A})\right)\subseteq\overline{\kappa}(\xi(a_{i}),e+k,\mathcal{B})\right\}}{n^{2}}, \tag{5}\] where \(n\) is the length of the time series \(\mathcal{A}\) and \(\mathcal{B}\). We refer to the above method as the KNN distance. The idea of the KNN method is illustrated in Figure 1. Figure 1: The top (continuous) black line represents the trajectory from which \(\mathcal{A}\) is sampled (black dots). The bottom (continuous) trajectory is sampled to obtain \(\mathcal{B}\) (burgundy dots). The set \(U:=\overline{\kappa}(a_{6},4,\mathcal{A})\), highlighted with orange color, represents the 4 nearest neighbors of \(a_{6}\in\mathcal{A}\). The smallest \(k\)-neighborhood of \(b_{6}\) that contains \(\xi(U)\) is the one with \(k=6\). The corresponding \(\overline{\kappa}(b_{6},4,\mathcal{B})\) is highlighted with green color. Hence, the contribution of point \(a_{6}\) to the numerator of \(\text{KNN}(\mathcal{A},\mathcal{B};4)\) is \(6-4=2\).
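For concreteness, a brute-force sketch of the two statistics (3) and (5) is given below. It is an illustration only: the Euclidean metric, the naive neighbor searches and the two sample trajectories (the same irrational rotation observed in two isometric ways) are assumptions made for the example.

```python
import numpy as np

def nn_index(i, P):
    """Index of the nearest neighbor of P[i] among the rows of P, excluding i."""
    d = np.linalg.norm(P - P[i], axis=1)
    d[i] = np.inf
    return int(np.argmin(d))

def knn_index_set(i, k, P):
    """Indices of the k nearest neighbors of P[i] among the rows of P, excluding i."""
    d = np.linalg.norm(P - P[i], axis=1)
    d[i] = np.inf
    return set(np.argsort(d)[:k])

def fnn(A, B, r):
    """Directed FNN ratio (3) between two equal-length trajectories A and B."""
    sigma = np.std(A)
    false_count, valid_count = 0, 0
    for i in range(len(A)):
        j = nn_index(i, A)                           # nearest neighbor of a_i in A
        dist_a = np.linalg.norm(A[i] - A[j])
        if dist_a < sigma / r:                       # keep only non-outlier pairs
            valid_count += 1
            if np.linalg.norm(B[i] - B[j]) / dist_a > r:
                false_count += 1                     # (a_i, a_j) is a false neighbor
    return false_count / valid_count if valid_count else 0.0

def knn(A, B, k):
    """KNN distance (5): averaged relative enlargement of k-neighborhoods under xi."""
    n = len(A)
    total = 0
    for i in range(n):
        U = knn_index_set(i, k, A)
        e = 0
        while not U <= knn_index_set(i, k + e, B):   # smallest e with xi(U) contained
            e += 1
        total += e
    return total / n**2

# Illustrative data (an assumption): one irrational rotation observed in two isometric ways.
alpha = np.sqrt(2) - 1
x = np.mod(np.arange(1000) * alpha, 1.0)
A = np.column_stack([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)])
B = np.column_stack([np.cos(2 * np.pi * x + 1.0), np.sin(2 * np.pi * x + 1.0)])
print(fnn(A, B, r=5.0), knn(A, B, k=5))              # both close to zero: related dynamics
```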
**Remark 1**.: _In the above formula (5), for simplicity there is no counterpart of the parameter \(r\) that was present in FNN, which controlled the dispersion of data and outliers. This means that one should assume that the data (perhaps after some pre-processing) does not contain unexpected outliers and is reasonably dense. Alternatively, the formula might be easily modified to include such a parameter._ The set \(\overline{\kappa}(a_{i},k,\mathcal{A})\) can be interpreted as a discrete approximation of a neighborhood of \(a_{i}\). Thus, for a point \(a_{i}\) the formula measures how much larger a neighborhood of the corresponding point \(b_{i}=\xi(a_{i})\) we need to take to contain the image of the chosen neighborhood of \(a_{i}\). This discrepancy is expressed relative to the size of the point cloud. Next we compute the average of this relative discrepancy among all points. Hence, we divide by \(n^{2}\) in the denominator of (5). Note that neither \(f\) nor \(g\) appear in the definitions of FNN and KNN. Nevertheless, the dynamics is hidden in the indices. That is, \(a_{j}\in\overline{\kappa}(a_{i},k,\mathcal{A})\) means that \(a_{i}\) returns to its own vicinity in \(|j-i|\) time steps. ### Conjugacy test The third method tests the conjugacy of two time series by checking the commutativity of the diagram (1) directly, in contrast to the more indirect approach of the methods presented so far. We no longer assume that both time series are of the same length; however, the method requires a _connecting map_ \(h:X\to Y\), a candidate for a (semi-)conjugating map. Unlike the map \(\xi\) in the FNN and KNN methods, the map \(h\) may transform a point \(a_{i}\in\mathcal{A}\) into a point in \(Y\) that does not belong to \(\mathcal{B}\). Nevertheless, the points in \(\mathcal{B}\) are crucial because they carry the information about the dynamics \(g:Y\to Y\). Thus, in order to follow trajectories of points in \(Y\) we introduce \(\tilde{h}:\mathcal{A}\to\mathcal{B}\), a discrete approximation of \(h\): \[\tilde{h}(a_{i}):=\kappa\left(h(a_{i}),\mathcal{B}\right). \tag{6}\] The map \(\tilde{h}\) simply assigns to \(a_{i}\) the element(s) of the time series \(\mathcal{B}\) closest to \(h(a_{i})\). For a set \(A\subset\mathcal{A}\) we compute the value pointwise, i.e. \(\tilde{h}(A)=\{\tilde{h}(a)\mid a\in A\}\) (see Figure 2). Note that it may happen that \(\tilde{h}(A)\) has fewer elements than \(A\).
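A short sketch of the discrete approximation (6) follows. It is an illustration only: the connecting map \(h\) and both sampled trajectories are assumptions made for the example; the final line merely shows that \(\tilde{h}\) need not be injective, so \(\tilde{h}(A)\) can indeed have fewer elements than \(A\).

```python
import numpy as np

def h_tilde(A, B, h):
    """For every a in A, the index of the point of B nearest to h(a), as in (6)."""
    indices = []
    for a in A:
        d = np.linalg.norm(B - h(a), axis=1)
        indices.append(int(np.argmin(d)))
    return np.array(indices)

# Illustrative data (an assumption): a rotation orbit in X and a coarser sample of Y = 2X.
alpha = np.sqrt(2) - 1
x = np.mod(np.arange(800) * alpha, 1.0)
A = np.column_stack([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)])   # time series in X
B = 2.0 * A[::2]                                                      # time series in Y
h = lambda p: 2.0 * p                                                 # candidate conjugacy X -> Y

idx = h_tilde(A, B, h)
print(len(A), len(np.unique(idx)))   # 800 vs. at most 400: h~ is generally not injective
```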
The extended version of the test presented above considers a larger approximation of \(\tilde{h}(U_{i}^{k})\). To this end, find the smallest \(k_{i}\) such that \(\tilde{h}(U_{i}^{k})\subset\kappa(h(a_{i}),k_{i},\mathcal{B})\). The corresponding superset defines the enriched approximation (see Figure 2): \[\tilde{h}^{+}(U_{i}^{k}):=\kappa(h(a_{i}),k_{i},\mathcal{B}). \tag{8}\] We use it to define a modified version of (7). \[\mathrm{ConjTest}^{+}(\mathcal{A},\mathcal{B};k,t,h):=\frac{\sum_{i=1}^{n} \mathrm{d}_{\mathrm{H}}\left((h\circ f^{t})(U_{i}^{k}),\ g^{t}\left(\tilde{h} ^{+}(U_{i}^{k})\right)\right)}{n\,\mathrm{diam}(\mathcal{B})}. \tag{9}\] Figure 2: A pictorial visualization of a difference between \(h\), \(\tilde{h}\) and \(\tilde{h}^{+}\). Map \(h\) transforms a point \(a\in\mathcal{A}\) into a point \(h(a)\in Y\). Map \(\tilde{h}\) approximates the value of the map \(h\) by finding the closest point in \(\mathcal{B}\) for \(h(a)\). The discrete neighborhood \(U_{i}^{2}\subset\mathcal{A}\) of \(a_{i}\) consists of three points and its image under \(\tilde{h}\) has three points as well. However, \(\tilde{h}^{+}(U_{i}^{2})\) counts five elements, as there are points in \(\mathcal{B}\) closer to \(\tilde{h}(a_{i})\) then points in \(\tilde{h}(U_{i}^{2})\). The extension of ConjTest to \(\mathrm{ConjTest}^{+}\) was motivated by results of Experiment 4A described in Subsection 4.4. The experiment should clarify the purpose of making the method more complex. We refer collectively to ConjTest and \(\mathrm{ConjTest}^{+}\) as \(\mathrm{ConjTest}\) methods. The forthcoming results provides mathematical justification of our method, i.e. "large" and non-decreasing values of the above tests suggest that there is no conjugacy between two time-series. **Theorem 2**.: _Let \(f:X\to X\) and \(g:Y\to Y\), where \(X\subset\mathbb{R}^{d_{X}}\) and \(Y\subset\mathbb{R}^{d_{Y}}\), be continuous maps (\(d_{X}\) and \(d_{Y}\) denote dimensions of the spaces). For \(y_{1}\in Y\) define \(\mathcal{B}_{m}:=\{b_{i}:=g^{i-1}(y_{1})\mid i\in\{1,\ldots,m\}\}\)._ _Suppose that \(Y\) is compact and that the trajectory of \(y_{1}\) is dense in \(Y\), i.e. the set \(\mathcal{B}_{m}\) becomes dense in \(Y\) as \(m\to\infty\). If \(g\) is semi-conjugate to \(f\) with \(h\) as a semi-conjugacy map, then for every fixed \(n\), \(t\) and \(k\)_ \[\lim_{m\to\infty}\mathrm{ConjTest}(\mathcal{A}_{n},\mathcal{B}_{m};k,t,h)=0, \tag{10}\] _where \(\mathcal{A}_{n}:=\{a_{i}:=f^{i-1}(x_{1})\mid i\in\{1,\ldots,n\}\}\), \(x_{1}\in X\), is any time-series in \(X\) of a length \(n\)._ _Moreover, the convergence is uniform with respect to \(n\) and with respect to the choice of the starting point \(x_{0}\) (i.e. the "rate" of convergence does not depend on the time-series \(\mathcal{A}_{n}\))._ Proof.: Since \(g\) is semi-conjugate to \(f\) via \(h\), \(h:X\to Y\) is a continuous surjection such that for every \(t\in\mathbb{N}\) we have \(h\circ f^{t}=g^{t}\circ h\). Fix \(t\in\mathbb{N}\) and \(k\in\mathbb{N}\) and let \(\varepsilon>0\). We will show that there exists \(M\) such that for all \(m>M\), all \(n\in\mathbb{N}\) and every finite time-series \(\mathcal{A}_{n}:=\{a_{i}:=f^{i-1}(x_{1})\mid i\in\{1,\ldots,n\}\}\subset X\) of length \(n\) (where \(x_{1}\in X\) is some point in \(X\)) it holds that \[\mathrm{ConjTest}(\mathcal{A}_{n},\mathcal{B}_{m};k,t,h)<\varepsilon. \tag{11}\] Note that \(|b_{2}-b_{1}|\leq|\mathcal{B}_{m}|\) for any \(m\geq 2\), which we will use at the end of the proof. 
As \(g\) is continuous and \(Y\) is compact, there exists \(\delta\) such that \(|g^{t}(y)-g^{t}(\tilde{y})|<\varepsilon\,|b_{2}-b_{1}|\) for every \(y,\ \tilde{y}\in Y\) with \(|y-\tilde{y}|<\delta\). As \(\mathcal{B}=\{y_{1},g(y_{1}),\ldots,g^{m}(y_{1}),\ldots\}=\{b_{1},\ldots,b_{m },\ldots\}\) is dense in \(Y\), there exists \(M\) such that if \(m>M\) then for every \(n\in\mathbb{N}\), every \(x_{1}\in X\) and every \(i\in\{1,2,\ldots,\}\) there exists \(j_{m}(i)\in\{1,2,\ldots,m\}\) such that \[|b_{j_{m}(i)}-h(a_{i})|<\delta,\] where \(a_{i}=f^{i-1}(x_{1})\in\mathcal{A}_{n}\). Thus for \(m>M\), we always (independently of the point \(a_{i}\in X\)) have \[|h(f^{t}(a_{i}))-g^{t}(\tilde{h}(a_{i}))|=|g^{t}(h(a_{i}))-g^{t}(\tilde{h}(a_{i} ))|<\varepsilon\,|b_{2}-b_{1}|\] as \(g^{t}(h(a_{i}))=h(f^{t}(a_{i}))\) and \(|\tilde{h}(a_{i})-h(a_{i})|<\delta\). Consequently, \[\mathrm{d}_{\mathrm{H}}\left((h\circ f^{t})(U_{i}^{k}),\ (g^{t}\circ\tilde{h})(U_ {i}^{k})\right)<\varepsilon\,|b_{2}-b_{1}|,\] where \(U_{i}^{k}=\kappa(a_{i},k,\mathcal{A}_{n})\) and \(\tilde{h}(U_{i}^{k})=\{\kappa(h(a_{j}),\mathcal{B}_{m})\mid a_{j}\in U_{k}^{i}\}\). Therefore \[\frac{\sum_{i=1}^{n}\mathrm{d}_{\mathrm{H}}\left((h\circ f^{t})(U_{i}^{k}), \ (g^{t}\circ\tilde{h})(U_{i}^{k})\right)}{n\,\mathrm{diam}(\mathcal{B}_{m})}< \frac{n\,\varepsilon\,|b_{2}-b_{1}|}{n\,\mathrm{diam}(\mathcal{B}_{m})}\leq\varepsilon\] since \(|b_{2}-b_{1}|\leq\mathrm{diam}(\mathcal{B}_{m})\) for every \(m\geq 1\). This proves (11). \(\square\) The compactness of \(Y\) and the density of the set \(\mathcal{B}=\{y_{1},g(y_{1}),\ldots,g^{m}(y_{1}),\ldots\}\) in \(Y\) is needed to obtain the uniform convergence in (10) but, as follows from the proof above, these assumptions can be relaxed at the cost of possible loosing the uniformity of the convergence: **Corollary 3**.: _Let \(f:X\to X\) and \(g:Y\to Y\), where \(X\subset\mathbb{R}^{d_{X}}\) and \(Y\subset\mathbb{R}^{d_{Y}}\), be continuous maps. Let \(x_{1}\in X\) and \(y_{1}\in Y\). Define \(\mathcal{A}_{n}:=\{a_{i}:=f^{i-1}(x_{1})\mid i\in\{1,\ldots,n\}\}\) and \(\mathcal{B}_{m}:=\{b_{i}:=g^{i-1}(y_{1})\mid i\in\{1,\ldots,m\}\}\). Suppose that \(\{h(a_{1}),\ldots h(a_{n})\}\subset\hat{Y}\) for some compact set \(\hat{Y}\subset Y\) such that the set \(\hat{Y}\cap\mathcal{B}\) is dense in \(\hat{Y}\), where \(\mathcal{B}=\{b_{1},\ldots,b_{m},\ldots\}\)._ _If \(g\) is **semi-conjugate** to \(f\) with \(h\) as a semi-conjugacy, then for every \(t\) and \(k\)_ \[\lim_{m\to\infty}\mathrm{ConjTest}(\mathcal{A}_{n},\mathcal{B}_{m};k,t,h)=0.\] **Remark 4**.: _In the above corollary the assumption on the existence of the set \(\hat{Y}\) means just that the trajectory of the point \(y_{1}\) contains points \(g^{j_{i}}(y_{1})\) which, respectively, "well-approximate" points \(h(a_{i})\), \(i=1,2,\ldots,n\)._ _Note also that we do not need the compactness of the space \(X\) nor the density of \(\mathcal{A}=\{a_{1},a_{2},\ldots,a_{n},\ldots\}\) in \(X\)._ The following statement is an easy consequence of the statements above **Theorem 5**.: _Let \(X\subset\mathbb{R}^{d_{X}}\) and \(Y\subset\mathbb{R}^{d_{Y}}\) be compact sets and \(f:X\to X\) and \(g:Y\to Y\) be continuous maps which are **conjugate** by a homeomorphism \(h:X\to Y\). Let \(x_{1}\in X\), \(y_{1}\in Y\) and \(\mathcal{A}_{n}\) and \(\mathcal{B}_{m}\) be defined as before. Suppose that \(\mathcal{A}_{n}\) and \(\mathcal{B}_{m}\) are dense, respectively, in \(X\) and \(Y\) as \(n\to\infty\) and \(m\to\infty\). 
Then for every \(t\) and \(k\)_ \[\lim_{m\to\infty}\operatorname{ConjTest}(\mathcal{A}_{n},\mathcal{B}_{m};k,t,h )=\lim_{n\to\infty}\operatorname{ConjTest}(\mathcal{B}_{m},\mathcal{A}_{m};k,t,h)=0.\] The assumptions on the compactness of the spaces and density of the trajectories can be slightly relaxed in the similar vein as before. The above results concern \(\operatorname{ConjTest}\). Note that in case of \(\operatorname{ConjTest}^{+}\) the neighborhoods \(\tilde{h}^{+}(U_{i}^{k})\), thus also \((g^{t}\circ\tilde{h}^{+})(U_{i}^{k})\), can be significantly enlarged by adding additional points to \(\tilde{h}(U_{i}^{k})\) and thus increasing the Hausdorff distance between corresponding sets. In order to still control this distance and formally prove desired convergence additional assumptions concerning space \(X\) and the sequence \(\mathcal{A}\) are needed: **Theorem 6**.: _Let \(f:X\to X\) and \(g:Y\to Y\), where \(X\subset\mathbb{R}^{d_{X}}\) and \(Y\subset\mathbb{R}^{d_{Y}}\) be continuous functions. For \(x_{1}\in X\) and \(n\in\mathbb{N}\) define \(\mathcal{A}_{n}:=\{a_{i}:=f^{i-1}(x_{1})\mid i\in\{1,2,\ldots,n\}\}\). Similarly, for \(y_{1}\in Y\) and \(m\in\mathbb{N}\) define \(\mathcal{B}_{m}:=\{b_{i}:=g^{i-1}(y_{1})\mid i\in\{1,2,\ldots,m\}\}\). Assume that \(X\) and \(Y\) are compact and that the set \(\mathcal{A}_{n}\) becomes dense in \(X\) as \(n\to\infty\), and \(\mathcal{B}_{m}\) becomes dense in \(Y\) as \(m\to\infty\)._ _Under those assumptions, if \(g\) is semiconjugate to \(f\) with \(h:X\to Y\) as a semi-conjugacy we have that_ \[\lim_{n\to\infty}\lim_{m\to\infty}\operatorname{ConjTest}^{+}(\mathcal{A}_{n},\mathcal{B}_{m};k,t,h)=0 \tag{12}\] _for any \(k\in\mathbb{N}\) and \(t\in\mathbb{N}\)._ Proof.: Since \(g\) is semi-conjugate to \(f\) via \(h\), for every \(t\in\mathbb{N}\) we have \(h\circ f^{t}=g^{t}\circ h\), where \(h:X\to Y\) is a continuous surjection. Expanding (12) yields \[\begin{split}&\lim_{n\to\infty}\lim_{m\to\infty}\frac{\sum_{i=1}^{ n}\mathrm{d}_{\mathrm{H}}\left((h\circ f^{t})(U_{i}^{k}),\ (g^{t}\circ\tilde{h}^{+})(U_{i}^{k})\right)}{n\ \mathrm{diam}( \mathcal{B}_{m})}\leq\\ &\leq\lim_{n\to\infty}\lim_{m\to\infty}\frac{\sum_{i=1}^{n} \mathrm{d}_{\mathrm{H}}\left((h\circ f^{t})(U_{i}^{k}),\ (g^{t}\circ\tilde{h})(U_{i}^{k})\right)}{n\ \mathrm{diam}( \mathcal{B}_{m})}+\\ &+\lim_{n\to\infty}\lim_{m\to\infty}\frac{\sum_{i=1}^{n} \mathrm{d}_{\mathrm{H}}\left((h\circ f^{t})(U_{i}^{k}),\ (g^{t}(\tilde{h}^{+}(U_{i}^{k})\setminus\tilde{h}(U_{i}^{k})))\right)}{n\ \mathrm{diam}( \mathcal{B}_{m})}.\end{split} \tag{13}\] Recall that \(U_{i}^{k}:=\kappa(a_{i},k,{\cal A}_{n})\), \(\tilde{h}(a_{i}):=\kappa(h(a_{i}),{\cal B}_{m})\), \(\tilde{h}(U_{i}^{k}):=\{\tilde{h}(a_{j}):a_{j}\in U_{i}^{k}\}\) and \(\tilde{h}^{+}(U_{i}^{k}):=\kappa(h(a_{i}),k_{i},{\cal B}_{m})\), where \(k_{i}\) is the smallest integer \(k_{i}\) such that \(\tilde{h}(U_{i}^{k})\subset\kappa(h(a_{i}),k_{i},{\cal B}_{m})\). Thus in particular, \(\tilde{h}(U_{i}^{k})\subset\tilde{h}^{+}(U_{i}^{k})\). Obviously all these neighborhoods \(U_{i}^{k}\), \(\tilde{h}(U_{i}^{k})\) and \(\tilde{h}^{+}(U_{i}^{k})\) depend on \(n\) and \(m\) (since they are taken with respect to \({\cal A}_{n}\) and \({\cal B}_{m}\)). Note that from Theorem 2 already follows that the first of the two terms in the sum in (13) vanishes. Thus we will only show that the second double limit vanishes as well. Let \(\varepsilon>0\), \(k\in\mathbb{N}\) and \(t\in\mathbb{N}\). 
Since \(g^{t}:Y\to Y\) is a continuous function on a compact metric space \(Y\), there exists \(\delta\) such that \(|g^{t}(x)-g^{t}(y)|<\frac{\varepsilon}{2}\) whenever \(x,y\in Y\) are such that \(|x-y|<\delta\). Similarly, since \(X\) is compact and \(h:X\to Y\) is continuous, there exists \(\delta_{1}\) such that \(|h(x)-h(y)|<\frac{\delta}{2}\) whenever \(x,y\in X\) such that \(|x-y|<\delta_{1}\). Since \({\cal B}\) is dense in \(Y\), there exists \(M\in\mathbb{N}\) such that for \(m>M\) and every \(y\in Y\), there exists \(\tilde{b}\in{\cal B}_{m}\) such that \(|\tilde{b}-y|<\frac{\delta}{4}\). Moreover, from the density of \({\cal A}\), there exists \(N\in\mathbb{N}\) such that for every \(n>N\) and every \(i\in\{1,2,\ldots,n\}\) we have \(\mbox{diam}(U_{i}^{k})<\delta_{1}\), i.e. if \(a_{j}\in U_{i}^{k}=\kappa(a_{i},k,{\cal A}_{n})\) then \(|a_{j}-a_{i}|<\delta_{1}\) and consequently \[|g^{t}(h(a_{j}))-g^{t}(h(a_{i}))|<\frac{\varepsilon}{2}. \tag{14}\] Assume thus \(n>N\). Then for \(m>M\) and every \(i\in\{1,2,\ldots n\}\) we have \(\mbox{diam}(U_{i}^{k})<\delta_{1}\) which also implies \(\mbox{diam}(h(U_{i}^{k}))<\frac{\delta}{2}\). As \(m>M\), every point of \(h(U_{i}^{k})\) can be approximated by some point of \({\cal B}_{m}\) with the accuracy better than \(\frac{\delta}{4}\). Consequently, \(\mbox{diam}(\tilde{h}(U_{i}^{k}))<\delta\) for every \(i\in\{1,2,\ldots,n\}\). Suppose that \(\tilde{b}\in\tilde{h}^{+}(U_{i}^{k})\setminus\tilde{h}(U_{i}^{k})\) for some \(\tilde{b}\in{\cal B}_{m}\). Then, by definition of \(\tilde{h}^{+}\), \[|\tilde{b}-h(a_{i})|\leq\mbox{diam}(\tilde{h}(U_{i}^{k}))<\delta. \tag{15}\] Thus for any \(a_{j}\in U_{i}^{k}=\kappa(a_{i},k,{\cal A}_{n})\) and any \(\tilde{b}\in\tilde{h}^{+}(U_{i}^{k})\setminus\tilde{h}(U_{i}^{k})\) we obtain \[|h(f^{t}(a_{j}))-g^{t}(\tilde{b})|\leq\\ \leq|h(f^{t}(a_{j})-g^{t}(h(a_{j}))|+|g^{t}(h(a_{j}))-g^{t}(h(a_{i }))|+|g^{t}(h(a_{i}))-g^{t}(\tilde{b})|\] where * \(|h(f^{t}(a_{j})-g^{t}(h(a_{j}))|=0\) by semi-conjugacy assumption * \(|g^{t}(h(a_{j}))-g^{t}(h(a_{i}))|<\frac{\varepsilon}{2}\) as follows from (14) * \(|g^{t}(h(a_{i}))-g^{t}(\tilde{b})|<\frac{\varepsilon}{2}\) as follows from (15). Finally for every \(i\in\{1,2,\ldots n\}\), every \(a_{j}\in U_{i}^{k}\) and every \(\tilde{b}\in(\tilde{h}^{+}(U_{i}^{k})\setminus\tilde{h}(U_{i}^{k}))\) we have \(|h(f^{t}(a_{j}))-g^{t}(\tilde{b})|<\varepsilon\) meaning that \[\frac{\sum_{i=1}^{n}\mathrm{d}_{\mathrm{H}}\left((h\circ f^{t})(U_{i}^{k}),\ g^{t}( \tilde{h}^{+}(U_{i}^{k})\setminus\tilde{h}(U_{i}^{k}))\right)}{n\ \mathrm{diam}(\mathcal{B}_{m})}<\frac{\varepsilon}{ \mathrm{diam}(\mathcal{B}_{m})}\] if only \(n>N\) and \(m>M\). This shows that the value of \(\mathrm{ConjTest}^{+}(\mathcal{A}_{n},\mathcal{B}_{m};k,t,h)\) can be arbitrarily small if \(n\) and \(m\) are sufficiently large and ends the proof. ## 4 Conjugacy experiments In this section the behavior of the described methods is experimentally studied. For that purpose a benchmark set of a number of time series originated from (non-)conjugate dynamical systems is generated. A time series of length \(N\) generated by a map \(f:X\to X\) with a starting point \(x_{1}\in X\) is denoted by \[\varrho(f,x_{1},N):=\left\{f^{t-1}(x_{1})\in X\ |\ t\in\{1,2,\ldots,N\}\right\}.\] All the experiments were computed in Python using floating number precision. 
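To fix this notation in code, a minimal orbit generator for \(\varrho(f,x_{1},N)\) could look as follows; this helper (reused in the later sketches) is our own illustration and is not taken from the authors' repository.

```python
import numpy as np


def trajectory(f, x1, N):
    """Return rho(f, x1, N) = (x1, f(x1), ..., f^{N-1}(x1)) as an (N, d)-array."""
    orbit = [np.atleast_1d(np.asarray(x1, dtype=float))]
    for _ in range(N - 1):
        orbit.append(np.atleast_1d(np.asarray(f(orbit[-1]))))
    return np.array(orbit)
```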
The implementations of the methods presented in this paper as well as the notebooks recreating the presented experiments are available at [https://github.com/dioscuri-tda/conjtest](https://github.com/dioscuri-tda/conjtest). ### Irrational rotation on a circle The first example involves a dynamics generated by rotation on a circle by an irrational angle. Let us define a \(1\)-dimensional circle as a quotient space \(\mathbb{S}:=\mathbb{R}/\mathbb{Z}\). Denote the operation of taking a decimal part of a number (modulo \(1\)) by \(x_{1}:=x-\lfloor x\rfloor\). Then, for a parameter \(\phi\in[0,1)\) we define a rigid rotation on a circle, \(f_{[\phi]}:\mathbb{S}\to\mathbb{S}\), as \[f_{[\phi]}(x):=(x+\phi)_{1}.\] We consider the following metric on \(\mathbb{S}\) \[\mathbf{d}_{\mathbb{S}}:\mathbb{S}\times\mathbb{S}\ni(x,y)\mapsto\min\left((x -y)_{1},(y-x)_{1}\right)\in[0,1). \tag{16}\] In this case \(\mathbf{d}_{\mathbb{S}}(x,y)\) can be interpreted as the length of the shorter arc joining points \(x\) and \(y\) on \(\mathbb{S}\). It is known that two rigid rotations, \(f_{[\phi]}\) and \(f_{[\psi]}\), are topologically conjugate if and only if \(\phi=\psi\) or when \(\phi+\psi=1\) (see e.g. Theorem 2.4.3 and Corollary 2.4.1 in [21]). In the first case the conjugating circle homeomorphism \(h\) preserves the orientation i.e. the lift \(H:\mathbb{R}\to\mathbb{R}\) of \(h:\mathbb{S}\to\mathbb{S}\) satisfies \(H(x+1)=H(x)+1\) for every \(x\in\mathbb{R}\) and in the second case \(h\) reverses the orientation \(H(x+1)=H(x)-1\) and the two rotations \(f_{[\phi]}\) and \(f_{[\psi]}\) are just mutually inverse. Moreover, for a map \(f_{[\phi]}\) we introduce a family of topologically conjugate maps given by \[f_{[\phi],s}(x):=\Big{(}(x_{1}^{s}+\phi)_{1}\Big{)}^{1/s},\ x\in\mathbb{R}\] with \(s>0\). In particular, \(f_{[\phi]}=f_{[\phi],1}\). It is easy to check that by putting \(h_{s}(x):=x_{1}^{s}\) we get \(f_{[\phi],s}=h_{s}^{-1}\circ f_{[\phi]}\circ h_{s}\). #### 4.1.1 Experiment 1A SetupLet \(\alpha=\frac{\sqrt{2}}{10}\). In the first experiment we compare the following time series: \[\mathcal{R}_{1} =\varrho(f_{[\alpha]},0.0,2000), \mathcal{R}_{2} =\varrho(f_{[\alpha]},0.25,2000),\] \[\mathcal{R}_{3} =\varrho(f_{[\alpha+0.02]},0.0,2000), \mathcal{R}_{4} =\varrho(f_{[2\alpha]},0.0,2000),\] \[\mathcal{R}_{5} =\varrho(f_{[\alpha],2},0.0,2000), \mathcal{R}_{6} =\mathcal{R}_{5}+\mathrm{err}(0.05),\] where \(\mathrm{err}(\epsilon)\) denotes a uniform noise sampled from the interval \([-\epsilon,\epsilon]\). In case of ConjTest the comparison \(\mathcal{R}_{1}\) versus \(\mathcal{R}_{2}\), \(\mathcal{R}_{3}\) and \(\mathcal{R}_{4}\) was done with \(h\equiv\mathrm{id}_{\mathbb{S}}\). For \(\mathcal{R}_{3}\) and \(\mathcal{R}_{4}\), it is not a proper connecting homeomorphism, but as we already mentioned, those systems are not conjugate to \(\mathcal{R}_{1}\) and therefore, there is no connecting homeomorphism at all. When comparing \(\mathcal{R}_{1}\) versus \(\mathcal{R}_{4}\) and \(\mathcal{R}_{5}\) we use homeomorphism \(h_{2}(x):=x_{1}^{2}\). As follows from Poincare Classification Theorem, \(f_{[\alpha]}\) and \(f_{[2\alpha]}\) are not conjugate nor semi-conjugate whereas \(f_{[\alpha]}\) and \(f_{[\alpha],2}\) are conjugate via \(h_{2}\). Thus the expectation is to confirm conjugacy of \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) and of \(\mathcal{R}_{1}\) and \(\mathcal{R}_{5}\) and indicate deviations from conjugacy in all the remaining cases. 
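Assuming the `trajectory` helper from the sketch above is in scope, the six time series of this experiment can be generated roughly as follows; the random seed and the use of NumPy's uniform generator for \(\mathrm{err}(0.05)\) are our own choices.

```python
import numpy as np

alpha = np.sqrt(2) / 10


def rotation(phi):
    """Rigid rotation f_[phi](x) = (x + phi) mod 1 on S = R/Z."""
    return lambda x: (x + phi) % 1.0


def rotation_conjugated(phi, s):
    """f_[phi],s = h_s^{-1} o f_[phi] o h_s with h_s(x) = x^s (mod 1)."""
    return lambda x: (((x % 1.0) ** s + phi) % 1.0) ** (1.0 / s)


R1 = trajectory(rotation(alpha), 0.0, 2000)
R2 = trajectory(rotation(alpha), 0.25, 2000)
R3 = trajectory(rotation(alpha + 0.02), 0.0, 2000)
R4 = trajectory(rotation(2 * alpha), 0.0, 2000)
R5 = trajectory(rotation_conjugated(alpha, 2), 0.0, 2000)
rng = np.random.default_rng(0)                       # seed chosen arbitrarily
R6 = R5 + rng.uniform(-0.05, 0.05, size=R5.shape)    # R5 + err(0.05)
```

In the ConjTest comparisons described above, these series are paired either with the identity or with \(h_{2}(x)=x^{2}\) as the connecting map; note also that the circle metric of Eq. (16), rather than the plain Euclidean distance, should be used when distances on \(\mathbb{S}\) are computed.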
Let us recall that for the FNN and KNN methods we always use \(h(x_{i})=y_{i}\), a connecting homeomorphism based on the correspondence of indices.

**Results.** The results are presented in Table 1. Since the presented methods are not symmetric, the order of the input time series matters. To accommodate this information, every cell contains two values, above and below the diagonal. For the column with header "\(\mathcal{R}_{i}\) vs. \(\mathcal{R}_{j}\)", the cell's upper value corresponds to the outcome of \(\mathrm{FNN}(\mathcal{R}_{i},\mathcal{R}_{j};r)\), \(\mathrm{KNN}(\mathcal{R}_{i},\mathcal{R}_{j};k)\), \(\mathrm{ConjTest}(\mathcal{R}_{i},\mathcal{R}_{j};k,t,h)\) and \(\mathrm{ConjTest}^{+}(\mathcal{R}_{i},\mathcal{R}_{j};k,t,h)\), in the respective rows. The lower value corresponds to \(\mathrm{FNN}(\mathcal{R}_{j},\mathcal{R}_{i};r)\), \(\mathrm{KNN}(\mathcal{R}_{j},\mathcal{R}_{i};k)\), \(\mathrm{ConjTest}(\mathcal{R}_{j},\mathcal{R}_{i};k,t,h)\) and \(\mathrm{ConjTest}^{+}(\mathcal{R}_{j},\mathcal{R}_{i};k,t,h)\), respectively. As we can see from Table 1, the starting point does not affect the results of the methods (\(\mathcal{R}_{1}\) vs. \(\mathcal{R}_{2}\)), since all the values in the first column are close to 0. This is expected due to the symmetry of the considered system. The non-linearity introduced in time series \(\mathcal{R}_{5}\) also does not affect the results. Although \(f_{[\alpha],2}\) is nonlinear, it is conjugate to the rotation \(f_{[\alpha]}\), which is reflected in the tests' values. However, when we change the rotation parameter we see an increase in the measured values (\(\mathcal{R}_{1}\) vs. \(\mathcal{R}_{3}\) and \(\mathcal{R}_{1}\) vs. \(\mathcal{R}_{4}\)). It is particularly visible in the case of FNN and KNN. Interestingly, a small perturbation of the angle (\(\mathcal{R}_{3}\)) can cause a bigger change in the value than a large one (\(\mathcal{R}_{4}\)). We investigate how the perturbation of the rotation parameter affects the values of the examined methods in Experiment 1B. Moreover, the last column (\(\mathcal{R}_{1}\) vs. \(\mathcal{R}_{6}\)) shows that FNN is very sensitive to noise, while KNN and the ConjTest methods exhibit some robustness. The influence of noise on the value of the test statistics is studied further in Experiment 1C. Note also that additional summary comments concerning Table 1, as well as the results of the other forthcoming experiments, are presented at the end of the article.

#### 4.1.2 Experiment 1B

In this experiment we test how a difference in the system parameter affects the tested methods.

**Setup.** Let \(\alpha:=\frac{\sqrt{2}}{10}\approx 0.141\). We consider a family of time series parameterized by \(\beta\): \[\left\{\mathcal{R}_{\beta}:=\varrho(f_{[\beta]},0.0,2000)\mid\beta=\alpha+\frac{i\alpha}{100},\ i\in\{-50,-49,\ldots,125\}\right\}. \tag{17}\] Thus, the tested interval of values of \(\beta\) is approximately \([0.07,0.32]\). As the reference value we chose \(\alpha=\frac{\sqrt{2}}{10}\approx 0.141\) and denote the corresponding time series by \(\mathcal{R}_{\alpha}\). We compare all time series from the family (17) with \(\mathcal{R}_{\alpha}\). In the case of the ConjTest methods we use \(h=\mathrm{id}\).

**Results.** The outcome of the experiment is plotted in Figure 3. We can see that all methods give values close to \(0\) when comparing \(\mathcal{R}_{\alpha}\) with itself. For different values of the parameter \(r\) of FNN, the plots (Figure 3, top left) look almost identical.
Even a small perturbation of the rotation parameter causes an immediate jump of FNN value from \(0\) to \(1\), making it extremely sensitive to any changes in the system. Obviously, unless \(\beta=\alpha\), \(\mathcal{R}_{\alpha}\) and \(\mathcal{R}_{\beta}\) are not conjugate. However, sometimes it might be convenient to have somehow smoother relation of the test value to the small infinitesimal change of the rotation angle. KNN method seems to behave inconsistently, but we can see that the higher parameter \(k\) gets the closer we get to a shape resembling the curve obtained with FNN. On the other hand, ConjTest shows a linear dependence on \(\beta\) parameter. Moreover, different values of ConjTest's parameter \(t\) result in a different slope of this dependency. Both, FNN and KNN exhibit an interesting drop of the value when \(\beta\approx 0.283\) that is \(\beta=2\alpha\). Formally, we know that \(f_{[\alpha]}\) and \(f_{[2\alpha]}\) are not conjugate systems. However, we can explain this outcome by analyzing the methods. Let \(a_{i}\in\mathcal{R}_{\alpha}\) and let \(\tau\in\mathbb{Z}\) such that \(a_{j}:=a_{i+\tau}\in\mathcal{R}_{\alpha}\) be the nearest neighbor Figure 3: Dependence of the conjugacy measures on the perturbation of rotation angle. Top left: FNN method. Top right: ConjTest method. Bottom: KNN method. of \(a_{i}\). In particular, \(\tau\in\mathbb{Z}\). By (16) we get \(\mathbf{d}_{\mathbb{S}}(a_{i},a_{j})=(\alpha\tau)_{1}\) or \((-\alpha\tau)_{1}\). There is an \(N\in\mathbb{Z}\) and a \(\delta\in[0,1)\) such that \(\alpha\tau=N+\delta\). Since \(\mathbf{d}_{\mathbb{S}}(a_{i},a_{j})\approx 0\), it follows that \(\delta_{1}\approx 0\). To get FNN we also need to know \(\mathbf{d}_{\mathbb{S}}(b_{i},b_{j})\). Let \(\beta=z\alpha\). Then, \(b_{i}=(z\alpha i)_{1}\), \(b_{j}=(z\alpha i+z\alpha\tau)_{1}\) and \(\mathbf{d}_{\mathbb{S}}(b_{i},b_{j})=(z\alpha\tau)_{1}\) or \((-z\alpha\tau)_{1}\). Thus, \(z\alpha\tau=zN+z\delta\). We assume that \(z\delta\in[0,1)\), because \(\delta_{1}\approx 0\) and \(z\) is not very large. Again, there exists an \(M\in\mathbb{Z}\) and \(\epsilon\in[0,1)\) such that \(zN=M+\epsilon\). Now, if \(zN\in\mathbb{Z}\), then \(\epsilon=0\), \(\mathbf{d}_{\mathbb{S}}(b_{i},b_{j})=(z\delta)_{1}=z\,\mathbf{d}_{\mathbb{S}} (a_{i},a_{j})\) (last equality given by \(\delta_{1}\approx 0\)) and \(\frac{\mathbf{d}_{\mathbb{S}}(b_{i},b_{j})}{\mathbf{d}_{\mathbb{S}}(a_{i},a_{ j})}=z\). If \(zN\not\in\mathbb{Z}\) then \(\epsilon\neq 0\) and \(\frac{\mathbf{d}_{\mathbb{S}}(b_{i},b_{j})}{\mathbf{d}_{\mathbb{S}}(a_{i},a_{ j})}=\frac{\geq 0}{\sim 0}\). Hence, the fraction gives a large number and the numerator of FNN will count most of the points, unless \(zN\in\mathbb{Z}\) which is always satisfied when \(z\in\mathbb{Z}\). Moreover, for the irrational rotation \(\tau\) might be large. In our experiments we usually get \(|\tau|>1000\). Thus, \(N\) is large and \(\epsilon\) is basically a random number. In the case of KNN there is a chance that at least for some of the \(k\)-nearest neighbors \(zN\in\mathbb{Z}\). Hence, the more rugged shape of the curve. In the case of ConjTest we observe a clear impact of ConjTest's parameter \(t\) on the shape of the curve. The method takes \(k\)-nearest neighbors of a point \(x_{i}\) (\(U_{i}^{k}\) in the formula (1)) and moves them \(t\) times about angle \(\alpha\). 
At the same time the corresponding image of those points in the system \(\mathcal{R}_{\beta}\) (\(\tilde{h}(U_{i}^{k})\) in the formula (1)) is rotated \(t\) times about \(\beta\) angle. Thus, discrepancy of the position of those two sets of points is proportional to \(t\beta\). In particular, when \((t\beta)_{1}=\alpha\), these two sets are in the same position. #### 4.1.3 Experiment 1C In this experiment, instead of perturbing the parameter of the system we perturb the time series itself by applying a noise to every point of the series. SetupSet \(\alpha=\frac{\sqrt{2}}{10}\). We compare a time series \(\mathcal{R}_{1}:=\varrho(f_{[\alpha]},0.0,2000)\) with a family of time series: \[\left\{\widetilde{\mathcal{R}}_{\epsilon}:=\varrho(f_{[\alpha],2},0.0,2000)+ \mathrm{err}(\epsilon)\mid\epsilon\in[0.00,0.25]\right\}, \tag{18}\] where \(\mathrm{err}(\epsilon)\) is a uniform noise sampled from the interval \([-\epsilon,\epsilon]\) applied to every point of the time series. In the case of ConjTest we again use \(h(x)=x^{2}{}_{1}\). Figure 4: Dependence of conjugacy measures on the perturbation of time series. ResultsResultsResults are presented in Figure 4. Again, FNN presents a very high sensitivity on any disruption of a time series and even small amount of noise gives a conclusion that two systems are not conjugate. On the other hand, KNN and ConjTest present an almost linear dependence on noise level. Note that higher values of parameters \(k\) and \(t\) make methods more sensitive to the noise. ### Example: irrational rotation on a torus Let us consider a simple extension of the previous rotation example to a rotation on a torus. With a torus defined as \(\mathbb{T}:=\mathbb{S}\times\mathbb{S}\), where \(\mathbb{S}=\mathbb{R}/\mathbb{Z}\), we can introduce map \(f_{[\phi_{1},\phi_{2}]}:\mathbb{T}\to\mathbb{T}\) defined as \[f_{[\phi_{1},\phi_{2}]}(x^{(1)},x^{(2)})=((x^{(1)}+\phi_{1})_{1},(x^{(2)}+\phi _{2})_{1}),\] where \(\phi_{1},\phi_{2}\in[0,1)\). We equip the space with the maximum metric \(\mathbf{d}_{\mathbb{T}}\): \[\mathbf{d}_{\mathbb{T}}:\mathbb{T}\times\mathbb{T}\ni((x_{1},y_{1}),(x_{2},y_ {2}))\mapsto\max{(\mathbf{d}_{\mathbb{S}}(x_{1},x_{2}),\mathbf{d}_{\mathbb{S }}(y_{1},y_{2}))}\in[0,1),\] where \(\mathbf{d}_{\mathbb{S}}\) is the sphere metric (see (16)). Note that rotation on a torus described above and rotation on a circle \(f_{[\phi_{i}]}:\mathbb{S}\to\mathbb{S}\) studied in Section 4.1 gives a simple example of a semi-conjugate systems. Namely, let \(h:\mathbb{T}\to\mathbb{S}\) be a projection \(h_{i}(x^{(1)},x^{(2)})=x^{(i)}\), \(i=1,2\). Then we get the equality \(h_{i}\circ f_{[\phi_{1},\phi_{2}]}=f_{[\phi_{i}]}\circ h_{i}\) for \(i\in\{1,2\}\). #### 4.2.1 Experiment 2A SetupFor this experiment we consider the following time series: \[\mathcal{T}_{1} =\varrho(f_{[\alpha,\beta]},(0.0,0.0),2000), \mathcal{S}_{1} =\mathcal{T}_{1}^{(1)},\] \[\mathcal{T}_{2} =\varrho(f_{[1.1\alpha,\beta]},(0.1,0.0),2000), \mathcal{S}_{2} =\mathcal{T}_{2}^{(1)},\] \[\mathcal{T}_{3} =\varrho(f_{[\beta,\beta]},(0.1,0.0),2000), \mathcal{S}_{3} =\mathcal{T}_{3}^{(1)},\] where \(\alpha=\sqrt{2}/10\), \(\beta=\sqrt{3}/10\), and \(\mathcal{S}_{i}=\mathcal{T}_{i}^{(1)}\), \(i=1,2,3\), is a time series obtained from the projection of the elements of \(\mathcal{T}_{i}\) onto the first coordinate. When comparing \(\mathcal{T}_{i}\) with \(\mathcal{T}_{j}\) for \(i,j\in\{1,2,3\}\) we use \(h\equiv\mathrm{id}\). 
When we compare \(\mathcal{T}_{i}\) versus \(\mathcal{S}_{j}\) we use \(h(x,y)=x\), and for \(\mathcal{S}_{i}\) versus \(\mathcal{T}_{j}\) we get \(h(x)=(x,0)\). ResultsThe asymmetry of results in the first column (\(\mathcal{T}_{1}\) vs. \(\mathcal{S}_{1}\)) in Table 2 shows that all methods detects a semi-conjugacy between \(\mathcal{T}_{1}\) and \(\mathcal{S}_{1}\), i.e. that \(f_{[\alpha]}\) is semi-conjugate to \(f_{[\alpha,\beta]}\) via \(h_{1}\). An embedding of a torus into a 1-sphere preserves neighborhood of a point. The inverse map is clearly does not exist. Rest of the results confirms conclusions from the previous experiment. The second and the third column (\(\mathcal{T}_{1}\) vs. \(\mathcal{T}_{2}\) and \(\mathcal{T}_{1}\) vs. \(\mathcal{S}_{2}\)) show that FNN and KNN are sensitive to a perturbation of the system parameters. The fourth and the fifth column (\(\mathcal{T}_{1}\) vs. \(\mathcal{T}_{3}\) and \(\mathcal{T}_{1}\) vs. \(\mathcal{S}_{3}\)) present another example where those two methods produce a false positive answer suggesting a semi-conjugacy. This time the problematic case is not due to a doubling of the rotation parameter, but because of coinciding rotation angles. Again, the behavior of ConjTest method exhibits a response that is relative to the level of perturbation. ### Example: the logistic map and the tent map Our next experiment examine two broadly studied chaotic maps defined on a real line. The logistic map and the tent map, \(f_{l},g_{\mu}:[0,1]\to[0,1]\), respectively defined as: \[f_{l}(x):=lx(1-x)\qquad\text{and}\qquad g_{\mu}(x):=\mu\min\{x,\,1-x\}, \tag{19}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline methodtest & \(\mathcal{T}_{1}\) vs. \(\mathcal{S}_{1}\) & \(\mathcal{T}_{1}\) vs. \(\mathcal{T}_{2}\) & \(\mathcal{T}_{1}\) vs. \(\mathcal{S}_{2}\) & \(\mathcal{T}_{1}\) vs. \(\mathcal{T}_{3}\) & \(\mathcal{T}_{1}\) vs. \(\mathcal{S}_{3}\) \\ \hline FNN (r=2) & 0.0 & 1.0 & 1.0 & 1.0 & 0.0 & 0.0 \\ \hline KNN (k=5) &.042 &.617 &.275 &.514 &.041 &.041 \\ \hline ConjTest (k=5, t=5) &.001 &.149 &.142 &.451 &.318 \\ \hline ConjTest\({}^{+}\) (k=5, t=5) &.270 &.148 &.270 &.322 &.322 \\ \hline ConjTest\({}^{+}\) (k=5, t=5) &.018 &.272 &.154 &.143 &.458 &.319 \\ \hline \end{tabular} \end{table} Table 2: Comparison of conjugacy measures for time series generated by the rotation on a torus. The number in the upper left part of the cell corresponds to a comparison of the first time series vs. the second one, while the lower right number corresponds to the inverse comparison. where, typically, \(l\in[0,4]\) and \(\mu\in[0,2]\). For parameters \(l=4\) and \(\mu=2\) the systems are conjugate via homeomorphism: \[h(x):=\frac{2\arcsin(\sqrt{x})}{\pi}, \tag{20}\] that is, \(h\circ f_{4}=g_{2}\circ h\). In this example we use the standard metric induced from \(\mathbb{R}\). #### 4.3.1 Experiment 3A SetupIn the initial experiment for those systems we compare the following time series: \[\mathcal{A} =\varrho(f_{4},0.2,2000), \mathcal{B}_{2} =\varrho(f_{4},0.21,2000),\] \[\mathcal{B}_{1} =\varrho(g_{2},h(0.2),2000), \mathcal{B}_{3} =\varrho(f_{3.99},0.2,2000),\] \[\mathcal{B}_{4} =\varrho(f_{3.99},0.21,2000).\] Time series \(\mathcal{A}\) is conjugate to \(\mathcal{B}_{1}\) through the homeomorphism \(h\). Time series \(\mathcal{A}\) and \(\mathcal{B}_{2}\) comes from the same system - \(f_{4}\), but are generated using different starting points. 
Sequences \(\mathcal{B}_{3}\) and \(\mathcal{B}_{4}\) are both generated by the logistic map but with different parameter value (\(l=3.99\)) than \(\mathcal{A}\); thus, they are not conjugate with \(\mathcal{A}\). For ConjTest methods we use (20) to compare \(\mathcal{A}\) with \(\mathcal{B}_{1}\), and the identity map to compare \(\mathcal{A}\) with \(\mathcal{B}_{2}\), \(\mathcal{B}_{3}\) and \(\mathcal{B}_{4}\). ResultsThe first column of Table 3 shows that all methods properly identify the tent map as a system conjugate to the logistic map (provided that the two time series are generated by dynamically corresponding points, i.e. \(a_{1}\) and \(b_{1}:=h(a_{1})\), respectively). The second column demonstrates that FNN and KNN get confused by a perturbation of the starting point generating time series. This effect was not present in the circle and the torus example (Sections 4.1 and 4.2) due to a full symmetry in those examples. The ConjTest methods are only weakly affected by the perturbation of the starting point. Though, we expect that for higher values of parameter \(t\) may significantly affect the outcome of ConjTest due to chaotic nature of the map. We test it further in the context of Lorenz attractor (Experiment \(4C\)). The third and the fourth column reflect high sensitivity of FNN and KNN to the parameter of the system. On the other hand, ConjTest methods admit rather conservative response to a change of the parameter. The experiment shows that FNN and KNN are able to detect a change caused by a perturbation of a system immediately. However, in the context of empirical data we may not be able determine whether the starting point was perturbed, the system has actually changed, or there is a noise in our measurements. Thus, some robustness with respect to noise might be desirable and the seemingly blurred concept of the conjugacy represented by ConjTest might be helpful. #### 4.3.2 Experiment 3B The logistic map is is one of the standard example of chaotic maps. Thus, we expect that the behavior of the system will change significantly if we modify the parameter \(l\). Here, we examine how the perturbation of \(l\) affects the outcome of tested methods. SetupWe generated a collection of time series: \[\left\{\mathcal{B}(l):=\varrho(f_{l},0.2,2000)\mid l\in\left\{3.8,3.805,3.81, \ldots,4.0\right\}\right\}.\] Every time series \(\mathcal{B}(l)\) in the collection was compared with a reference time series \(\mathcal{B}(4.0)\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{method test} & \multirow{2}{*}{\(\mathcal{A}\) vs. \(\mathcal{B}_{1}\)} & \multicolumn{2}{c|}{\(\mathcal{A}\) vs. \(\mathcal{B}_{2}\)} & \multicolumn{2}{c|}{\(\mathcal{A}\) vs. \(\mathcal{B}_{3}\)} & \multicolumn{2}{c|}{\(\mathcal{A}\) vs. 
\(\mathcal{B}_{4}\)} \\ & & \multicolumn{2}{c|}{(starting point} & \multicolumn{2}{c|}{(parameter} & \multicolumn{1}{c}{(st.point + param.} \\ & & \multicolumn{2}{c|}{perturbation)} & \multicolumn{1}{c|}{perturbation)} & \multicolumn{1}{c|}{perturbation)} \\ \hline FNN (r=2) & \(.205\) & \(.998\) & \(1.0\) & \(1.0\) & \(1.0\) \\ \cline{2-6} & \(0.0\) & \(.10\) & \(1.0\) & \(1.0\) & \(.999\) \\ \hline KNN (k=5) & \(0.0\) & \(.825\) & \(.831\) & \(.835\) & \(.835\) \\ \cline{2-6} & \(0.0\) & \(.828\) & \(.832\) & \(.833\) \\ \hline ConjTest (k=5, t=5) & \(0.0\) & \(.017\) & \(.099\) & \(.099\) & \(.099\) \\ \cline{2-6} & \(0.0\) & \(.017\) & \(.059\) & \(.059\) \\ \hline ConjTest\({}^{+}\) (k=5, t=5) & \(0.00\) & \(.027\) & \(.104\) & \(.104\) \\ \cline{2-6} & \(.001\) & \(.023\) & \(.065\) & \(.064\) \\ \hline \end{tabular} \end{table} Table 3: Comparison of conjugacy measures for time series generated by logistic and tent maps. The number in the upper left part of the cell corresponds to a comparison of the first time series vs. the second one, while the lower right number corresponds to the inverse comparison. ResultsThe results are plotted in Figure 5. As Experiment 3A suggested, FNN and KNN quickly saturates, providing almost "binary" response i.e. the output value is either 0 or a fixed non-zero number depending on parameter \(k\). Similarly to Experiment 1B we observe that with a higher parameter \(k\) the curve corresponding to KNN gets more similar to FNN and becomes nearly a step function. ConjTest admits approximately continuous dependence on the value of the parameter of the system. However, higher values of the parameter \(t\) of ConjTest make the curve more steep and forms a significant step down in the vicinity of \(l=4\). This makes sense, because the more time-steps forward we take into account the more non-linearity of the system affects the tested neighborhood. We can observe an interesting effect in the neighborhood of \(l=3.83\) where values of FNN suddenly drop and values of ConjTest rise. However, the origin of this effect is not clear. Figure 5: Dependence of the conjugacy measures on a change of the parameter of the logistic map. ### Example: Lorenz attractor and its embeddings The fourth example is based on the Lorenz system defined by equations: \[\begin{cases}\dot{x}&=\sigma(y-x),\\ \dot{y}&=x(\rho-z)-y,\\ \dot{z}&=xy-\beta z,\end{cases} \tag{21}\] which induce a continuous dynamical system \(\varphi:\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}^{3}\). We consider the classical values of the parameters: \(\sigma=10\), \(\rho=28\), and \(\beta=8/3\). A time series can be generated by iterates of the map \(f(x):=\varphi(x,\tilde{t})\), where \(\tilde{t}>0\) is a fixed value of the time parameter. For the following experiments we chose \(\tilde{t}=0.02\) and we use the Runge-Kutta method of an order \(5(4)\) to generate the time series. #### 4.4.1 Experiment 4A SetupLet \(p_{1}=(1,1,1)\) and \(p_{2}=(2,1,1)\). In this experiment we compare the following time series: \[\mathcal{L}_{i}=\varrho(f,f^{2000}(p_{i}),10000),\ \ \mathcal{P}_{x,d}^{i}=\Pi( \mathcal{L}_{i}^{(1)},d,5),\ \ \mathcal{P}_{z,d}^{i}=\Pi(\mathcal{L}_{i}^{(3)},d,5),\] where \(i\in\{1,2\}\). Recall that \(\Pi\) denotes the embedding of a time series into \(\mathbb{R}^{d}\) and \(\mathcal{L}_{i}^{j}\) is a projection of time series \(\mathcal{L}_{i}\) onto its \(j\)-th coordinate. In all the embeddings we choose the lag \(l=5\). 
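As an illustration of how the time series \(\mathcal{L}_{i}\) and their delay embeddings can be produced, here is a sketch using SciPy's RK45 integrator together with the `trajectory` helper from above; the integration tolerances and the `delay_embedding` helper standing in for \(\Pi(\cdot,d,l)\) are our own choices, not taken from the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp


def lorenz_rhs(t, xyz, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = xyz
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]


def lorenz_map(x, dt=0.02):
    """f(x) := phi(x, dt): advance the Lorenz flow by a fixed time step with RK45."""
    sol = solve_ivp(lorenz_rhs, (0.0, dt), x, method="RK45", rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]


def delay_embedding(series, d, lag):
    """Pi(series, d, lag): map a scalar series (s_i) to vectors (s_i, s_{i+lag}, ..., s_{i+(d-1)lag})."""
    s = np.asarray(series).ravel()
    n = len(s) - (d - 1) * lag
    return np.column_stack([s[j * lag: j * lag + n] for j in range(d)])


# discard a transient of 2000 steps, as in the text, then record 10000 consecutive states
x0 = np.array([1.0, 1.0, 1.0])          # p_1
for _ in range(2000):
    x0 = lorenz_map(x0)
L1 = trajectory(lorenz_map, x0, 10000)
P_x3 = delay_embedding(L1[:, 0], d=3, lag=5)   # 3-d embedding of the x-coordinate
```

Calling `lorenz_map` once per step is slow but keeps the sketch close to the notation \(f(x):=\varphi(x,\tilde{t})\); a single long integration sampled every \(\tilde{t}\) would be the faster design choice.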
Note that the first point of time series \(\mathcal{L}_{i}\) is equal to the 2000-th iterate of the point \(p_{i}\) under the map \(f\); cutting off such a transient part of the time series is a standard procedure. Time series \(\mathcal{P}_{x,d}^{i}\) and \(\mathcal{P}_{z,d}^{i}\) are embeddings of the first and third coordinates of \(\mathcal{L}_{i}\), respectively. As Figure 6 (top right) suggests, the embedding of the first coordinate into \(\mathbb{R}^{3}\) results in a structure topologically similar to the Lorenz attractor. The embedding of the third coordinate, due to the symmetry of the system, produces a collapsed structure with "wings" of the attractor glued together (Figure 6, bottom right). Thus, we expect the time series \(\mathcal{P}_{z,d}^{i}\) to be recognized as non-conjugate to \(\mathcal{L}_{i}\).

Figure 6: A time series generated from the Lorenz system (top left) and 3-d embeddings of its projections onto the \(x\)-coordinate (top right), \(y\)-coordinate (bottom left) and \(z\)-coordinate (bottom right) with a delay parameter \(l=5\).

In order to compare \(\mathcal{L}_{i}\) and the embedded time series with ConjTest we need to find a suitable map \(h\). Ideally, such a map should be a homeomorphism between the Lorenz attractor \(L\subset\mathbb{R}^{3}\) (or, more precisely, the \(\omega\)-limit set of the corresponding initial condition under the system (21)) and its image \(h(L)\). However, the construction of the time series allows us to easily define the best candidate for such a correspondence map point-wise, for all elements of the time series. For instance, a local approximation of \(h\) when comparing \(\mathcal{L}_{i}\subset\mathbb{R}^{3}\) and \(P^{j}_{x,d}\subset\mathbb{R}^{d}\) is given by: \[h:\mathcal{L}_{i}\ni\mathbf{x}_{t}\mapsto(\mathbf{x}_{t}^{(1)},\mathbf{x}_{t+5}^{(1)},\ldots,\mathbf{x}_{t+5d}^{(1)})\in P^{i}_{x,d}\subseteq\mathbb{R}^{d}, \tag{22}\] where \(\mathbf{x}_{t}:=(x_{t},y_{t},z_{t})\in\mathbb{R}^{3}\) denotes the state of the system (21) at time \(t\) and \(\mathbf{x}_{t}^{(1)}=x_{t}\) denotes its projection onto the \(x\)-coordinate. When \(j=i\), this formula matches the points of \(\mathcal{L}_{i}\) to the corresponding points of \(P^{j}_{x,d}=P^{i}_{x,d}\). However, if \(j\neq i\), then the points of \(\mathcal{L}_{i}\) are mapped onto the points of \(P^{i}_{x,d}\), not onto \(P^{j}_{x,d}\); thus, in our comparison tests we in fact verify how well \(P^{j}_{x,d}\) approximates the image of \(\mathcal{L}_{i}\) under \(h\) and the original dynamics. For the symmetric comparison of \(P^{j}_{x,d}\) with \(\mathcal{L}_{i}\), the local approximation of \(h^{-1}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{3}\) takes the form: \[h^{-1}:P^{j}_{x,d}\ni(\mathbf{x}_{t}^{(1)},\mathbf{x}_{t+5}^{(1)},\ldots,\mathbf{x}_{t+5d}^{(1)})\mapsto\mathbf{x}_{t}\in\mathcal{L}_{j}\subset\mathbb{R}^{3}. \tag{23}\] These are naive, data-driven approximations of the potential connecting map \(h\). In particular, a homeomorphism from the Lorenz attractor to its 1D embedding cannot exist, but we can still construct the map \(h\) using the above recipe, which seems a natural and the best available candidate for such a comparison. More sophisticated ways of finding an optimal \(h\) in general situations will be the subject of future studies. In the experiments below we use the maximum metric.

**Results.** As one can expect, Table 4 shows that embeddings of the first coordinate give, in general, noticeably lower values than embeddings of the \(z\)-coordinate.
Thus, suggesting that \(\mathcal{L}_{1}\) is conjugate to \(\mathcal{P}^{1}_{x,3}\), but not to \(\mathcal{P}^{1}_{z,3}\). Again, Table 5 shows that, in the case of chaotic systems, FNN and KNN are highly sensitive to variation in starting points of the series. All methods suggest that 2-d embedding of the \(x\)-coordinate has structure reasonably compatible with \(\mathcal{L}_{1}\). With the additional dimension values gets only slightly lower. One could expect that 3 dimensions would be necessary for an accurate reconstruction of the attractor. Note that Takens' Embedding Theorem suggests even dimension of 5, as the Hausdorff dimension of the Lorenz attractor is about 2.06 [23]. However, it often turns out that the dynamics can be reconstructed with the embedding dimension less than given by Takens' Embedding Theorem (as implied e.g. by Probabilistic Takens' Embedding Theorem, see [3, 4]). We also attribute our outcome to the observation that the \(x\)-coordinate carries large piece of the system information, which is visually presented in Figure 6. Interestingly, when we use ConjTest to compare \(\mathcal{L}_{1}\) with embedding time series generated from \(\mathcal{L}_{1}\) we always get values \(0.0\). The connecting maps used in this experiment, defined by (22) and (23), establish a direct correspondence between points in two time series. As a result we get \(\tilde{h}=h\) in the definition of ConjTest, and consequently, every pair of sets in the numerator of equation (7) is the same. If the embedded time series comes from another trajectory then \(\tilde{h}\neq h\) and ConjTest gives the expected results, as visible in Table 5. On the other hand, computationally more demanding \(\mathrm{ConjTest}^{+}\) exhibits virtually the same results in both cases, when \(\mathcal{L}_{1}\) is compared with embeddings of its own (Table 4) and when \(\mathcal{L}_{1}\) is compared with embeddings of \(\mathcal{L}_{2}\) (Table 5). #### 4.4.2 Experiment 4B This experiment is proceeded according to the standard use of FNN for estimating optimal embedding dimension without an explicit knowledge about the original system. SetupLet \(p=(1,1,1)\), we generate the following collection of time series \[\mathcal{L}=\varrho(f,f^{2000}(p),10000),\quad\mathcal{P}_{d}=\Pi(\mathcal{L} ^{(1)},d,5),\] where \(d\in\{1,2,3,4,5,6\}\). In the experiment we compare pairs of embedded time series corresponding to consecutive dimensions, e.g., \(\mathcal{P}_{d}\) with \(\mathcal{P}_{d+1}\), for the entire range of a parameter values. We are looking for the minimal value of \(d\) such that \(\mathcal{P}_{d-1}\) is dynamically different from \(\mathcal{P}_{d}\), but \(\mathcal{P}_{d}\) is similar to \(\mathcal{P}_{d+1}\). The interpretation says that \(d\) is optimal, because by passing from \(d-1\) to \(d\) we split some false neighborhoods apart (hence, dissimilarity of dynamics), but by passing from \(d\) to \(d+1\) there is no difference, because there is no false neighborhood left to be separated. ResultsResults are presented in Figure 7. In general, the outputs of all methods are consistent. When the one-dimensional embedding, \(\mathcal{P}_{1}\), is compared with two-dimensional embedding, \(\mathcal{P}_{2}\), we get large comparison values \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline methodtest & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{1}_{x,1}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{1}_{x,2}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{1}_{x,3}\) & \(\mathcal{L}_{1}\) vs. 
\(\mathcal{P}^{1}_{z,1}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{1}_{z,3}\) \\ \hline FNN (r=3) & \(\begin{array}{c}0.0\\ 1.0\end{array}\) & \(\begin{array}{c}0.0\\ 362\end{array}\) & \(\begin{array}{c}.05\\.196\end{array}\) & \(\begin{array}{c}0.0\\ 1.0\end{array}\) & \(\begin{array}{c}.111\\.541\end{array}\) \\ \hline KNN (k=5) & \(\begin{array}{c}.019\\.465\end{array}\) & \(\begin{array}{c}.003\\.036\end{array}\) & \(\begin{array}{c}.003\\.007\end{array}\) & \(\begin{array}{c}.024\\.743\end{array}\) & \(\begin{array}{c}.002\\.519\end{array}\) \\ \hline ConjTest (k=5, t=10) & \(\begin{array}{c}0.0\\ 0.0\end{array}\) & \(\begin{array}{c}0.0\\ 0.0\end{array}\) & \(\begin{array}{c}0.0\\ 0.0\end{array}\) & \(\begin{array}{c}0.0\\ 0.0\end{array}\) & \(\begin{array}{c}0.0\\ 0.0\end{array}\) & \(\begin{array}{c}0.0\\ 0.0\end{array}\) \\ \hline ConjTest\({}^{+}\) & \(\begin{array}{c}.330\\ (k=5, t=10)\end{array}\) & \(\begin{array}{c}.030\\.401\end{array}\) & \(\begin{array}{c}.030\\.087\end{array}\) & \(\begin{array}{c}.024\\.051\end{array}\) & \(\begin{array}{c}.406\\.396\end{array}\) & \(\begin{array}{c}.046\\.407\end{array}\) \\ \hline \end{tabular} \end{table} Table 4: Comparison of conjugacy measures for time series generated by the Lorenz system. The number in the upper left part of the cell corresponds to a comparison of \(\mathcal{L}_{1}\) vs. the second time series, while the lower right number corresponds to the inverse comparison. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline methodtest & \(\mathcal{L}_{1}\) vs. \(\mathcal{L}_{2}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{2}_{x,1}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{2}_{x,2}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{2}_{x,3}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{2}_{z,1}\) & \(\mathcal{L}_{1}\) vs. \(\mathcal{P}^{2}_{z,3}\) \\ \hline FNN (r=3) & \(\begin{array}{c}.995\\.996\end{array}\) & \(\begin{array}{c}.955\\.996\end{array}\) & \(\begin{array}{c}.987\\ 1.0\end{array}\) & \(\begin{array}{c}.991\\.996\end{array}\) & \(\begin{array}{c}.963\\.996\end{array}\) & \(\begin{array}{c}.996\\ 1.0\end{array}\) & \(\begin{array}{c}.996\\ 997.\end{array}\) \\ \hline KNN (k=5) & \(\begin{array}{c}.822\\.827\end{array}\) & \(\begin{array}{c}.826\\.832\end{array}\) & \(\begin{array}{c}.829\\.826\end{array}\) & \(\begin{array}{c}.829\\.825\end{array}\) & \(\begin{array}{c}.823\\.828\end{array}\) & \(\begin{array}{c}.820\\.833\end{array}\) \\ \hline ConjTest (k=5, t=10) & \(\begin{array}{c}.010\\.009\end{array}\) & \(\begin{array}{c}.236\\.012\end{array}\) & \(\begin{array}{c}.016\\.010\end{array}\) & \(\begin{array}{c}.010\\.009\end{array}\) & \(\begin{array}{c}.391\\.012\end{array}\) & \(\begin{array}{c}.017\\.009\end{array}\) \\ \hline ConjTest\({}^{+}\) (k=5, t=10) & \(\begin{array}{c}.020\\.017\end{array}\) & \(\begin{array}{c}.331\\.400\end{array}\) & \(\begin{array}{c}.039\\.092\end{array}\) & \(\begin{array}{c}.033\\.056\end{array}\) & \(\begin{array}{c}.431\\.392\end{array}\) & \(\begin{array}{c}.060\\.404\end{array}\) \\ \hline \end{tabular} \end{table} Table 5: Comparison of conjugacy measures for time series generated by the Lorenz system. The number in the upper left part of the cell corresponds to a comparison of \(\mathcal{L}_{1}\) vs. the second time series, the lower right corresponds to the symmetric comparison. for the entire range of every parameter. When we compare \(\mathcal{P}_{2}\) with \(\mathcal{P}_{3}\) the estimation of dissimilarity drops significantly, i.e. 
we conclude that the time series \(\mathcal{P}_{2}\) and \(\mathcal{P}_{3}\) are more "similar" than \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\). The comparison of \(\mathcal{P}_{3}\) with \(\mathcal{P}_{4}\) still decrease the values suggesting, that the third dimension improves the quality of our embedding. The curve corresponding to \(\mathcal{P}_{4}\) vs. \(\mathcal{P}_{5}\) essentially overlaps \(\mathcal{P}_{3}\) vs. \(\mathcal{P}_{4}\) curve. Thus, the third dimension seems to be a reasonable choice. We can see that FNN (Figure 7 top left), originally designed for this test gives clear answer. However, in the case of KNN (Figure 7 top right) the difference between the yellow and the green curve is rather subtle. Thus, the outcome could be alternatively interpreted with a claim that two dimensions are enough for this embedding. In the case of \(\mathrm{ConjTest}^{+}\) we have two parameters. For the fixed value of \(t=10\) we manipulated the value of \(k\) (Figure 7 bottom left) and the outcome match up with the FNN result. However, the situation is slightly different when we fix \(k=5\) and vary the \(t\) (Figure 7 bottom right). For \(t<30\) results suggests dimension 3 to be optimal for the embedding, but for \(t>40\) the green and the red curve split. Moreover, for \(t>70\), we can observe the beginning of another split of the red (\(\mathcal{P}_{4}\) vs. \(\mathcal{P}_{5}\)) and the violet (\(\mathcal{P}_{5}\) vs. \(\mathcal{P}_{6}\)) curves. Hence, the answer is not fully conclusive. We attribute this effect to the chaotic nature of the attractor. The higher the value of \(t\) the higher the effect. We investigate it further in the following experiment. #### 4.4.3 Experiment 4C In this experiment we investigate the dependence of \(\mathrm{ConjTest}^{+}\) on the choice of value of parameter \(t\). Parameter \(t\) of \(\mathrm{ConjTest}^{+}\) controls how far we push the approximation of a neighborhood of a point \(x_{i}\) (\(U_{k}^{i}\) in (9)) through the dynamics. In the case of systems with a sensitive dependence on initial conditions (e.g., the Lorenz system) we could expect that higher values of \(t\) spread the neighborhood over the attractor. In the consequence, we obtain higher values of \(\mathrm{ConjTest}^{+}\). SetupLet \(p_{1}=(1,1,1)\), \(p_{2}=(2,1,1)\), \(p_{3}=(1,2,1)\), and \(p_{4}=(1,1,2)\). In this experiment we study the following time series: \[\mathcal{L}_{i}=\varrho(f,f^{2000}(p_{i}),10000),\quad\mathcal{P}_{x,d}^{i}= \Pi(\mathcal{L}_{i}^{(1)},d,5),\quad\mathcal{P}_{y,d}^{i}=\Pi(\mathcal{L}_{i}^ {(2)},d,5),\] Figure 7: A comparison of embeddings for consecutive dimensions. Top left: FNN with respect to parameter \(r\). Top right: KNN with respect to parameter \(k\). Bottom left: ConjTest\({}^{+}\) with respect to parameter \(k\). Bottom right: ConjTest\({}^{+}\) with respect to parameter \(t\). where \(i\in\{1,2,3,4\}\) and \(d\in\{1,2,3,4\}\). We compare the reference time series \(\mathcal{L}_{1}\) with all the others using \(\mathrm{ConjTest}^{+}\) method with the range of parameter \(t\in\{1,5,9,13,17,21,25,30,35,40,45,50,55,60,65,70,75,80\}\). ResultsThe top plot of Figure 8 presents the results of comparison \(\mathcal{L}_{1}\) to time series \(\mathcal{L}_{i}\) and \(\mathcal{P}_{x,d}^{i}\) with \(i\in\{2,3,4\}\) and \(d\in\{1,2,3,4\}\). Red curves correspond to \(\mathcal{L}_{1}\) vs. \(\mathcal{P}_{x,1}^{i}\), green curves to \(\mathcal{L}_{1}\) vs. \(\mathcal{P}_{x,2}^{i}\), blue curves to \(\mathcal{L}_{1}\) vs. 
\(\mathcal{P}_{x,3}^{i}\), and dark yellow curves to \(\mathcal{L}_{1}\) vs. \(\mathcal{P}_{x,4}^{i}\). There are three curves of every color, each one corresponds to a different starting point \(p_{i}\), \(i\in\{2,3,4\}\). The bottom part shows results for comparison of \(\mathcal{L}_{1}\) to \(\mathcal{P}_{y,d}^{i}\) (we embed the \(y\)-coordinate time series instead of \(x\)-coordinate). The color of the curves is interpreted analogously. Black curves on both plots are the same and correspond to the comparison of \(\mathcal{L}_{1}\) with \(\mathcal{L}_{j}\) for \(j\in\{2,3,4\}\). As expected, we can observe a drift toward higher values of \(\mathrm{ConjTest}^{+}\) as the value of parameter \(t\) increases. Let us recall that \(U_{i}^{k}\) in (9) is a \(k\)-element approximation of a neighborhood of a point \(x_{i}\). The curve reflects how the image of \(U_{i}^{k}\) under \(f^{t}\) gets spread across the attractor with more iterations. In consequence, a 2D embedding with \(t=10\) might get lower value than 3D embedding with \(t=40\). Nevertheless, Figure 8 (top) shows consistency of the results across the tested range of values of parameter \(t\). Red curves corresponding to 1D embeddings gives significantly higher values then the others. We observe the strongest drop of values for 2D embeddings (green curves). The third dimension (blue curves) does not improve the situation essentially, except for \(t\in[1,25]\). The curves corresponding to 4D embeddings (yellow curves) overlaps those of 3D embeddings. Thus, the 4D embedded system does not resemble the Lorenz attractor essentially better than the 3D embedding. It agrees with the analysis in the Experiment 4B. The \(y\)-coordinate embeddings presented in the bottom part of Figure 8 give similar results. However, we can see that gaps between curves corresponding to different dimensions are more visible. Moreover, the absolute level of all curves is higher. We interpret this outcome with a claim that the \(y\)-coordinate inherits a bit less information about the original system than the \(x\)-coordinate. In Figure 6 we can see that \(y\)-embedding is more twisted in the center of the attractor. Hence, generally values are higher, and more temporal information is needed (reflected by higher embedding dimension) to compensate. Note that the comparison of \(\mathcal{L}_{1}\) to any embedding \(\mathcal{P}_{x,d}^{i}\) is always signifi cantly worse than comparison of \(\mathcal{L}_{1}\) to any \(\mathcal{L}_{j}\). This may suggests that any embedding is not perfect. ### Example: rotation on the Klein bottle In the next example we consider the Klein bottle, denoted \(\mathbb{K}\) and defined as an image \(\mathbb{K}:=\operatorname{im}\beta\) of the map \(\beta\): \[\beta:[0,2\pi)\times[0,2\pi)\ni\begin{bmatrix}x\\ y\end{bmatrix}\mapsto\begin{bmatrix}\cos\frac{x}{2}\cos y-\sin\frac{x}{2}\sin(2 y)\\ \sin\frac{x}{2}\cos y+\cos\frac{x}{2}\sin(2y)\\ 8\cos x(1+\frac{\sin y}{2})\\ 8\sin x(1+\frac{\sin y}{2})\end{bmatrix}\in\mathbb{R}^{4}. \tag{24}\] In particular, the map \(\beta\) is a bijection onto its image and the following "rotation map" \(f_{[\phi_{1},\phi_{2}]}:\mathbb{K}\to\mathbb{K}\) over the Klein bottle is well-defined: \[f_{[\phi_{1},\phi_{2}]}(x):=\beta\left(\beta^{-1}(x)+\begin{bmatrix}\phi_{1} \\ \phi_{2}\end{bmatrix}\mod 2\pi\right).\] #### 4.5.1 Experiment 5A We conduct an experiment analogous to Experiment \(4B\) on estimating the optimal embedding dimension of a projection of the Klein bottle. 
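Before the setup, the following sketch shows one way to generate a trajectory of this rotation: the state is kept in the angle coordinates, where the rotation is simply addition modulo \(2\pi\), and is pushed through \(\beta\) of Eq. (24) only when a point in \(\mathbb{R}^{4}\) is needed, which avoids inverting \(\beta\) numerically. The angle-space starting point \((0,0)\) and the final observable line are our assumptions and need not match the authors' exact choices in the setup below.

```python
import numpy as np


def klein_beta(x, y):
    """The parametrization beta of Eq. (24), mapping angles (x, y) in [0, 2*pi)^2 into R^4."""
    return np.array([
        np.cos(x / 2) * np.cos(y) - np.sin(x / 2) * np.sin(2 * y),
        np.sin(x / 2) * np.cos(y) + np.cos(x / 2) * np.sin(2 * y),
        8 * np.cos(x) * (1 + np.sin(y) / 2),
        8 * np.sin(x) * (1 + np.sin(y) / 2),
    ])


def klein_rotation_angles(phi1, phi2):
    """The rotation f_[phi1, phi2] expressed in angle coordinates; beta then yields points on K."""
    return lambda ang: (ang + np.array([phi1, phi2])) % (2 * np.pi)


phi1, phi2 = np.sqrt(2) / 10, np.sqrt(3) / 10
angles = trajectory(klein_rotation_angles(phi1, phi2), np.zeros(2), 8000)
K = np.array([klein_beta(a, b) for a, b in angles])   # time series on the Klein bottle in R^4
s = K.sum(axis=1) / 4                                 # averaged observable used in the setup below
```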
**Setup.** We generate the following time series \[\mathcal{K}=\varrho(f_{[\phi_{1},\phi_{2}]},(0,0,0,0),8000),\] \[\mathcal{P}_{d}=\Pi\left((\mathcal{K}^{(1)}+\mathcal{K}^{(2)}+\mathcal{K}^{(3)}+\mathcal{K}^{(4)})/4,d,8\right),\] where \(\phi_{1}=\frac{\sqrt{2}}{10}\), \(\phi_{2}=\frac{\sqrt{3}}{10}\), \(d\in\{2,3,4,5\}\), and \(\mathcal{K}^{(i)}\) denotes the projection onto the \(i\)-th coordinate. Note that in the previous experiments we mostly used a simple observable \(s\), namely a projection onto a given coordinate. However, in general one can consider any (smooth) function as an observable. Therefore, in the current experiment, in the definition of \(\mathcal{P}_{d}\), \(s\) is the average of all the coordinates, not the projection onto a chosen one. Note also that, because of the symmetries (see formula (24)), a single coordinate might not be enough to reconstruct the Klein bottle.

Figure 8: Dependence of \(\mathrm{ConjTest}^{+}\) on the parameter \(t\) for the Lorenz system. In this experiment multiple time series with different starting points were generated; each of them was used to produce an embedding. Top: comparison of the \(x\)-coordinate embeddings with \(\mathcal{L}_{1}\). Bottom: comparison of the \(y\)-coordinate embeddings with \(\mathcal{L}_{1}\). See the text for more explanation.

**Results.** We can proceed with an interpretation similar to that of Experiment 4B. The FNN results (Figure 9, top left) suggest that 4 is a sufficient embedding dimension. A similar conclusion follows from KNN (Figure 9, top right) and from \(\text{ConjTest}^{+}\) with a fixed parameter \(k=10\) (Figure 9, bottom right). The bottom left panel of Figure 9 is inconclusive for higher values of \(k\), as the curves do not stabilize even at high embedding dimensions. Note that increasing the parameter \(t\) in \(\text{ConjTest}^{+}\) (Figure 9, bottom right) does not result in a drift of values as in Figure 7 (bottom right). In contrast to the Lorenz system studied in Experiment 4B, the rotation on the Klein bottle is not sensitive to initial conditions.

Figure 9: A comparison of the conjugacy measures for embeddings of the Klein bottle for consecutive dimensions. Top left: FNN with respect to parameter \(r\). Top right: KNN with respect to parameter \(k\). Bottom left: \(\text{ConjTest}^{+}\) with respect to parameter \(k\) (\(t=10\) fixed). Bottom right: \(\text{ConjTest}^{+}\) with respect to parameter \(t\) (\(k=5\) fixed).

## Discussion and Conclusions

There is a considerable gap between theory and practice when working with dynamical systems: in theoretical considerations, the exact formulas describing the considered system are always known, yet in biology, economics, medicine, and many other disciplines, those formulas are unknown; only a finite sample of the dynamics is given. This sample contains either a sequence of points in the phase space, or a one-dimensional time series obtained by applying an observable function to a trajectory of the unknown dynamics. This paper provides tools, FNN, KNN, ConjTest, and \(\mathrm{ConjTest}^{+}\), which can be used to test how similar two dynamical systems are, knowing them only through a finite sample. A proof of consistency of some of the presented methods is given. The first method, the FNN distance, is a modification of the classical False Nearest Neighbor technique designed to estimate the embedding dimension of a time series. The second one, the KNN distance, has been proposed as an alternative to FNN that takes into account a larger neighborhood of a point, not only the nearest neighbor.
The conducted experiments show a strong similarity of FNN and KNN methods. Additionally, both methods admit similar requirements with respect to the time series being compared: they should have the same length and their points should be in the exact correspondence, i.e., we imply that an \(i\)-th point of the first time series is a dynamical counterpart of the \(i\)-th point of the second time series. An approximately binary response characterizes both methods in the sense that they return either a value close to 0 when the compared time series come from conjugate systems, or a significantly higher, non-zero value in the other case. This rigidness might be advantageous in some cases. However, for most empirical settings, due to the presence of various kind of noises, FNN and KNN may fail to recognize similarities between time series. Consequently, these two methods are very sensitive to any perturbation of the initial condition of time series as well as the parameters of the considered systems. However, KNN, in contrast to FNN, admits robustness on a measurements noise as presented in Experiment 1C. On the other hand, FNN performs better than KNN in estimating the sufficient embedding dimension (Experiments 4B, 5A). Moreover, the apparently clear response given by FNN and KNN tests might not be correct (see Experiment 1\(A\), \(\mathcal{R}_{1}\) vs. \(\mathcal{R}_{4}\)). Both ConjTest and ConjTest\({}^{+}\) (collectively called ConjTest methods) are directly inspired by the definition and properties of topological conjugacy. They are more flexible in all considered experiments and can be applied to \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Property Method & FNN & KNN & ConjTest & ConjTest\({}^{+}\) \\ \hline Requirements & identical matching between indexes of the elements (in particular the series must be of the same length) & & & * an exact correspondence between the two time series is not needed \\ & & & * allow examining arbitrary, even very complicated potential relations between the series \\ & & & * can be used for comparison of time series of different length \\ & & & * require defining the possible (semi-)conjugacy \(h\) at least locally i.e. giving the corresponding relation between indexes of the elements of the two series \\ \hline Parameters & only one parameter: \(r\) (but one should examine large interval of \(r\) values) & only one parameter: \(k\) (but is recommend to check a couple of different \(k\) values) & involve two parameters: \(k\) and \(t\) \\ \hline Robustness & & less robust to noise and perturbation than ConjTest methods & * more robust to noise and perturbation \\ & & give nearly a binary output & * the returned answer depends continuously on the level of perturbation and noise compared to the binary response given by FNN or KNN \\ \hline Recurrent properties & takes into account only the one closest return of a series (trajectory) to each neighborhood & takes into account \(k\)-closest returns \\ \hline Further properties & & more likely to give false positive answer than ConjTest\({}^{+}\) & more computationally demanding than ConjTest but usually more reliable \\ \hline \end{tabular} \end{table} Table 6: Comparison of the properties of discussed conjugacy measures. time series of different lengths and generated by different initial conditions (the first point of the series). 
In contrast to FNN and KNN, they admit more robust behavior with respect to any kind of perturbation, be it measurement noise (Experiment \(1C\)), perturbation of the initial condition (Experiments 1A, 2A, 3A, and 4A), \(t\) parameter (Experiment \(4C\)), or a parameter of a system (Experiment \(1B\)). In most experiments, we can observe a continuous-like dependence of the test value on the level of perturbations. We see this effect as softening the concept of topological conjugacy by ConjTest methods. A downside of this weakening is a lack of definite response whether two time series come from conjugate dynamical systems. Hence the ConjTest methods should be considered as a means for a quantification of a dynamical similarity of two processes. Experiments 1A, 2A, and 3A show that both methods, ConjTest and \(\mbox{ConjTest}^{+}\), capture essentially the same information from data. In general, ConjTest is simpler and, thus, computationally more efficient. However, Experiment 4A shows that ConjTest (in contrast to \(\mbox{ConjTest}^{+}\)) does not work well in the context of embedded time series, especially when the compared embeddings are constructed from the same time series. Experiments 4B and 5A show that the variation of ConjTest methods with respect to the \(t\) parameter can also be used for estimating a good embedding dimension. Further comparison between ConjTest and \(\mbox{ConjTest}^{+}\) reveals that \(\mbox{ConjTest}^{+}\) is more computationally demanding than \(\mbox{ConjTest}\), but also more reliable. Indeed, in our examples with rotations on the circle and torus and with the logistic map, both these tests gave nearly identical results, but the examples with the Lorenz system show that ConjTest is more likely to give a false positive answer. This is due to the fact that ConjTest works well if the map \(h\) connecting time series \(\mathcal{A}\) and \(\mathcal{B}\) is a reasonably good approximation of the true conjugating homeomorphism, but in case of embeddings and naive, point-wise connection map, as in some of our examples with Lorenz system, the Hausdorff distance in formula (7) might vanish resulting in false positive. The advantages of ConjTest and \(\mbox{ConjTest}^{+}\) methods come with the price of finding a connecting map relating two time series. When it is unknown, in the simplest case, one can try the map \(h\) which is defined only locally i.e. on points of the time series and provide an order- preserving matching of indexes of corresponding points in the time series. The simplest example of such a map is an identity map between indices. The question of finding an optimal matching is, however, much more challenging and will be a subject of a further study. A convenient summary of the presented methods is gathered in Table 6. ## Acknowledgments The authors gratefully acknowledge the support of Dioscuri program initiated by the Max Planck Society, jointly managed with the National Science Centre (Poland), and mutually funded by the Polish Ministry of Science and Higher Education and the German Federal Ministry of Education and Research. J S-R was also supported by National Science Centre grant 2019/35/D/ST1/02253.
2305.05975
Diversity of information pathways drives scaling and sparsity in real-world networks
Empirical complex systems must differentially respond to external perturbations and, at the same time, internally distribute information to coordinate their components. While networked backbones help with the latter, they limit the components' individual degrees of freedom and reduce their collective dynamical range. Here, we show that real-world networks are formed to optimize the gain in information flow and loss in response diversity. Encoding network states as density matrices, we demonstrate that such a trade-off mathematically resembles the thermodynamic efficiency characterized by heat and work in physical systems. Our findings explain, analytically and numerically, the sparsity and the empirical scaling law observed in hundreds of real-world networks across multiple domains. We show, through numerical experiments in synthetic and biological networks, that ubiquitous topological features such as modularity and small-worldness emerge to optimize the above trade-off for middle- to large-scale information exchange between system's units. Our results highlight that the emergence of some of the most prevalent topological features of real-world networks have a thermodynamic origin.
Arsham Ghavasieh, Manlio De Domenico
2023-05-10T08:35:42Z
http://arxiv.org/abs/2305.05975v1
# Diversity of information pathways drives scaling and sparsity in real-world networks ###### Abstract Empirical complex systems must differentially respond to external perturbations [1, 2] and, at the same time, internally distribute information to coordinate their components [3, 4]. While networked backbones help with the latter [5], they limit the components' individual degrees of freedom and reduce their collective dynamical range [6]. Here, we show that real-world networks are formed to optimize the gain in information flow and loss in response diversity. Fondazione Bruno Kessler, Via Sommarive 18, 38123 Povo, Italy Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo, Italy Department of Physics and Astronomy "Galileo Galilei", University of Padua, Via F. Marzolo 8, 315126 Padova, Italy Padua Center for Network Medicine, University of Padua, Via F. Marzolo 8, 315126 Padova, Italy Istituto Nazionale di Fisica Nucleare, Sez. Padova, Italy \(*\) Corresponding author: [email protected], [email protected] **Empirical complex systems must differentially respond to external perturbations [1, 2] and, at the same time, internally distribute information to coordinate their components [3, 4]. While networked backbones help with the latter [5], they limit the components' individual degrees of freedom and reduce their collective dynamical range [6]. Here, we show that real-world networks are formed to optimize the gain in information flow and loss in response diversity. Encoding network states as density matrices [7, 8], we demonstrate that such a trade-off mathematically resembles the thermodynamic efficiency characterized by heat and work in physical systems. Our findings explain, analytically and numerically, the sparsity and the empirical scaling law observed in hundreds of real-world networks across multiple domains. We show, through numerical experiments in synthetic and biological networks, that ubiquitous topological features such as modularity and small-worldness emerge to optimize the above trade-off for middle- to large-scale information exchange between system's units. Our results highlight that the emergence of some of the most prevalent topological features of real-world networks have a thermodynamic origin.** Real-world networks are usually sparsely connected [9] - i.e., the number of existing links between units is much smaller than the one of potentially available links - and exhibit peculiar topological properties like heterogeneous connectivity [10, 11, 12], small-worldness [5], modularity [13] and hierarchy [14] as well as a balance between segregation and integration [15] or order and disorder [16]. Several network growth mechanisms [17, 18, 19, 20, 21, 22, 23, 24], as well as methods not directly based on growth [25, 26, 27], have been proposed to replicate these features. However, a theoretical framework to explain why a certain transition from the disconnected state to a relatively stable wiring configuration is naturally favored is still lacking. This transition is observed in a variety of complex biological systems: communication among units - that can be understood in terms of information exchange of chemical, electric or electrochemical signals, as well as binary packets or language - allow the system to start operating and functioning. For instance, human oral bacteria convey information within multispecies communities via signaling, such as adhesins and receptors, allowing for adherence and community development [28]. 
Once a fungal colony is established the cellular network use communication signals to regulate colony growth and development [29, 30]. Broadly speaking, information search and exchange plays an important role, still to be fully uncovered, in the formation, adaptation and evolution of living, synthetic or engineered complex systems [31], which have to balance dynamic functions for a rapid response to internal and external perturbations with the energetic cost of the intervening actions [32]. In equilibrium and near-equilibrium statistical physics the above questions are naturally answered by fundamental principles, like the Gibbs entropy maximization or the free-energy minimization [33, 34, 35]. Conversely, for complex systems that are open and far from equilibrium, we lack an adequate theoretical framework to describe, explain and predict state transitions. Consequently, it is not clear how the aforementioned topological features observed in several real-world networks might emerge, and to which extent they are a causal byproduct of fundamental mechanisms related to how information between units is communicated and at which cost. In the following, we will firstly introduce a theoretical framework for the analysis of network formation processes. Successively, we will show how this framework allows us to make predictions about specific topological features of complex networks, such as their connectedness and density, the coexistence of segregation and integration, as well as the coexistence of topological order and disorder responsible for the emergence of small-worldness. Finally, we compare our predictions for topological sparsity against hundreds of empirical networks, finding an excellent agreement between theoretical expectation and data. **Network formation process.** Let \(G\) be the structure of a complex system, modeled as a network of \(N\) nodes and \(|E|\) connections, respectively. The links between network units are often encoded into an adjacency matrix \(\mathbf{A}\), where \(A_{ij}=1\) if nodes \(i\) and \(j\) are connected, and \(A_{ij}=0\) otherwise. System's units can exchange information in several ways, e.g. to create, destroy or efficiently use links in order to reach a stable functional regime, which is usually out of equilibrium because the system has to adapt to a changing environment. In the following we will generally refer to information exchange to characterize any type of signal that can be used for unit-unit communication from short- to long-range scales. A variety of dynamical processes has been used to model the flow of information between the nodes [36, 37, 38]. Yet, one of the simplest and most versatile dynamical process that can be used to model the flow of information through the networks is diffusion [39, 40], which is governed by the Laplacian matrix \(\mathbf{L}=\mathbf{D}-\mathbf{A}\) where \(\mathbf{D}\) is a diagonal matrix, with \(D_{ii}=k_{i}\) -- i.e., degree--being the connectivity of node \(i\). The solution of the diffusion equation (See Methods) on top of networks can be understood in terms of a time evolution operator \(e^{-\tau\mathbf{L}}\), where \(\left(e^{-\tau\mathbf{L}}\right)_{ij}\) proxies the flow of information from node \(j\) to \(i\) and \(\tau\) determines the propagation scale. For instance, when \(\tau\) is small signals travel mostly between neighboring nodes, whereas large values of \(\tau\) enable long-range communication. 
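As a minimal numerical sketch of this propagator (the graph, its size, and the \(\tau\) values below are arbitrary choices for illustration, not those used in the paper), one can build the Laplacian of a small network and evaluate \(e^{-\tau\mathbf{L}}\) at a short and a long propagation scale:

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.erdos_renyi_graph(n=50, p=0.08, seed=1)
A = nx.to_numpy_array(G)
L = np.diag(A.sum(axis=1)) - A        # Laplacian L = D - A

for tau in (0.1, 10.0):
    K = expm(-tau * L)                # (K)_{ij} proxies the flow from node j to i
    # Each row of K sums to 1; at small tau, K is close to the identity
    # (signals stay local), while at large tau it spreads over the component.
    print(tau, K[0, :5].round(3), K.sum(axis=1)[:3].round(3))
```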
The state of networks can be suitably encoded into density matrices, \(\mathbf{\rho}_{\tau}=e^{-\tau\mathbf{L}}/Z\)[6, 7], with \(Z=\text{Tr}\left(e^{-\tau\mathbf{L}}\right)\) being the partition function. Network density matrices have been used to tackle a broad range of problems, from classification of healthy and disease states [41, 42] and robustness analysis [43, 44] to dimensionality reduction of multilayer networks [45, 46] and renormalization group [8]. This relatively new framework is versatile because it is grounded in the physics of linear or non-linear response to the stochastic perturbations that occur at different locations and propagate throughout the links. Remarkably, the network Von Neumann entropy [7] defined by \(\mathcal{S}=-\text{Tr}\left(\mathbf{\rho}_{\tau}\log\mathbf{\rho}_{\tau}\right)\) measures how diverse the system responds to perturbations [6], and the network free energy, \(F=-\log Z/\tau\), is a measure of how fast the signal can transport between the nodes [46]. This framework has been also used to characterize information cores in empirical networks [47] and multiscale brain functional connectivity [48]. Here, we characterize the network formation process as a transition between an initial state \(G_{0}\) - a collection of \(N\) nodes with no links among them, \(|E|=0\) - and another network \(G_{1}\), characterized by topological connectivity with certain properties. We show (See Methods) that this process can only lower the response diversity (\(\delta\mathcal{S}\leq 0\)), whereas it enhances the transport properties (\(\delta F\geq 0\)). To compare the two factors, we define a gain function \(W=\delta F\) and a loss function \(Q=\delta\mathcal{S}/\tau\), showing that \(W\geq|Q|\) (See Methods), and their relative trade-off: \[\eta=1-\frac{|Q|}{W}. \tag{1}\] Note that these quantities characterize the network formation process in terms of perturbation propagation and the diversity of systems' response. It is worth noticing that, from a mathematical perspective, they resemble the thermodynamic functions of heat \(Q\) and work \(W\), although such concepts cannot be directly extended to networks. In Fig. 1, an emblematic example is shown, where the response diversity and signal flow are compared among four graphs with different connectivities. \(\eta\) **and the signal propagation scale.** How deep the perturbations penetrate through networks depends on many factors, including the underlying topology, conductance, and the type of signals. Here, we use \(\tau\), the temporal parameter determining the scale of propagation, to characterize such a depth. For instance, at very small scales \((\tau\ll 1)\), the perturbation can not propagate through the system, whereas \(\tau\gg 1\) let the perturbations flow between topologically distant nodes. Firstly, we Taylor expand the generalized heat and work functions while keeping only the leading terms. We show that at very small scales, where signals are contained locally \((\tau\approx 0)\), all network formation processes are indistinguishable, being characterized by \(\eta=1\) (See Methods). Remarkably, the network topology becomes irrelevant to system's functionality during this regime. Secondly, we find that networks with fewer links, in the linear regime \(\tau\ll 1\), exhibit higher values of \(\eta\). 
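A minimal numerical sketch of these definitions is given below, assuming (as described above) that the initial state \(G_{0}\) is the edgeless graph on the same \(N\) nodes, for which \(\mathbf{L}=0\), \(Z=N\), \(\mathcal{S}=\log N\) and \(F=-\log(N)/\tau\); the trace is evaluated from the Laplacian spectrum. The graph used in the example is an arbitrary choice.

```python
import numpy as np
import networkx as nx

def eta(G, tau):
    """Trade-off eta = 1 - |Q|/W for forming G out of the edgeless graph G0."""
    N = G.number_of_nodes()
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    w = np.exp(-tau * lam)
    Z = w.sum()
    p = w / Z                             # eigenvalues of rho_tau = exp(-tau L)/Z
    p = p[p > 0]
    S1 = -(p * np.log(p)).sum()           # Von Neumann entropy of G
    F1 = -np.log(Z) / tau                 # free energy of G
    S0, F0 = np.log(N), -np.log(N) / tau  # edgeless initial state G0
    W = F1 - F0                           # gain in information flow (>= 0)
    Q = (S1 - S0) / tau                   # loss in response diversity (<= 0)
    return 1.0 - abs(Q) / W

G = nx.watts_strogatz_graph(1000, 8, 0.1, seed=0)
print([round(eta(G, t), 3) for t in (0.1, 1.0, 10.0)])
```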
Finally, a second-order correction reveals the importance of connectivity: assuming that the number of links in a network scales with the number of nodes as \(|E|=cN^{\gamma}\), where \(c\) is a constant, we find that the optimal exponent which maximizes \(\eta\) by \(\partial_{\gamma}\eta=0\) is obtained for \(\gamma=1\). Therefore, by requiring that \(\eta\) is maximized by a network formation process, we naturally derive that the corresponding connectivity distribution must be sparse and scale with a specific exponent.

Figure 1: **Response diversity and information propagation.** \(4\) simple graphs are considered, with different connectivities, and their response to different environmental perturbations is shown, with the perturbed nodes colored in red. For convenience, a simple dynamical rule is considered for the propagation of perturbations, where each perturbed node conveys the perturbation only to its neighbors. Trivially, connectivity enhances the flow of perturbations, captured by the monotonic increase of \(\delta F\) (See the text). For isolated nodes, the system's responses to different perturbations have no similarity (overlap). However, the overlap increases due to connectivity, reducing the diversity of responses. This is captured by the monotonically decreasing \(\delta\mathcal{S}\) (See the text).

Figure 2: **The effect of topological features on \(\eta\).** (a) Connectivity versus disconnectivity, characterized by the emergence of a giant connected component, modeled by Erdős-Rényi random networks for different wiring probability \(p\). (b) Segregation versus integration, modeled by stochastic block networks for different mixing parameter \(\mu\). (c) Order versus disorder, modeled by Watts-Strogatz small-world networks with degree \(8\) and rewiring probability \(p_{rew}\). In all three cases, different propagation scales \(\tau\) are considered in the heatmap (top panels), while characteristic line curves have been separately shown for fixed values of \(\tau\) (bottom panels) to better visualize transitions in \(\eta\). In all cases, the size of synthetic networks is \(N=10^{3}\).

**The effect of topological features on \(\eta\).** Here we study how some of the most prominent topological features of real-world networks affect \(\eta\). More technically, these features include connectivity, integration versus segregation, and order versus disorder, as captured by synthetic networks (See Fig. 2). First, we use Erdős-Rényi (ER) networks [49, 50], where the existence of links between every pair of nodes is independent and given by a connectivity probability \(p\), to study the behavior of \(\eta\) for varying \(p\). Results indicate that being connected while sparse ensures high \(\eta\): maximum \(\eta\) is found between \(p\propto N^{-1}\) and \(p\propto N^{-1}\log N\) (See Fig. 2). It is worth noting that in the thermodynamic limit \(N\rightarrow\infty\), the difference between these two probabilities becomes negligible (\(N^{-1}\approx N^{-1}\log N\)), and the optimal probability can be approximated as \(p\approx N^{-1}\). Therefore, given that in the ER model the number of links is determined by the connectivity probability (\(p\)) as \(|E|=p\binom{N}{2}\), the numerical result validates our theoretical prediction of maximum \(\eta\) at \(|E|\propto N^{\gamma}\), with \(\gamma\approx 1\).
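Using the \(\eta\) helper sketched above, this qualitative behaviour can be probed numerically by sweeping the ER wiring probability around \(1/N\). The network size, \(\tau\), and the number of realisations below are arbitrary illustrative choices, not the settings used for the figures.

```python
import numpy as np
import networkx as nx

# Assumes the eta() helper defined in the previous sketch is in scope.
N, tau = 300, 5.0
for p in (0.5 / N, 1.0 / N, np.log(N) / N, 5 * np.log(N) / N, 0.2):
    vals = [eta(nx.erdos_renyi_graph(N, p, seed=s), tau) for s in range(5)]
    print(f"p = {p:.4f}   eta = {np.mean(vals):.3f}")
```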
Figure 3: **Scaling in empirical networks.** The logarithm of the number of edges \(|E|\) and nodes \(N\) is represented for \(543\) empirical networks, where the best fitting exponent is \(\gamma=1.07\pm 0.02\). Similar analyses have been done for each network domain (informational, social, biological, technological, economic, and transportation.) separately and are represented in small blocks. The exponents are reported on top of each block. Second, we consider a special type of stochastic block networks [51]-- i.e., models reproducing groups or communities of nodes that are randomly connected internally with probability \(p_{in}\) and externally with probability \(p_{out}\)-- with \(10\) communities and average degree \(\bar{k}=10\). This class of models is useful to investigate the mesoscale organization of complex networks. We vary the mixing parameter, defined as \(\mu=\frac{k_{out}}{k_{in}+k_{out}}\), where \(k_{in}\) is the average number of connections a node has with the nodes in its community, and \(k_{out}\) is the average number of external connections. Values of \(\mu\ll 1\) generate a highly segregated network with most connections existing only within a community, while \(\mu\approx 1\) gives a network with no particular community structure, resembling ER models. In this case, at small scales (\(\tau\ll 1\)), the integrated communities exhibit the highest \(\eta\), whereas at larger scales, the trade-off between integration and segregation - with \(\log\mu\) between \(-6\) and \(-2\) - is favored. Therefore, our results confirm that in systems which are either too segregated or too integrated, the trade-off between gain in information flow and loss in response diversity is not optimal. Third, we consider small-world networks-- i.e., models where nodes are initially connected in a lattice-like regular pattern, and then the links are shuffled with a given rewiring probability \(p_{rew}\) to introduce topological shortcuts-- with average degree \(\bar{k}=8\). Results indicate that at small scales, \(\eta\) is higher in disordered networks (\(p_{rew}\approx 1\)), while at larger scales, ordered structures (\(p_{rew}\approx 0\)) work better than disordered ones, with the optimal \(\eta\) found in a middle regime between order and disorder exhibiting high small-worldness [5]. Once again, our framework provides an elegant explanation of why small-worldness is so ubiquitous in empirical systems. **Scaling and sparsity of empirical networks.** We have analytically shown that if the number of links scales with the number of nodes as \(|E|=cN^{\gamma}\), the optimal exponent which maximizes \(\eta\) is given by \(\gamma=1\). This derivation implies that the real-world networks must be sparse in a specific way: their average degree must scale as \(\bar{k}=\frac{2|E|}{N}=\frac{2cN^{\gamma}}{N}=2c\), and their connectivity must scale with the number of nodes. Here, we compare this theoretical expectation with empirical data. We analyzed \(543\) social, biological, informational, economic, transportation and technological networks from ICON [52], ranging in size from \(N\approx 10^{2}\) to \(N\approx 10^{8}\) nodes. Assuming a power-law ansatz between network size and the number of links, \(|E|\propto N^{\gamma}\), we perform a linear regression in the log-log space to obtain the scaling exponent. We find that the overall best fit is for \(\gamma=1.07\pm 0.02\), while the proximity to \(\gamma=1\) result is relatively stable across domains (See Fig. 
3), confirming our analytical and numerical prediction of \(|E|\propto N^{1}\), with error bounds compatible with random topological fluctuations in network formation and noise in the data set. Furthermore, we analyze hundreds of biological networks, from fungi to the human, C Elegans, and Ciona intestinalis connectomes (See Methods, Fig. Extended Data 1 and Fig. Extended Data 2). A direct comparison between these empirical networks and different ensembles of null models - namely, Erdos-Renyi (ER) and Configuration models (CM) - (See Methods), shows that they are characterized by significantly higher \(\eta\) at middle and large propagation scales, with respect to their randomized null models. This finding provides further evidence, supporting the numerical experiments of the previous section (See Fig. 2), that complex systems must exhibit topological correlations to sustain their flow of information and, simultaneously, their pathway diversity at the macroscopic scales. **Discussion.** While an interconnected structure is essential for information to flow between systems' units and for their coordination, it also limits the units' independence. This simple fact restricts their collective dynamical range, posing a significant challenge to complex systems that must diversely respond, and adapt, to unpredictable environments. Therefore, it is of paramount importance for real-world networks to exhibit topological features allowing them to ensure an adequate information flow among units, while simultaneously minimizing the cost of the diversity of available information pathways used for communication across multiple time scales. Here, using network density matrices, we have characterized network formation as a physical process, quantifying the loss in response diversity, gain in information flow, and their trade-off. To this aim, we have shown that it is possible to introduce a new quantity (\(\eta\)) that, mathematically, resembles the thermodynamic efficiency related to heat and work. We analytically and numerically predict that networks must be sparse to optimize the trade-off, with their connectivity following a scaling law \(|E|\propto N^{\gamma}\), with \(\gamma\approx 1\). Our analysis of 543 empirical networks from biological, social, informational and transportation domains, reveals a scaling with exponent \(\gamma\approx 1.07\pm 0.02\) that is strikingly compatible with our the theoretical expectation. Moreover, our numerical experiments show that complex topological features like modularity and small-worldness can maximize \(\eta\) when perturbations propagate beyond the first neighbors of nodes, enabling middle- to long-range communications between system's units. We validated these results with empirical data from hundreds of fungal and neural systems (See Methods, Fig. Extended Data 1 and Fig. Extended Data 2), while comparing against suitable null models that capture distinct topological features of the data. It demonstrates that topological correlations enable a complex network to maximize the trade-off and, therefore, play a crucial role in network formation and function. Clearly, diffusion dynamics coupled with network topology is only a first step toward modeling the flow of information in empirical complex systems. This work opens the doors for future studies engaging nonlinear processes, especially reaction-diffusion, using suitable generalization of our framework. Furthermore, higher-order networks including multilayers [53] and hypergraphs [54], remain to be explored. 
Most importantly, relaxing some of our assumptions in the analytical calculation of the optimal exponent (\(\gamma=1\)), like the mean-field approximation, can lead to even more precise predictions, spurring further interest for future developments of our approach. Overall, our results are compatible with the compelling hypothesis that complex networks shape their topology to optimize the trade-off between maximally exchanging information among their units and guaranteeing an adequate response diversity for middle- to long-range signaling. **Author Contributions** AG and MDD designed the study, performed the theoretical analysis and wrote the manuscript. AG performed the numerical experiments. **Acknowledgements** MDD acknowledges financial support from the Human Frontier Science Program Organization (HFSP Ref. RGY0064/2022), from the University of Padua (PRD-BIRD 2022) and from the EU funding within the MUR PNRR "National Center for HPC, BIG DATA AND QUANTUM COMPUTING" (Project no. CN00000013 CN1).
2306.08901
A cosmic-ray database update: CRDB v4.1
The cosmic-ray database, CRDB, has been gathering cosmic-ray data for the community since 2013. We present a new release, CRDB v4.1, providing many new quantities and data sets, with several improvements made on the code and web interface, and with new visualisation tools. CRDB relies on the mysql database management system, jquery and tsorter libraries for queries and sorting, and PHP web pages and AJAX protocol for displays. A REST interface enables user queries from command line or scripts. A new (pip-installable) CRDB python library is developed and extensive jupyter notebook examples are provided. This release contains cosmic-ray dipole anisotropy data, high-energy $\bar{p}/p$ upper limits, some unpublished LEE and AESOP lepton time series, many more ultra-high energy data, and a few missing old data sets. It also includes high-precision data from the last three years, in particular the hundreds of thousands AMS-02 and PAMELA data time series (time-dependent plots are now enabled). All these data are shown in a gallery of plots, which can be easily reproduced from the public notebook examples. CRDB contains 316,126 data points from 504 publications, in 4111 sub-experiments from 131 experiments.
D. Maurin, M. Ahlers, H. Dembinski, A. Haungs, P. -S. Mangeard, F. Melot, P. Mertsch, D. Wochele, J. Wochele
2023-06-15T07:09:23Z
http://arxiv.org/abs/2306.08901v2
# A cosmic-ray database update+ ###### Abstract Context:The cosmic-ray database, CRDB, has been gathering cosmic-ray data for the community since 2013. Aims:We present a new release, CRDB v4.1, providing many new quantities and data sets, with several improvements made on the code and web interface, and with new visualisation tools. Methods:CRDB relies on the MySQL database management system, jquery and table-sorter libraries for queries and sorting, and PHP web pages and AJAX protocol for displays. A REST interface enables user queries from command line or scripts. A new (pip-installable) CRDB python library is developed and extensive jupyter notebook examples are provided. Results:This release contains cosmic-ray dipole anisotropy data, high-energy \(\bar{p}/p\) upper limits, some unpublished LEE and AESOP lepton time series, many more ultra-high energy data, and a few missing old data sets. It also includes high-precision data from the last three years, in particular the hundreds of thousands AMS-02 and PAMELA data time series (time-dependent plots are now enabled). All these data are shown in a gallery of plots, which can be easily reproduced from the public notebook examples. Conclusions:CRDB contains 314902 data points from 487 publications, in 4092 sub-experiments from 126 experiments. ## 1 Introduction Owing to the quantity and variety of data gathered in cosmic-ray (CR) physics, a central shared database (DB) assuring data quality, completeness, and traceability is an asset for the community. Although the oldest datasets have a historical value mostly, the low-energy data still trace and give a unique perspective on the 11-year Solar cycle (e.g. Ghelfi et al. 2017; Shen et al. 2019), and may also be of unforeseen use in the future. The Cosmic-Ray DataBase1 (CRDB) team has been distributing a growing body of CR data since its first public release in 2013 (Maurin et al. 2014). In a recent update, CRDB v4.0 (Maurin et al. 2020), existing data on (groups of) ultra-heavy elements (\(Z>30\)), upper limits on anti-nuclei (\(Z\leq-2\)), and a selected sample of ultra-high-energy (UHE) CRs from ground-experiments were included. In CRDB v4.0, the DB structure and the submission data format were also revised, and users were provided with a REST interface to extract both CR data and solar modulation levels (in their own codes and scripts), with overall more flexibility and more keywords to select the data queried. Footnote 1: [https://lpsc.in2p3.fr/crdb](https://lpsc.in2p3.fr/crdb) In this release, CRDB v4.1, beside uploading data from the last three years (from AMS-02, CALET, DAMPE, PAMELA, etc.), we take advantage of an agreement with our colleagues from the KCDC2 DB (Haungs et al. 2018) to complete our sample of UHECR data. We also add energy-dependent anisotropy data, including and extending those presented in Ahlers & Mertsch (2017). We also correct the meta-data and provide a few unpublished low-energy leptons and positron fraction data from the LEE, AESOP and AESOP-LITE balloon flights (operated over a 50 year time period). Because an incredibly large body of time-dependent data has been released by the AMS-02 experiment, we provide a new interface to ease the visualisation of these time series; these data are now the most numerous by far in CRDB. One of the main novelty of this release is a new standalone python library for the plotting of CRDB data, which should further ease their distribution and use by the community at large. 
We also took the opportunity of this release to fix some mistakes in the data, meta-data, and to improve the code (behind the scene) and the web interface; the most important changes are documented and available on CRDB's webpage, and briefly described later on. The paper is organised as follows: Sect. 2 recalls the DB structure and the few changes made in this release; Sect. 3 presents the web interface and its novelties, and also introduce the new public python library to query and display CRDB data (outside of the website); Sect. 4 highlights the new data added in this version; we conclude in Sect. 5. ## 2 Database structure In CRDB, data are separated in two broad categories, namely the _data_ (CR data points and data uncertainties) and the _meta-data_ (data about the data): the latter include the data taking periods, the description of the experiment, links to the associated publications, etc. The DB structure, shown in Fig. 1, has only slightly changed since our last release. Its most important features are recalled below, and we use MONOSPACE font to easily identify the DB table names and keys. ### Data points and energy axis (DATA table) Data points are described in the DATA table (see Fig. 1). Each entry has a unique ID and corresponds to a measured VALUE or upper limit (if boolean IS_UPPER_LIMIT set to 1) within an energy bin [E_BIN_L, E_BIN_U] or at the mean energy bin value E_MEAN3. The data point is also associated to a sub-experiment and publication via its SUBEXP_PUBLI_ID key (whose value points at a SUBEXP_PUBLI table entry, see Sect. 2.5). Footnote 3: If only E_MEAN is provided in the publication, we set E_BIN_L = E_BIN_U = E_MEAN. If both E_BIN_L and E_BIN_U are provided but not E_MEAN, we set E_MEAN = (E_BIN_L x E_BIN_U)1/2. Finally, some experiments define their last energy bin as all events above a given energy: in that case, we manually set an upper bin value at least 100 times the lower bin value. To cover the different energy types provided in the original publications, the energy axis (E-AXIS) of each data point must be set to ETOT, EK, R, EKN, or ETOTN. These types correspond to and are given in unit of, respectively, total energy \(E_{\text{tot}}\) in GeV, kinetic energy \(E_{\text{k}}=E_{\text{tot}}-m\) in GeV, rigidity \(\mathcal{R}=pc/(Ze)\) in GV, kinetic energy per nucleon \(E_{\text{k/n}}=E_{\text{k}}/A\) in GeV/n, and total energy per nucleon \(E_{\text{tot/n}}=E_{\text{tot/n}}/A\) in GeV/A. For the data, CRDB enables asymmetric statistical (VALUE_ERRSTAT_L and VALUE_ERRSTAT_U) and systematic (VALUE_ERRSYST_L and VALUE_ERRSYST_U) uncertainties4. Footnote 4: For old data, the distinction was usually not made between the two, and because old measurements were mostly limited by their statistics, the quoted uncertainties in the publications are ascribed to VALUE_ERRSTAT_L and VALUE_ERRSTAT_U. ### Quantities and conversions (CR_QUANTITY table) The measured quantity is either a single CR quantity NUM_ID or a ratio of two CR quantities NUM_ID/DEN_ID, where NUM_ID and DEN_ID point to entries in the CR_QUANTITY table. These entries are identified by an ID (set manually), a SYMBOL, and a NAME. The keys A, Z, and M_AMU (for the atomic mass number, charge, and mass in a.m.u) are non-null for isotopes, only the key Z can be filled for elements, and all keys are set to zero for groups of elements (or compound quantities) and dipole anisotropy data. In CRDB queries, the data conversion from one energy axis to another is enabled (see Table A.1 in Maurin et al.2020). 
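As an illustration of such an energy-axis conversion, the kinematic relation between kinetic energy per nucleon and rigidity for a fully stripped nucleus of mass and charge numbers \(A\) and \(Z\) reads \(\mathcal{R}=(A/Z)\sqrt{E_{\text{k/n}}(E_{\text{k/n}}+2m)}\), with \(m\) the nucleon rest-mass energy and taking \(M\simeq Am\). The sketch below assumes this approximation and a rounded nucleon mass (CRDB stores the actual isotope masses in the M_AMU key), with energies in GeV and rigidities in GV:

```python
import numpy as np

M_NUCLEON = 0.9383  # GeV, approximate nucleon rest-mass energy

def ekn_to_rigidity(ekn, A, Z, m=M_NUCLEON):
    """Kinetic energy per nucleon (GeV/n) -> rigidity (GV), assuming M ~ A*m."""
    return (A / Z) * np.sqrt(ekn * (ekn + 2.0 * m))

def rigidity_to_ekn(R, A, Z, m=M_NUCLEON):
    """Inverse conversion: rigidity (GV) -> kinetic energy per nucleon (GeV/n)."""
    return np.sqrt((Z * R / A) ** 2 + m ** 2) - m

# Example: a 10 GV carbon-12 nucleus (A=12, Z=6); the round trip returns ~10 GV.
ekn = rigidity_to_ekn(10.0, A=12, Z=6)
print(ekn, ekn_to_rigidity(ekn, A=12, Z=6))
```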
The conversion is exact for individual fluxes of CR isotopes or leptons and for ratios of leptons, and also for \(\bar{p}/p\) (this last conversion was not implemented in the previous release), but it is impossible for generic ratios, compound quantities, or anisotropy data. Nevertheless, an approximate conversion can still be enforced for fluxes of elements (or group of elements) if these quantities have a CR isotope proxy; this proxy is enabled via the PROXY_ID key in the CR_QUANTITY table (this key was previously in a separate and redundant table that we removed in this release). ### Meta-data for experiments and modulation level (EXP, SubEXP, and SubEXP_IMAGE tables) Definition and description. CR data are taken from experiments described in the EXP table (see Fig. 1). Each experiment has a TYPE (balloon, ground, or space), a unique ID (set internally in the DB), a name (EXPNAME), a starting year (DATE), and optionally a website (HTML); we stress that the experiment name is mainly used to better regroup and sort sub-experiments in the _Experiments/Data_ website tab. Sub-experiments (SUBEXP table) have an ID and are attached to a single experiment (EXP_ID). They enable to tag and distinguish, for a same experiment: (i) data obtained from different data taking periods; (ii) data taken from distinct sub-detectors or reconstructed from different analysis types; (iii) data obtained using external third-party models or different assump Figure 1: Tables and keys in the MySQL structure of CRDB. The data (energy, values, and uncertainties) are stored in DATA and CR_QUANTITY tables. The meta-data (publication, experiment and sub-experiment names and infos) are stored in the EXP, SUBEXP, SUBEXP_IMAGE, and PUBLI tables, with SUBEXP_PUBLI a bridge table enabling to access and link these various meta-data. The ISOTOP_PROXY table is used to define the rules for energy-axis conversions of CR fluxes (see App. A.4 of Maurin et al.2020 or in the new “Caveats/Tips” web page, see Sect. 3.1). The LOG_QUERIES table keeps track of the number and origin of the visits. tions. Sub-experiments have a NAME5, a short DESCRIPTION (detector or detection technique), additional INFO (e.g. location for balloon flights, GPS coordinates for ground-based detectors, etc.), and an IMAGE_ID (see next). For each sub-experiment, we also provide a single value (set to zero by default) for a possible energy-scale relative uncertainty (ESCALE_RELERR). Footnote 5: [http://www01.nmdb.eu](http://www01.nmdb.eu) Footnote 6: [http://cosmicrays.oulu.fi/phi](http://cosmicrays.oulu.fi/phi) In this release, we also added the new SUBEXP_IMAGE table (see Fig. 1). Previously, the detector images were kept in a separate directory with file names based on the EXPNAME and or sub-experiment NAME keys. In the new table, we have the image itself (DATA key) with its unique ID key, along with a brief description if needed (DESCRIPTION key). This allows to avoid storing duplicate images and makes checks on the completeness of the presence of images for all sub-experiments easier. Solar modulation level. Especially important for the interpretation of low-energy data (below a few hundreds of GeV), we must provide (i) the DISTANCE to the Sun of the sub-experiment--almost all experiments are at 1 a.u., but a few satellites (_Ulysses_ and _Voyager_) have also taken data at different position inside and outside the Solar cavity-- and (ii) the exact list of start-stop DATES of the data taking periods7. 
These two pieces of information allow to calculate and fill SMALL_PHI, the average modulation level over the corresponding data taking periods, in the force-field approximation (Gleeson & Axford 1967, 1968). Actually, SMALL_PHI contains different estimates of \(\langle\phi(t)\rangle\), all calculated from the same neutron monitor data8, but based on slightly different modellings: the values tagged [15] and [16] are based on monthly average public values9 from Usoskin et al. (2005, 2017), while those tagged [14] are based on daily average values from Ghelfi et al. (2017b). In CRDB, all queried data are returned with their calculated SMALL_PHI value, but users are obviously free to discard or re-calculate it--by default, the returned values are [14], which can be also calculated for any time period from the _Solar modulation_ tab (see Sect. 3.1). Footnote 8: [http://www01.nmdb.eu](http://www01.nmdb.eu) ### Meta-data for publications (PUBLI table) Almost all data in CRDB are taken from peer-reviewed publications. The main exceptions are data from balloon flights before the 1990's, which were published in the proceedings of the biennial International Cosmic-Ray Conference only. Each publication is stored in the PUBLI table (see Fig. 1) with a unique ID (set internally) and an HTML key, taken to be the publication ADS (Astrophysics Data System) identifier (e.g. 2014A&A...569A..32M). This identifier allows to retrieve and fill in a standardised manner the REF and BIBTEX keys via the ADS API9. The original publications are stored in CRDB (for the administrators) but cannot be made publicly available because of publication rights. Footnote 9: [http://www01.nmdb.eu](http://www01.nmdb.eu) Because some data sets are sometimes re-analysed and reported in a new publication, the obsolete one has its SUPERSEDEDED_BY key set to ID of the new one (it is left empty if it is not superseded). This allows us to enforce that queries to CRDB always return the most recent data, discarding the deprecated ones. We nevertheless keep track of these superseded data in the 'Experiments/Data' tab (see Sect. 3.1), where old and new publications are shown. ### Tying data and meta-data (SUBEXP_PUBLI table) The full description of the data requires the data themselves, the sub-experiment that measured them, and the publication where they appeared. The SUBEXP_PUBLI bridge table (see Fig. 1) allows to tackle situations where several sub-experiments are reported in the same publication. Each data set, with a unique ID, is tied to a sub-experiment (SUBEXP_ID) and a publication (PUBLI_ID). In addition, in this table, we keep track of the date at which each dataset was uploaded in CRDB (DATE_UPLOAD), and also of all CR QUANTITIES whose data were provided in this publication. While both these keys are unused in data queries, they are useful for maintenance and cross-checks of the DB. ## 3 Web interface and queries CRDB runs on free open source softwares with a classical LAMP solution: Linux operating system, APACHE HTTP server, MySQL database, and PHP scripting language. The server is hosted at the LPSC laboratory, and has been recently changed to have a more recent version of the operating system, the DB, and the PHP version. The DB RAM was extended from 512 MB to 2048 MB to handle the larger requests from the newly added time-series data (see Sect. 4.6). The CRDB website is organised in tabs providing different entry points to explore the DB data and meta-data. 
The webpages use the AJAX (asynchronous JavaScript and XML) web development technique for efficiency and speed. In addition to the few improvements made on the existing website tabs, we added two new ones in this release (see Sect. 3.1). To query, sort, and show the DB content, the web interface relies on jquery, jquery-ui, jquery.cluetip, and table-sorter. There are two ways for users to query data: either from the _Data extraction_ tab (see below) or from a direct command-line call (bypassing the website) via the REST interface (also see below). The latter functionality has been fully exploited in this release, with the development of a new dedicated CRDB python library. This library is described and used to generate a gallery of plots in Sect. 3.2.

### Web pages: content and novelties

We briefly describe below the content and noteworthy improvements made on the tabs. For this release, we also added a new tab to list a few caveats and tips related to the data preparation and transformations.

* _Welcome_ tab: entry point of the website, where the DB content, tools, people involved, code status, etc. are highlighted. In this release, we also added a gallery of plots to advertise the variety of data in CRDB.
* _Caveats/Tips_ tab: there are a few subtleties in the way the data (and meta-data) are handled in CRDB. Indeed, at the collection stage, the information on the data is sometimes partial, and somewhat subjective choices need to be made to be able to implement them nonetheless. Then, at the query stage, combinations and conversions are enabled, with some degree of approximation as well. Users probably do not pay a lot of attention to these details, and this is probably fine most of the time. Whereas the details and caveats about these procedures are made explicit in the CRDB publications (Maurin et al., 2014, 2020), the most relevant ones are gathered here in one place. This should help users identify data for which going back to the original publication is necessary.
* _Data extraction_ tab: queries of user-selected CR quantities with various options (sub-experiment names, dates, energy unit, etc.). The retrieved data include the ones matching exactly the query but also, if selected, extra sets based on energy conversions (Table A.1 of Maurin et al. 2020) and data combinations (App. A of Maurin et al. 2014); we added in this release the trivial but forgotten transformation rule to get Y/X from data published as X/Y. The data retrieved are then plotted and listed in a pop-up window and can be downloaded in various formats: in this release we added an extra option, 'csv (as import)', enabling the retrieval of the data and all their meta-data (format similar to the one described in the _Submit data_ tab, see below). We also added a tick box to the 'Refine search criteria' box in the _Data extraction_ tab, to display the data versus time instead of energy.
* _Experiments/Data_ tab: sorted list of experiments with their associated sub-experiments, including in particular a picture of the detector, their associated publications, and the quantities measured. In this release, to improve the sorting and readability of the numerous unnamed balloon flight series (i.e. balloons launched multiple times over the years by the same team and analysed in several publications), we regrouped them into fewer and more informative names, e.g. _Nuclear emulsions 1950-1968, Muon Telescope 1957-1995_, etc.
* _REST/CRDB.py_ tab: details how to query CRDB from a stand-alone script, with the same options as the ones provided in the _Data extraction_ tab (datasets retrieved from the website or from the REST interface with the same selection and options are the same). We also provide a simple command-line example (to run in a terminal) using curl. This capability is taken advantage of and extended in this release thanks to a new standalone python library to retrieve and display data, for instance from a python notebook, see Sect. 3.2). * _Solar modulation_ tab: gives access, for any time interval, to the force-field modulation level (see Sect. 2.3). Behind the scene, a _cron_ scheduler downloads NM data daily from NMDB10. It also calculates the associated \(\phi_{\text{FF}}\), whose values can be retrieved for a selected time period and resolution (from 10 minute up to a month), either directly from this tab, or from a REST interface. In this release, we fixed several minor bugs (as listed on the website), and more importantly, we fixed the broken REST interface and the daily update11. Footnote 10: [http://www01.nmdb.eu](http://www01.nmdb.eu) * _Submit data_ tab: how to format and send a csv file to CRDB. * _Useful links_ tab: online resources related to CR data. * _Admin_ tab: maintenance tools to check broken or inconsistent entries and missing meta-data, detailed procedure to upload data in the DB. This tab is restricted to authenticated users (i.e. CRDB maintainers). ### Python access to CRDB (and notebook) The CRDB provides a REST interface, which can be used from any programming language to automate downloading and processing data in scripts and programs. A tutorial on how to do this is available12. Since Python is the dominant scripting language for data processing, we further provide a ready-made solution for Python users that simplifies and standardises queries from scripts. Users of this library do not need to learn the REST API, this is done internally by the library. The corresponding Python package called crdb13 can be downloaded with the standard tool pip from the Python Package Index14. The main function is crdb.query, which performs a query to the database through keyword arguments, which are internally validated so that user errors are caught early and clear error messages are returned. The tabular output of a query is transformed by this function into a structured Numpy array (Harris et al.2020), which allows for efficient fast processing in Python. Each query is automatically cached to disk for 30 days, which accelerates repeated calls to crdb.query and reduces the load on the server; this often occurs during the development of a script or program. Further utility functions allow users to easily generate lists of citations for the data sets they queried from the DB. All functions are well documented, the documentation can be accessed with Python's internal help() command. Footnote 12: [https://github.com/crdb-project/tutorial](https://github.com/crdb-project/tutorial) Footnote 13: [https://github.com/crdb-project/crdb](https://github.com/crdb-project/crdb) Footnote 14: [https://pypi.org/project/crdb](https://pypi.org/project/crdb) Footnote 15: [https://github.com/crdb-project/tutorial/blob/main/gallery.ipynb](https://github.com/crdb-project/tutorial/blob/main/gallery.ipynb) The Python package also provides a command-line interface, which allows users to perform queries and store the results in one of the ASCII formats supported by the CRDB data extraction system. 
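As an illustration, a minimal query from a Python session might look as follows; the quantity string and the keyword argument shown are assumptions made for this sketch, and the authoritative interface is documented in the package itself (help(crdb.query)) and in the tutorial:

```python
import crdb

# Query a flux ratio; "B/C" and the energy-axis keyword below are illustrative
# choices -- consult help(crdb.query) for the actual keyword arguments.
tab = crdb.query("B/C", energy_type="EKN")

print(type(tab))          # structured numpy array, as described above
print(tab.dtype.names)    # available columns
print(len(tab), "data points")
```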
In this case, the query is specified using command-line arguments that mirror those of crdb.query. Example code on how to make standard plots in Python can be found in the gallery, and we show in Figs. 2 and 3 a few plots illustrating the variety, coverage, and completeness of CRDB's data. More plots are shown in the next section, and all of them are available from CRDB's public gallery notebook15. Footnote 16: All missing \(\phi_{\text{FF}}\) values were completed, and we also recalculated modulation levels, starting from 2015, for the THULE station (because of updated NM values in NMDB) and ROME station (using the correct number of NM tubes, which changed in 2017).

## 4 New datasets in CRDB v4.1

In addition to _regular_ data updated since the last release (Sect. 4.1), the content of CRDB has evolved in several directions. In this release, we (i) add dipolar anisotropy data (Sect. 4.2); (ii) take advantage of a partnership with KCDC to gradually move from a limited sample to completeness of UHECR data (Sect. 4.3); (iii) include high-energy upper limits on antiproton fluxes from ground experiments (Sect. 4.4); (iv) correct and complete low-energy lepton data from the LEE, AESOP, and AESOP-Lite balloons flown over 50 years (Sect. 4.5); (v) expand time series data thanks to the recently released AMS-02 daily and PAMELA monthly data (Sect. 4.6).

### Data uploaded since CRDB v4.0

Many data from AMS-02, CALET, DAMPE, etc. have been published since our last release. These data sets should ideally have been uploaded in CRDB shortly after their publication, but were only prepared for this release. We also took the opportunity of this release to upload a few old datasets that were not yet in CRDB. Rather than a detailed and cumbersome description of all these new data sets, which are listed in Table 1, we prefer to highlight below some of their most salient features. To start with, the first 7 years of AMS-02 data (Aguilar et al. 2021d), along with other publications by the AMS collaboration (Aguilar et al. 2021a,b,c), all uploaded in this release, now provide the _most comprehensive set of data from a single experiment_. These data are in the GV to TV rigidity range, and correspond to fluxes and ratios of leptons, antiprotons, and nuclei from H to Si, plus Fe. Moreover, in addition to the above AMS-02 data, we have uploaded the recent CALET (Adriani et al. 2020, 2021, 2022a,b,c), DAMPE (An et al. 2019; Alemanno et al. 2021; DAMPE collaboration 2022), ISS-CREAM (Choi et al. 2022), and NUCLEON (Grebenyuk et al. 2019a,b; Karmanov et al. 2020a,b; Turundaevskiy et al. 2021) data, which provide the _most precise set of direct measurement data in the TeV domain and above_; these data are key to investigate possible breaks and features in the spectra, and the consistency between direct and indirect measurement data. Some of the new data sets uploaded also explore in a unique way the composition of ultra-heavy CRs (UHCR). Indeed, recent ACE-CRIS data (Binns et al. 2022) _unveil the isotopic content of CR elements \(Z=30-38\)_, complementing the elemental fractions measured by Tiger and SuperTiger (already in CRDB); a further extension to the range \(41\leq Z\leq 56\) should be available soon from SuperTiger (Walsh et al. 2022).

Figure 2: Selected plots from the gallery, obtained from the CRDB python library and available from the gallery notebook[15]. **Top**: flux of selected species, multiplied by \(E_{k}^{2.6}\) on the right panel. **Middle**: energy dependence of high-energy CR (groups) of elements.
For even heavier (and rarer) elements, very few experiments have provided data so far. In addition to Ariel6, HEAO3-HNE, UHCRE-LDEF, and Trek data (already in CRDB), we added the Skylab data (Shirk & Price 1978). The last piece of UHCR data that we decided to add in this release are those from the OLIMPIYA experiment. The latter uses olivine crystals contained in stony-iron meteorites (pallasites) as CR detectors. At variance with satellite experiments that provide measurements of UHCR GCRs accumulated over an exposure time of a few years, the OLIMPIYA experiment provides _measurements of GCRs accumulated over up to hundreds of Myr_; these two complementary techniques allow a glimpse of the GCR time evolution. The OLIMPIYA data uploaded in this release16 are taken from Alexandrov et al.

Figure 3: Selected plots from the gallery, obtained from the CRDB python library and available from the gallery notebook15. **Top**: electron and positron fluxes. **Middle**: comparison of elemental abundances in Solar system and cosmic rays.

Table 1 (excerpt): new data sets uploaded in CRDB v4.1 (sub-experiment, data-taking period, and reference):
* AMS02 (2011/05-2018/05) Aguilar et al. (2021d)
* AMS02 (2011/05-2019/10) Aguilar et al. (2021a)
* AMS02 (2011/05-2019/10) Aguilar et al. (2021b)
* AMS02 (2011/05-2019/10) Aguilar et al. (2021c)
* CALET (2015/10-2019/10) Adriani et al. (2020)
* CALET (2016/01-2020/05) Adriani et al. (2021)
* CALET (2015/10-2022/02) Adriani et al. (2022a)
* CALET (2015/10-2021/12) Adriani et al. (2022b)
* CALET (2015/11-2021/05) Adriani et al. (2022c)
* DAMPE (2016/01-2018/06), DAMPE (2016/01-2020/06), DAMPE (2016/01-2020/06), DAMPE (2016/01-2021/12) DAMPE collaboration (2022)
* IMP8 (1982/07-1982/12) Garcia-Munoz et al. (1986)
* IMP8 (1984/08-1984/09) Garcia-Munoz et al. (1986)
* ISS-CREAM (2017/08-2019/02) Choi et al. (2022)
* NUCLEON (2015/07-2017/06) Karmanov et al. (2020b)
* NUCLEON (2015/07-2017/06) Turundaevskiy et al. (2021) Ni
* NUCLEON (2015/07-2017/06) Grebenyuk et al. (2019b) subFe/Fe
* NUCLEON-ICE (2015/07-2017/06) Grebenyuk et al. (2019a)
* NUCLEON-KLEM (2015/07-2017/06) Karmanov et al. (2020a)
* PAMELA-CALO (2006/07-2014/09) Nozzoli & Cernetti (2021) \({}^{7}\)Li/\({}^{6}\)Li, \({}^{7}\)Be/\({}^{10}\)Be/\({}^{9}\)Be, \({}^{11}\)B/\({}^{10}\)B
* PAMELA-TOF (2006/07-2014/09) Nozzoli & Cernetti (2021) \({}^{7}\)Li/\({}^{6}\)Li, \({}^{7}\)Be/\({}^{10}\)Be/\({}^{9}\)Be
* \({}^{11}\)Li/\({}^{12}\)Be/\({}^{10}\)Be/\({}^{9}\)Be

### Anisotropy data

Ground-based detectors with high event statistics allow the study of anisotropies in the arrival directions of CRs. Of particular interest here is the dipole anisotropy predicted by diffusion theory, which allows us to study the nearby CR source distribution and diffuse CR transport in our local magnetic environment (e.g., Ahlers & Mertsch 2017). While the true dipole anisotropy is represented by an amplitude and two phases, the data-driven reconstruction method of ground-based observatories allows only the reconstruction of the projection of the dipole vector onto the equatorial plane. Conventionally, this projection is characterised by the (projected component of) amplitude and the phase in right ascension. These new dipole anisotropy data are indicated in the DB by the entries DipoleAmplitude and DipolePhase; we have chosen a convention where DipolePhase\(\in[-180^{\circ},180^{\circ}]\). The dipole data in terms of total energy ETOT are shown in Fig. 4.
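The relation between the full dipole vector and the reconstructed quantities can be made explicit with a short sketch (illustrative code, not part of the CRDB tools): for a dipole vector expressed in equatorial coordinates, the observable amplitude is the component lying in the equatorial plane and the phase is its right-ascension angle, consistent with the DipolePhase\(\in[-180^{\circ},180^{\circ}]\) convention quoted above.

```python
import numpy as np

def equatorial_projection(dx, dy, dz):
    """Project a dipole vector (equatorial x, y, z) onto the equatorial plane.

    Returns the projected amplitude and the phase in right ascension (degrees,
    in (-180, 180]). The z-component (towards the celestial pole) is not
    recoverable by this kind of reconstruction.
    """
    amplitude = np.hypot(dx, dy)
    phase = np.degrees(np.arctan2(dy, dx))
    return amplitude, phase

# A dipole tilted out of the equatorial plane: only part of its amplitude
# is recovered by the projection.
print(equatorial_projection(1e-3, 5e-4, 2e-3))
```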
Note that the limited statistics of CR experiments in the PeV-EeV energy region has so far only yielded upper limits on the dipole anisotropy. In the DB, we indicate this by providing both the best amplitude and its upper limit as separate entries. As visible in Fig. 4, the dipole amplitude and phase data from different observatories can show strong deviations beyond statistical uncertainties. This is related to hidden (and often unquantified) systematic effects, corresponding to the partial sky coverage of experiments and reconstruction method. Furthermore, experimental collaborations oftentimes provide a number of updates of their anisotropy studies as the event statistics accumulate. We have chosen to include all the data publicly available, but note that the later data sets are usually meant to supersede the earlier ones. Finally, note that some of the (especially older) data have been extracted from publications, which give rather limited information on the methodology used. We have chosen to include these at face value, but recommend to exercise caution when using these data for quantitative studies. The experiments and associated references for all these data are gathered in Table 2. ### UHECR data from KCDC Considering the vast amount of academic databases and search engines for locating and accessing published scientific data, unified access to published datasets and spectra is still in the early stages. This is due to the large variety of experiments and thus the large variety of measured data. In cooperation with CRDB, the 'KASCADE Cosmic-ray Data Centre' (KCDC) is taking a step towards simplification, by embedding the UHECR data from KCDC, i.e. data from extensive air shower experiments, into CRDB. The advantage of such an extensive collection of UHECR data is that data from other experiments can be obtained relatively quickly. KCDC is already a demonstrator and partner of PUNCH4NFDI18, the consortium of particle, astroparticle, astro-, hadron and nuclear physics within the German National Research Data Infrastrucutre, NFDI, which is aimed to unify the methodical approach of open data in this field. Footnote 18: [https://www.punch4nfdi.de/](https://www.punch4nfdi.de/) The KCDC is a web-based interface where initially the scientific data from the completed air-shower experiment KASCADE-Grande was made available for the astroparticle community as well as for the interested public Besides a DataShop to download the reconstructed data of KASCADE-Grande and the meta-data, KCDC offers more than 100 cosmic ray spectra from about 25 different ground-based high-energy CR experiments published between 1984 and 2021 for download. The data sets available over an energy range from about \(10^{12}\) eV to more than \(10^{20}\) eV for all-particle spectra (keyword AllParticles in CRDB) as well as for mass groups like p, He up to Fe or heavy and light respectively, derived from the unfolding procedure for different high-energy interaction models like QGSJet, EPOS and SIBYLL, mostly embedded in the CORSIKA simulation package. CORSIKA19 (COsmic Ray event Simulation for KASCADE) has been written especially for KASCADE and extended since then to become the world's standard simulation package in the field of cosmic ray air shower simulations. 
Footnote 19: [https://www.iap.kit.edu/corsika/](https://www.iap.kit.edu/corsika/)

While the KASCADE-Grande experimental data in KCDC are also accessible via an API, the spectra points and metadata, stored in a postgres database, can only be selected and displayed on the website after registration. Thus, a partnership with CRDB was set up with the aim of creating a basis for this data exchange and of providing the community with a common interface to these merged spectra data. The KCDC data sets are now being reformatted to meet the requirements of CRDB, to supplement its very extensive content with data from ground-based air-shower experiments. The spectra uploaded on CRDB at the time of this release are listed in Table 3; they represent \(\sim 25\%\) of the full data being prepared, and a sample of these data can be seen in Fig. 2. To match the requirements of UHECR measurements, the data quantity list DATA_QTY had to be extended by two more groups, the He-C-group and the Si-Fe-group. For details about the exact meaning of the particle spectra (helium, oxygen, and so on), of their mixtures, and of the mixtures of different high-energy interaction models, users should refer to the original papers.

### Upper limit on high-energy \(\bar{p}/p\)

With the angular resolution of ground cosmic-ray detectors reaching below the degree level in the 90's, it became possible to observe a deficit of events from the direction of the Moon or the Sun (\(\sim 0.5^{\circ}\)): the Moon or Sun shadow technique was first used to calibrate their angular resolution and pointing accuracy. Actually, the position of the shadow is offset from the true location of the blocking bodies owing to the deflection of cosmic rays in the geomagnetic field, with the shadow shifted westward (resp. eastward) for positively (resp. negatively) charged particles. This allowed several experiments to set upper limits on the \(\bar{p}/p\) ratio above TeV energies (Amenomori et al. 1995; Ambrosio et al. 2003b; Achard et al. 2005; Tibet AS\(\gamma\) et al. 2007; Bartoli et al. 2012; Abeysekara et al. 2018). These upper limits were added in CRDB, along with the older upper limits obtained from the observed charge ratio of muons (Stephens 1985). These new datasets are shown in Fig. 5 and listed in Table 4.

### LEE, AESOP, and AESOP-Lite balloon flights

From 1968 to 2011, the LEE (Low Energy Electrons) balloon-borne instrument (Hovestadt et al. 1970) was launched over 35 times. LEE provided the longest series of CR electron measurements (\(e^{-}+e^{+}\)), over a time period that covers about four solar cycles. These data are particularly relevant to the study of the solar modulation of electrons with energies up to about 20 GeV. In CRDB v4.1, we reorganized the existing LEE data from 1968 to 1994. Data points taken from figures were updated with the actual values when private communication with the authors was possible. Data post-1994 were also added to the database. Indeed, the spectra for the years 1997 to 2000 were never fully published. However, the flight data were analyzed using the same method as that outlined in Fulks (1975), and the spectrum values at 1.2 GeV only were published in Clem & Evenson (2004). The full spectra for these years were provided by the authors (Paul Evenson, private communication, 2023) and uploaded in CRDB. These data are shown in the top panel of Fig. 6 along with other measurements from experiments at similar energies.
We also show on this plot time series of He (second panel), NM count rates (third panel), and Solar modulation values calculated from these count rates (fourth panel). From 1994 to 2011, the AESOP (Anti-Electron Sub Orbital Payload) balloon-borne instrument (Clem et al., 1996) flew on multiple occasions with the primary objective to study the charge-sign dependence of the solar modulation of electrons from a few hundred MeV to a few GeV. In CRDB v4.1, we re-organized the existing AESOP \(e^{+}/(e^{-}+e^{+})\) data and updated the 1994 flight (private communication with the author John Clem, 2023). The AESOP-Lite apparatus is the successor of LEE and AESOP. Its primary objectives are to search for the origin of low-energy electrons in the electron spectrum between 20 and 300 MeV, and to provide a baseline electron spectrum at 1 au for the measurements of the Voyager probes currently transmitting data from outside the heliosphere. The \(e^{-}\), \(e^{+}\), and \(e^{+}/(e^{-}+e^{+})\) data from AESOP-Lite's maiden flight from Sweden in 2018 (Mechbal et al., 2020) were added to CRDB; future data will be added too. The metadata of all these balloon flights were updated using information from the original publications. When not available, the information from the stratospheric balloon flight catalogue StratoCat20 was used. The list of the balloon flight names as encoded in CRDB, along with the associated publications, is given in Table 5. Footnote 20: [https://stratocat.com.ar/indexes.html](https://stratocat.com.ar/indexes.html)

### AMS-02 and PAMELA time series

In previous CRDB releases, a few time series were already included: yearly averaged (1994-2014) proton fluxes from EPHIN (Kuhl et al., 2016), monthly or Carrington-rotation averaged (2006-2014) proton fluxes from PAMELA (Adriani et al., 2013, 2015), and 6-month averaged (2006-2009) electron fluxes from PAMELA (Martucci et al., 2018). Thanks to its large acceptance and high statistics, AMS-02 was able, for the first time, to provide daily averaged fluxes of H, He, and He/H from 2011 to 2019 (Aguilar et al., 2018, 2022), Figure 4: Equatorial dipole amplitude and phase of the CR anisotropy inferred by various experiments (see Table 2 for references). The figure is available from the gallery notebook15. and \(e^{-}\) from 2011 to 2021 (Aguilar et al., 2023): these data are now the dominant body of data in CRDB, with about 200 000 data points over \(\sim 3000\) days. We also added the recently published He time series of PAMELA from 2006 to 2013. Owing to its smaller acceptance and statistics, the data were averaged over one Carrington rotation (\(\sim 1\) month) in the first three years (Marcelli et al., 2020), and \begin{table} \begin{tabular}{l l} \hline \hline Subexp NAME & Reference \\ \hline ARGO-YBJ (2008/01-2009/12) & Bartoli et al. (2015) \\ ARGO-YBJ (2008/01-2012/12) & Bartoli et al. (2018) \\ Artyomovsk (1981/01-1987/12) & Bergamasco et al. (1990) \\ Baksan (1982/07-1986/06) & Andreyev et al. (1987) \\ Baksan Carpet (1980/02-1981/01) & Alexeyenko et al. (1981) \\ Baksan Carpet (2007/01-2007/12) & Alekseenko et al. (2009) \\ Bolivia (1965/01-1976/12) & Swinson \& Nagashima (1985) \\ Budapest (1958/01-1963/12) & Nagashima et al. (1985) \\ EAS-TOP (1990/01-1994/12) 1 & Aglietta et al. (1996) \\ EAS-TOP (1992/01-1994/12) & Aglietta et al. (1995) \\ EAS-TOP (1992/01-1999/12) & Aglietta et al. (2009) \\ Embudo Cave (1965/01-1983/12) & Swinson \& Nagashima (1985) \\ HAWC (2015/05-2017/05)+IceCube (2011/05-2016/05) & Abeysekara et al.
(2019) \\ Hobart (1958/01-1983/12) & Nagashima et al. (1985) \\ Hong Kong (1983/11-1986/02) & Lee \& Ng (1987) \\ IceCube (2007/06-2008/03) & Abbasi et al. (2010) \\ IceCube (2009/05-2015/05) & Aartsen et al. (2016) \\ IceCube HE (2009/05-2010/05) & Bourbeau et al. (2017) \\ & Abbasi et al. (2012) \\ \hline \hline \end{tabular} \end{table} Table 2: Experiments and associated references for the CR anisotropy data.
over three Carrington rotations later because of a _random failure of a few front-end chips in the tracking system [...] particularly significant after 2009_ (Marcelli et al. 2022); this corresponds to \(\sim 3000\) new data points in CRDB (in E\({}_{k/n}\) and \(R\)), as retrieved from the CRDB@ASI database21 (Di Felice et al. 2017). We also added a few positron fraction data points taken from three different time periods (Adriani et al. 2016): the latter paper also provides 3-month averages (2006-2016) of the \(e^{+}/e^{-}\) ratio, but normalised to the unspecified 2006 value, so we did not add them in CRDB. Footnote 21: [https://tools.ssdc.asi.it/CosmicRays/](https://tools.ssdc.asi.it/CosmicRays/)

To better visualise these data, we added a new query option in the web interface to plot data as a function of time (instead of energy). The direct benefit is to enable showing the evolution of data from similar energy bands over long time periods. This is illustrated with Fig. 7, available from the gallery notebook15.

## 5 Conclusions and future releases

We have presented in this paper CRDB v4.1, an update of the CR database hosted at LPSC. On the technical side, this update involved a migration of the CRDB server and a slight simplification of the DB structure. On the code side, a few minor bugs have been fixed, the queried data can now be returned in a more complete csv format (which includes all meta-data), and we fixed a missing combination rule for the data. On the web interface side, we added a new plotting capability to display CRs as a function of time, and added two new tabs: one lists all caveats related to the preparation of the data uploaded in CRDB and to the (sometimes approximate) transformation rules applied to the queried data; the other provides a gallery of plots advertising and illustrating the diversity of CRDB data. Actually, this gallery and many other plots can be generated from our new public python CRDB library, and notebook examples are provided in the git page15. Footnote 16: [https://github.com/](https://github.com/) On the data side, this release contains not only the AMS-02 daily and PAMELA monthly data, but also yearly data from LEE/AESOP/AESOP-Lite balloons taken over a 50-year period. We also updated CRDB with all the GCR data published in the last three years, also adding a couple of older data sets that had slipped our attention until now. The path to future developments is not very clear and also depends on the feedback from the community. Indeed, CRDB now accounts for most galactic and extragalactic CR data in terms of quantities that can be cast as 1D data vectors (as opposed to skymaps or higher-dimension datacubes). Missing datasets should consist mostly of old time series from satellite experiments, which are both difficult to track and retrieve from the publications: owners and authors of such datasets are welcome to get in touch with us. If need be, other quantities related to UHECR data could also be added in the future, like \(\langle\ln A\rangle\). In any case, looking at present and future high-precision CR data, we stress that the current format used to store uncertainties in CRDB is already limited and should probably be improved at some time in the future.
Indeed, data from the last generation of CR detectors already come with broken-down contributions from various systematics, whereas only the total systematics can be stored in CRDB. This issue will worsen when covariance matrix of uncertainties will start to be released as well (as is already the case for instance for the most recent Pierre Auger data). The CRDB team will continue uploading newly published CR data, but we also encourage collaborations to prepare \begin{table} \begin{tabular}{l l l l} \hline \hline Subexp NAME & Reference & Qty & \(N_{\rm data}\) \\ \hline \multicolumn{4}{c}{_AMS-02 (daily average)_} \\ AMS02 (2011/05/20 to 2019/10/29) & Aguilar et al. (2021e) & H & 83757 \\ AMS02 (2011/05/20 to 2019/10/29) & Aguilar et al. (2022) & He, He/H & 72879 \\ AMS02 (2011/05/20 to 2021/11/02) & Aguilar et al. (2023) & \(e^{-}\) & 32985 \\ \multicolumn{4}{c}{_PAMELA (average over Carrington rotations)_} \\ PAMELA (2006/07-2009/12) & \multirow{2}{*}{Adriani et al. (2016)} & \multirow{2}{*}{\(e^{+}/(e^{-}+e^{+})\)} & \multirow{2}{*}{15} \\ PAMELA (2011/05-2013/11) & & & \\ PAMELA (2015/01-2015/12) & & & \\ PAMELA (2006/07 to 2009/12) & Marcelli et al. (2020) & He & 2322 \\ PAMELA (2010/01 to 2013/09) & Marcelli et al. (2022) & He & 1026 \\ \hline \end{tabular} \end{table} Table 6: AMS-02 and PAMELA time series added in CRDB v4.1. \begin{table} \begin{tabular}{l l l r} \hline \hline Subexp NAME & Reference & Qty & \(N_{\rm data}\) \\ \hline LEE (1968/06-1968/07) & Fulks (1975) & \(e^{-}+e^{+}\) & 16 \\ LEE (1969/06-1969/07) & Fulks (1975) & \(e^{-}+e^{+}\) & 18 \\ LEE (1970/06-1970/07) & Fulks (1975) & \(e^{-}+e^{+}\) & 17 \\ LEE (1971/06-1971/07) & Fulks (1975) & \(e^{-}+e^{+}\) & 18 \\ LEE (1972/07) & Fulks (1975) & \(e^{-}+e^{+}\) & 16 \\ LEE (1973/07) & Caldwell et al. (1975) & \(e^{-}+e^{+}\) & 30 \\ LEE (1974/07) & Caldwell et al. (1975) & \(e^{-}+e^{+}\) & 16 \\ LEE (1975/07) & Caldwell et al. (1977) & \(e^{-}+e^{+}\) & 10 \\ LEE (1977/07) & Evenson et al. (1983) & \(e^{-}+e^{+}\) & 12 \\ LEE (1979/08-1979/09) & Evenson \& Meyer (1984) & \(e^{-}+e^{+}\) & 8 \\ LEE (1982/08) & Evenson \& Meyer (1984) & \(e^{-}+e^{+}\) & 7 \\ LEE (1984/09) & Garcia-Munoz et al. (1986) & \(e^{-}+e^{+}\) & 10 \\ LEE (1987/08) & Evenson et al. (1995) & \(e^{-}+e^{+}\) & 8 \\ LEE (1990/08) & Evenson et al. (1995) & \(e^{-}+e^{+}\) & 6 \\ LEE (1992/08) & Evenson et al. (1995) & \(e^{-}+e^{+}\) & 7 \\ LEE (1994/08) & Evenson et al. (1995) & \(e^{-}+e^{+}\) & 8 \\ LEE (1997/09) & This paper & \(e^{-}+e^{+}\) & 9 \\ LEE (1998/08-1998/09) & This paper & \(e^{-}+e^{+}\) & 8 \\ LEE (1999/08) & This paper & \(e^{-}+e^{+}\) & 9 \\ LEE (2000/08) & This paper & \(e^{-}+e^{+}\) & 8 \\ LEE (2002/08) & Clem \& Evenson (2004) & \(e^{-}+e^{+}\) & 15 \\ LEE (2009/05) & Mechbal et al. (2020) & \(e^{-}+e^{+}\) & 15 \\ LEE (2011/05) & Mechbal et al. (2020) & \(e^{-}+e^{+}\) & 15 \\ AESOP (1994/08) & Clem et al. (1996) & \(e^{+}/(e^{-}+e^{+})\) & 1 \\ AESOP (1997/09+1998/08) & Clem et al. (2000) & \(e^{+}/(e^{-}+e^{+})\) & 4 \\ AESOP (1999/08) & Clem \& Evenson (2002) & \(e^{+}/(e^{-}+e^{+})\) & 6 \\ AESOP (2000/08) & Clem \& Evenson (2002) & \(e^{+}/(e^{-}+e^{+})\) & 3 \\ AESOP (2002/08) & Clem \& Evenson (2004) & \(e^{+}/(e^{-}+e^{+})\) & 3 \\ AESOP (2006/06) & Clem \& Evenson (2009) & \(e^{+}/(e^{-}+e^{+})\) & 4 \\ AESOP-Lite (2018/05) & Mechbal et al. 
(2020) & \(e^{+}/(e^{-}+e^{+})\), \(e^{-}\), \(e^{+}\) & 27 \\ \hline \end{tabular} \end{table} Table 5: Lepton data from the LEE, AESOP, and AESOP-Lite balloons flow over a 50-year time period, re-organised, corrected, and with a few new data sets added in CRDB v4.1. their data (CRDB submission format) if they wish them to quickly be distributed via CRDB. Comments, questions, suggestions, and corrections on are welcome and are to be sent at [email protected]. ###### Acknowledgements. We warmly thank the continuous support and feedback from many of our colleagues who point out typos and mismatches in CRDB. We also thank the AMS-02 collaboration for providing their data as cov tables ([https://ams02.space/publications](https://ams02.space/publications)), which greatly eases the preparation and upload of these data in CRDB. This research has made use of NASA's Astrophysics Data System Bibliographic Services. This work was Figure 6: **First and second panels**: GCR fluxes of low-energy \(e^{-}+e^{+}\) and He over the last 70 years, illustrating the 11 years Solar cycle (‘balloons’ in the legend refers to unnamed balloons). **Third panel**: NM count rate from the Thule NM station retrieved from the NEST NMDB interface at [https://www.nmdb.eu/nest/help.php#helptres](https://www.nmdb.eu/nest/help.php#helptres). **Bottom panel**: Solar modulation level reconstructed from NM data (e.g., Maurin et al. 2015), as retrieved from CRDB’s _Solar Modulation_ REST interface and whose values are based on (Ghelfi et al. 2016, 2017a,b), or as retrieved from [https://cosmicrays.oulu.fi/phi/](https://cosmicrays.oulu.fi/phi/) (Usoskin et al. 2017). The figure is available from the gallery notebook15. partially supported by NASA award 80NSSC19K0746. We acknowledge the NMDB database (www.mndb.eu), founded under the European Union's FFP programme (contract no. 213007) for providing data; NM data from Oulu are provided by the Sodankyla Geophysical Observatory (see also [https://cosmicrays.oulu.fi/readme.html](https://cosmicrays.oulu.fi/readme.html)) and those from Thule by the University of Delaware Department of Physics and Astronomy and the Bartol Research Institute.
2301.12726
Specializing Smaller Language Models towards Multi-Step Reasoning
The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompts is believed to emerge only in very large-scale models (100+ billion parameters). We show that such abilities can, in fact, be distilled down from GPT-3.5 ($\ge$ 175B) to T5 variants ($\le$ 11B). We propose model specialization, to specialize the model's ability towards a target task. The hypothesis is that large models (commonly viewed as larger than 100B) have strong modeling power, but are spread on a large spectrum of tasks. Small models (commonly viewed as smaller than 10B) have limited model capacity, but if we concentrate their capacity on a specific target task, the model can achieve a decent improved performance. We use multi-step math reasoning as our testbed because it is a very typical emergent ability. We show two important aspects of model abilities: (1). there exists a very complex balance/ tradeoff between language models' multi-dimensional abilities; (2). by paying the price of decreased generic ability, we can clearly lift up the scaling curve of models smaller than 10B towards a specialized multi-step math reasoning ability. We further give comprehensive discussions about important design choices for better generalization, including the tuning data format, the start model checkpoint, and a new model selection method. We hope our practice and discoveries can serve as an important attempt towards specialized smaller models in the new research paradigm set by LLMs.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot
2023-01-30T08:51:19Z
http://arxiv.org/abs/2301.12726v1
# Specializing Smaller Language Models towards Multi-Step Reasoning ###### Abstract The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompts is believed to emerge only in very large-scale models (100+ billion parameters). We show that such abilities can, in fact, be distilled down from GPT-3.5 (\(\geq\) 175B) to T5 variants (\(\leq\) 11B). We propose _model specialization_, to specialize the model's ability towards a target task. The hypothesis is that large models (commonly viewed as larger than 100B) have strong modeling power, but are spread on a large spectrum of tasks. Small models (commonly viewed as smaller than 10B) have limited model capacity, but if we concentrate their capacity on a specific target task, the model can achieve a decent improved performance. We use multi-step math reasoning as our testbed because it is a very typical emergent ability. We show two important aspects of model abilities: (1). there exists a very complex balance/ tradeoff between language models' multi-dimensional abilities; (2). by paying the price of decreased generic ability, we can clearly lift up the scaling curve of models smaller than 10B towards a specialized multi-step math reasoning ability. We further give comprehensive discussions about important design choices for better generalization, including the tuning data format, the start model checkpoint, and a new model selection method. We hope our practice and discoveries can serve as an important attempt towards specialized smaller models in the new research paradigm set by LLMs. ## 1 Introduction Recently, the field of NLP is significantly impressed by large language models' strong abilities (Brown et al., 2020; Chowdhery et al., 2022). Wei et al. (2022) discuss the emergent abilities of large language models - abilities that seems to only exist in large models (more than 100B parameters), but not in small models. A very typical example (also the first discovered emergent ability) is to perform multi-step reasoning on math word problems by chain-of-thought (CoT) prompting (Wei et al., 2022) where the authors let the model generate a step-by-step reasoning chain to help get the final answer. The existence of such abilities has a very deep, profound influence on the community: on the positive side, such abilities open countless opportunities for new research directions; on the negative side, very few organizations have the compute to even fine-tune 100B-scale models, making the accessibility of such abilities extremely hard. It would be ideal if smaller models can also obtain emergent abilities like math CoT reasoning, so they can be accessed by a larger range of researchers and practitioners. However, preliminary results of Wei et al. (2022) show that if the model scale is small (empirically less than 100B parameters), CoT exhibits flat, sometimes even near zero scaling curve (Wei et al., 2022). Later smaller models' scaling curve is partially improved in Chung et al. (2022), but still worse than large models. These results so far are rather pessimistic since they suggest increasing CoT performance for smaller models can be challenging. At the current stage, the community is eager to know to what extent such abilities can be further improved in smaller models. This paper addresses the problem of CoT reasoning for smaller models by _model specialization_. 
Our hypothesis is that large models (\(\geq\) 100B) have strong modeling power but are spread over a large spectrum of tasks. Small models (\(\leq\) 10B) have limited model capacity, but if we concentrate their capacity on a target task, the model may still have a decent improved performance. There exists promising preliminary work on smaller models' chain-of-thought abilities such as UL2 (Tay et al., 2022) and FlanT5 (Chung et al., 2022), but they focus on generic abilities and consequently, the model's (limited) power is not concentrated. In our experiments, we show that by paying the price of decreased abilities in generic tasks (specifically we lose a large portion of accuracy on the BigBench Hard suite Suzgun et al., 2022), we can lift the scaling curve of CoT reasoning on small FlanT5 models (250M, 760M, and 3B) by a large margin (an average +10 accuracy gain) on a suite of 4 math reasoning tasks (1 in-distribution and 3 out-of-distribution). This means that we can indeed move the model's power from generic abilities to concentrate on the target math CoT. Our approach is to fine-tune an instruction-tuned model (FlanT5) by distilling chain-of-thought reasoning paths of the GSM8K data from a large teacher model (GPT-3.5 code-davinci-002 Chen et al., 2021), then do a model selection on the average performance of three held-out math reasoning data to ensure the model's out-of-distribution generalization. Although distillation per se is a well-studied area, there are multiple caveats in our process, as we will demonstrate: (1). the teacher model code-davinci-002 and our student model FlanT5 use different tokenizers, we address the tokenizer alignment problem by dynamic programming. (2). Distillation induces different performance on an instruction-tuned checkpoint (in our case, FlanT5) and the raw pretrained checkpoint (T5), where specialized FlanT5 performs better but specialized T5 achieves more accuracy gain. (3). at the late training stage, the model's in-distribution and out-of-distribution (OOD) performance fluctuates differently, so if one wants better OOD generalization, the model selection should be performed on held-out math datasets, rather than the validation portion of the tuning data. (4). multiple tradeoffs happen during the distillation/ specialization process: as we start distillation, on BigBench Hard test suite (the measure of generic ability), the model immediately loses all its CoT prompting abilities, and gradually loses a large portion (but not all) of answer-only prompting abilities. The data format we use for tuning is also closely related to model ability: in-context examples enable both in-context and zero-shot performance, but zero-shot examples lose the model's in-context ability for increased zero-shot ability. These findings deepen our understanding of language model chain-of-thought reasoning behavior in multiple aspects: (1). the previous hypothesis is that CoT has near-flat scaling curves on small scale, we show that we can lift up the scaling curve by concentrating the model's capacity on a target ability. This indicates that chain-of-thought might not be an emergent ability because, after specialization, smaller models' scaling curves become log-linear, just like large models (Kaplan et al., 2020; Hoffmann et al., 2022). (2). 
previous observation of LLM behaviors indicates complex tradeoffs and balances of model ability across multiple dimensions, we give a detailed description of how we move the model's power from generic abilities to a target ability, clearly showing what can be gained at what cost. (3). classical model selection theory selects the model on the validation portion of the same dataset, we select the model based on the performance of different math reasoning datasets, to prevent overfitting on one single dataset. We hope our practice and discoveries can serve as an example attempt towards strong specialized smaller models. ## 2 Background Large Language Models' AbilitiesLarge language models have significantly changed the research paradigm in NLP by showing strong abilities on multiple dimensions (Brown et al., 2020; Hoffmann et al., 2022; Chowdhery et al., 2022; Wei et al., 2022). Currently, the new recipe for training LLMs is to first train a base model (e.g., GPT-3, PaLM, OPT), then elicit the abilities of the base model by instruction tuning (e.g., GPT-3 \(\rightarrow\) InstructGPT Ouyang et al., 2022; PaLM \(\rightarrow\) FlanPaLM Chung et al., 2022, OPT \(\rightarrow\) OPT-IML Iyer et al., 2022, also see Fig. 1A step 1 and 2). For the base model, initially, Wei et al. (2022) shows that the chain-of-thought performance curve is near-zero if the model size is smaller than 100B. Later Chung et al. (2022) updated this hypothesis by showing CoT can be unlocked if CoT data is included as one particular type of instruction, but their model's performance is not as good because their model's ability is spread over multiple dimensions. This work shows that CoT performance can be significantly lifted if we concentrate model's power toward a target ability (Fig. 1A, step 3). Specialized Language ModelsAlthough modern language models show strong generic abilities on multiple directions, recent analysis (Fu et al., 2022) shows models do have different focuses (e.g., code-davinci-002 for code and text-davinci-003 for text). Ability tradeoff happens at all scale: for large models, such a tradeoff does not have to be all or nothing: code-davinci-002, although specialized for code, can still solve a lot of text problems; for small models, due to limited model capacity, they have to trade all generic abilities for one special ability. One example is GitHub Copilot, which supposedly is a 12B small model (Thakkar, 2022). The actual practice of specialization is simply finetuning: to specialize a model towards a target ability, one simply tunes the model using the related data, which is the practice of concurrent work about smaller models' CoT ability (Magister et al., 2022; Shridhar et al., 2022; Ho et al., 2022). The problem here is how to generalize beyond the tuning data, as small models may simply overfit the tuning distribution but struggle to generalize when the distribution shifts (Liu et al., 2022; Si et al., 2022). So far the community's hypothesis of OOD generation involves two important aspects: (1). model scale (Chowdhery et al., 2022); (2). instruction tuning (Chung et al., 2022), which we will also study. These factors mark the differences between our work and the concurrent distillation work: we show how the model trades generic abilities for the target ability, and how model scale and instruction tuning help the model gain better in-distribution and OOD performance. 
Distillation and Data AugmentationOur approach of using data generated from code-davinci-002 to tune smaller FlanT5 can be viewed as either distillation (Tan et al., 2019) or data augmentation (Li et al., 2022). Here we note that we merely use the generated data as the tool for model specialization, and the specialization data can also be from other sources like human annotation. Our focus is to study the ability tradeoff during specialization, but not directly contribute to the distillation or data augmentation literature. **Most closely related works** There are two threads of most related works: (1). FlanT5 (Chung et al., 2022) and UL2 (Tay et al., 2022) which is the first work discussing smaller models' CoT ability, but they focus on generic CoT while we trade generic ability for math CoT. (2). language model self-improvement (Huang et al., 2022) which also use CoT data augmentation, but they only consider large models and do not show the tradeoff between model abilities. Here we focus on small models and clearly show the price for ability improvements. ## 3 Specializing Multi-Step Reasoning Our objective is to study what it takes to improve smaller models' chain-of-thought math reasoning. We use GSM8K (Cobbe et al., 2021) as our seed dataset because it is one of the datasets with most diverse math reasoning problems, but test the model's performance of three additional math datasets (MultiArith, ASDiv, and SVAMP Wei et al., 2022) to show the model generalizes to OOD data. We further use BigBench Hard to test to model's generic reasoning ability, demonstrating the tradeoff between generic and target abilities. We use T5 (raw pretrained checkpoint) and FlanT5 (instruction tuned checkpoint) as our base model, and use code-davinci-002 to generate distillation/ specialization data. **Distillation from Code-Davinci-002** Given a training question corpora, we use code-davinci-002 to generate 40 new CoT solutions then take the ones that lead to the correct answers as our training data. One solution consists of an answer and a chain of thought explaining the intermediate steps towards the answer. In addition to the standard fine-tuning setting where one uses the question as the input and use the [CoT, answer] pair as the output (Fig. 1 B4), we further consider three additional data formats: (1). in-context answer-only (Fig. 1 B1), where we do not use the CoT data (hence the name "answer-only") and prepend 4 in-context examples before the question (hence the name "in-context"). The reason we prepend the in-context example is that previous work shows tuning with in-context examples improves the model's in-context learning ability (Min et al., 2022). (2). in-context chain-of-thought (Fig. 1 B2), where we add CoT to both the in-context example and the output. (3). zero-shot answer-only, where we directly input the question and output the answer. Using answer-only data is because previous work shows they improve performance. In our experiments, we will show that in-context data induces zero-shot ability but zero-shot data sacrifice in-context learning ability. We note that there also exist techniques like adding a calculator (Cobbe et al., 2021) or self-consistency decoding (Wang et al., 2022) that can further improve the performance. These techniques are orthogonal to the distillation we use and can definitely be integrated to our work for better performance. 
Since our focus is the balance of the models' special and generic abilities, we leave the integration of these orthogonal techniques to future work. In terms of training objectives, in the distillation literature, there are typically two types of distillation approaches: (1). Figure 1: **A.** Model specialization process. Pretraining gives a strong base model (Raffel et al., 2020; Chowdhery et al., 2022), instruction tuning elicits the model ability (Chung et al., 2022), then specialization (this work’s focus) moves model abilities to a target direction. In this work, we trade the model’s generic abilities (as measured by BigBench Hard) for the model’s multi-step math reasoning abilities. **B.** Four data formats we consider for tuning the model. We will show tuning with in-context chain-of-thought examples is particularly important for the model’s CoT ability. **C.** Aligning GPT tokenization to T5 tokenization by dynamic programming. If a T5 token has a one-to-one alignment to a GPT token, we reuse the GPT’s top 5 probability as the target distribution. If there the mapping is one-to-many/ many-to-one, we treat the T5 token’s distribution as one-hot. sample matching, where one trains the student model on the data generated by the teacher. In our case, sample matching means we directly optimize the student's likelihood on the data generated by code-davinci-002. (2). distribution matching, where one minimizes the KL divergence between the student's output distribution (in our case, the per-step autoregressive distribution) and the teacher's. Usually, distribution matching is shown to achieve faster convergence and better performance than sample matching, so we use distribution matching as our training objective. Distribution matching has an additional challenge in storing the distribution parameter: at each step, we need to store the whole distribution defined on the vocabulary \(\mathcal{V}\), so the size of the dataset is \(|\mathcal{V}|\) times larger than sample matching. Yet the OpenAI API only grants access to the 5 most probable tokens at each decoding step, but not the probability distribution over the entire vocabulary. Although the per-step distribution only covers the top 5 tokens, most of the time their probability sum is close to 1, being a good enough approximation of the full vocabulary distribution. We set to zero the probabilities of tokens not in the top 5. **Aligning tokenizers by dynamic programming** One problem when matching the two distributions is the misalignment between the GPT tokenizer and the T5 tokenizer. We solve this problem by dynamic programming. Specifically, given two sequences to tokens \([\mathbf{s}_{1:L},\mathbf{t}_{1:N}]\), our objective is to find an alignment that minimizes the total cost of editing one sequence to the other. Our dynamic program is a slight tweak of the textbook dynamic programming algorithms used in bioinformatics for sequence alignment (such as the Needleman-Wunsch algorithm (Needleman and Wunsch, 1970)) and in signal processing (such as dynamic time wrapping (Senin, 2008)). The recursion function is: \[f(i,j)=\min\{ f(i-1,j)+c(\mathbf{s}_{i},\mathbf{t}_{j}), \tag{1}\] \[f(i,j-1)+c(\mathbf{s}_{i},\mathbf{t}_{j}),\] (2) \[f(i-1,j-1)+c(\mathbf{s}_{i},\mathbf{t}_{j})\} \tag{3}\] where \(f(i,j)\) denotes the total cost aligning \(\mathbf{s}_{1:i}\) and \(\mathbf{t}_{1:j}\) and \(c(\mathbf{s}_{i},\mathbf{t}_{j})\) is the predefined string edit distance between token \(\mathbf{s}_{i}\) and \(\mathbf{t}_{j}\). 
Our algorithm does not enforce one-to-one matching between tokens in the two sequences: one token in \(\mathbf{s}\) might align with multiple tokens in \(\mathbf{t}\) and vice versa; Fig. 1C gives an example alignment. If there exists a one-to-one mapping between a GPT token and a T5 token, we use the GPT distribution as the T5 distribution. If the mapping is not one-to-one, e.g., two T5 tokens map to one GPT token, or two GPT tokens map to one T5 token (Fig. 1 C, lower part), we do not use the corresponding GPT distribution and set the T5 distribution to be one-hot. We further note that aligning sequences generated by different tokenizers is a generic problem of contemporary NLP, yet we are not aware of any existing libraries approaching it. We plan to release the implementation of our dynamic program and hope it can be useful for future research.

## 4 Experiments

The objective of the experiments is to see to what extent we can lift up the scaling curve of smaller models' math CoT performance and what the price of doing so is. We conduct model specialization on two model families: the raw pretrained checkpoints, and their instruction-tuned checkpoints (recall that the instruction-tuned checkpoints are generally more capable than the raw pretrained checkpoints, Fig. 1A). Specifically, we consider the raw pretrained T5 Base (250M)/ Large (760M)/ XL (3B)/ XXL (11B), and the instruction-tuned FlanT5s. In Sec. 4.1, we validate our main hypothesis that large models can perform well on a wide range of tasks, while smaller models' ability can be moved from generic abilities to a specialized target ability. Specifically, we show that model specialization can indeed improve CoT math performance for FlanT5-Base/ Large/ XL, while paying the price of generic abilities, i.e., losing all CoT abilities on BigBench Hard and a large portion of answer-only (AO) abilities. In Sec. 4.2, we study the scaling behavior of smaller models and show how specialization lifts up the scaling curve for both T5 and FlanT5. This modifies the previous belief that smaller models exhibit a flat scaling curve (Wei et al., 2022b); we show that their scaling curve becomes log-linear after specialization, not flat. In Sec. 4.3, we show the dynamics and the generalization behavior of specialization: the model's target performance increases gradually while generic abilities decrease gradually during tuning, and there exist tradeoffs between in-distribution vs. OOD performance and in-context vs. zero-shot performance.

### Overall Performance Tradeoff

We test the models' math reasoning ability and generic ability and show their tradeoffs. For the math reasoning ability, we use the code-davinci-002 augmented GSM8K dataset (Cobbe et al., 2021) as our tuning dataset. GSM8K has 7K training questions; for each question we ask the large model to generate 40 different solutions, and taking the correct ones from the generation gives us 130K tuning data points in total. We test the model's out-of-distribution performance on the MultiArith, ASDiv, and SVAMP (collectively denoted M-A-S) datasets (Wei et al., 2022b). None of the datasets has official train-dev-test splits, so we randomly sample 500 instances as the validation set, and use the remaining instances (800 for GSM8K, 400 for MultiArith, 18K for ASDiv, 500 for SVAMP) as the test set. The difference between M-A-S and GSM8K is that they are all primary-school-level arithmetic reasoning problems, but the entities involved in the datasets are different.
For example, GSM8K may consider arithmetic reasoning on foods (e.g, 5 apples + 8 bananas = 13 fruits) and MultiArith may con sider animals (e.g., 2 dogs + 3 cats = 5 animals). This type of out-of-distribution generalization is usually referred to as lexical-level compositional generalization (i.e., both are addition, but the lexicons are different, see Liu et al., 2022). For the generic ability, we use BigBench Hard (BBH, Suzgun et al., 2022) test suite, a list of 26 challenging dataset testing the model's reasoning abilities from multiple dimensions (e.g., date understanding, causal judgement, referential game,.etc). Because of its difficulty and wide-coverage, BBH makes an ideal benchmark testing models' generic ability. For the baseline models, we consider generic large models and concurrent smaller distilled models, specifically: (1). generic large models, ranked according to scale: code-davinci-002 (our teacher model, presumably larger or equal to 175B); LaMDA 137B (Thoppilan et al., 2022) and PaLM 60B (Chowdhery et al., 2022), both are strong generic models for chain-of-thought reasoning; UL2 (Tay et al., 2022), a 20B model with good CoT ability. We will show that specialized FlanT5 11B outperforms UL2 20B and becomes close to PaLM 60B and LaMDA 137B on the target math reasoning task. (2). concurrent works with knowledge distillation from Magister et al. (2022); Shridhar et al. (2022); Ho et al. (2022). We will show that our specialized FlanT5 clearly outperform all of them on the distillation data (with the cost of BBH performance), mostly because we use an instruction-tuned checkpoint (FlanT5) as the base model rather than the raw pretrained checkpoint (T5). **Trading generic abilities for math CoT reasoning** The overall results are in Table 1. After tuning on the seed GSM8K augmented data, all FlanT5 models have improved math reasoning performance with approximately +10 average accuracy gain. We note that our smaller 3B model outperforms the current 11B and 6B distillation models on the GSM8K test set. Despite multiple confounders including the size and the formats of tuning data, we believe our 3B model gets a better performance mostly because the base model is an instruction-tuned FlanT5, rather than the raw pretrained T5. Later we will show that instruction-tuned checkpoint consistently outperforms pretrained checkpoint after specialization (Sec. 4.2), showing the importance of the choice of the base model. Also, although not performing \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & \multicolumn{6}{c}{**CoT Reasoning on Maths Word Problems**} & \multicolumn{3}{c}{**BigBench-Hard**} \\ \cline{3-13} & & \multicolumn{2}{c}{**GSM8K**} & \multicolumn{2}{c}{**MultiArith**} & \multicolumn{2}{c}{**ASDiv**} & \multicolumn{2}{c}{**SVAMP**} & \multicolumn{2}{c}{**AO**} & \multicolumn{2}{c}{**CoT**} \\ \cline{3-13} **Models** & **\#Params.** & **Acc.** & \(\Delta\) & **Acc.** & \(\Delta\) & **Acc.** & \(\Delta\) & **Acc.** & \(\Delta\) & **Acc.** & \(\Delta\) & **Acc.** & \(\Delta\) \\ \hline code-davinci-002 & \(\geq\)175B & 63.1 & - & 95.8 & - & 80.4 & - & 76.4 & - & 56.6 & - & 73.9 & - \\ LaMDA & 137B & 14.8 & - & 45.0 & - & 46.6 & - & 37.5 & - & - & - & - & - \\ PaLM & 60B & 29.9 & - & 75.0 & - & 61.9 & - & 46.7 & - & 37.4 & - & 43.0 & - \\ UL2 & 20B & 4.4 & - & - & - & 16.9 & - & 12.5 & - & - & - & - & - \\ \hline \multicolumn{13}{l}{**Concurrent Works with Knowledge Distillation**} \\ Magister22, T5 & 11B & 21.9 & - & - & - & 42.1 & - & - & - &? & - &? 
& - \\ Shridhar22, GPT & 6B & 21.0 & - & - & - & - & - & - &? & - &? & - \\ Ho22, GPT & 6B & 6.8 & - & 33.3 & - & - & - & - &? & - &? & - \\ \hline \multicolumn{13}{l}{**Our Specialized Models Compared with Baselines**} \\ FlanT5-XXL & 11B & 16.1 & - & 51.7 & - & 36.5 & - & 39.7 & - & 47.4 & - & 41.8 & - \\ + Specialized & 11B & 27.1 & +11.0 & 63.0 & +11.3 & 37.6 & +1.1 & 35.6 & -4.1 & 19.6 & -27.8 & 0.0 & -41.8 \\ \hline FlanT5-XL & 3B & 13.5 & - & 24.0 & - & 20.7 & - & 17.7 & - & 39.9 & - & 35.8 & - \\ + Specialized & 3B & 22.4 & +8.9 & 42.3 & +18.3 & 28.4 & +7.7 & 23.8 & +6.1 & 3.2 & -36.7 & 0.0 & -35.8 \\ \hline FlanT5-Large & 760M & 6.9 & - & 13.0 & - & 10.1 & - & 6.8 & - & 30.3 & - & 30.9 & - \\ + Specialized & 760M & 20.2 & +13.3 & 38.5 & +25.5 & 23.8 & +13.7 & 20.4 & +13.6 & 6.5 & -23.8 & 0.3 & -30.6 \\ \hline FlanT5-Base & 250M & 3.0 & - & 7.0 & - & 4.2 & - & 3.8 & - & 24.2 & - & 25.9 & - \\ + Specialized & 250M & 13.4 & +10.4 & 29.7 & +22.7 & 20.9 & +16.7 & 14.2 & +10.4 & 3.1 & -21.1 & 0.1 & -25.8 \\ \hline \hline \end{tabular} \end{table} Table 1: Overall test set performance. We specialize Flan-T5’s ability from the generic tasks (BigBench Hard) to math reasoning tasks. After paying the cost of BigBench Hard performance (the model loses all the CoT prompting ability and a large portion of the Answer-only (AO) prompting ability), we see the specialized T5 models have improved in-distribution (GSM8K) performance (where our 3B and 11B models outperform concurrent works) as well as out-of-distribution (MultiArith, ASDiv and SVAMP) performance, showing that we can move the model’s ability from generic tasks (BBH) to a specific target task (math reasoning). Magister22: Magister et al. (2022); Shridhar22: Shridhar et al. (2022); Ho22: Ho et al. (2022). well as the teacher model code-davinci-002, our specialized 11B model performance improves to be on par with LaMDA 137B and slightly below PaLM 60B, showing it is indeed possible to make smaller models expert for the particular math reasoning task. The price is also very clear: all specialized models suffer from performance drop on BigBench, specifically, they lose all the CoT prompting abilities on BBH, and a large portion of AO prompting performance. This observation validates our hypothesis: large models can perform well on a wide range of tasks (here PaLM 60B perform well on both math reasoning and BBH), versus smaller model's ability can be moved from generic tasks (BBH) to a specialized target ability (math reasoning), such that their performance on the target task can still match models that are larger than them, e.g., the average performance on the four math datasets LaMDA 137B 35.9 v.s. specialized FlanT5 11B 40.8. ### Scaling Behavior of Smaller Models' CoT Ability Now we look the scaling behavoir to smaller models. We compare the scaling curve of: (1). GPT family small variants (Ada, Babbage, Curie and code-davinci-002); (2). raw pretrained T5 of different scales and their specialized versions; (3). the instruction-tuned FlanT5 of different scales and their specialized versions; The results are shown in Fig. 2 where x-axis denotes the model scale in terms of the number of parameters and y-axis denotes the validation accuracy on the GSM8K dataset. Smaller models have log-linear, but not flat scaling curveInitially, in the original CoT paper Wei et al. (2022) and the subsequent emergent abilities paper (Wei et al., 2022), CoT prompting is believed to be an emergent property that only large models exhibit. 
Smaller models' CoT performance (e.g., that of the smaller GPT variants) was believed to follow a flat scaling curve: model performance does not improve with model scale, as is shown in the left part of Fig. 2A. Later this belief was updated by the FlanT5 paper (Chung et al., 2022), which shows that although the pretrained checkpoint does not have CoT ability, if the model has gone through instruction tuning, smaller models can still exhibit CoT on generic tasks. Our work shows that directly training on CoT data can also lift the flat scaling curve of the raw T5 checkpoints (Fig. 2B) to be log-linear. In Fig. 2C, we consider specialization for the instruction-tuned FlanT5, and show that specialization significantly lifts up the scaling curve of FlanT5, and both curves are also log-linear. All the log-linear curves we observe in Fig. 2 mean that the chain-of-thought behavior of smaller models is not flat, but actually log-linear. This further indicates that chain-of-thought may not be an emergent ability marked by a flat-then-phase-change curve; instead, smaller models follow a log-linear curve just like large models (Kaplan et al., 2020; Hoffmann et al., 2022). **Instruction-tuned checkpoints perform better than raw pretrained checkpoints** Furthermore, comparing \begin{table} \begin{tabular}{l c|c c} \hline \hline **Before** & **Acc** & **After** & **Acc** \\ \hline FlanT5 3B & **13.5** & Specialized & **23.8** \\ T5 3B & 0.73 & Specialized & 20.6 \\ \hline FlanT5 760M & **6.9** & Specialized & **21.8** \\ T5 760M & 0.85 & Specialized & 16.2 \\ \hline FlanT5 250M & **3.0** & Specialized & **15.2** \\ T5 250M & 1.8 & Specialized & 14.2 \\ \hline \hline \end{tabular} \end{table} Table 2: GSM8K validation performance. Instruction-tuned models generally perform better than the raw pretrained checkpoints. Figure 2: The x-axis is the model scale (log of the number of parameters), the y-axis the validation accuracy on GSM8K. **A**: Previously, the community believed that small models have a flat curve for both AO and CoT prompting, and that only when models become large enough does the performance undergo a "phase change" and suddenly increase. **B**: we show that after training on CoT data, the model exhibits log-linear curves where both AO and CoT performance increase with model scale. **C**: for instruction-tuned models (FlanT5) that already exhibit CoT, specialization lifts up the scaling curve, and the two curves are, again, log-linear. All the log-linear curves indicate that chain-of-thought may not be an emergent ability marked by a flat-then-phase-change curve. Here we show that the curve at small scale is not flat but actually log-linear, and continuously increasing the model scale leads to continuously increased accuracy (no sudden phase change). Fig. 2B and Fig. 2C, we see that specialized FlanT5 generally performs better than specialized T5 (though T5 has the larger performance gain). The exact validation performance is shown in Table 2. We also believe that, despite multiple confounders, a major reason why our performance in Table 1 (FlanT5 11B GSM8K accuracy 27.1) is better than that of concurrent distillation methods (Magister22 T5 11B, acc. 21.9) is that we use FlanT5 as our base model whereas they use the raw pretrained T5. The intuitive explanation is that instruction tuning elicits the model's full ability while the raw pretrained models' abilities are not fully released (conceptually, see Fig. 1A; also see Fu et al., 2022; Chung et al., 2022). So for better performance, we recommend using instruction-tuned models in practice.
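As a toy illustration of what "log-linear" means here (our own check, not an analysis from the paper), one can fit the specialized FlanT5 numbers of Table 2 with a straight line in \(\log_{10}\) of the parameter count:

```python
# Illustrative fit of the "log-linear" scaling claim, using the specialized
# FlanT5 GSM8K validation accuracies reported in Table 2. Not from the paper.
import numpy as np

params = np.array([250e6, 760e6, 3e9])   # FlanT5 Base / Large / XL sizes
acc = np.array([15.2, 21.8, 23.8])       # specialized accuracies from Table 2

slope, intercept = np.polyfit(np.log10(params), acc, deg=1)
print(f"acc ~ {slope:.1f} * log10(N) {intercept:+.1f}")

# Extrapolation to 11B is only indicative: Table 1 reports 27.1 for the
# specialized FlanT5-XXL, but on the GSM8K *test* split rather than validation.
print("extrapolated accuracy at 11B:", round(slope * np.log10(11e9) + intercept, 1))
```

With only three points this is obviously not a rigorous fit; it is meant only to make concrete the difference between a log-linear curve and a flat or phase-change curve.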
### Specialization Process and Generalization Behaviors Now we consider the specialization process. Intuitively, during finetuning, the model's ability does not suddenly become the target ability, but goes through a process of moving from generic directions to the target. We save one checkpoint every 10K instances/ updates, then evaluate the checkpoints on (1). in-distribution math performance (GSM8K); (2). out-of-distribution math performance (MultiArith, ASDiv, and SVAMP); (3). generic answer-only prompting performance (BBH-AO); (4). generic chain-of-thought prompting performance (BBH-CoT). We plot the model's performance across the fine-tuning process in Fig. 3. **The dynamics of model specialization**. At the beginning of specialization (Fig. 3 A1 at step 10K and A2 at step 20K), the model immediately loses all BBH CoT ability (accuracy becomes 0), and a large portion of BBH AO ability (accuracy drops from about 0.3 to about 0.1). As tuning goes on (A1 epoch 1, A2 epochs 1 and 2), the model's in-distribution performance (GSM8K) and out-of-distribution performance (MultiArith-ASDiv-SVAMP, M-A-S) gradually increase, meaning that the model can generalize to the three OOD datasets by tuning on GSM8K chain-of-thought data. At the later stage of tuning (Fig. 3 A1 at epoch 2, and A2 at epoch 3), the model's math performance fluctuates and better in-distribution performance does not indicate better out-of-distribution performance. The models' BBH-AO performance drops by a large portion and the BBH-CoT performance dies off completely. Comparing A1 and A2, we also see that smaller models are more data-hungry than larger models (Kaplan et al., 2020): FlanT5 3B's math performance plateaus at about 90K data points, whereas FlanT5 Base's performance continues to increase until epoch 3 (each epoch has 130K datapoints). **In-distribution and out-of-distribution tradeoffs** Because in Fig. 3 A both the in-distribution and the out-of-distribution performance fluctuate, choosing the best in-distribution checkpoint does not necessarily lead to the best out-of-distribution checkpoint. This observation is shown in Table 3 where if we select the best model based on the \begin{table} \begin{tabular}{l l r r} \hline \hline **Model** & **Selection** & **In-dist** & **Out-of-dist** \\ \hline FlanT5 3B & GSM8K Dev & 23.8 & 33.2 \\ & M-A-S Dev & 21.2 -2.6 & 35.0 +1.8 \\ \hline FlanT5 Large & GSM8K Dev & 21.8 & 28.7 \\ & M-A-S Dev & 19.2 -2.6 & 30.5 +1.8 \\ \hline FlanT5 Base & GSM8K Dev & 15.2 & 21.7 \\ & M-A-S Dev & 13.2 -2.0 & 22.0 +0.3 \\ \hline \hline \end{tabular} \end{table} Table 3: The model selection method induces tradeoffs between in-distribution and out-of-distribution performance. Figure 3: **A1 and A2**: model specialization curves of FlanT5. At the beginning of specialization (e.g., A1 step 10K), the model immediately loses all BBH CoT ability, and a large portion of BBH AO ability. As tuning goes on (e.g., A1 epoch 1), the model's in-distribution performance (GSM8K) and out-of-distribution performance (MultiArith-ASDiv-SVAMP, M-A-S) gradually increase. At the later stage of tuning (e.g., A1 epoch 2), the model's math performance fluctuates and better in-distribution performance does not indicate better out-of-distribution performance. Smaller models need to see the data more times than larger models (A2 has 3 epochs and A1 has 2). **B**: differences between the two distillation approaches. Distribution matching gives faster and lower loss convergence than sample matching.
GSM8K validation set, it cannot achieve the best validation performance in the M-A-S OOD setting. Yet choosing the best model based on the M-A-S validation performance leads to a smaller performance drop on GSM8K. Given this observation, in practice we would recommend choosing the validation checkpoints according to the specific goal: if the goal is in-distribution generalization, use GSM8K; if the goal is OOD generalization, users may want to use their own validation set (in our case, the M-A-S datasets).

### Further Design Choices Analysis

In this section, we study two more design choices we have discussed before: (1). using distribution matching vs. sample matching for distillation (recall that distribution matching minimizes the KL divergence between FlanT5's per-step autoregressive distribution and GPT's autoregressive distribution, whereas sample matching maximizes the likelihood of the reasoning paths generated by GPT); (2). the influence of data formats, and how in-context/ zero-shot training data induce different behaviors of the specialized model. **Distribution matching gives faster convergence than sample matching**. Fig. 3 B shows the training loss of distribution matching vs. sample matching. The model converges faster under distribution matching, and the corresponding loss is lower. In terms of validation performance, the two approaches do not differ substantially. Yet since distribution matching converges faster, in practice it may still be preferred, especially when the model becomes large and tuning becomes expensive. **In-context data preserves zero-shot ability; zero-shot data loses in-context ability** This is actually a very interesting observation. Specifically, in Fig. 4 A, we tune the model with only in-context data (formats B1 and B2 in Fig. 1), then test the model's in-context learning and zero-shot generalization performance during validation. In Fig. 4 B, we tune the model with only zero-shot data (no in-context examples prepended, formats B3 and B4 in Fig. 1), then test whether the model can still do in-context learning. As shown in Fig. 4 A, when tuning with in-context data, the model can do both in-context and zero-shot generalization during validation, even though the model is not trained with zero-shot data. In comparison, in Fig. 4 B, when tuning with zero-shot data, the model's zero-shot performance increases, but it gradually loses its in-context learning ability. This result aligns with the empirical observation on other large models; for example, text-davinci-002 has better zero-shot performance than code-davinci-002, but worse in-context learning performance (Fu et al., 2022). This means that the model's ability tradeoff not only happens between math and generic ability, but also between zero-shot and in-context learning ability. In practice, we would recommend mixing the different data formats during tuning (this is why we mix the formats) to maintain a balance between in-context and zero-shot abilities, or adjusting the ratio of the different formats according to the specific use case.

## 5 Conclusion

In this work, we study the problem of specializing smaller language models toward multi-step reasoning using chain-of-thought prompting. We show that it is indeed possible to move a small model's capacity away from generic directions and concentrate it on the target math reasoning task.
## 5 Conclusion In this work, we study the problem of specializing smaller language models toward multi-step reasoning using chain-of-thought prompting. We show that it is indeed possible to concentrate the small models' ability from generic directions to the target math reasoning task. After specialization, we show that the model exhibits a log-linear scaling curve where performance increases smoothly as model scale increases; this corrects the previous hypothesis that small models have a flat scaling curve which does not improve with model scale. We show the importance of using instruction-tuned checkpoints as the base model, because their generalization performance is better than that of the raw pretrained checkpoints. Multiple tradeoffs arise during model specialization, including the loss of BBH performance, the balance between in-distribution and out-of-distribution generalization, and the balance between in-context learning and zero-shot generalization ability. We hope our practice and discoveries can serve as an important attempt towards specialized smaller models in the new research paradigm set by LLMs. Figure 4: The x-axis shows the number of tuning datapoints and the y-axis the validation accuracy on GSM8K. Both figures use FlanT5 3B as the base model. **A**: training with in-context examples automatically gives the model zero-shot ability. **B**: training with zero-shot examples sacrifices in-context ability.
2308.06258
Conforming Finite Element Function Spaces in Four Dimensions, Part II: The Pentatope and Tetrahedral Prism
In this paper, we present explicit expressions for conforming finite element function spaces, basis functions, and degrees of freedom on the pentatope and tetrahedral prism elements. More generally, our objective is to construct finite element function spaces that maintain conformity with infinite-dimensional spaces of a carefully chosen de Rham complex. This paper is a natural extension of the companion paper entitled "Conforming Finite Element Function Spaces in Four Dimensions, Part I: Foundational Principles and the Tesseract" by Nigam and Williams, (2023). In contrast to Part I, in this paper we focus on two of the most popular elements which do not possess a full tensor-product structure in all four coordinate directions. We note that these elements appear frequently in existing space-time finite element methods. In order to build our finite element spaces, we utilize powerful techniques from the recently developed 'Finite Element Exterior Calculus'. Subsequently, we translate our results into the well-known language of linear algebra (vectors and matrices) in order to facilitate implementation by scientists and engineers.
David M. Williams, Nilima Nigam
2023-08-11T17:47:29Z
http://arxiv.org/abs/2308.06258v2
Conforming Finite Element Function Spaces in Four Dimensions, Part II: The Pentatope and Tetrahedral Prism ###### Abstract In this paper, we present explicit expressions for conforming finite element function spaces, basis functions, and degrees of freedom on the pentatope and tetrahedral prism elements. More generally, our objective is to construct finite element function spaces that maintain conformity with infinite-dimensional spaces of a carefully chosen de Rham complex. This paper is a natural extension of the companion paper entitled "Conforming Finite Element Function Spaces in Four Dimensions, Part I: Foundational Principles and the Tesseract" by Nigam and Williams, (2023). In contrast to Part I, in this paper we focus on two of the most popular elements which do not possess a full tensor-product structure in all four coordinate directions. We note that these elements appear frequently in existing space-time finite element methods. In order to build our finite element spaces, we utilize powerful techniques from the recently developed 'Finite Element Exterior Calculus'. Subsequently, we translate our results into the well-known language of linear algebra (vectors and matrices) in order to facilitate implementation by scientists and engineers. keywords: space-time; finite element methods; tetrahedral prism; pentatope; four dimensions; finite element exterior calculus Msc: [2010] 14F40, 52B11, 58A12, 65D05, 74S05 + Footnote †: journal: Computers & Mathematics with Applications ## 1 Introduction Finite Element Exterior Calculus (FEEC) is a powerful and elegant framework for constructing exact sequences of finite element approximation spaces in arbitrary dimensions, (see for instance, the landmark paper [1]). There has been considerable literature dedicated to the development and analysis of FEEC on simplicial, tensorial, and prism-like elements. Our goal in this and a companion paper [2], is to restrict these results to the specific case of \(\mathbb{R}^{4}\), and to present an explicit construction of these families of finite elements. Notable previous contributions in this direction are due to [3] and [4], and several more important contributions will be discussed below. Broadly speaking, our construction uses a different de Rham complex than that of the previous work, (essentially, the adjoint of the complex which was used in [3]). In the companion paper [2], we presented several practical examples in \(\mathbb{R}^{4}\) to motivate our definition of a de Rham complex and the associated traces. We shall only briefly review this material in the present paper. In this work, we focus on developing conforming finite element function spaces for the _pentatope_ and _tetrahedral prism_. The pentatope is a generalization of the triangle to four dimensions, and the tetrahedral prism is a generalization of the triangular prism to four dimensions. In what follows, we review some of the relevant literature on these elements. ### Background The pentatope and tetrahedral prism have been frequently used in space-time finite element methods. For example, pentatopes have been used by Behr and coworkers to simulate linear and non-linear advection-diffusion problems for fluid dynamics applications with moving boundaries [5; 6; 7; 8; 9]. 
In their work, a _partially_-unstructured pentatope mesh is formed by extruding a three-dimensional tetrahedral mesh in the temporal direction to create four-dimensional tetrahedral prism elements, and thereafter, these tetrahedral prism elements are subdivided into pentatope elements in accordance with a Delaunay criterion [5]. In addition, there is considerable interest in generating _fully_-unstructured pentatope meshes, as evidenced by the efforts of Foteinos and Chrisochoides [10], Caplan et al. [11; 12; 13; 14], Frontin et al. [15], and Anderson et al. [16]. Broadly speaking, this latter work focuses on developing a more direct, Delaunay-based approach for generating unstructured meshes of pentatopes (in contrast to the less direct extrusion technique of Behr and coworkers). Let us now turn our attention to the tetrahedral prism. Meshes of tetrahedral prisms can be generated in a very straightforward fashion, as we only need to extrude an existing tetrahedral mesh in order to generate a completely valid, boundary-conforming mesh of tetrahedral prisms (see above). We note that Tezduyar, Bazilevs, and coworkers have performed extensive work on space-time methods for tetrahedral prisms [17; 18; 19; 20; 21; 22; 23; 24]. They have used these methods to solve a host of fluid-structure interaction problems for biomedical, turbomachinery, and wind-turbine applications, amongst others. The sheer volume of their research on this topic is quite impressive, and we will not attempt to cover it all here. However, the interested reader is encouraged to consult [25] for a concise review. To the authors' knowledge, no one has explicitly constructed high-order finite element spaces which are the equivalents of H(curl)- or H(div)-conforming spaces on pentatopes or tetrahedral prism elements. Now, it is important to note that there are inherent difficulties associated with constructing these finite element spaces due to the absence of a complete tensor-product structure in all four coordinate directions. Fortunately, this exercise is still made possible by the tools of FEEC [26; 1; 27]. We refer the interested reader to part I of this paper for a detailed review of FEEC and its related publications. In this work, we will only focus on a few of these publications that are directly relevant. It turns out that FEEC techniques have already been employed to _implicitly_ construct high-order conforming finite element spaces on the pentatope (see Arnold et al. [26]), and on the tetrahedral prism (see Natale [28] and McRae et al. [29]). The goal of this paper is to extend this work, and generate _explicit_ expressions for the high-order conforming finite element spaces, basis functions, and degrees of freedom for the pentatope and tetrahedral prism. We believe that these explicit representations are essential to facilitating implementation and utilization of the elements by scientists and engineers. ### Overview of the Paper The remainder of this paper is outlined as follows. In section 2, we introduce some notation and essential ideas. In section 3, we introduce our particular de Rham complex, and the associated derivative operators, Sobolev spaces, and maps. In sections 4 and 5, we present explicit conforming finite element spaces on the pentatope and tetrahedral prism, respectively. Finally, in section 6, we summarize the contributions of this paper. 
## 2 Notation and Preliminaries This section begins by introducing the reference pentatope and tetrahedral prism elements which will be extensively used throughout the paper. Thereafter, we review some well-known degrees of freedom on tetrahedra and triangular prisms, which will be used extensively in our finite element constructions. ### The Reference Pentatope Consider the following definition of a reference pentatope \[\widehat{K}:=\mathfrak{T}^{4}:=\left\{x=\left(x_{1},x_{2},x_{3},x_{4}\right)\in \mathbb{R}^{4}\right|-1\leq x_{1},x_{2},x_{3},x_{4}\leq 1,\;x_{1}+x_{2}+x_{3}+x_{4} \leq-2\right\},\] with vertices \[v_{1}=[1,-1,-1,-1]^{T},\quad v_{2}=[-1,1,-1,-1]^{T},\quad v_{3}=[- 1,-1,1,-1]^{T},\] \[v_{4}=[-1,-1,-1,1]^{T},\quad v_{5}=[-1,-1,-1,-1]^{T}.\] Next, we introduce the definition of an arbitrary pentatope \(K\) with vertices \(v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{5}^{\prime}\). There exists a bijective mapping between the reference pentatope and the arbitrary pentatope \(\phi:\widehat{K}\to K\), such that \[\phi\left(x_{1},x_{2},x_{3},x_{4}\right)=\sum_{i=1}^{5}v_{i}^{ \prime}N_{i}\left(x_{1},x_{2},x_{3},x_{4}\right),\] where \[N_{1}=\frac{x_{1}+1}{2},\quad N_{2}=\frac{x_{2}+1}{2},\quad N_{3 }=\frac{x_{3}+1}{2},\quad N_{4}=\frac{x_{4}+1}{2},\quad N_{5}=-\frac{x_{1}+x_ {2}+x_{3}+x_{4}}{2}-1.\] ### The Reference Tetrahedral Prism Consider the following definition of a reference tetrahedral prism \[\widehat{K}:=\mathfrak{N}^{4}:=\left\{x=\left(x_{1},x_{2},x_{3},x _{4}\right)\in\mathbb{R}^{4}\right|-1\leq x_{1},x_{2},x_{3},x_{4}\leq 1,\;x_{1}+x_{2 }+x_{3}\leq-1\right\},\] with vertices \[v_{1}=[1,-1,-1,-1]^{T},\quad v_{2}=[-1,1,-1,-1]^{T},\quad v_{3}= [-1,-1,1,-1]^{T},\quad v_{4}=[-1,-1,-1,-1]^{T},\] \[v_{5}=[1,-1,-1,1]^{T},\quad v_{6}=[-1,1,-1,1]^{T},\quad v_{7}=[ -1,-1,1,1]^{T},\quad v_{8}=[-1,-1,-1,1]^{T}.\] Next, we introduce the definition of an arbitrary tetrahedral prism \(K\) with vertices \(v_{1}^{\prime},v_{2}^{\prime},\ldots,v_{8}^{\prime}\). There exists a bijective mapping between the reference tetrahedral prism and the arbitrary tetrahedral prism \(\phi:\widehat{K}\to K\), such that \[\phi\left(x_{1},x_{2},x_{3},x_{4}\right)=\sum_{i=1}^{8}v_{i}^{ \prime}N_{i}\left(x_{1},x_{2},x_{3},x_{4}\right),\] where \[N_{1}=\frac{1}{4}\left(x_{1}+1\right)\left(x_{4}-1\right),\quad N _{2}=\frac{1}{4}\left(x_{2}+1\right)\left(x_{4}-1\right),\] \[N_{3}=\frac{1}{4}\left(x_{3}+1\right)\left(x_{4}-1\right),\quad N _{4}=-\frac{1}{4}\left(x_{1}+x_{2}+x_{3}+1\right)\left(x_{4}-1\right),\] \[N_{5}=\frac{1}{4}\left(x_{1}+1\right)\left(x_{4}+1\right),\quad N _{6}=\frac{1}{4}\left(x_{2}+1\right)\left(x_{4}+1\right),\] \[N_{7}=\frac{1}{4}\left(x_{3}+1\right)\left(x_{4}+1\right),\quad N _{8}=-\frac{1}{4}\left(x_{1}+x_{2}+x_{3}+1\right)\left(x_{4}+1\right).\] Figures 1 and 2 illustrate a generic pentatope and tetrahedral prism, respectively. We summarize geometric information regarding these four-dimensional elements in Table 1. Here, the \(d\)-dimensional simplex, simplicial prism, and cube are denoted respectively by \(\mathfrak{T}^{d}\), \(\mathfrak{N}^{d}\), and \(\mathfrak{H}^{d}\). Figure 1: Illustration of a generic pentatope. Figure 2: Illustration of a generic tetrahedral prism. 
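Before proceeding, the following is a small NumPy sketch, not taken from the paper, of the reference-to-physical map \(\phi\) for the pentatope built from the shape functions \(N_{1},\ldots,N_{5}\) given above; the function and variable names are illustrative only.

```python
# Illustrative sketch: evaluate phi(x) = sum_i v_i' N_i(x) on the reference pentatope
# and check that the reference vertices are reproduced and that the N_i sum to one.
import numpy as np

def pentatope_shape_functions(x):
    """Shape functions N_1..N_5 of the reference pentatope at x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    return np.array([
        (x1 + 1.0) / 2.0,
        (x2 + 1.0) / 2.0,
        (x3 + 1.0) / 2.0,
        (x4 + 1.0) / 2.0,
        -(x1 + x2 + x3 + x4) / 2.0 - 1.0,
    ])

def phi(x, physical_vertices):
    """Map a reference point x to the pentatope with vertices v_1', ..., v_5' (rows)."""
    return pentatope_shape_functions(x) @ np.asarray(physical_vertices, dtype=float)

ref_vertices = np.array([[ 1, -1, -1, -1],
                         [-1,  1, -1, -1],
                         [-1, -1,  1, -1],
                         [-1, -1, -1,  1],
                         [-1, -1, -1, -1]], dtype=float)

# Choosing the physical vertices equal to the reference ones must give the identity map.
for v in ref_vertices:
    assert np.allclose(phi(v, ref_vertices), v)
# The shape functions form a partition of unity at any point of the element.
assert np.isclose(pentatope_shape_functions([-0.7, -0.9, -0.3, -0.8]).sum(), 1.0)
```

An entirely analogous map for the tetrahedral prism is obtained by replacing the five shape functions above with the eight shape functions listed for that element.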
### Notation Let us begin by introducing the following generic set of differential forms on the contractible region \(\Omega\) \[0\text{-forms},\qquad\omega\in\Lambda^{0}(\Omega),\qquad\omega=\omega,\] \[1\text{-forms},\qquad\omega\in\Lambda^{1}(\Omega),\qquad\omega= \omega_{1}dx^{1}+\omega_{2}dx^{2}+\omega_{3}dx^{3}+\omega_{4}dx^{4},\] \[2\text{-forms},\qquad\omega\in\Lambda^{2}(\Omega),\qquad\omega= \omega_{12}dx^{1}\wedge dx^{2}+\omega_{13}dx^{1}\wedge dx^{3}+\omega_{14}dx^{ 1}\wedge dx^{4}\] \[\qquad\qquad\qquad\qquad\qquad\quad+\omega_{23}dx^{2}\wedge dx^{3 }+\omega_{24}dx^{2}\wedge dx^{4}+\omega_{34}dx^{3}\wedge dx^{4},\] \[3\text{-forms},\qquad\omega\in\Lambda^{3}(\Omega),\qquad\omega= \omega_{123}dx^{1}\wedge dx^{2}\wedge dx^{3}+\omega_{124}dx^{1}\wedge dx^{2} \wedge dx^{4}\] \[\qquad\qquad\qquad\qquad\quad+\omega_{134}dx^{1}\wedge dx^{3} \wedge dx^{4}+\omega_{234}dx^{2}\wedge dx^{3}\wedge dx^{4},\] \[4\text{-forms},\qquad\omega\in\Lambda^{4}(\Omega),\qquad\omega= \omega_{1234}dx^{1}\wedge dx^{2}\wedge dx^{3}\wedge dx^{4},\] where \(\omega\in\Lambda^{s}(\Omega)\) for \(s=0,1,2,3,4\) is the space of \(s\)-forms. For the sake of convenience, each differential form has a proxy which is obtained by applying a conversion operator (denoted by \(\Upsilon_{k}\), \(k=0,1,2,3,4\)) to each such that \[\Upsilon_{0}\omega=\omega,\qquad\Upsilon_{4}\omega=\omega_{1234},\] \[\Upsilon_{1}\omega=\begin{bmatrix}\omega_{1}\\ \omega_{2}\\ \omega_{3}\\ \omega_{4}\end{bmatrix},\quad\Upsilon_{2}\omega=\frac{1}{2}\begin{bmatrix}0& \omega_{12}&\omega_{13}&\omega_{14}\\ -\omega_{12}&0&\omega_{23}&\omega_{24}\\ -\omega_{13}&-\omega_{23}&0&\omega_{34}\\ -\omega_{14}&-\omega_{24}&-\omega_{34}&0\end{bmatrix},\quad\Upsilon_{3} \omega=\begin{bmatrix}\omega_{234}\\ -\omega_{134}\\ \omega_{124}\\ -\omega_{123}\end{bmatrix}.\] Let us denote by \(V_{k}\Lambda^{s}(\Omega)\) the space of \(k\)-th order polynomial shape functions for the \(s\)-forms on \(\Omega\). Next, we let \(\Sigma^{k,s}(\Omega)\) denote the _degrees of freedom (dofs)_. Technically speaking, the dofs are a collection of linear functionals on \(V_{k}\Lambda^{s}(\Omega)\) which is dual to this space. Throughout this paper, we will construct explicit descriptions of finite element triples of the following form \[\left(\widehat{K},V_{k}\Lambda^{s}(\widehat{K}),\Sigma^{k,s}(\widehat{K}) \right).\] Here, \(\widehat{K}=\mathfrak{T}^{4}\) and \(\mathfrak{N}^{4}\) are the reference elements. In accordance with standard conventions, we let \(P^{k}(x_{1},x_{2},x_{3},x_{4})\) denote the space of polynomials of degree \(\leq k\). In addition, we use \(\tilde{P}^{k}\) to represent the space of homogeneous polynomials of total degree exactly \(k\). Generally speaking, if we set \(x=(x_{1},x_{2},x_{3},x_{4})\) then it follows that \[\sum_{|\alpha|\leq k}a_{\alpha}x^{\alpha}\in P^{k}(x_{1},x_{2},x_{3},x_{4}), \qquad\sum_{|\alpha|=k}a_{\alpha}x^{\alpha}\in\tilde{P}^{k}(x_{1},x_{2},x_{3}, x_{4}),\] \begin{table} \begin{tabular}{c|c c} \hline \hline & Pentatope & Tetrahedral Prism \\ & \(\mathfrak{T}^{4}\) & \(\mathfrak{N}^{4}\) \\ \hline Vertices & 5 & 8 \\ \hline Edges & 10 & 16 \\ \hline Triangular faces \(\mathfrak{T}^{2}\) & 10 & 8 \\ \hline Quadrilateral faces \(\mathfrak{H}^{2}\) & 0 & 6 \\ \hline Tetrahedral facets \(\mathfrak{T}^{3}\) & 5 & 2 \\ \hline Triangular prism facets \(\mathfrak{N}^{3}\) & 0 & 4 \\ \hline \hline \end{tabular} \end{table} Table 1: Geometric information for reference pentatope and tetrahedral prism. 
where \(\alpha\) is the multi-index, and \(a_{\alpha}\) are constants. In almost all cases, we will suppress the arguments \((x_{1},x_{2},x_{3},x_{4})\) for the sake of brevity. Next, the symbol \(Q^{l,m,n,q}(x_{1},x_{2},x_{3},x_{4})\) denotes standard tensorial polynomials of maximal degree \(l,m,n,q\). These polynomials can be written explicity as follows \[Q^{l,m,n,q}(x_{1},x_{2},x_{3},x_{4})=P^{l}(x_{1})P^{m}(x_{2})P^{n}(x_{3})P^{q}( x_{4}).\] In addition, consider a bijective map from \(6\)-vectors to skew-symmetric matrices in \(\mathbb{K}\): \[\mathcal{L}\left(\cdot\right):\mathbb{R}^{6}\to\mathbb{K}:\mathcal{L}\left( \begin{bmatrix}w_{12}\\ w_{13}\\ w_{14}\\ w_{23}\\ w_{24}\\ w_{34}\end{bmatrix}\right):=\begin{bmatrix}0&w_{12}&w_{13}&w_{14}\\ -w_{12}&0&w_{23}&w_{24}\\ -w_{13}&-w_{23}&0&w_{34}\\ -w_{14}&-w_{24}&-w_{34}&0\end{bmatrix}. \tag{2.1}\] Lastly, we introduce the following two operators that denote the trace of a quantity \(u\) on to a \(n\)-dimensional submanifold \(f\): \[\operatorname{tr}[f](u),\qquad\operatorname{Tr}[f](u).\] In almost all cases, the argument \([f]\) is omitted when the submanifold of interest is clear. The first trace operator \(\operatorname{tr}[f](u)\) refers to a well-defined restriction of \(u\) to \(f\), where the restriction is a scalar, \(4\)-vector, or \(4\times 4\) matrix. The second trace operator \(\operatorname{Tr}[f](u)\) refers to a well-defined restriction of \(u\) to \(f\), where the restriction is a scalar, \(n\)-vector, or \(n\times n\) matrix. Generally speaking, there is (at least) a surjective map between the ranges of the two trace operators, such that \[\Xi:\operatorname{tr}[f](u)\longrightarrow\operatorname{Tr}[f](u).\] Here, we mean that the scalar, \(4\)-vector, or \(4\times 4\) matrix which is denoted by \(\operatorname{tr}[f](u)\) can always be identified with a scalar, \(n\)-vector, or \(n\times n\) matrix which is denoted by \(\operatorname{Tr}[f](u)\). ### Degrees of Freedom Throughout this paper, our construction of finite element triples will strongly depend on the use of well-known dofs on tetrahedral and triangular-prismatic _facets_, and their associated faces, edges, and vertices. For the sake of completeness, we review these dofs in what follows. #### 2.4.1 Vertex Degrees of Freedom In accordance with standard principles from differential geometry, vertex degrees of freedom are only well-defined for \(0\)-forms. For the pentatope and tetrahedral prism, we specify the vertex degrees of freedom \(\Sigma^{k,0}(v)\) as the vertex values of the polynomial \(0\)-form. We note that there are \(5\) such degrees of freedom for the pentatope \(\mathfrak{T}^{4}\) and \(8\) such degrees of freedom for the tetrahedral prism \(\mathfrak{N}^{4}\). #### 2.4.2 Edge Degrees of Freedom Next, we recall that the edge degrees of freedom are only defined for \(0\)- and \(1\)-forms. One may define \(e\) as an edge of an element \(\widehat{K}\), where \(\widehat{K}=\mathfrak{T}^{4}\) or \(\mathfrak{N}^{4}\), and let \(u\in V_{k}\Lambda^{0}(\widehat{K})\) be a \(0\)-form proxy. Next, one may construct edge degrees of freedom for \(u\) as follows \[M_{e}(u):=\left\{\int_{e}\operatorname{Tr}(u)q,\qquad\forall q\in P^{k-2}(e), \qquad\text{ for each edge }\,e\,\text{ of }\,\widehat{K}\right\}. \tag{2.2}\] For pentatopes \(\mathfrak{T}^{4}\) there are \(10(k-1)\) such degrees of freedom, and for tetrahedral prisms \(\mathfrak{N}^{4}\) there are \(16(k-1)\) such degrees of freedom. Consider a 1-form proxy \(U\in V_{k}\Lambda^{1}(\widehat{K})\). 
One may define its edge degrees of freedom as follows \[M_{e}(U):=\left\{\int_{e}\text{Tr}(U)\cdot\tau q,\qquad\forall q\in P^{k-1}(e), \qquad\text{ for each edge }\,e\,\text{ of }\,\widehat{K}\right\}, \tag{2.3}\] where \(\tau\) is a unit vector in the direction of \(e\). For \(\mathfrak{T}^{4}\), there are \(10k\) such degrees of freedom, and for \(\mathfrak{N}^{4}\), there are \(16k\) such degrees of freedom. #### 2.4.3 Face Degrees of Freedom In addition, we recall that face degrees of freedom are only defined for 0-, 1- and 2-forms. We let \(f\) denote a single face of an element \(\widehat{K}\); in the context of the present paper, this face can be either triangular or quadrilateral. **Triangular faces:** Consider polynomial 0-forms \(u\in V_{k}\Lambda^{0}(\widehat{K})\). The associated face degrees of freedom on a triangular face \(f=\mathfrak{T}^{2}\) are defined as \[M_{f}(u):=\left\{\int_{f}\text{Tr}(u)q,\quad\forall q\in P^{k-3}(f)\right\}. \tag{2.4}\] Consider polynomial 1-forms \(U\in V_{k}\Lambda^{1}(\widehat{K})\) for which the face degrees of freedom are \[M_{f}(U):=\left\{\int_{f}(\text{Tr}(U)\times\nu)\cdot(q\times\nu),\quad\forall q \in(P^{k-2}(f))^{2},\quad q\cdot\nu=0\right\}, \tag{2.5}\] where \(\nu\) denotes a unit normal vector to the face \(f\). These definitions are slightly different from the standard definitions of edge degrees of freedom for Nedelec-type elements, but can be shown to be the same, (see Remark 5.31 in [30]). Finally, consider polynomial 2-forms, \(U\in V_{k}\Lambda^{2}(\widehat{K})\) for which the face degrees of freedom are \[M_{f}(U):=\left\{\int_{f}(\text{Tr}(U)\cdot\nu)q,\quad\forall q\in P^{k-1}(f) \right\}. \tag{2.6}\] **Quadrilateral faces:** Consider polynomial 0-forms \(u\in V_{k}\Lambda^{0}(\widehat{K})\). The associated face degrees of freedom on a quadrilateral face \(f=\mathfrak{H}^{2}\) are defined as \[M_{f}(u):=\left\{\int_{f}\text{Tr}(u)q,\quad\forall q\in Q^{k-2,k-2}(f) \right\}. \tag{2.7}\] Consider polynomial 1-forms \(U\in V_{k}\Lambda^{1}(\widehat{K})\) for which the face degrees of freedom are \[M_{f}(U):=\left\{\int_{f}(\text{Tr}(U)\times\nu)\cdot q,\quad\forall q\in Q^{ k-2,k-1}\times Q^{k-1,k-2}(f)\right\}. \tag{2.8}\] Finally, consider polynomial 2-forms \(U\in V_{k}\Lambda^{2}(\widehat{K})\) for which the face degrees of freedom are \[M_{f}(U):=\left\{\int_{f}(\text{Tr}(U)\cdot\nu)q,\quad\forall q\in Q^{k-1,k-1} (f)\right\}. \tag{2.9}\] #### 2.4.4 Facet Degrees of Freedom We recall that facet degrees of freedom are only defined for 0-, 1-, 2-, and 3-forms. The elements under consideration in this paper will only have tetrahedral or triangular-prismatic facets. **Tetrahedral facets:** Let \(\mathcal{F}=\mathfrak{T}^{3}\) denote a tetrahedral facet. For 0-forms \(u\in V_{k}\Lambda^{0}(\widehat{K})\), we can specify facet degrees of freedom as \[M_{\mathcal{F}}(u):=\left\{\int_{\mathcal{F}}\text{Tr}(u)q,\qquad q\in P^{k-4} (\mathcal{F})\right\}. \tag{2.10}\] For polynomial 1-forms \(U\in V_{k}\Lambda^{1}(\widehat{K})\), we specify the facet degrees of freedom as \[M_{\mathcal{F}}(U):=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)\cdot q,\qquad q \in(P^{k-3}(\mathcal{F}))^{3}\right\}. \tag{2.11}\] For polynomial 2-forms \(U\in V_{k}\Lambda^{2}(\widehat{K})\), we specify the facet degrees of freedom as \[M_{\mathcal{F}}(U):=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)\cdot q, \qquad q\in(P^{k-2}(\mathcal{F}))^{3}\right\}. 
\tag{2.12}\] Lastly, for polynomial 3-forms \(U\in V_{k}\Lambda^{3}(\widehat{K})\), we specify the facet degrees of freedom as \[M_{\mathcal{F}}(U):=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)q,\qquad q\in P^{k-1}(\mathcal{F})\right\}. \tag{2.13}\] **Triangular-prismatic facets:** Let \(\mathcal{F}=\mathfrak{N}^{3}\) denote a triangular-prismatic facet. For 0-forms \(u\in V_{k}\Lambda^{0}(\widehat{K})\), we can specify facet degrees of freedom as \[M_{\mathcal{F}}(u):=\left\{\int_{\mathcal{F}}\operatorname{Tr}(u)q,\qquad q\in Q^{k-2}(\mathfrak{H}^{1})\times P^{k-3}(\mathfrak{T}^{2})\right\}. \tag{2.14}\] For polynomial 1-forms \(U\in V_{k}\Lambda^{1}(\widehat{K})\), we specify the facet degrees of freedom as \[M_{\mathcal{F},1}(U) :=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)\cdot q,\qquad q\in Q^{k-2}(\mathfrak{H}^{1})\times\left[P^{k-2}(\mathfrak{T}^{2}),P^{k-2}(\mathfrak{T}^{2}),0\right]^{T}\right\},\] \[M_{\mathcal{F},2}(U) :=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)\cdot q,\qquad q\in Q^{k-1}(\mathfrak{H}^{1})\times\left[0,0,P^{k-3}(\mathfrak{T}^{2})\right]^{T}\right\},\] \[M_{\mathcal{F}}(U) :=M_{\mathcal{F},1}(U)\cup M_{\mathcal{F},2}(U). \tag{2.15}\] For polynomial 2-forms \(U\in V_{k}\Lambda^{2}(\widehat{K})\), we specify the facet degrees of freedom as \[M_{\mathcal{F},1}(U) :=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)\cdot q,\qquad q\in Q^{k-1}(\mathfrak{H}^{1})\times\left[P^{k-2}(\mathfrak{T}^{2}),P^{k-2}(\mathfrak{T}^{2}),0\right]^{T}\right\},\] \[M_{\mathcal{F},2}(U) :=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)\cdot q,\qquad q\in Q^{k-2}(\mathfrak{H}^{1})\times\left[0,0,P^{k-1}(\mathfrak{T}^{2})\right]^{T}\right\},\] \[M_{\mathcal{F}}(U) :=M_{\mathcal{F},1}(U)\cup M_{\mathcal{F},2}(U). \tag{2.16}\] Lastly, for polynomial 3-forms \(U\in V_{k}\Lambda^{3}(\widehat{K})\), we specify the facet degrees of freedom as \[M_{\mathcal{F}}(U):=\left\{\int_{\mathcal{F}}\operatorname{Tr}(U)q,\qquad q\in Q^{k-1}(\mathfrak{H}^{1})\times P^{k-1}(\mathfrak{T}^{2})\right\}. \tag{2.17}\] ## 3 Sobolev Spaces and Associated Mappings In accordance with the standard FEEC approach, we introduce the _de Rham_ complex for smooth functions in three dimensions \[C^{\infty}(\Omega,\mathbb{R})\quad\xrightarrow{\nabla}\quad C^{\infty}(\Omega,\mathbb{R}^{3})\quad\xrightarrow{\nabla\times}\quad C^{\infty}(\Omega,\mathbb{R}^{3})\quad\xrightarrow{\nabla\cdot}\quad C^{\infty}(\Omega,\mathbb{R}).\] Here, we have used the conventional derivative operators for three dimensions: namely, \(\nabla\) denotes the gradient, \(\nabla\times\) denotes the curl, and \(\nabla\cdot\) denotes the divergence. Next, we can also define the de Rham complex for smooth functions in four dimensions \[C^{\infty}(\Omega,\mathbb{R})\quad\xrightarrow{\mathrm{grad}}\quad C^{\infty}(\Omega,\mathbb{R}^{4})\quad\xrightarrow{\mathrm{skewGrad}}\quad C^{\infty}(\Omega,\mathbb{K})\quad\xrightarrow{\mathrm{curl}}\quad C^{\infty}(\Omega,\mathbb{R}^{4})\quad\xrightarrow{\mathrm{div}}\quad C^{\infty}(\Omega,\mathbb{R}).\] Here, we have introduced new first-derivative operators for functions in \(\mathbb{R}^{4}\). We will provide precise definitions for these operators in what follows. In four dimensions, 'grad' is the standard gradient operator which can be applied to a scalar, \(u\in L^{2}\left(\Omega,\mathbb{R}\right)\), such that \(\left[\mathrm{grad}u\right]_{i}=\partial_{i}u\) for \(i=1,2,3,4\). 
In addition,'skwGrad' is an antisymmetric gradient operator which can be applied to a 4-vector, \(E\in L^{2}\left(\Omega,\mathbb{R}^{4}\right)\), as follows \[\left[\mathrm{skwGrad}\,E\right]=\frac{1}{2}\left(\left[\mathrm{ Grad}\,E\right]^{T}-\left[\mathrm{Grad}\,E\right]\right),\] where \([\mathrm{Grad}E]_{ij}=\partial_{j}E_{i}\) for \(i=1,2,3,4\) and \(j=1,2,3,4\). Next, 'curl' is a derivative operator which can be applied to a \(4\times 4\) skew-symmetric matrix, \(F\in L^{2}\left(\Omega,\mathbb{K}\right)\) as follows \[\left[\mathrm{curl}\,F\right]_{i}=\sum_{k,l=1}^{4}\varepsilon_{ ijkl}\partial_{j}F_{kl},\] where \(\varepsilon_{ijkl}\) is the Levi-Civita tensor. Lastly, 'div' is the standard divergence operator which acts on a 4-vector, \(G\in L^{2}\left(\Omega,\mathbb{R}^{4}\right)\), such that \(\left[\mathrm{div}\,G\right]=\partial_{i}G_{i}\) for \(i=1,2,3,4\). For the sake of completeness, we can also define the 'Curl' and 'Div' operators, which are isomorphic to the'skwGrad' and 'curl' operators, respectively. In particular, 'Curl' is a derivative operator which can be applied to a 4-vector, \(E\in L^{2}(\Omega,\mathbb{R}^{4})\), as follows \[\left[\mathrm{Curl}\,E\right]_{ij}=\sum_{k,l=1}^{4}\varepsilon_{ ijkl}\partial_{k}E_{l},\] and 'Div' is a derivative operator which can be applied to a \(4\times 4\) skew-symmetric matrix, \(F\in L^{2}(\Omega,\mathbb{K})\), as follows \[\left[\mathrm{Div}\,F\right]_{i}=\sum_{j=1}^{4}\partial_{j}F_{ij}.\] It turns out that the first-derivative operators (above) satisfy the following relations \[\Upsilon_{1}\left(d^{(0)}\omega\right) =\mathrm{grad}\left(\Upsilon_{0}\omega\right), \omega\in\Lambda^{0}(\Omega) :=\mathcal{D}^{\prime}(\Omega,\Lambda^{0}),\] \[\Upsilon_{2}\left(d^{(1)}\omega\right) =\mathrm{skwGrad}\left(\Upsilon_{1}\omega\right), \omega\in\Lambda^{1}(\Omega) :=\mathcal{D}^{\prime}(\Omega,\Lambda^{1}),\] \[\Upsilon_{3}\left(d^{(2)}\omega\right) =\mathrm{curl}\left(\Upsilon_{2}\omega\right), \omega\in\Lambda^{2}(\Omega) :=\mathcal{D}^{\prime}(\Omega,\Lambda^{2}),\] \[\Upsilon_{2}\left(d^{(3)}\omega\right) =\mathrm{div}\left(\Upsilon_{3}\omega\right), \omega\in\Lambda^{3}(\Omega) :=\mathcal{D}^{\prime}(\Omega,\Lambda^{3}).\] In accordance with these relations, the following diagram commutes \[\mathcal{D}^{\prime}(\Omega,\Lambda^{0})\quad\xrightarrow{d^{(0)}} \quad\mathcal{D}^{\prime}(\Omega,\Lambda^{1})\quad\xrightarrow{d^{(1)}} \quad\mathcal{D}^{\prime}(\Omega,\Lambda^{2})\quad\xrightarrow{d^{(2)}} \quad\mathcal{D}^{\prime}(\Omega,\Lambda^{3})\quad\xrightarrow{d^{(3)}} \quad\mathcal{D}^{\prime}(\Omega,\Lambda^{4})\] \[\quad\xrightarrow{\Upsilon_{0}}\quad\xrightarrow{\Upsilon_{1}} \quad\xrightarrow{\Upsilon_{2}}\quad\xrightarrow{\Upsilon_{3}}\quad\xrightarrow{ \Upsilon_{4}}\quad\xrightarrow{\Upsilon_{4}}\quad\xrightarrow{}\] \[\mathcal{D}^{\prime}(\Omega,\mathbb{R})\quad\xrightarrow{\mathrm{ grad}}\quad\mathcal{D}^{\prime}(\Omega,\mathbb{R}^{4})\quad\xrightarrow{\mathrm{skewGrad}}\quad\mathcal{D}^{\prime}(\Omega,\mathbb{K})\quad \xrightarrow{\mathrm{curl}}\quad\mathcal{D}^{\prime}(\Omega,\mathbb{R}^{4}) \quad\xrightarrow{\mathrm{div}}\quad\mathcal{D}^{\prime}(\Omega,\mathbb{R})\] In addition, the first-derivative operators can be used to construct the following Sobolev spaces \[H\left(\mathrm{grad},\Omega,\mathbb{R}\right) =\left\{u\in L^{2}\left(\Omega,\mathbb{R}\right):\mathrm{grad}\,u \in L^{2}\left(\Omega,\mathbb{R}^{4}\right)\right\},\] \[H\left(\mathrm{skwGrad},\Omega,\mathbb{R}^{4}\right) =\left\{E\in 
L^{2}\left(\Omega,\mathbb{R}^{4}\right):\mathrm{skw Grad}\,E\in L^{2}\left(\Omega,\mathbb{K} \right)\right\},\] \[H\left(\mathrm{curl},\Omega,\mathbb{K}\right) =\left\{F\in L^{2}\left(\Omega,\mathbb{K}\right):\mathrm{curl}\,F \in L^{2}\left(\Omega,\mathbb{R}^{4}\right)\right\},\] \[H\left(\mathrm{div},\Omega,\mathbb{R}^{4}\right) =\left\{G\in L^{2}\left(\Omega,\mathbb{R}^{4}\right):\mathrm{div} \,G\in L^{2}\left(\Omega,\mathbb{R}\right)\right\},\] and \[H\left(\mathrm{Curl},\Omega,\mathbb{R}^{4}\right) =\left\{E\in L^{2}\left(\Omega,\mathbb{R}^{4}\right):\mathrm{ Curl}\,E\in L^{2}\left(\Omega,\mathbb{K}\right)\right\},\] \[H\left(\mathrm{Div},\Omega,\mathbb{K}\right) =\left\{F\in L^{2}\left(\Omega,\mathbb{K}\right):\mathrm{Div}\,F \in L^{2}\left(\Omega,\mathbb{R}^{4}\right)\right\}.\] In accordance with these definitions, we can introduce the L2 de Rahm complex in four dimensions \[H(\mathrm{grad},\Omega,\mathbb{R})\] Next, it is important for us to characterize the behavior of our function spaces on the boundary of the domain, \(\partial\Omega\). With this in mind, we can introduce the following trace identity for 1-forms \[\left(\mathrm{tr}^{(1)}E\right)(F) =\int_{\partial\Omega}\left(n\times E\right):F\,ds\] \[=\int_{\Omega}\left(\mathrm{Curl}\,E\right):F\,dx-\int_{\Omega} \left(\mathrm{curl}\,F\right)\cdot E\,dx, \tag{3.1}\] where \(E\in H\left(\mathrm{Curl},\Omega,\mathbb{R}^{4}\right)\) and \(F\in H\left(\mathrm{curl},\Omega,\mathbb{K}\right)\). Similarly, \[\left(\mathrm{tr}^{(1)}E\right)(F) =\frac{1}{2}\int_{\partial\Omega}\left[E\otimes n-n\otimes E \right]:F\,ds\] \[=\int_{\Omega}\left(\mathrm{Div}\,F\right)\cdot E\,dx-\int_{ \Omega}F:\left(\mathrm{skwGrad}\,E\right)\,dx, \tag{3.2}\] where \(E\in H\left(\mathrm{skwGrad},\Omega,\mathbb{R}^{4}\right)\) and \(F\in H\left(\mathrm{Div},\Omega,\mathbb{K}\right)\). Next, the following trace identity holds for 2-forms \[\left(\mathrm{tr}^{(2)}F\right)(E) =\int_{\partial\Omega}\left(n\times F\right)\cdot E\,ds\] \[=\int_{\Omega}\left(\mathrm{curl}\,F\right)\cdot E\,dx-\int_{ \Omega}\left(\mathrm{Curl}\,E\right):F\,dx, \tag{3.3}\] where \(E\in H\left(\mathrm{Curl},\Omega,\mathbb{R}^{4}\right)\) and \(F\in H\left(\mathrm{curl},\Omega,\mathbb{K}\right)\). Finally, the following trace identity holds for 3-forms \[\left(\mathrm{tr}^{(3)}G\right)(u) =\int_{\partial\Omega}\left(G\cdot n\right)u\,ds\] \[=\int_{\Omega}\left(\mathrm{div}\,G\right)u\,dx+\int_{\Omega}G \cdot\left(\mathrm{grad}\,u\right)dx, \tag{3.4}\] where \(G\in H(\mathrm{div},\Omega,\mathbb{R}^{4})\) and \(u\in H(\mathrm{grad},\Omega,\mathbb{R})\). There are two cross-product operators which are defined in the trace identities above. In particular, the cross-product operator between a pair of 4-vectors is given by \[\left[M\times N\right]_{ij}=\sum_{k,l=1}^{4}\varepsilon_{ijkl}M_{k}N_{l},\] where \(M\in\mathbb{R}^{4}\) and \(N\in\mathbb{R}^{4}\). Furthermore, the cross-product operator between a 4-vector and a \(4\times 4\) skew-symmetric matrix is given by \[\left[M\times U\right]_{i}=\sum_{k,l=1}^{4}\varepsilon_{ijkl}M_{j}U_{kl},\] where \(M\in\mathbb{R}^{4}\) and \(U\in\mathbb{K}\). 
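As a quick sanity check on these definitions, the following SymPy sketch, which is illustrative and not part of the paper, implements the four-dimensional operators skwGrad, curl, and div from their component formulas above and verifies the complex property curl(skwGrad E) = 0 and div(curl F) = 0; the particular polynomial fields are arbitrary choices.

```python
# Illustrative sketch: verify that the 4D first-derivative operators form a complex.
import itertools
import sympy as sp

x = sp.symbols('x1:5')  # (x1, x2, x3, x4)

def skw_grad(E):
    """[skwGrad E]_{ij} = (1/2)(d_i E_j - d_j E_i): a 4x4 skew-symmetric matrix."""
    return sp.Matrix(4, 4, lambda i, j: sp.Rational(1, 2) * (sp.diff(E[j], x[i]) - sp.diff(E[i], x[j])))

def curl(F):
    """[curl F]_i = sum_{j,k,l} eps_{ijkl} d_j F_{kl}: a 4-vector."""
    return sp.Matrix([sum(sp.LeviCivita(i, j, k, l) * sp.diff(F[k, l], x[j])
                          for j, k, l in itertools.product(range(4), repeat=3))
                      for i in range(4)])

def div(G):
    """div G = sum_i d_i G_i: a scalar."""
    return sum(sp.diff(G[i], x[i]) for i in range(4))

# A sample smooth 1-form proxy E; its skew gradient is a 2-form proxy.
E = sp.Matrix([x[0]**2 * x[3], x[1] * x[2], x[2]**3, x[0] * x[1] * x[3]])
assert sp.simplify(curl(skw_grad(E))) == sp.zeros(4, 1)     # curl o skwGrad = 0

# A sample skew-symmetric 2-form proxy F with arbitrary polynomial entries.
f12, f13, f14, f23, f24, f34 = x[0]*x[1], x[2]**2, x[3], x[0] + x[3], x[1]*x[2], x[0]**3
F = sp.Matrix([[   0,  f12,  f13,  f14],
               [-f12,    0,  f23,  f24],
               [-f13, -f23,    0,  f34],
               [-f14, -f24, -f34,    0]])
assert sp.simplify(div(curl(F))) == 0                       # div o curl = 0
```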
In accordance with the equations above, the traces for 0-forms, 1-forms, 2-forms, and 3-forms can be defined as follows \[0\text{-forms}\qquad u =\Upsilon_{0}\omega, \operatorname{tr}(u) =u|_{\partial\Omega},\] \[1\text{-forms}\qquad E =\Upsilon_{1}\omega, \operatorname{tr}(E) =\frac{1}{2}\left(E\otimes n-n\otimes E\right)|_{\partial\Omega},\] \[2\text{-forms}\qquad F =\Upsilon_{2}\omega, \operatorname{tr}(F) =\left(n\times F\right)|_{\partial\Omega},\] \[3\text{-forms}\qquad G =\Upsilon_{3}\omega, \operatorname{tr}(G) =\left(G\cdot n\right)|_{\partial\Omega},\] where \[u \in H\left(\operatorname{grad},\Omega,\mathbb{R}\right), \operatorname{tr}(u) \in H^{1/2}\left(\partial\Omega,\mathbb{R}\right),\] \[E \in H\left(\operatorname{skwGrad},\Omega,\mathbb{R}^{4}\right), \operatorname{tr}(E) \in H^{-1/2}\left(\partial\Omega,\mathbb{K}\right),\] \[F \in H\left(\operatorname{curl},\Omega,\mathbb{K}\right), \operatorname{tr}(F) \in H^{-1/2}\left(\partial\Omega,\mathbb{R}^{4}\right),\] \[G \in H\left(\operatorname{div},\Omega,\mathbb{R}^{4}\right), \operatorname{tr}(G) \in H^{-1/2}\left(\partial\Omega,\mathbb{R}\right).\] We note that the traces of 4-forms are not well-defined. It may not be immediately obvious how the trace quantities behave by simply examining the identities above. In order to fix ideas, let us consider an example in which a simply connected Lipschitz domain \(\Omega\) has a boundary that (non-trivially) intersects with the hyperplane \(x_{4}=0\). We can set \(\partial\Omega\cap\left\{x_{4}=0\right\}=\mathcal{F}\), where \(\mathcal{F}\) denotes a facet. In addition, we observe that the unit normal of the facet is \(n=[0,0,0,1]^{T}\). Under these circumstances, we consider a sufficiently smooth \(s\)-form, \(\omega\): * If \(s=0\) and \(u=\Upsilon_{0}\omega\), then \[\operatorname{tr}[\mathcal{F}](u)=u|_{\mathcal{F}}=u(x_{1},x_{2},x_{3},0),\] (3.5) is the restriction of \(u\) on to \(\mathcal{F}\). The trace can be identified with a scalar field \(\operatorname{Tr}[\mathcal{F}](u)\), which is a 0-form proxy on \(\mathcal{F}\). * If \(s=1\) and \(E=\Upsilon_{1}\omega\), then \[\operatorname{tr}[\mathcal{F}](E) =\frac{1}{2}\left(E\otimes n-n\otimes E\right)|_{\mathcal{F}}\] \[=\frac{1}{2}\begin{bmatrix}0&0&0&E_{1}(x_{1},x_{2},x_{3},0)\\ 0&0&0&E_{2}(x_{1},x_{2},x_{3},0)\\ 0&0&0&E_{3}(x_{1},x_{2},x_{3},0)\\ -E_{1}(x_{1},x_{2},x_{3},0)&-E_{2}(x_{1},x_{2},x_{3},0)&-E_{3}(x_{1},x_{2},x_{ 3},0)&0\end{bmatrix}\] \[=\frac{1}{2}\mathcal{L}\left([0,0,E_{1}(x_{1},x_{2},x_{3},0),0,E_{ 2}(x_{1},x_{2},x_{3},0),E_{3}(x_{1},x_{2},x_{3},0)]^{T}\right),\] (3.6) is the bivector trace of \(E\) on to \(\mathcal{F}\). The trace can be identified with a 3-vector \(\operatorname{Tr}[\mathcal{F}](E)\), which is a 1-form proxy on \(\mathcal{F}\). * If \(s=2\) and \(F=\Upsilon_{2}\omega\), then \[\operatorname{tr}[\mathcal{F}](F)=(n\times F)\,|_{\mathcal{F}}=2\begin{bmatrix}F_ {23}(x_{1},x_{2},x_{3},0)\\ -F_{13}(x_{1},x_{2},x_{3},0)\\ F_{12}(x_{1},x_{2},x_{3},0)\\ 0\end{bmatrix},\] (3.7) is the tangential trace of \(F\) on to \(\mathcal{F}\). The trace can be identified with a 3-vector \(\operatorname{Tr}[\mathcal{F}](F)\), which is a 2-form proxy on \(\mathcal{F}\). * If \(s=3\) and \(G=\Upsilon_{3}\omega\), then \[\operatorname{tr}[\mathcal{F}](G)=(G\cdot n)\,|_{\mathcal{F}}=G_{4}(x_{1},x_ {2},x_{3},0),\] (3.8) is the normal trace of \(G\) on to \(\mathcal{F}\). The trace of a 3-form can be identified with a scalar field \(\operatorname{Tr}[\mathcal{F}](G)\), which is a 3-form proxy on \(\mathcal{F}\). 
* If \(s=4\), then the trace is not well-defined. Lastly, having establishing the Sobolev spaces and the corresponding derivative and trace identities, we introduce the pullback operator \(\phi^{*}\) of the differential forms \(\omega\), as follows \[u =\Upsilon_{0}\omega,\quad\forall u\in H\left(\operatorname{grad},\Omega,\mathbb{R}\right), \Upsilon_{0}\phi^{*}\omega=u\circ\phi, \tag{3.9}\] \[E =\Upsilon_{1}\omega,\quad\forall E\in H\left(\operatorname{skwGrad},\Omega,\mathbb{R}^{4}\right), \Upsilon_{1}\phi^{*}\omega=D\phi^{T}\left[E\circ\phi\right],\] (3.10) \[F =\Upsilon_{2}\omega,\quad\forall F\in H\left(\operatorname{curl},\Omega,\mathbb{K}\right), \Upsilon_{2}\phi^{*}\omega=D\phi^{T}\left[F\circ\phi\right]D\phi,\] (3.11) \[G =\Upsilon_{3}\omega,\quad\forall G\in H\left(\operatorname{div},\Omega,\mathbb{R}^{4}\right), \Upsilon_{3}\phi^{*}\omega=\left|D\phi\right|D\phi^{-1}\left[G \circ\phi\right],\] (3.12) \[q =\Upsilon_{4}\omega,\quad\forall q\in L^{2}\left(\Omega,\mathbb{R }\right), \Upsilon_{4}\phi^{*}\omega=\left|D\phi\right|\left[q\circ\phi\right]. \tag{3.13}\] Here \([D\phi]_{ij}=\partial_{j}\phi_{i}\) is the Jacobian matrix. ## 4 Finite Elements on a Reference Pentatope In this section, we record explicitly, finite element spaces and degrees of freedom on a pentatope, \(\mathfrak{T}^{4}\). The construction we choose for \(V_{k}\Lambda^{s}(\mathfrak{T}^{4})\) is based on those presented in [30]; these are directly analogous to the \(P_{k}^{-}\Lambda^{s}\) spaces on a tetrahedron, as described in [26]. We require that our spaces \(V_{k}\Lambda^{s}(\mathfrak{T}^{4})\) satisfy the relation \[V_{k}\Lambda^{0}(\mathfrak{T}^{4})\] With this in mind, we require that \[V_{k}\Lambda^{0}(\mathfrak{T}^{4}) :=P^{k}(\mathfrak{T}^{4}), \tag{4.1a}\] \[V_{k}\Lambda^{1}(\mathfrak{T}^{4}) :=(P^{k-1}(\mathfrak{T}^{4}))^{4}\oplus\left\{p\in(\tilde{P}^{k}( \mathfrak{T}^{4}))^{4}|p\cdot x=0\right\},\] (4.1b) \[V_{k}\Lambda^{2}(\mathfrak{T}^{4}) :=\mathcal{L}\left((P^{k-1}(\mathfrak{T}^{4}))^{6}\right)\oplus \tilde{P}^{k-1}(\mathfrak{T}^{4})B_{1}\oplus\tilde{P}^{k-1}(\mathfrak{T}^{4}) B_{2}\oplus\tilde{P}^{k-1}(\mathfrak{T}^{4})B_{3}\oplus\tilde{P}^{k-1}( \mathfrak{T}^{4})B_{4}, \tag{4.1c}\] where \[B_{1} :=\begin{bmatrix}0&0&0&0\\ 0&0&x_{4}&-x_{3}\\ 0&-x_{4}&0&x_{2}\\ 0&x_{3}&-x_{2}&0\end{bmatrix}, B_{2} :=\begin{bmatrix}0&0&-x_{4}&x_{3}\\ 0&0&0&0\\ x_{4}&0&0&-x_{1}\\ -x_{3}&0&x_{1}&0\end{bmatrix},\] \[B_{3} :=\begin{bmatrix}0&x_{4}&0&-x_{2}\\ -x_{4}&0&0&x_{1}\\ 0&0&0&0\\ x_{2}&-x_{1}&0&0\end{bmatrix}, B_{4} :=\begin{bmatrix}0&-x_{3}&x_{2}&0\\ x_{3}&0&-x_{1}&0\\ -x_{2}&x_{1}&0&0\\ 0&0&0&0\end{bmatrix},\] \[V_{k}\Lambda^{3}(\mathfrak{T}^{4}) :=(P^{k-1}(\mathfrak{T}^{4}))^{4}\oplus\tilde{P}^{k-1}( \mathfrak{T}^{4})x, \tag{4.1d}\] \[V_{k}\Lambda^{4}(\mathfrak{T}^{4}) :=P^{k-1}(\mathfrak{T}^{4}). \tag{4.1e}\] _Remark 4.1_.: It is easily seen that the space of 2-forms (Eq. (4.1c)) can be described as follows \[V_{k}\Lambda^{2}(\mathfrak{T}^{4}) :=\mathcal{L}\left((P^{k-1}(\mathfrak{T}^{4}))^{6}\right)\oplus \left\{B\in\mathcal{L}((\tilde{P}^{k}(\mathfrak{T}^{4}))^{6})|Bx=0\right\}.\] For details on the derivation of these spaces, we refer the interested reader to A. The exactness of the sequence follows directly. It remains for us to identify the bubble spaces \(\breve{V}_{k}\Lambda^{s}(\mathfrak{T}^{4})\). 
These take the following form \[\breve{V}_{k}\Lambda^{0}(\mathfrak{T}^{4}) :=\operatorname{span}\left\{\vartheta_{ij\ell m}(x_{1},x_{2},x_{3},x_{4})\right\}, \tag{4.2a}\] \[\breve{V}_{k}\Lambda^{1}(\mathfrak{T}^{4}) :=\operatorname{span}\left\{\Phi^{r}_{ij\ell m}(x_{1},x_{2},x_{3},x_{4})\right\},\] (4.2b) \[\breve{V}_{k}\Lambda^{2}(\mathfrak{T}^{4}) :=\operatorname{span}\left\{\Theta^{r}_{ij\ell m}(x_{1},x_{2},x_{3 },x_{4})\right\},\] (4.2c) \[\breve{V}_{k}\Lambda^{3}(\mathfrak{T}^{4}) :=\operatorname{span}\left\{\Psi^{r}_{ij\ell m}(x_{1},x_{2},x_{3 },x_{4})\right\}, \tag{4.2d}\] where the basis functions \(\vartheta\), \(\Phi\), \(\Theta\), and \(\Psi\) and the associated indexes \(i\), \(j\), \(\ell\), \(m\), and \(r\) are defined below. Consider the following H1-conforming interior functions \(\vartheta_{ij\ell m}\) of degree \(k\) \[\vartheta_{ij\ell m}(x_{1},x_{2},x_{3},x_{4})= L_{i}\left(\frac{\lambda_{2}}{\lambda_{1}+\lambda_{2}}\right)L_{j}^{2i} \left(\frac{\lambda_{3}}{\lambda_{1}+\lambda_{2}+\lambda_{3}}\right)L_{\ell}^ {2(i+j)}\left(\frac{\lambda_{4}}{\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{ 4}}\right)L_{m}^{2(i+j+\ell)}\left(\lambda_{5}\right)\] \[\cdot\left(\lambda_{1}+\lambda_{2}\right)^{i}\left(\lambda_{1}+ \lambda_{2}+\lambda_{3}\right)^{j}\left(\lambda_{1}+\lambda_{2}+\lambda_{3}+ \lambda_{4}\right)^{\ell},\] where \(i\geq 2\), \(j\geq 1\), \(\ell\geq 1\), \(m\geq 1\), and \(n=i+j+\ell+m=5,\ldots,k\) are the indexing parameters, \(\lambda=\lambda(x)=\lambda(x_{1},x_{2},x_{3},x_{4})=\left(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5}\right)\) are barycentric coordinates for the pentatope, \(L_{i}\) are the integrated and scaled Legendre polynomials, and \(L_{j}^{\alpha}\) are the integrated and scaled Jacobi polynomials, (see Remark 4.2 for details). In addition, consider the following H(skwGrad)-conforming interior functions \(\Phi_{ij\ell m}^{r}\) of degree \(k-1\) \[\Phi_{ij\ell m}^{r}(x_{1},x_{2},x_{3},x_{4})= P_{i}\left(\frac{\lambda_{b}}{\lambda_{a}+\lambda_{b}}\right)L_{j}^{2i+1 }\left(\frac{\lambda_{c}}{\lambda_{a}+\lambda_{b}+\lambda_{c}}\right)L_{\ell}^ {2(i+j)}\left(\frac{\lambda_{d}}{\lambda_{a}+\lambda_{b}+\lambda_{c}+\lambda_{ d}}\right)L_{m}^{2(i+j+\ell)}\left(\lambda_{e}\right)\] \[\cdot\left(\lambda_{a}+\lambda_{b}\right)^{i}\left(\lambda_{a}+ \lambda_{b}+\lambda_{c}\right)^{j}\left(\lambda_{a}+\lambda_{b}+\lambda_{c}+ \lambda_{d}\right)^{\ell}\left(\lambda_{a}\nabla\lambda_{b}-\lambda_{b}\nabla \lambda_{a}\right),\] where \(i\geq 0\), \(j\geq 1\), \(\ell\geq 1\), \(m\geq 1\), and \(n=i+j+\ell+m=3,\ldots,k-1\) are the indexing parameters, and \(P_{i}\) are the shifted and scaled Legendre polynomials. In addition, for \(r=1,2,3,4\) we set \((a,b,c,d,e)=(1,2,3,4,5)\), \((a,b,c,d,e)=(2,3,4,5,1)\), \((a,b,c,d,e)=(3,4,5,1,2)\), and \((a,b,c,d,e)=(4,5,1,2,3)\), respectively. The explicit formula given above for the H(skwGrad)-conforming polynomial functions is justified via Lemma 4.1. In this lemma, we focus on the case in which \(a=1\) and \(b=2\), as all other cases are justified using similar arguments. 
**Lemma 4.1**.: _The following quantity belongs to the space \(V_{k+1}\Lambda^{1}(\mathfrak{T}^{4})\) of H(skwGrad)-conforming functions_ \[f_{k}\left(x_{1},x_{2},x_{3},x_{4}\right)\left(\lambda_{1}\nabla\lambda_{2}- \lambda_{2}\nabla\lambda_{1}\right),\] _where \(\lambda_{1}=\lambda_{1}(x_{1},x_{2},x_{3},x_{4})\) and \(\lambda_{2}=\lambda_{2}(x_{1},x_{2},x_{3},x_{4})\) are barycentric coordinates on the pentatope \(\mathfrak{T}^{4}\), and where \(f_{k}(x_{1},x_{2},x_{3},x_{4})\in P^{k}(\mathfrak{T}^{4})\)._ Proof.: The proof follows immediately from Lemma 2 of Fuentes et al. [4], upon setting the number of dimensions \(N=4\), and the parameters \(a=1\) and \(b=2\). Next, consider the H(curl)-conforming interior functions \(\Theta_{ij\ell m}^{r}\) of degree \(k-1\) \[\Theta_{ij\ell m}^{r}(x_{1},x_{2},x_{3},x_{4})=\] \[P_{i}\left(\frac{\lambda_{b}}{\lambda_{a}+\lambda_{b}}\right)P_{j }^{2i+1}\left(\frac{\lambda_{c}}{\lambda_{a}+\lambda_{b}+\lambda_{c}}\right)L_ {\ell}^{2(i+j+1)}\left(\frac{\lambda_{d}}{\lambda_{a}+\lambda_{b}+\lambda_{c}+ \lambda_{d}}\right)L_{m}^{2(i+j+\ell)}\left(\lambda_{e}\right)\] \[\cdot\left(\lambda_{a}+\lambda_{b}\right)^{i}\left(\lambda_{a}+ \lambda_{b}+\lambda_{c}\right)^{j}\left(\lambda_{a}+\lambda_{b}+\lambda_{c}+ \lambda_{d}\right)^{\ell}\] \[\cdot\left[\lambda_{a}\left(\nabla\lambda_{b}\otimes\nabla \lambda_{c}-\nabla\lambda_{c}\otimes\nabla\lambda_{b}\right)+\lambda_{b}\left( \nabla\lambda_{c}\otimes\nabla\lambda_{a}-\nabla\lambda_{a}\otimes\nabla \lambda_{c}\right)+\lambda_{c}\left(\nabla\lambda_{a}\otimes\nabla\lambda_{b}- \nabla\lambda_{b}\otimes\nabla\lambda_{a}\right)\right],\] where \(i\geq 0\), \(j\geq 0\), \(\ell\geq 1\), \(m\geq 1\), and \(n=i+j+\ell+m=2,\ldots,k-1\) are the indexing parameters. In addition, for \(r=1,2,3,4,5,6\) we set \((a,b,c,d,e)=(1,2,3,4,5)\), \((a,b,c,d,e)=(2,3,4,5,1)\), \((a,b,c,d,e)=(3,4,5,1,2)\), \((a,b,c,d,e)=(4,5,1,2,3)\), \((a,b,c,d,e)=(5,1,2,3,4)\), and \((a,b,c,d,e)=(1,2,4,3,5)\), respectively. The explicit formula given above for H(curl)-conforming polynomial functions is justified via Lemma 4.2. In this lemma, we focus on the case in which \(a=1\), \(b=2\), and \(c=3\) as all other cases are justified using similar arguments. **Lemma 4.2**.: _The following quantity belongs to the space \(V_{k+1}\Lambda^{2}(\mathfrak{T}^{4})\) of H(curl)-conforming functions_ \[f_{k}\left(x_{1},x_{2},x_{3},x_{4}\right) \big{[}\lambda_{1}\left(\nabla\lambda_{2}\otimes\nabla\lambda_{3} -\nabla\lambda_{3}\otimes\nabla\lambda_{2}\right)+\lambda_{2}\left(\nabla \lambda_{3}\otimes\nabla\lambda_{1}-\nabla\lambda_{1}\otimes\nabla\lambda_{3}\right)\] \[+\lambda_{3}\left(\nabla\lambda_{1}\otimes\nabla\lambda_{2}- \nabla\lambda_{2}\otimes\nabla\lambda_{1}\right)\big{]}, \tag{4.3}\] _where \(\lambda_{1}=\lambda_{1}(x_{1},x_{2},x_{3},x_{4})\), \(\lambda_{2}=\lambda_{2}(x_{1},x_{2},x_{3},x_{4})\), and \(\lambda_{3}=\lambda_{3}(x_{1},x_{2},x_{3},x_{4})\) are barycentric coordinates on the pentatope \(\mathfrak{T}^{4}\), and where \(f_{k}(x_{1},x_{2},x_{3},x_{4})\in P^{k}(\mathfrak{T}^{4})\)._ Proof.: Let us recall that \[V_{k}\Lambda^{2}(\mathfrak{T}^{4}):=\mathcal{L}\left((P^{k-1}(\mathfrak{T}^{4}))^ {6}\right)\oplus\left\{B\in\mathcal{L}\left((\tilde{P}^{k}(\mathfrak{T}^{4}))^ {6}\right)|Bx=0\right\}. \tag{4.4}\] It remains for us to show that the function in Eq. (4.3) belongs to \(V_{k+1}\Lambda^{2}(\mathfrak{T}^{4})\). 
Towards this end, we introduce the following identities \[\lambda_{i}=\eta_{i}+\beta_{i}\cdot x,\qquad\nabla\lambda_{i}=\beta_{i},\] where \(\eta_{i}\in\mathbb{R}\), \(\beta_{i}\in\mathbb{R}^{4}\), and \(i=1,2,3\). It immediately follows that \[\lambda_{1}\left(\nabla\lambda_{2}\otimes\nabla\lambda_{3}- \nabla\lambda_{3}\otimes\nabla\lambda_{2}\right)+\lambda_{2}\left(\nabla \lambda_{3}\otimes\nabla\lambda_{1}-\nabla\lambda_{1}\otimes\nabla\lambda_{3} \right)+\lambda_{3}\left(\nabla\lambda_{1}\otimes\nabla\lambda_{2}-\nabla \lambda_{2}\otimes\nabla\lambda_{1}\right)\] \[=\left(\eta_{1}+\beta_{1}\cdot x\right)\left(\beta_{2}\otimes \beta_{3}-\beta_{3}\otimes\beta_{2}\right)+\left(\eta_{2}+\beta_{2}\cdot x \right)\left(\beta_{3}\otimes\beta_{1}-\beta_{1}\otimes\beta_{3}\right)\] \[+\left(\eta_{3}+\beta_{3}\cdot x\right)\left(\beta_{1}\otimes \beta_{2}-\beta_{2}\otimes\beta_{1}\right)=A+C(x),\] where \[A :=\eta_{1}\left(\beta_{2}\otimes\beta_{3}-\beta_{3}\otimes\beta_{ 2}\right)+\eta_{2}\left(\beta_{3}\otimes\beta_{1}-\beta_{1}\otimes\beta_{3} \right)+\eta_{3}\left(\beta_{1}\otimes\beta_{2}-\beta_{2}\otimes\beta_{1} \right),\] \[C(x) :=\left(\beta_{1}\cdot x\right)\left(\beta_{2}\otimes\beta_{3}- \beta_{3}\otimes\beta_{2}\right)+\left(\beta_{2}\cdot x\right)\left(\beta_{3} \otimes\beta_{1}-\beta_{1}\otimes\beta_{3}\right)+\left(\beta_{3}\cdot x \right)\left(\beta_{1}\otimes\beta_{2}-\beta_{2}\otimes\beta_{1}\right).\] By inspection, we have that \(A\in\mathcal{L}\left((P^{0}(\mathfrak{T}^{4}))^{6}\right)\) and \(C(x)\in\mathcal{L}\left((\tilde{P}^{1}(\mathfrak{T}^{4}))^{6}\right)\). In addition, following some algebraic manipulations, it turns out that \(C(x)x=0\). Therefore \[C(x)\in\left\{E\in\mathcal{L}\left((\tilde{P}^{1}(\mathfrak{T}^{4}))^{6} \right)\mid Ex=0\right\}.\] Next, we can perform the following decomposition \[f_{k}\in P^{k}(\mathfrak{T}^{4}) =P^{k-1}(\mathfrak{T}^{4})\oplus\tilde{P}^{k}(\mathfrak{T}^{4}),\] \[f_{k} =f_{k-1}+\tilde{f}_{k},\] where \(f_{k-1}\in P^{k-1}(\mathfrak{T}^{4})\) and \(\tilde{f}_{k}\in\tilde{P}^{k}(\mathfrak{T}^{4})\). As a result, we have that \[f_{k}\left[A+C(x)\right]=f_{k}A+f_{k-1}C(x)+\tilde{f}_{k}C(x).\] Naturally, by inspection, we have that \[f_{k}A+f_{k-1}C(x)\in\mathcal{L}\left((P^{k}(\mathfrak{T}^{4}))^{6}\right), \qquad\tilde{f}_{k}C(x)\in\left\{B\in\mathcal{L}\left((\tilde{P}^{k+1}( \mathfrak{T}^{4}))^{6}\right)\mid Ex=0\right\}.\] Based on these identities and the definition in Eq. 
(4.4), we immediately obtain the desired result \[f_{k}\left[A+C(x)\right]\in V_{k+1}\Lambda^{2}(\mathfrak{T}^{4}).\] Next, consider the \(\mathrm{H}(\mathrm{div})\)-conforming interior functions \(\Psi_{ij\ell m}^{r}\) of degree \(k-1\) \[\Psi_{ij\ell m}^{r}(x_{1},x_{2},x_{3},x_{4})= P_{i}\left(\frac{\lambda_{b}}{\lambda_{a}+\lambda_{b}}\right)P_{j}^{2i+1} \left(\frac{\lambda_{c}}{\lambda_{a}+\lambda_{b}+\lambda_{c}}\right)P_{\ell}^{2 (i+j+1)}\left(\frac{\lambda_{d}}{\lambda_{a}+\lambda_{b}+\lambda_{c}+\lambda _{d}}\right)L_{m}^{2(i+j+\ell)+3}\left(\lambda_{e}\right)\] \[\cdot\left(\lambda_{a}+\lambda_{b}\right)^{i}\left(\lambda_{a}+ \lambda_{b}+\lambda_{c}\right)^{j}\left(\lambda_{a}+\lambda_{b}+\lambda_{c}+ \lambda_{d}\right)^{\ell}\] \[\cdot\left[\lambda_{a}\left(\nabla\lambda_{b}\times\nabla \lambda_{c}\times\nabla\lambda_{d}\right)-\lambda_{b}\left(\nabla\lambda_{c} \times\nabla\lambda_{d}\times\nabla\lambda_{a}\right)\right.\] \[+\left.\lambda_{c}\left(\nabla\lambda_{d}\times\nabla\lambda_{a} \times\nabla\lambda_{b}\right)-\lambda_{d}\left(\nabla\lambda_{a}\times \nabla\lambda_{b}\times\nabla\lambda_{c}\right)\right],\] where \(i\geq 0\), \(j\geq 0\), \(\ell\geq 0\), \(m\geq 1\), and \(n=i+j+\ell+m=1,\ldots,k-1\) are the indexing parameters. In addition, for \(r=1,2,3,4\) we set \((a,b,c,d,e)=(1,2,3,4,5)\), \((a,b,c,d,e)=(2,3,4,5,1)\), \((a,b,c,d,e)=(3,4,5,1,2)\), and \((a,b,c,d,e)=(4,5,1,2,3)\), respectively. The explicit formula given above for the \(\mathrm{H}(\mathrm{div})\)-conforming polynomial functions is justified via Lemma 4.3. In this lemma, we focus on the case in which \(a=1\), \(b=2\), \(c=3\), and \(d=4\) as all other cases are justified using similar arguments. **Lemma 4.3**.: _The following quantity belongs to the space \(V_{k+1}\Lambda^{3}(\overline{\Sigma}^{4})\) of H(div)-conforming functions_ \[f_{k}\left(x_{1},x_{2},x_{3},x_{4}\right) \big{[}\lambda_{1}\left(\nabla\lambda_{2}\times\nabla\lambda_{3} \times\nabla\lambda_{4}\right)-\lambda_{2}\left(\nabla\lambda_{3}\times\nabla \lambda_{4}\times\nabla\lambda_{1}\right)\] \[+\lambda_{3}\left(\nabla\lambda_{4}\times\nabla\lambda_{1} \times\nabla\lambda_{2}\right)-\lambda_{4}\left(\nabla\lambda_{1}\times\nabla \lambda_{2}\times\nabla\lambda_{3}\right)\big{]}, \tag{4.5}\] _where \(\lambda_{1}=\lambda_{1}(x_{1},x_{2},x_{3},x_{4})\), \(\lambda_{2}=\lambda_{2}(x_{1},x_{2},x_{3},x_{4})\), \(\lambda_{3}=\lambda_{3}(x_{1},x_{2},x_{3},x_{4})\), and \(\lambda_{4}=\lambda_{4}(x_{1},x_{2},x_{3},x_{4})\) are barycentric coordinates on the pentatope \(\overline{\Sigma}^{4}\), and where \(f_{k}(x_{1},x_{2},x_{3},x_{4})\in P^{k}(\overline{\Sigma}^{4})\)._ Proof.: Let us recall that \[V_{k}\Lambda^{3}(\overline{\Sigma}^{4}):=(P^{k-1}(\overline{ \Sigma}^{4}))^{4}\oplus\tilde{P}^{k-1}(\overline{\Sigma}^{4})x. \tag{4.6}\] In addition, the following identities hold \[\lambda_{i}=\eta_{i}+\beta_{i}\cdot x,\qquad\nabla\lambda_{i}= \beta_{i},\] where \(\eta_{i}\in\mathbb{R}\), \(\beta_{i}\in\mathbb{R}^{4}\), and \(i=1,2,3,4\). Next, upon expanding the triple products in Eq. 
(4.5) in terms of these identities, one obtains \[\lambda_{1}\left(\nabla\lambda_{2}\times\nabla\lambda_{3}\times \nabla\lambda_{4}\right)-\lambda_{2}\left(\nabla\lambda_{3}\times\nabla \lambda_{4}\times\nabla\lambda_{1}\right)+\lambda_{3}\left(\nabla\lambda_{4} \times\nabla\lambda_{1}\times\nabla\lambda_{2}\right)-\lambda_{4}\left(\nabla \lambda_{1}\times\nabla\lambda_{2}\times\nabla\lambda_{3}\right)\] \[=\left(\eta_{1}+\beta_{1}\cdot x\right)\left(\beta_{2}\times \beta_{3}\times\beta_{4}\right)-\left(\eta_{2}+\beta_{2}\cdot x\right)\left( \beta_{3}\times\beta_{4}\times\beta_{1}\right)\] \[+\left(\eta_{3}+\beta_{3}\cdot x\right)\left(\beta_{4}\times \beta_{1}\times\beta_{2}\right)-\left(\eta_{4}+\beta_{4}\cdot x\right)\left( \beta_{1}\times\beta_{2}\times\beta_{3}\right)=A+C(x),\] where \[A :=\eta_{1}\left(\beta_{2}\times\beta_{3}\times\beta_{4}\right)- \eta_{2}\left(\beta_{3}\times\beta_{4}\times\beta_{1}\right)+\eta_{3}\left( \beta_{4}\times\beta_{1}\times\beta_{2}\right)-\eta_{4}\left(\beta_{1}\times \beta_{2}\times\beta_{3}\right),\] \[C(x) :=\left(\beta_{1}\cdot x\right)\left(\beta_{2}\times\beta_{3} \times\beta_{4}\right)-\left(\beta_{2}\cdot x\right)\left(\beta_{3}\times \beta_{4}\times\beta_{1}\right)+\left(\beta_{3}\cdot x\right)\left(\beta_{4} \times\beta_{1}\times\beta_{2}\right)-\left(\beta_{4}\cdot x\right)\left(\beta _{1}\times\beta_{2}\times\beta_{3}\right).\] After some algebraic manipulations, we find that \[C(x)=\beta_{1}\cdot\left(\beta_{2}\times\beta_{3}\times\beta_{4} \right)x.\] By inspection, we have that \[A\in\left(P^{0}(\overline{\Sigma}^{4})\right)^{4},\qquad C(x) \in\left\{Q\in(\tilde{P}^{1}(\overline{\Sigma}^{4}))^{4}\,|\,Q(x)=\phi(x)x \right\}.\] Next, we can perform the following decomposition \[f_{k}=f_{k-1}+\tilde{f}_{k},\] where \(f_{k-1}\in P^{k-1}(\overline{\Sigma}^{4})\) and \(\tilde{f}_{k}\in\tilde{P}^{k}(\overline{\Sigma}^{4})\). As a result, we have that \[f_{k}\left[A+C(x)\right]=f_{k}A+f_{k-1}C(x)+\tilde{f}_{k}C(x).\] Naturally, by inspection \[f_{k}A+f_{k-1}C(x)\in(P^{k}(\overline{\Sigma}^{4}))^{4},\qquad \tilde{f}_{k}C(x)\in\left\{Q\in(\tilde{P}^{k+1}(\overline{\Sigma}^{4}))^{4}\,| \,Q(x)=\phi(x)x\right\}.\] Based on these identities and the definition in Eq. (4.6), we immediately obtain the desired result \[f_{k}\left[A+C(x)\right]\in V_{k+1}\Lambda^{3}(\overline{\Sigma}^{4}).\] Finally, for the sake of completeness, consider the L2-conforming interior functions \(v_{ij\ell m}\) of degree \(k-1\) \[v_{ij\ell m}(x_{1},x_{2},x_{3},x_{4})= P_{i}\left(\frac{\lambda_{2}}{\lambda_{1}+\lambda_{2}}\right)P_{j}^ {2i+1}\left(\frac{\lambda_{3}}{\lambda_{1}+\lambda_{2}+\lambda_{3}}\right)P_{ \ell}^{2(i+j+1)}\left(\frac{\lambda_{4}}{\lambda_{1}+\lambda_{2}+\lambda_{3}+ \lambda_{4}}\right)P_{m}^{2(i+j+\ell)+3}\left(\lambda_{5}\right)\] \[\cdot\left(\lambda_{1}+\lambda_{2}\right)^{i}\left(\lambda_{1}+ \lambda_{2}+\lambda_{3}\right)^{j}\left(\lambda_{1}+\lambda_{2}+\lambda_{3}+ \lambda_{4}\right)^{\ell},\] where \(i\geq 0\), \(j\geq 0\), \(\ell\geq 0\), \(m\geq 0\), and \(n=i+j+\ell+m=0,\ldots,k-1\) are the indexing parameters. _Remark 4.2_.: In the above discussion, the Legendre and Jacobi polynomials are critical for developing explicit expressions for the bubble spaces. For the sake of brevity, these polynomials will be not defined in this work, but we encourage the curious reader to consult Fuentes et al. [4] for their precise definitions. 
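To make the role of the barycentric coordinates in the formulas above concrete, here is a brief SymPy sketch, not taken from the paper, which uses the affine barycentric coordinates of the reference pentatope (they coincide with the vertex shape functions introduced earlier for this element) and the lowest-order scalar bubble \(\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}\lambda_{5}\). Products of this type underpin the interior functions above and the unisolvency arguments below.

```python
# Illustrative sketch: barycentric coordinates on the reference pentatope and the
# lowest-order 0-form bubble, which vanishes on all five tetrahedral facets.
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')

# Affine barycentric coordinates (equal to the vertex shape functions N_1..N_5).
lam = [(x1 + 1) / 2,
       (x2 + 1) / 2,
       (x3 + 1) / 2,
       (x4 + 1) / 2,
       -(x1 + x2 + x3 + x4) / 2 - 1]

assert sp.simplify(sum(lam) - 1) == 0                # partition of unity

bubble = lam[0] * lam[1] * lam[2] * lam[3] * lam[4]  # degree-5 scalar bubble

# The facets of the reference pentatope are {x_i = -1}, i = 1,...,4, and
# {x1 + x2 + x3 + x4 = -2}; the bubble vanishes on each of them.
assert sp.simplify(bubble.subs(x1, -1)) == 0
assert sp.simplify(bubble.subs(x4, -1)) == 0
assert sp.simplify(bubble.subs(x4, -2 - x1 - x2 - x3)) == 0
```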
### Degrees of Freedom on the Reference Pentatope, \(\mathfrak{T}^{4}\) We now return our attention to the sequence of spaces in Eqs. (4.1a)-(4.1e). Our objective is to construct degrees of freedom for these spaces. There are already well-known sets of degrees of freedom for simplicial elements using wedge products, as in [31], etc. Unfortunately, while the wedge product is mathematically elegant, it is frequently difficult for engineers and programmers to interpret and use for implementation purposes. In order to address this issue, in this section we provide an alternative, more explicit construction of the degrees of freedom. In addition, these degrees of freedom are shown to be unisolvent. To specify the degrees of freedom for an \(s\)-form on the pentatope, we will make use of the preceding discussions; in particular, several degrees of freedom can be specified by using well-known trace degrees of freedom on \(\mathfrak{T}^{3}\) (tetrahedra), \(\mathfrak{T}^{2}\) (triangles), \(\mathfrak{T}^{1}\) (edges), and \(\mathfrak{T}^{0}\) (vertices). Recall from Table 1 that \(\mathfrak{T}^{4}\) has 5 vertices, 10 edges, 10 triangular faces, and 5 tetrahedral facets. Our task is reduced to specifying the remaining interior degrees of freedom, and ensuring unisolvency. #### 4.1.1 Dofs for 0-forms on \(\mathfrak{T}^{4}\) The polynomial 0-forms on \(\mathfrak{T}^{4}\) are denoted by \(V_{k}\Lambda^{0}(\mathfrak{T}^{4}):=P^{k}(\mathfrak{T}^{4})\). This space has dimension \[\dim(V_{k}\Lambda^{0}(\mathfrak{T}^{4}))=\binom{k+4}{4}=\frac{1}{24}(k+1)(k+2) (k+3)(k+4).\] The dual space \(\Sigma^{k,0}(\mathfrak{T}^{4})\) must have the same dimension. We can decompose \(\Sigma^{k,0}(\mathfrak{T}^{4})\) into trace and volume degrees of freedom. For the trace degrees of freedom, \(\Sigma^{k,0}_{trace}(\mathfrak{T}^{4})\), we use vertex, edge, face, and facet degrees of freedom from Eqs. (2.2), (2.4), and (2.10). The total number of trace degrees of freedom is, therefore \[\dim\left(\Sigma^{k,0}_{trace}(\mathfrak{T}^{4})\right) =5+10\dim(P^{k-2}(\mathfrak{T}^{1}))+10\dim(P^{k-3}(\mathfrak{T} ^{2}))+5\dim(P^{k-4}(\mathfrak{T}^{3}))\] \[=5+10\binom{k-1}{k-2}+10\binom{k-1}{k-3}+5\binom{k-1}{k-4}\] \[=\frac{5}{6}k(k^{2}+5).\] We can also specify volume degrees of freedom on \(\mathfrak{T}^{4}\) for the 0-form proxy \(u\) as follows \[\Sigma^{k,0}_{vol}(\mathfrak{T}^{4}):=\left\{u\rightarrow\int_{\mathfrak{T}^{4 }}uq,\qquad q\in P^{k-5}(\mathfrak{T}^{4})\right\}. \tag{4.7}\] It immediately follows that \[\dim\left(\Sigma^{k,0}_{vol}(\mathfrak{T}^{4})\right)=\binom{k-1}{4}=\frac{1}{ 24}(k-4)(k-3)(k-2)(k-1).\] **Lemma 4.4**.: _The degrees of freedom_ \[\Sigma^{k,0}(\mathfrak{T}^{4}):=\Sigma^{k,0}_{trace}(\mathfrak{T}^{4})\cup \Sigma^{k,0}_{vol}(\mathfrak{T}^{4}), \tag{4.8}\] _form a unisolvent set for \(V_{k}\Lambda^{0}(\mathfrak{T}^{4})\)._ Proof.: We begin by noting that \[\dim(V_{k}\Lambda^{0}(\mathfrak{T}^{4}))=\dim(\Sigma^{k,0}(\mathfrak{T}^{4})) =\dim(\Sigma^{k,0}_{trace}(\mathfrak{T}^{4}))+\dim(\Sigma^{k,0}_{vol}( \mathfrak{T}^{4})).\] It will therefore suffice to show that the vanishing of all degrees of freedom for \(u\) implies \(u=0\). Suppose that for a particular \(u\in V_{k}\Lambda^{0}(\mathfrak{T}^{4})\) that all the degrees of freedom vanish. The vanishing of the trace degrees of freedom means, successively, that \(u\) has zero traces on the vertices, edges, faces, and facets of \(\mathfrak{T}^{4}\). 
It is therefore a bubble function in \(\tilde{V}_{k}\Lambda^{0}(\mathfrak{T}^{4})\) and can be expressed as \[u=\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}\lambda_{5}\psi,\qquad\psi\in P^ {k-5}(\mathfrak{T}^{4}).\] Now, since all degrees of freedom of the form given by Eq. (4.7) also vanish, upon setting \(q=\psi\) we see that \(\psi\equiv 0\). This establishes that \(u\equiv 0\). #### 4.1.2 Dofs for 1-forms on \(\mathfrak{T}^{4}\) We recall that \[V_{k}\Lambda^{1}(\mathfrak{T}^{4}):=(P^{k-1}(\mathfrak{T}^{4}))^{4}\oplus \left\{p\in(\tilde{P}^{k}(\mathfrak{T}^{4}))^{4}|p\cdot x=0\right\}.\] Also, we note that any polynomial in \(\tilde{P}^{k+1}(\mathfrak{T}^{4})\) can be written as \(p\cdot x\) for \(p\in(\tilde{P}^{k}(\mathfrak{T}^{4}))^{4}.\) Therefore, the polynomial 1-forms on \(\mathfrak{T}^{4}\) have the following dimension \[\dim(V_{k}\Lambda^{1}(\mathfrak{T}^{4})) =\dim((P^{k-1}(\mathfrak{T}^{4}))^{4})+\dim((\tilde{P}^{k}( \mathfrak{T}^{4}))^{4})-\dim(\tilde{P}^{k+1}(\mathfrak{T}^{4}))\] \[=4\binom{k+3}{4}+4\binom{k+3}{3}-\binom{k+4}{3}=\frac{1}{6}k(k+2) (k+3)(k+4).\] The dual space \(\Sigma^{k,1}(\mathfrak{T}^{4})\) must have the same dimension. We decompose \(\Sigma^{k,1}(\mathfrak{T}^{4})\) into the trace and volume degrees of freedom. For the trace degrees of freedom, \(\Sigma^{k,1}_{trace}(\mathfrak{T}^{4})\), we use edge, face, and facet degrees of freedom from Eqs. (2.3), (2.5), and (2.11). The total number of trace degrees of freedom is, therefore \[\dim(\Sigma^{k,1}_{trace}(\mathfrak{T}^{4})) =10\dim(P^{k-1}(\mathfrak{T}^{1}))+10\dim((P^{k-2}(\mathfrak{T}^ {2}))^{2})+5\dim((P^{k-3}(\mathfrak{T}^{3}))^{3})\] \[=10\binom{k}{1}+20\binom{k}{2}+15\binom{k}{3}\] \[=\frac{5}{2}k(k^{2}+k+2).\] We can also specify volume degrees of freedom on \(\mathfrak{T}^{4}\) for the 1-form proxy \(E\) as follows \[\Sigma^{k,1}_{vol}(\mathfrak{T}^{4}):=\left\{\int_{\mathfrak{T}^{4}}E\cdot q,\qquad q\in(P^{k-4}(\mathfrak{T}^{4}))^{4}\right\}. \tag{4.9}\] The corresponding dimension is \[\dim\left(\Sigma^{k,1}_{vol}(\mathfrak{T}^{4})\right)=4\binom{k}{4}=\frac{1}{ 6}(k-3)(k-2)(k-1)k.\] We see that \[\dim(V_{k}\Lambda^{1}(\mathfrak{T}^{4}))=\dim(\Sigma^{k,1}(\mathfrak{T}^{4}))= \dim(\Sigma^{k,1}_{trace}(\mathfrak{T}^{4}))+\dim(\Sigma^{k,1}_{vol}(\mathfrak{ T}^{4})),\] from which unisolvency will follow if we can show that the only element of \(V_{k}\Lambda^{1}(\mathfrak{T}^{4})\) with vanishing degrees of freedom is the zero element. To establish this, we follow the argument in Lemma 5.36 of [30]. During the proof of unisolvency of 1-forms, for ease of exposition, we work on a pentatope \(K\) whose vertices are \[(0,0,0,0),(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1). \tag{4.10}\] There exists an affine map between the reference pentatope \(\mathfrak{T}^{4}\) and the element \(K\), and hence the polynomial spaces \(V_{k}\Lambda^{s}(\mathfrak{T}^{4})\) are easily defined on \(K\). In what follows, we show that the degrees of freedom, \(\Sigma^{k,1}(K)\), are unisolvent. The strategy of the proof is as follows: we first show that if \(E\in V_{k}\Lambda^{1}(K)\) satisfies \(\operatorname{skwGrad}(E)=0\) then \(E=\operatorname{grad}(p)\) for some scalar \(p\in P^{k}(K)\). Next, we show that if all the degrees of freedom of \(E\in V_{k}\Lambda^{1}(K)\) vanish, then \(\operatorname{Tr}(\operatorname{skwGrad}(E))=0\) on the facets of \(K\). Furthermore, we show that \(\operatorname{skwGrad}(E)=0\) on the entirety of \(K\). 
Based on our first result (above), it immediately follows that \(E=\operatorname{grad}(p)\). We finally show that the vanishing of volume degrees of freedom for \(E\) implies that \(p=0\). **Lemma 4.5**.: _If \(E\in V_{k}\Lambda^{1}(K)\) satisfies \(\operatorname{skwGrad}(E)=0\) then \(E\equiv\operatorname{grad}(p)\) for some \(p\in P^{k}(K)\)._ Proof.: This proof follows closely the analogous proof for the tetrahedron in Lemma 5.28 of [30]. We first observe that if \(E\in V_{k}\Lambda^{1}(K),E\in(P^{k}(K))^{4}\). Moreover, \(\operatorname{skwGrad}(E)=0\Rightarrow E=\operatorname{grad}(p)\) for some \(p\in P^{k+1}(K)\). Now, we need to show that \(p\in P^{k}(K)\). We can decompose \(p\) such that \(p=p_{1}+p_{2}\), where \(p_{1}\in P^{k}(K)\) and \(p_{2}\in\tilde{P}^{k+1}(K).\) However, the form of \(V_{k}\Lambda^{1}(K)\) in Eq. (4.1b) forces \(\operatorname{grad}(p_{2})\cdot x=0\). Since \(p_{2}\) is homogeneous, \(x\cdot\operatorname{grad}(p_{2})=(k+1)p_{2}=0\), and therefore \(E=\operatorname{grad}(p)\) for some \(p\in P^{k}(K)\). The implication of the previous lemma is that while a generic \(w\in V_{k}\Lambda^{1}(K)\) could contain homogeneous polynomials of degree \(k\), if it satisfies \(\operatorname{skwGrad}(w)=0\), then \(w\) must be the gradient of a degree-\(k\) form. Hence \(w\in(P^{k-1}(K))^{4}\). We next show that the vanishing of all dofs for a polynomial 1-form \(E\) on \(K\) implies that not only the trace of \(E\) but also \(\operatorname{Tr}(\operatorname{skwGrad}(E))\) vanishes on the facets. **Lemma 4.6**.: _Let \(E\in V_{k}\Lambda^{1}(K)\) be a polynomial 1-form for which all the degrees of freedom \(\Sigma^{k,1}(K)\) vanish. Then \(\operatorname{Tr}(\operatorname{skwGrad}(E))\equiv 0\) on the facets of \(K\)._ Proof.: Since all the dofs for \(E\) vanish, then in particular those associated with the traces vanish. Moreover, the trace of \(E\) on to any facet \(\mathcal{F}\) is a 1-form on this tetrahedron, and \(\operatorname{Tr}(E)\) vanishes on \(\mathcal{F}\). We now integrate by parts on \(\mathcal{F}\) to see that \[\int_{\mathcal{F}}q\cdot\operatorname{Tr}(\operatorname{skwGrad}(E))\,dx=\int _{\mathcal{F}}q\cdot\nabla\times(\operatorname{Tr}(E))\,dx=\int_{\mathcal{F}} \left(\nabla\times q\right)\cdot\operatorname{Tr}(E)\,dx=0.\] This equation holds for any sufficiently smooth \(q\), and in particular for \(q\in(P^{k-1}(\mathcal{F}))^{3}\). Choosing \(q=\operatorname{Tr}(\operatorname{skwGrad}(E))\) on \(\mathcal{F}\) shows that \(\operatorname{Tr}(\operatorname{skwGrad}(E))=0\) on \(\mathcal{F}\). The next theorem uses the previous lemmas to establish unisolvency. **Theorem 4.7**.: _Let \(E\in V_{k}\Lambda^{1}(K)\) be a polynomial 1-form for which all the degrees of freedom \(\Sigma^{k,1}(K)\) vanish. Then \(E\equiv 0\)._ Proof.: In accordance with Eq. (3.2) \[\left(\operatorname{tr}^{(1)}E\right)(F) =\frac{1}{2}\int_{\partial K}\left[E\otimes n-n\otimes E\right]:F\,ds\] \[=\int_{K}\left(\operatorname{Div}F\right)\cdot E\,dx-\int_{K}F: \left(\operatorname{skwGrad}E\right)\,dx, \tag{4.11}\] for \(F\in H(\operatorname{Div},K,\mathbb{K})\). We note that if \(F\in\mathcal{L}\left((P^{k-3}(K))^{6}\right)\) then it is automatically in \(H(\operatorname{Div},K,\mathbb{K})\). Since \(\operatorname{tr}(E)=0\), we have \[\int_{K}\left(\operatorname{Div}F\right)\cdot E\,dx=\int_{K}F:\left( \operatorname{skwGrad}E\right)\,dx,\] for each \(F\in\mathcal{L}\left((P^{k-3}(K))^{6}\right)\). 
Since the volumetric degrees of freedom vanish, we can set \(q=\operatorname{Div}F\) in Eq. (4.9), and obtain the following \[\int_{K}F:\left(\operatorname{skwGrad}E\right)\,dx=0\qquad\forall F\in \mathcal{L}\left((P^{k-3}(K))^{6}\right). \tag{4.12}\] Now, let \(\mathcal{F}\) be a tetrahedral facet of the element \(K\). Using the previous lemma establishes that \(\operatorname{skwGrad}(E)\) has vanishing traces on \(\mathcal{F}\). Let us denote \(B:=\operatorname{skwGrad}(E)=\mathcal{L}\left(\left[B_{12},B_{13},B_{14},B_{2 3},B_{24},B_{34}\right]^{T}\right).\) Consider the trace on to the facet on the hyperplane \(x_{4}=0\), (see Eq. (3.7)). Since \[\operatorname{tr}(B)=2\begin{bmatrix}B_{23}(x_{1},x_{2},x_{3},0)\\ -B_{13}(x_{1},x_{2},x_{3},0)\\ B_{12}(x_{1},x_{2},x_{3},0)\\ 0\end{bmatrix}=0,\] it follows that \(B_{23}(x_{1},x_{2},x_{3},0)=B_{13}(x_{1},x_{2},x_{3},0)=B_{12}(x_{1},x_{2},x_{ 3},0)=0.\) Similarly, the trace of \(B\) on to the plane \(x_{3}=0\) vanishes, from which we see \(B_{12}(x_{1},x_{2},0,x_{4})=B_{14}(x_{1},x_{2},0,x_{4})=B_{24}(x_{1},x_{2},0,x _{4})=0.\) Consequently, \(B_{12}(x_{1},x_{2},x_{3},x_{4})=x_{3}x_{4}r_{12}\) for some \(r_{12}\in P^{k-3}(K)\). Similar considerations on all the other facets imply that \[\operatorname{skwGrad}(E)=B=\mathcal{L}\begin{pmatrix}\begin{bmatrix}x_{3}x_{ 4}r_{12}\\ x_{2}x_{4}r_{13}\\ x_{2}x_{3}r_{14}\\ x_{1}x_{4}r_{23}\\ x_{1}x_{3}r_{24}\\ x_{1}x_{2}r_{34}\end{bmatrix}\end{pmatrix},\qquad r_{ij}\in P^{k-3}(K),\qquad \text{for}\quad i=1,2,3,4,\quad j=1,2,3,4.\] But then choosing \(F=\mathcal{L}\left(\begin{bmatrix}r_{12},r_{13},r_{14},r_{23},r_{24},r_{34} \end{bmatrix}^{T}\right)\) in Eq. (4.12), we get that \[0 =\int_{K}F:\left(\operatorname{skwGrad}E\right)\,dx\] \[=\int_{K}\left(x_{3}x_{4}r_{12}^{2}+x_{2}x_{4}r_{13}^{2}+x_{2}x_{ 3}r_{14}^{2}+x_{1}x_{4}r_{23}^{2}+x_{1}x_{3}r_{24}^{2}+x_{1}x_{2}r_{34}^{2} \right)dx,\] from which it follows that \(r_{ij}=0\), (as the products of the form \(x_{3}x_{4}\), \(x_{2}x_{4}\), etc. are strictly non-negative on \(K\)). From this we conclude that \(B=\operatorname{skwGrad}(E)=0\) in \(K\), and consequently from Lemma 4.5, \(E=\operatorname{grad}(p)\) for some \(p\in P^{k}(K)\). Since the traces of \(E\) vanish, we can choose \(p=0\) on the facets, faces, edges, and vertices of \(K\), which allows us to write \[p=x_{1}x_{2}x_{3}x_{4}\hat{r},\quad\hat{r}\in P^{k-4}(K).\] But since the volumetric degrees of freedom of \(E\) vanish, we can pick \[q=\begin{bmatrix}x_{1}\partial_{1}(\hat{r})+\hat{r}\\ x_{2}\partial_{2}(\hat{r})+\hat{r}\\ x_{3}\partial_{3}(\hat{r})+\hat{r}\\ x_{4}\partial_{4}(\hat{r})+\hat{r}\end{bmatrix}=\begin{bmatrix}x_{1}\hat{r}_{ x_{1}}+\hat{r}\\ x_{2}\hat{r}_{x_{2}}+\hat{r}\\ x_{3}\hat{r}_{x_{3}}+\hat{r}\\ x_{4}\hat{r}_{x_{4}}+\hat{r}\end{bmatrix},\] in Eq. (4.9), in order to obtain \[0 =\int_{K}E\cdot q\,dx=\int_{K}\text{grad}(p)\cdot q\,dx\] \[=\int_{K}\left(x_{2}x_{3}x_{4}(x_{1}\hat{r}_{x_{1}}+\hat{r})^{2}+ x_{1}x_{3}x_{4}(x_{2}\hat{r}_{x_{2}}+\hat{r})^{2}+x_{1}x_{2}x_{4}(x_{3}\hat{r}_{ x_{3}}+\hat{r})^{2}+x_{1}x_{2}x_{3}(x_{4}\hat{r}_{x_{4}}+\hat{r})^{2}\right)dx.\] All the coordinate functions of \(x_{1},x_{2},x_{3},x_{4}\) are non-negative in \(K\). Therefore, the integral above only vanishes if \((x_{1}\hat{r}_{x_{1}}+\hat{r})=0\), \((x_{2}\hat{r}_{x_{2}}+\hat{r})=0\), etc.. This in turn is impossible unless \(\hat{r}=0\). But then \(p\), and consequently \(E=\text{grad}(p)\) vanishes. 
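Before proceeding to 2-forms, the dimension counts that underpin the unisolvency arguments above are easy to verify directly. The Python sketch below is illustrative only (the helpers `dim_P` and `dim_Ptilde` are our own shorthand for the simplex polynomial-space dimensions); it confirms, for a range of degrees, that the trace and volume degrees of freedom add up to \(\dim(V_{k}\Lambda^{0}(\mathfrak{T}^{4}))\) and \(\dim(V_{k}\Lambda^{1}(\mathfrak{T}^{4}))\).

```python
from math import comb

def dim_P(k, n):
    """dim P^k(T^n) = C(k + n, n); zero when k < 0."""
    return comb(k + n, n) if k >= 0 else 0

def dim_Ptilde(k):
    """dim of homogeneous polynomials of degree k in four variables."""
    return comb(k + 3, 3) if k >= 0 else 0

for k in range(1, 9):
    # 0-forms on the pentatope: total dimension vs. trace + volume dof counts.
    total0 = dim_P(k, 4)
    trace0 = 5 + 10 * dim_P(k - 2, 1) + 10 * dim_P(k - 3, 2) + 5 * dim_P(k - 4, 3)
    vol0 = dim_P(k - 5, 4)
    assert total0 == trace0 + vol0

    # 1-forms: dim = 4 dim P^{k-1} + 4 dim(tilde P^k) - dim(tilde P^{k+1}).
    total1 = 4 * dim_P(k - 1, 4) + 4 * dim_Ptilde(k) - dim_Ptilde(k + 1)
    trace1 = 10 * dim_P(k - 1, 1) + 20 * dim_P(k - 2, 2) + 15 * dim_P(k - 3, 3)
    vol1 = 4 * dim_P(k - 4, 4)
    assert total1 == trace1 + vol1 == k * (k + 2) * (k + 3) * (k + 4) // 6
```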
#### 4.1.3 Dofs for 2-forms on \(\mathfrak{T}^{4}\) The polynomial 2-forms on \(\mathfrak{T}^{4}\) are associated with skew-symmetric matrices \[V_{k}\Lambda^{2}(\mathfrak{T}^{4}) =\mathcal{L}\left((P^{k-1}(\mathfrak{T}^{4}))^{6}\right)\oplus \{\tilde{P}^{k-1}(\mathfrak{T}^{4})B_{1}+\tilde{P}^{k-1}(\mathfrak{T}^{4})B_{ 2}+\tilde{P}^{k-1}(\mathfrak{T}^{4})B_{3}+\tilde{P}^{k-1}(\mathfrak{T}^{4})B_ {4}\}\] \[=\mathcal{L}\left((P^{k-1}(\mathfrak{T}^{4}))^{6}\right)\oplus \left\{B\in\mathcal{L}\left((\tilde{P}^{k}(\mathfrak{T}^{4}))^{6}\right)|Bx=0 \right\}.\] The dimension of the second space above is the same as that of \((\tilde{P}^{k-1}(\mathfrak{T}^{4}))^{3}+\tilde{P}^{k-1}(\mathfrak{T}^{3})\). Please consult Appendix A for proof of this fact. Altogether, the dimension of the entire space is \[\dim\left(V_{k}\Lambda^{2}(\mathfrak{T}^{4})\right) =6\binom{k+3}{k-1}+3\binom{k+2}{3}+\binom{k+1}{2}\] \[=\frac{1}{4}k\left(k^{3}+8k^{2}+19k+12\right).\] Face and facet traces are well-defined for polynomial 2-forms on \(\mathfrak{T}^{4}\). Therefore, we specify the degrees of freedom corresponding to \(V_{k}\Lambda^{2}(\mathfrak{T}^{4})\) as \[\Sigma^{k,2}(\mathfrak{T}^{4})=\Sigma^{k,2}_{vol}(\mathfrak{T}^{4})\cup\Sigma^ {k,2}_{trace}(\mathfrak{T}^{4}),\] where \(\Sigma^{k,2}_{trace}(\mathfrak{T}^{4})\) are the trace degrees of freedom corresponding to the 10 triangular faces and 5 tetrahedral facets, as given by Eqs. (2.6) and (2.12). The dimension of this space is \[\dim\left(\Sigma^{k,2}_{trace}(\mathfrak{T}^{4})\right) =10\dim(P^{k-1}(\mathfrak{T}^{2}))+5\dim\left((P^{k-2}(\mathfrak{ T}^{3}))^{3}\right)\] \[=10\binom{k+1}{2}+15\binom{k+1}{3}\] \[=\frac{5}{2}k(k^{2}+2k+1).\] We can also specify volume degrees of freedom on \(\mathfrak{T}^{4}\) for a 2-form proxy \(F\) as \[\Sigma^{k,2}_{vol}(\mathfrak{T}^{4}):=\left\{\int_{\mathfrak{T}^{4}}F:q,\qquad q \in\mathcal{L}\left((P^{k-3}(\mathfrak{T}^{4}))^{6}\right)\right\}. \tag{4.13}\] The dimension of this space is \[\dim\left(\Sigma^{k,2}_{vol}(\mathfrak{T}^{4})\right)=6{k+1\choose 4}=\frac{1}{4}(k -2)(k-1)k(k+1).\] It can easily be confirmed that \[\dim\left(V_{k}\Lambda^{2}(\mathfrak{T}^{4})\right)=\dim\left(\Sigma^{k,2}( \mathfrak{T}^{4})\right)=\dim\left(\Sigma^{k,2}_{trace}(\mathfrak{T}^{4}) \right)+\dim\left(\Sigma^{k,2}_{vol}(\mathfrak{T}^{4})\right).\] Once again, unisolvency of the finite element will follow if we can establish that the vanishing of all dofs for an arbitrary \(u\in V_{k}\Lambda^{2}(\mathfrak{T}^{4})\) implies that \(u\equiv 0\). Following our analysis of the 1-forms, it is more convenient to work on the mapped element \(K\) whose vertices are given in Eq. (4.10). In addition, we follow a similar strategy as before in order to establish unisolvency. First, we show that if \(F\in V_{k}\Lambda^{2}(K)\) has vanishing curl, it must be that \(F=\operatorname{skwGrad}(E)\) for some \(E\in(P^{k}(K))^{4}\). Next, we show that if the trace degrees of freedom of \(F\) vanish, then \(\operatorname{curl}(F)=0\) on the facets of \(K\). This helps us establish that a 2-form \(F\in V_{k}\Lambda^{2}(K)\) with vanishing degrees of freedom has vanishing curl on the entirety of \(K\), and furthermore that \(F\) itself vanishes. **Lemma 4.8**.: _If \(F\in V_{k}\Lambda^{2}(K)\) has vanishing curl, then \(F\equiv\operatorname{skwGrad}(E)\) for some \(E\in(P^{k}(K))^{4}\)._ Proof.: Since \(\operatorname{curl}(F)=0\), \(F=\operatorname{skwGrad}(E)\) for some sufficiently smooth 1-form \(E\). 
In addition, since \(F\in V_{k}\Lambda^{2}(K)\), in accordance with Eq. (4.1b) we deduce that \[E=A+C,\quad A\in(P^{k}(K))^{4},\quad C\in(\widetilde{P}^{k+1}(K))^{4}.\] It remains for us to show that \(C=[c_{1},c_{2},c_{3},c_{4}]^{T}=0\) where \(c_{i}\in\widetilde{P}^{k+1}(K)\). From Remark (4.1), we easily verify that \(\operatorname{skwGrad}(C)\in\left\{B\in\mathcal{L}\left((\widetilde{P}^{k}(K) )^{6}\right)|Bx=0\right\}\) and hence \[\operatorname{skwGrad}(C)x=\begin{bmatrix}x\cdot\partial_{1}(C)-x\cdot \operatorname{grad}(c_{1})\\ x\cdot\partial_{2}(C)-x\cdot\operatorname{grad}(c_{2})\\ x\cdot\partial_{3}(C)-x\cdot\operatorname{grad}(c_{3})\\ x\cdot\partial_{4}(C)-x\cdot\operatorname{grad}(c_{4})\end{bmatrix}=0.\] But since \(c_{i}\) is a homogeneous polynomial, \(x\cdot\operatorname{grad}(c_{i})=(k+1)c_{i}\), and therefore \[x\cdot\partial_{i}(C)=(k+1)c_{i},\qquad i=1,2,3,4.\] This is only possible if \(c_{i}=0\) for each \(i\). As a result, it immediately follows that \(C=0\). **Lemma 4.9**.: _Let \(F\in V_{k}\Lambda^{2}(K)\) be a polynomial 2-form for which all the degrees of freedom \(\Sigma^{k,2}(K)\) vanish. Then \(\operatorname{Tr}(\operatorname{curl}(F))\equiv 0\) on the facets of \(K\)._ Proof.: Let \(\mathcal{F}\) be a tetrahedral facet of \(K\). Since \(\operatorname{curl}(F)\) is a 3-form, its trace on \(\mathcal{F}\) is a 3-form. The divergence theorem on \(\mathcal{F}\), and the vanishing of traces of \(F\) gives \[\int_{\mathcal{F}}\operatorname{Tr}(\operatorname{curl}(F))q\,dx=\int_{ \mathcal{F}}\nabla\cdot(\operatorname{Tr}(F))q\,dx=-\int_{\mathcal{F}}( \operatorname{Tr}(F))\cdot\nabla q\,dx=0, \tag{4.14}\] for any sufficiently smooth \(q\), and in particular for \(q\in P^{k-1}(K)\). Therefore, upon setting \(q=\nabla\cdot(\operatorname{Tr}(F))\) in Eq. (4.14), we find that \(\nabla\cdot(\operatorname{Tr}(F))=\operatorname{Tr}(\operatorname{curl}(F))=0\) on each \(\mathcal{F}\), and on the entire boundary of \(K\) **Theorem 4.10**.: _Let \(F\in V_{k}\Lambda^{2}(K)\) be a polynomial 2-form for which all the degrees of freedom \(\Sigma^{k,2}(K)\) vanish. Then \(F\equiv 0\)._ Proof.: We first observe from Eq. (3.3) that \[\left(\operatorname{tr}^{(2)}F\right)(E) =\int_{\partial K}\left(n\times F\right)\cdot E\,ds\] \[=\int_{K}\left(\operatorname{curl}F\right)\cdot E\,dx-\int_{K} \left(\operatorname{Curl}E\right):F\,dx,\] where \(E\in H\left(\operatorname{Curl},K,\mathbb{R}^{4}\right)\). Since all the trace degrees of freedom of \(F\in V_{k}\Lambda^{2}(K)\) vanish, \(F\) has zero trace, and therefore \[\int_{K}\left(\operatorname{curl}F\right)\cdot E\,dx=\int_{K}\left( \operatorname{Curl}E\right):F\,dx.\] Now, if we pick \(E\in(P^{k-2}(K))^{4}\) then \(E\in H\left(\operatorname{Curl},K,\mathbb{R}^{4}\right)\), and \(\operatorname{Curl}(E)\in\mathcal{L}\left((P^{k-3}(K))^{6}\right).\) Since the volumetric dofs vanish for \(F\), we set \(q=\operatorname{Curl}(E)\) in Eq. (4.13), and we obtain \[\int_{K}\left(\operatorname{curl}F\right)\cdot E\,dx=0,\qquad \forall E\in(P^{k-2}(K))^{4}. \tag{4.15}\] From the previous lemma, the trace of \(\operatorname{curl}(F)\) vanishes on the facets and so it is a 3-form bubble in \((P^{k-1}(K))^{4}\), and we can write \[\operatorname{curl}(F)=\begin{bmatrix}x_{1}\psi_{1}\\ x_{2}\psi_{2}\\ x_{3}\psi_{3}\\ x_{4}\psi_{4}\end{bmatrix},\qquad\psi_{i}\in P^{k-2}(K),\qquad i=1,2,3,4.\] Upon choosing \(E=[\psi_{1},\psi_{2},\psi_{3},\psi_{4}]^{T}\) in Eq. 
(4.15), we obtain \[0=\int_{K}\left(\operatorname{curl}F\right)\cdot E\,dx=\int_{K} \sum_{i=1}^{4}x_{i}\psi_{i}^{2}\,dx.\] But \(x_{i}\geq 0\) in \(K\) and so we are guaranteed that \(\psi_{i}=0\) for \(i=1,2,3,4\). This shows that \(\operatorname{curl}(F)=0\) in \(K\). Next, in accordance with Lemma 4.8, we can immediately deduce that \(F=\operatorname{skwGrad}(\mathcal{E})\) for some \(\mathcal{E}\in(P^{k}(K))^{4}\). In turn, the vanishing of the traces of \(F\) allows us to pick \(\mathcal{E}\) to also have vanishing traces. Recalling Eq. (3.6), we can obtain the following trace formula on the facet \(x_{4}=0\) \[\operatorname{tr}(\mathcal{E}) =\frac{1}{2}\begin{bmatrix}0&0&0&\mathcal{E}_{1}(x_{1},x_{2},x_{ 3},0)\\ 0&0&0&\mathcal{E}_{2}(x_{1},x_{2},x_{3},0)\\ 0&0&0&\mathcal{E}_{3}(x_{1},x_{2},x_{3},0)\\ -\mathcal{E}_{1}(x_{1},x_{2},x_{3},0)&-\mathcal{E}_{2}(x_{1},x_{2},x_{3},0)&- \mathcal{E}_{3}(x_{1},x_{2},x_{3},0)&0\end{bmatrix}=0,\] \[\Rightarrow\mathcal{E}_{1}(x_{1},x_{2},x_{3},0)=\mathcal{E}_{2} (x_{1},x_{2},x_{3},0)=\mathcal{E}_{3}(x_{1},x_{2},x_{3},0)=0.\] Similar considerations apply for the other facets, allowing us to obtain the following expression for \(\mathcal{E}\) \[\mathcal{E}=\begin{bmatrix}x_{2}x_{3}x_{4}g_{1}\\ x_{1}x_{3}x_{4}g_{2}\\ x_{1}x_{2}x_{4}g_{3}\\ x_{1}x_{2}x_{3}g_{4}\end{bmatrix},\quad g_{i}\in P^{k-3}(K),\qquad i=1,2,3,4.\] Furthermore \[\mathrm{skwGrad}(\mathcal{E})=\mathcal{L}\left(\begin{bmatrix}x_{3}x_{4}\left( \partial_{1}(x_{1}g_{2})-\partial_{2}(x_{2}g_{1})\right)\\ x_{2}x_{4}\left(\partial_{1}(x_{1}g_{3})-\partial_{3}(x_{3}g_{1})\right)\\ x_{2}x_{3}\left(\partial_{1}(x_{1}g_{4})-\partial_{4}(x_{4}g_{1})\right)\\ x_{1}x_{4}\left(\partial_{2}(x_{2}g_{3})-\partial_{3}(x_{3}g_{2})\right)\\ x_{1}x_{3}\left(\partial_{2}(x_{2}g_{4})-\partial_{4}(x_{4}g_{2})\right)\\ x_{1}x_{2}\left(\partial_{3}(x_{3}g_{4})-\partial_{4}(x_{4}g_{3})\right)\end{bmatrix} \right).\] We can now pick \(q\) in Eq. (4.13) as follows \[q=\mathcal{L}\left(\begin{bmatrix}q_{12}\\ q_{13}\\ q_{14}\\ q_{23}\\ q_{24}\\ q_{34}\end{bmatrix}\right)=\mathcal{L}\left(\begin{bmatrix}\partial_{1}(x_{1}g_ {2})-\partial_{2}(x_{2}g_{1})\\ \partial_{1}(x_{1}g_{3})-\partial_{3}(x_{3}g_{1})\\ \partial_{1}(x_{1}g_{4})-\partial_{4}(x_{4}g_{1})\\ \partial_{2}(x_{2}g_{3})-\partial_{3}(x_{3}g_{2})\\ \partial_{2}(x_{2}g_{4})-\partial_{4}(x_{4}g_{2})\\ \partial_{3}(x_{3}g_{4})-\partial_{4}(x_{4}g_{3})\end{bmatrix}\right),\] in order to obtain \[0=\int_{K}F:q\,dx=\int_{K}\mathrm{skwGrad}(\mathcal{E}):q\,dx\] \[=\int_{K}\left(x_{3}x_{4}q_{12}^{2}+x_{2}x_{4}q_{13}^{2}+x_{2}x_{ 3}q_{14}^{2}+x_{1}x_{4}q_{23}^{2}+x_{1}x_{3}q_{24}^{2}+x_{1}x_{2}q_{34}^{2} \right)dx.\] But then each \(q_{ij}=0\), and in turn, it is easy to check that each \(g_{i}\) must vanish. Finally, it follows that \(\mathcal{E}=0\) and \(F=\mathrm{skwGrad}(\mathcal{E})\) must vanish. #### 4.1.4 Dofs for 3-forms on \(\mathfrak{T}^{4}\) The polynomial 3-forms on \(\mathfrak{T}^{4}\) are associated with 4-vectors. The corresponding degrees of freedom have the following dimension \[\dim\left(V_{k}\Lambda^{3}(\mathfrak{T}^{4})\right) =\dim((P^{k-1}(\mathfrak{T}^{4}))^{4})+\dim(\tilde{P}^{k-1}( \mathfrak{T}^{4}))\] \[=4{k+3\choose 4}+{k+2\choose 3}=\frac{1}{6}k(k+1)(k+2)(k+4).\] Only facet traces are defined for 3-forms, as given by Eq. (2.13). 
Therefore
\[\dim\left(\Sigma_{trace}^{k,3}(\mathfrak{T}^{4})\right)=5\dim\left(P^{k-1}(\mathfrak{T}^{3})\right)=5{k+2\choose 3}=\frac{5}{6}k(k+1)(k+2).\]
We define the volume degrees of freedom for the 3-form proxy \(G\) as follows
\[\Sigma_{vol}^{k,3}(\mathfrak{T}^{4}):=\left\{\int_{\mathfrak{T}^{4}}G\cdot q,\qquad q\in(P^{k-2}(\mathfrak{T}^{4}))^{4}\right\}. \tag{4.16}\]
The dimension of this space is
\[\dim\left(\Sigma_{vol}^{k,3}(\mathfrak{T}^{4})\right)=4{k+2\choose 4}=\frac{1}{6}(k-1)k(k+1)(k+2).\]
Then, we define
\[\Sigma^{k,3}(\mathfrak{T}^{4}):=\Sigma^{k,3}_{trace}(\mathfrak{T}^{4})\cup\Sigma^{k,3}_{vol}(\mathfrak{T}^{4}),\]
and note that
\[\dim\left(V_{k}\Lambda^{3}(\mathfrak{T}^{4})\right)=\dim\left(\Sigma^{k,3}(\mathfrak{T}^{4})\right)=\dim\left(\Sigma^{k,3}_{trace}(\mathfrak{T}^{4})\right)+\dim\left(\Sigma^{k,3}_{vol}(\mathfrak{T}^{4})\right).\]
Therefore, unisolvency will be guaranteed by establishing the following result.

**Lemma 4.11**.: _Consider \(G\in V_{k}\Lambda^{3}(K)\) a polynomial 3-form for which all the degrees of freedom \(\Sigma^{k,3}(K)\) vanish. Then \(G\equiv 0\)._

Proof.: This proof closely follows the strategy outlined in [30], with reference element \(K\) given by Eq. (4.10). We begin by introducing \(v\in P^{k-1}(K)\). In accordance with integration by parts (Eq. (3.4)), the vanishing of trace and volume degrees of freedom for \(G\), and Eq. (4.16), one obtains
\[\int_{K}(\mathrm{div}\,G)\,v=-\int_{K}G\cdot(\mathrm{grad}\,v)=0.\]
By choosing \(v=\mathrm{div}(G)\), we see that \(\mathrm{div}(G)=0\) in \(K\). Now, \(G=p+\hat{r}x\) for \(p\in(P^{k-1}(K))^{4}\) and \(\hat{r}\in\tilde{P}^{k-1}(K)\) by definition (Eq. (4.1d)). In addition, it is easy to check that \(\mathrm{div}(\hat{r}x)=(k+4)\hat{r}\). Therefore,
\[\mathrm{div}(G)=\mathrm{div}(p)+(k+4)\hat{r}\Rightarrow\hat{r}=-\frac{1}{k+4}\mathrm{div}(p)\in P^{k-2}(K).\]
But this is not possible unless the degree \(k-1\) homogeneous polynomial \(\hat{r}=0\). With this in mind, we observe the following
\[G=p\in(P^{k-1}(K))^{4}\Rightarrow G=\begin{bmatrix}x_{1}\phi_{1}\\ x_{2}\phi_{2}\\ x_{3}\phi_{3}\\ x_{4}\phi_{4}\end{bmatrix},\quad\phi_{i}\in P^{k-2}(K).\]
This reformulation is possible because \(G\) is a 3-form bubble with vanishing traces given by Eq. (3.8). If \(k>1\), we can pick \(q=[\phi_{1},\phi_{2},\phi_{3},\phi_{4}]^{T}\) in Eq. (4.16), from which it will follow that \(\phi_{i}=0\), (and hence \(G\equiv 0\)). If \(k=1\), then trivially \(\phi_{i}=0\).

#### 4.1.5 Dofs for 4-forms on \(\mathfrak{T}^{4}\)

The polynomial 4-forms on \(\mathfrak{T}^{4}\) are associated with scalars. Traces for 4-forms are not well-defined. Instead, we specify interior degrees of freedom for the 4-form proxy \(q\) as
\[\Sigma^{k,4}_{vol}(\mathfrak{T}^{4}):=\left\{\int_{\mathfrak{T}^{4}}qp,\qquad p\in P^{k-1}(\mathfrak{T}^{4})\right\}. \tag{4.17}\]
In a natural fashion, we have that
\[\dim\left(V_{k}\Lambda^{4}(\mathfrak{T}^{4})\right)=\dim\left(\Sigma^{k,4}(\mathfrak{T}^{4})\right)=\dim\left(\Sigma^{k,4}_{vol}(\mathfrak{T}^{4})\right)={k+3\choose 4}=\frac{1}{24}k(k+1)(k+2)(k+3).\]
It then remains to prove unisolvency.

**Lemma 4.12**.: _Consider \(q\in V_{k}\Lambda^{4}(\mathfrak{T}^{4})\) a polynomial 4-form for which all the degrees of freedom \(\Sigma^{k,4}(\mathfrak{T}^{4})\) vanish. Then \(q\equiv 0\)._

Proof.: Suppose that \(q\in V_{k}\Lambda^{4}(\mathfrak{T}^{4})\) has vanishing degrees of freedom as given by Eq.
(4.17); then setting \(p=q\) shows that \(q\equiv 0\) ## 5 Finite Elements on a Reference Tetrahedral Prism In this section, we introduce the finite element approximation spaces for \(s\)-forms on the tetrahedral prism \(\mathfrak{N}^{4}\). These finite element spaces are developed by taking tensor products of spaces on tetrahedra \(\mathfrak{T}^{3}\) with spaces on line segments \(\mathfrak{T}^{1}\). In accordance with the work of [32] and [29], we can construct tensor product elements using the spaces from two different sequences \[U_{0}\xrightarrow{d^{(0)}}U_{1}\xrightarrow{d^{(1)}}\cdots \xrightarrow{d^{(n-1)}}U_{n},\] \[W_{0}\xrightarrow{d^{(0)}}W_{1}\xrightarrow{d^{(1)}}\cdots \xrightarrow{d^{(m-1)}}W_{m},\] which are defined on domains \(\Omega\in\mathbb{R}^{n}\) and \(\underline{\Omega}\in\mathbb{R}^{m}\), respectively. The associated tensor product sequence can be written as follows \[\left(U\times W\right)_{0}\xrightarrow{d^{(0)}}\left(U\times W \right)_{1}\xrightarrow{d^{(1)}}\cdots\xrightarrow{d^{(n+m-1)}}\left(U\times W \right)_{n+m},\] which is defined on the domain \(\underline{\Omega}\in\mathbb{R}^{n+m}\). Each entry in the tensor product sequence above can be written as \[\left(U\times W\right)_{k}=\bigoplus_{i+j=k}\left(U_{i}\times W_{j}\right),\] where \(k=0,\ldots,n+m\). On the tetrahedral prism, we have that \(n=3\) and \(m=1\). As a result, we recover the following sequence \[\left(U\times W\right)_{0}\xrightarrow{d^{(0)}}\left(U\times W \right)_{1}\xrightarrow{d^{(1)}}\left(U\times W\right)_{2}\xrightarrow{d^{(2) }}\left(U\times W\right)_{3}\xrightarrow{d^{(3)}}\left(U\times W\right)_{4},\] where \[\left(U\times W\right)_{0} =U_{0}\times W_{0}, \tag{5.1a}\] \[\left(U\times W\right)_{1} =\left(U_{1}\times W_{0}\right)\oplus\left(U_{0}\times W_{1} \right),\] (5.1b) \[\left(U\times W\right)_{2} =\left(U_{1}\times W_{1}\right)\oplus\left(U_{2}\times W_{0} \right),\] (5.1c) \[\left(U\times W\right)_{3} =\left(U_{3}\times W_{0}\right)\oplus\left(U_{2}\times W_{1} \right),\] (5.1d) \[\left(U\times W\right)_{4} =U_{3}\times W_{1}. \tag{5.1e}\] The precise construction of the resulting tensor-product spaces is given in B. This construction is expressed in terms of differential forms. 
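As an aside, the dimension bookkeeping for the tensor-product construction is immediate once the dimensions of the two factor sequences are known. The Python sketch below is illustrative only: it presupposes the tetrahedron/segment factor spaces adopted in the Nedelec-Raviart-Thomas sequence introduced below, together with the standard dimension formulas for those spaces, and cross-checks the resulting prism dimensions against the closed-form totals quoted in the later subsections.

```python
from math import comb

def tet_dims(k):
    """Dimensions of the factor sequence on T^3: CG^k -> N^{k-1} -> RT^{k-1} -> DG^{k-1}."""
    cg = comb(k + 3, 3)                  # continuous Lagrange of degree k
    ned = k * (k + 2) * (k + 3) // 2     # first-kind Nedelec (order k-1 in the notation below)
    rt = k * (k + 1) * (k + 3) // 2      # Raviart-Thomas (order k-1 in the notation below)
    dg = comb(k + 2, 3)                  # discontinuous polynomials of degree k-1
    return [cg, ned, rt, dg]

def seg_dims(k):
    """Dimensions of the factor sequence on T^1: CG^k -> DG^{k-1}."""
    return [k + 1, k]

for k in range(1, 8):
    U, W = tet_dims(k), seg_dims(k)
    # (U x W)_s is the direct sum over i + j = s of U_i x W_j.
    prism = [sum(U[i] * W[j] for i in range(4) for j in range(2) if i + j == s)
             for s in range(5)]
    expected = [
        (k + 1) ** 2 * (k + 2) * (k + 3) // 6,                              # 0-forms
        2 * k * (k + 1) * (k + 2) * (k + 3) // 3,                           # 1-forms
        k ** 2 * (k + 2) * (k + 3) // 2 + k * (k + 1) ** 2 * (k + 3) // 2,  # 2-forms
        k * (k + 1) ** 2 * (k + 2) // 6 + k ** 2 * (k + 1) * (k + 3) // 2,  # 3-forms
        k ** 2 * (k + 1) * (k + 2) // 6,                                    # 4-forms
    ]
    assert prism == expected
```

For instance, the \(s=1\) entry reproduces \(\tfrac{2}{3}k(k+1)(k+2)(k+3)\), the total dimension reported for the 1-forms on the tetrahedral prism further below.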
In what follows, we provide the equivalent spaces in terms of vector and matrix notation \[V_{k}\Lambda^{0}(\mathfrak{N}^{4}):= P^{k}\left(\mathfrak{T}^{1}\right)\times P^{k}\left( \mathfrak{T}^{3}\right),\] \[V_{k}\Lambda^{1}(\mathfrak{N}^{4}):= P^{k}\left(\mathfrak{T}^{1}\right)\times\left(\left\{p\; \middle|\;p\in\left[\tilde{P}^{k}\left(\mathfrak{T}^{3}\right),\tilde{P}^{k} \left(\mathfrak{T}^{3}\right),\tilde{P}^{k}\left(\mathfrak{T}^{3}\right),0 \right]^{T},p\cdot x=0\right\}\] \[\qquad\qquad\oplus\left[P^{k-1}\left(\mathfrak{T}^{3}\right),P^{k -1}\left(\mathfrak{T}^{3}\right),P^{k-1}\left(\mathfrak{T}^{3}\right),0 \right]^{T}\right)\oplus P^{k}\left(\mathfrak{T}^{3}\right)\times\left[0,0,0,P ^{k-1}\left(\mathfrak{T}^{1}\right)\right]^{T},\] \[V_{k}\Lambda^{2}(\mathfrak{N}^{4}):= P^{k-1}\left(\mathfrak{T}^{1}\right)\times\left(\left\{ \begin{bmatrix}0&0&0&p_{1}\\ 0&0&0&p_{2}\\ 0&0&0&p_{3}\\ *&*&*&0\end{bmatrix}\Big{|}\;p\in\left[\tilde{P}^{k}\left(\mathfrak{T}^{3} \right),\tilde{P}^{k}\left(\mathfrak{T}^{3}\right),\tilde{P}^{k}\left( \mathfrak{T}^{3}\right),0\right]^{T},p\cdot x=0\right\}\] \[\oplus\begin{bmatrix}0&0&0&P^{k-1}\left(\mathfrak{T}^{3}\right)\\ 0&0&0&P^{k-1}\left(\mathfrak{T}^{3}\right)\\ 0&0&0&P^{k-1}\left(\mathfrak{T}^{3}\right)\\ *&*&*&0\end{bmatrix}\right)\] \[\oplus P^{k}\left(\mathfrak{T}^{1}\right)\times\left(\begin{bmatrix} 0&P^{k-1}\left(\mathfrak{T}^{3}\right)&P^{k-1}\left(\mathfrak{T}^{3}\right)&0 \\ *&0&P^{k-1}\left(\mathfrak{T}^{3}\right)&0\\ *&*&0&0\\ 0&0&0&0\end{bmatrix}\oplus\tilde{P}^{k-1}\left(\mathfrak{T}^{3}\right) \begin{bmatrix}0&x_{3}&-x_{2}&0\\ *&0&x_{1}&0\\ *&*&0&0\\ 0&0&0&0\end{bmatrix}\right)\!,\] \[V_{k}\Lambda^{3}(\mathfrak{N}^{4}):= P^{k-1}\left(\mathfrak{T}^{3}\right)\times\left[0,0,0,P^{k} \left(\mathfrak{T}^{1}\right)\right]^{T}\] \[\oplus P^{k-1}\left(\mathfrak{T}^{1}\right)\times\left(\left[P^ {k-1}\left(\mathfrak{T}^{3}\right),P^{k-1}\left(\mathfrak{T}^{3}\right),P^{k -1}\left(\mathfrak{T}^{3}\right),0\right]^{T}\oplus\tilde{P}^{k-1}\left( \mathfrak{T}^{3}\right)\left[x_{1},x_{2},x_{3},0\right]^{T}\Bigg{)},\] \[V_{k}\Lambda^{4}(\mathfrak{N}^{4}):= P^{k-1}\left(\mathfrak{T}^{1}\right)\times P^{k-1}\left( \mathfrak{T}^{3}\right).\] We can now construct the following exact sequence \[V_{k}\Lambda^{0}(\mathfrak{N}^{4}) \xrightarrow{d^{(0)}} V_{k}\Lambda^{1}(\mathfrak{N}^{4}) \xrightarrow{d^{(1)}} V_{k}\Lambda^{2}(\mathfrak{N}^{4}) \xrightarrow{d^{(2)}} V_{k}\Lambda^{3}(\mathfrak{N}^{4}) \xrightarrow{d^{(3)}} V_{k}\Lambda^{4}(\mathfrak{N}^{4}).\] ### Nedelec-Raviart-Thomas Sequence We can construct a convenient sequence using the well-known Nedelec finite elements of the first kind and Raviart-Thomas finite elements \[V_{k}\Lambda^{0}(\mathfrak{N}^{4})= CG^{k}\left(\mathfrak{T}^{1}\right)\times CG^{k}\left(\mathfrak{T}^{3} \right), \tag{5.2a}\] \[V_{k}\Lambda^{1}(\mathfrak{N}^{4})= \left(CG^{k}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix}N^ {k-1}\left(\mathfrak{T}^{3}\right)\\ 0\end{bmatrix}\right)\oplus\left(\left[0,0,0,DG^{k-1}\left(\mathfrak{T}^{1} \right)\right]^{T}\times CG^{k}\left(\mathfrak{T}^{3}\right)\right),\] (5.2b) \[V_{k}\Lambda^{2}(\mathfrak{N}^{4})= \left(DG^{k-1}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix} \begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}&N^{k-1}\left(\mathfrak{T}^{3}\right)\\ *&*&0&0\end{bmatrix}\right)\] \[\oplus\left(CG^{k}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix} 0&RT_{3}^{k-1}(\mathfrak{T}^{3})&-RT_{2}^{k-1}(\mathfrak{T}^{3})&0\\ *&0&RT_{1}^{k-1}(\mathfrak{T}^{3})&0\\ *&*&0&0\\ 
0&0&0&0\end{bmatrix}\right),\] (5.2c) \[V_{k}\Lambda^{3}(\mathfrak{N}^{4})= \left(\begin{bmatrix}0,0,0,CG^{k}\left(\mathfrak{T}^{1}\right) \end{bmatrix}^{T}\times DG^{k-1}\left(\mathfrak{T}^{3}\right)\right)\oplus \left(DG^{k-1}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix}RT^{k-1} \left(\mathfrak{T}^{3}\right)\\ 0\end{bmatrix}\right),\] (5.2d) \[V_{k}\Lambda^{4}(\mathfrak{N}^{4})= DG^{k-1}\left(\mathfrak{T}^{1}\right)\times DG^{k-1}\left( \mathfrak{T}^{3}\right). \tag{5.2e}\] Here, we can define the following well-known scalar spaces \[CG^{k}\left(\mathcal{T}_{h}\right):= \left\{u\in H^{1}\left(\Omega\right):u|_{\mathfrak{T}^{3}}\in P ^{k}(\mathfrak{T}^{3}),\;\forall\mathfrak{T}^{3}\in\mathcal{T}_{h}\right\},\] \[DG^{k}\left(\mathcal{T}_{h}\right):= \left\{u\in L^{2}\left(\Omega\right):u|_{\mathfrak{T}^{3}}\in P ^{k}(\mathfrak{T}^{3}),\;\forall\mathfrak{T}^{3}\in\mathcal{T}_{h}\right\},\] and the following vector spaces \[N^{k}\left(\mathcal{T}_{h}\right) :=\left\{u\in H\left(\mathrm{curl},\Omega\right):u|_{\mathfrak{T}^{3 }}\in\left(P^{k}\left(\mathfrak{T}^{3}\right)\right)^{3}\oplus\left[x\times \left(P^{k}\left(\mathfrak{T}^{3}\right)\right)^{3}\right],\;\forall\mathfrak{T} ^{3}\in\mathcal{T}_{h}\right\},\] \[RT^{k}\left(\mathcal{T}_{h}\right) :=\left\{u\in H\left(\mathrm{div},\Omega\right):u|_{\mathfrak{T}^ {3}}\in\left(P^{k}\left(\mathfrak{T}^{3}\right)\right)^{3}+xP^{k}\left( \mathfrak{T}^{3}\right),\;\forall\mathfrak{T}^{3}\in\mathcal{T}_{h}\right\}.\] ### Bubble Spaces Here, we introduce the following bubble spaces which act as complementary spaces to the full spaces in Eqs. (5.2a)-(5.2e) \[\overset{\circ}{V}_{k}\Lambda^{0}(\mathfrak{M}^{4}) :=\mathrm{span}\left\{\vartheta_{ij\ell}(x_{1},x_{2},x_{3}) \vartheta_{m}(x_{4})\right\}, \tag{5.3a}\] \[\overset{\circ}{V}_{k}\Lambda^{1}(\mathfrak{M}^{4}) :=\mathrm{span}\left\{\begin{bmatrix}\Phi_{ij\ell}^{r}(x_{1},x_{2 },x_{3})\\ 0\end{bmatrix}\vartheta_{m}(x_{4})\right\}\] \[\oplus \mathrm{span}\left\{\vartheta_{ij\ell}(x_{1},x_{2},x_{3})\left[0,0,0,\varrho_{m}(x_{4})\right]^{T}\right\},\] (5.3b) \[\overset{\circ}{V}_{k}\Lambda^{2}(\mathfrak{M}^{4}) :=\mathrm{span}\left\{\left[\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}\Phi_{ij\ell}^{r}(x_{1},x_{2},x_{3})\\ 0&0\end{bmatrix}\varrho_{m}(x_{4})\right\}\] \[\oplus \mathrm{span}\left\{\begin{bmatrix}0&\left[\Psi_{ij\ell}^{r}(x_{ 1},x_{2},x_{3})\right]_{3}&-\left[\Psi_{ij\ell}^{r}(x_{1},x_{2},x_{3})\right]_{ 2}&0\\ *&0&\left[\Psi_{ij\ell}^{r}(x_{1},x_{2},x_{3})\right]_{1}&0\\ *&*&0&0\\ 0&0&0&0\end{bmatrix}\vartheta_{m}(x_{4})\right\},\] (5.3c) \[\overset{\circ}{V}_{k}\Lambda^{3}(\mathfrak{M}^{4}) :=\mathrm{span}\left\{\varrho_{ij\ell}(x_{1},x_{2},x_{3})\left[0,0,0,\vartheta_{m}(x_{4})\right]^{T}\right\}\] \[\oplus \mathrm{span}\left\{\begin{bmatrix}\Psi_{ij\ell}^{r}(x_{1},x_{2 },x_{3})\\ 0\end{bmatrix}\varrho_{m}(x_{4})\right\}. \tag{5.3d}\] Here, the \(\vartheta_{m}\)'s are H1-conforming bubble functions of degree \(k\) on line segments \[\vartheta_{m}(x_{4})=L_{m}\left(\nu_{2}\right),\] where \(m=2,\ldots,k\) is the indexing parameter, \(\nu=\nu(x_{4})=(\nu_{1},\nu_{2})\) are the barycentric coordinates for the segment, and \(L_{m}\) are integrated and scaled Legendre polynomials (see [4]). 
In a similar fashion, the \(\vartheta_{ij\ell}\)'s are H1-conforming bubble functions of degree \(k\) on tetrahedra \[\vartheta_{ij\ell}(x_{1},x_{2},x_{3})=L_{i}\left(\frac{\lambda_{2}}{\lambda_{1 }+\lambda_{2}}\right)L_{j}^{2i}\left(\frac{\lambda_{3}}{\lambda_{1}+\lambda_ {2}+\lambda_{3}}\right)L_{\ell}^{2(i+j)}\left(\lambda_{4}\right)\left(\lambda _{1}+\lambda_{2}\right)^{i}\left(\lambda_{1}+\lambda_{2}+\lambda_{3}\right)^{j},\] where \(i\geq 2\), \(j\geq 1\), \(\ell\geq 1\), \(n=i+j+\ell=4,\ldots,k\) are the indexing parameters, \(\lambda=\lambda(x_{1},x_{2},x_{3})=(\lambda_{1},\lambda_{2},\lambda_{3}, \lambda_{4})\) are barycentric coordinates for the tetrahedron, and \(L_{j}^{\alpha}\) are integrated and scaled Jacobi polynomials. In addition, the \(\Phi_{ij\ell}^{r}\)'s are H(curl)-conforming bubble functions of degree \(k-1\) on tetrahedra \[\Phi_{ij\ell}^{r}(x_{1},x_{2},x_{3})= P_{i}\left(\frac{\lambda_{b}}{\lambda_{a}+\lambda_{b}}\right)L_{j}^ {2i+1}\left(\frac{\lambda_{c}}{\lambda_{a}+\lambda_{b}+\lambda_{c}}\right)L_{ \ell}^{2(i+j)}\left(\lambda_{d}\right)\] \[\cdot\left(\lambda_{a}\nabla\lambda_{b}-\lambda_{b}\nabla\lambda_{ a}\right)\left(\lambda_{a}+\lambda_{b}\right)^{i}\left(\lambda_{a}+\lambda_{b}+ \lambda_{c}\right)^{j},\] where \(i\geq 0\), \(j\geq 1\), \(\ell\geq 1\), \(n=i+j+\ell=2,\ldots,k-1\) are the indexing parameters, and \(P_{i}\) are the shifted and scaled Legendre polynomials. In addition, for \(r=1,2,3\) we set \((a,b,c,d)=(1,2,3,4)\), \((a,b,c,d)=(2,3,4,1)\), and \((a,b,c,d)=(3,4,1,2)\), respectively. Next, the \(\Psi_{ij\ell}^{r}\)'s are H(div)-conforming bubble functions of degree \(k-1\) on tetrahedra \[\Psi_{ij\ell}^{r}(x_{1},x_{2},x_{3})= P_{i}\left(\frac{\lambda_{b}}{\lambda_{a}+\lambda_{b}}\right)P_{j}^ {2i+1}\left(\frac{\lambda_{c}}{\lambda_{a}+\lambda_{b}+\lambda_{c}}\right)L_{ \ell}^{2(i+j+1)}\left(\lambda_{d}\right)\] \[\cdot\left(\lambda_{a}\nabla\lambda_{b}\times\nabla\lambda_{c}+ \lambda_{b}\nabla\lambda_{c}\times\nabla\lambda_{a}+\lambda_{c}\nabla\lambda_{ a}\times\nabla\lambda_{b}\right)\left(\lambda_{a}+\lambda_{b}\right)^{i}\left( \lambda_{a}+\lambda_{b}+\lambda_{c}\right)^{j},\] where \(i\geq 0\), \(j\geq 0\), \(\ell\geq 1\), \(n=i+j+\ell=1,\ldots,k-1\) are the indexing parameters, and \(P_{j}^{\alpha}\) are the shifted Jacobi polynomials. In addition, for \(r=1,2,3\) we set \((a,b,c,d)=(1,2,3,4)\), \((a,b,c,d)=(2,3,4,1)\), and \((a,b,c,d)=(3,4,1,2)\), respectively. Furthermore, the \(\varrho_{m}\)'s are the L2-conforming bubble functions of degree \(k-1\) on line segments \[\varrho_{m}(x_{4})=P_{m}\left(\nu_{2}\right),\] where \(m=0,\ldots,k-1\) is the indexing parameter. Similarly, the \(\varrho_{ij\ell}\)'s are the L2-conforming bubble functions of degree \(k-1\) on tetrahedra \[\varrho_{ij\ell}(x_{1},x_{2},x_{3})=P_{i}\left(\frac{\lambda_{2}}{\lambda_{1}+ \lambda_{2}}\right)P_{j}^{2i+1}\left(\frac{\lambda_{3}}{\lambda_{1}+\lambda_{2 }+\lambda_{3}}\right)P_{\ell}^{2(i+j+1)}\left(\lambda_{4}\right)\left(\lambda_{ 1}+\lambda_{2}\right)^{i}\left(\lambda_{1}+\lambda_{2}+\lambda_{3}\right)^{j},\] where \(i\geq 0\), \(j\geq 0\), \(\ell\geq 0\), \(n=i+j+\ell=0,\ldots,k-1\) are the indexing parameters. ### Restatement of Polynomial Functions We can now restate the polynomial functions from the previous section in terms of polynomial spaces \(P^{k}(\mathfrak{T}^{1})\), \(P^{k}(\mathfrak{T}^{3})\), \(dP^{k}(\mathfrak{T}^{1})\), and \(dP^{k}(\mathfrak{T}^{3})\). 
The former two polynomial spaces, \(P^{k}(\mathfrak{T}^{1})\) and \(P^{k}(\mathfrak{T}^{3})\), are associated with dofs that reside on the boundary _and_ within the interior of each element, whereas the latter two spaces, \(dP^{k}(\mathfrak{T}^{1})\) and \(dP^{k}(\mathfrak{T}^{3})\), are associated with dofs which reside only within the interior. With this in mind, let us consider \[\vartheta_{m}(x_{4})=\vartheta^{b}(x_{4})p_{m}(x_{4}),\qquad\forall p_{m}(x_{ 4})\in P^{k-2}(\mathfrak{T}^{1}),\] where \[\vartheta^{b}(x_{4})=\nu_{1}\nu_{2}.\] Here, \(\vartheta^{b}(x_{4})\) is a non-negative bubble function shared by all members of the set. Next, consider \[\vartheta_{ij\ell}(x_{1},x_{2},x_{3})=\vartheta^{b}(x_{1},x_{2},x_{3})p_{ij \ell}(x_{1},x_{2},x_{3}),\qquad\forall p_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k- 4}(\mathfrak{T}^{3}),\] where \[\vartheta^{b}(x_{1},x_{2},x_{3})=\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}.\] In addition, consider \[\Phi_{ij\ell}^{r}(x_{1},x_{2},x_{3})=\Phi^{b,r}(x_{1},x_{2},x_{3})g_{ij\ell}^ {r}(x_{1},x_{2},x_{3})N^{r}(x_{1},x_{2},x_{3}),\qquad\forall g_{ij\ell}^{r}(x_ {1},x_{2},x_{3})\in P^{k-3}(\mathfrak{T}^{3}),\] where \[\Phi^{b,r}(x_{1},x_{2},x_{3})=\lambda_{c}\lambda_{d},\qquad N^{r}(x_{1},x_{2}, x_{3})=\lambda_{a}\nabla\lambda_{b}-\lambda_{b}\nabla\lambda_{a}.\] Furthermore, consider \[\Psi_{ij\ell}^{r}(x_{1},x_{2},x_{3})=\Psi^{b,r}(x_{1},x_{2},x_{3})w_{ij\ell}^ {r}(x_{1},x_{2},x_{3})\mathcal{N}^{r}(x_{1},x_{2},x_{3}),\qquad\forall w_{ij \ell}^{r}(x_{1},x_{2},x_{3})\in P^{k-2}(\mathfrak{T}^{3}),\] where \[\Psi^{b,r}(x_{1},x_{2},x_{3})=\lambda_{d},\qquad\mathcal{N}^{r}(x_{1},x_{2},x_{3}) =\lambda_{a}\nabla\lambda_{b}\times\nabla\lambda_{c}+\lambda_{b}\nabla\lambda_{c }\times\nabla\lambda_{a}+\lambda_{c}\nabla\lambda_{a}\times\nabla\lambda_{b}.\] Lastly, consider \[\varrho_{m}(x_{4})=v_{m}(x_{4}),\qquad\forall v_{m}(x_{4})\in dP^{k-1}( \mathfrak{T}^{1}),\] and \[\varrho_{ij\ell}(x_{1},x_{2},x_{3})=v_{ij\ell}(x_{1},x_{2},x_{3}),\qquad \forall v_{ij\ell}(x_{1},x_{2},x_{3})\in dP^{k-1}(\mathfrak{T}^{3}).\] ### Restatement of Bubble Spaces We can now restate the bubble space definitions (Eqs. 
(5.3a)-(5.3d)) in terms of the polynomial functions from the previous section
\[\hat{V}_{k}\Lambda^{0}(\mathfrak{N}^{4}):=\mathrm{span}\left\{\vartheta^{b}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})p_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4})\right\}, \tag{5.4a}\]
\[\forall p_{m}(x_{4})\in P^{k-2}(\mathfrak{T}^{1}),\quad p_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-4}(\mathfrak{T}^{3}),\]
\[\hat{V}_{k}\Lambda^{1}(\mathfrak{N}^{4}):=\mathrm{span}\left\{\begin{bmatrix}\Phi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})g^{r}_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4})N^{r}(x_{1},x_{2},x_{3})\\ 0\end{bmatrix}\right\}\oplus\mathrm{span}\left\{\left[0,0,0,\vartheta^{b}(x_{1},x_{2},x_{3})p_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})\right]^{T}\right\}, \tag{5.4b}\]
\[\forall p_{m}(x_{4})\in P^{k-2}(\mathfrak{T}^{1}),\quad g^{r}_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-3}(\mathfrak{T}^{3}),\]
\[\forall v_{m}(x_{4})\in dP^{k-1}(\mathfrak{T}^{1}),\quad p_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-4}(\mathfrak{T}^{3}),\]
\[\hat{V}_{k}\Lambda^{2}(\mathfrak{N}^{4}):=\mathrm{span}\left\{\mathcal{L}\left(\begin{bmatrix}0\\ 0\\ \left[\Phi^{b,r}(x_{1},x_{2},x_{3})g^{r}_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})N^{r}(x_{1},x_{2},x_{3})\right]_{1}\\ 0\\ \left[\Phi^{b,r}(x_{1},x_{2},x_{3})g^{r}_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})N^{r}(x_{1},x_{2},x_{3})\right]_{2}\\ \left[\Phi^{b,r}(x_{1},x_{2},x_{3})g^{r}_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})N^{r}(x_{1},x_{2},x_{3})\right]_{3}\end{bmatrix}\right)\right\}\]
\[\oplus\mathrm{span}\left\{\mathcal{L}\left(\begin{bmatrix}\left[\Psi^{b,r}(x_{1},x_{2},x_{3})w^{r}_{ij\ell}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})p_{m}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\right]_{3}\\ -\left[\Psi^{b,r}(x_{1},x_{2},x_{3})w^{r}_{ij\ell}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})p_{m}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\right]_{2}\\ 0\\ \left[\Psi^{b,r}(x_{1},x_{2},x_{3})w^{r}_{ij\ell}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})p_{m}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\right]_{1}\\ 0\\ 0\end{bmatrix}\right)\right\}, \tag{5.4c}\]
\[\forall v_{m}(x_{4})\in dP^{k-1}(\mathfrak{T}^{1}),\quad g^{r}_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-3}(\mathfrak{T}^{3}),\]
\[\forall p_{m}(x_{4})\in P^{k-2}(\mathfrak{T}^{1}),\quad w^{r}_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-2}(\mathfrak{T}^{3}),\]
\[\begin{split}\hat{V}_{k}\Lambda^{3}(\mathfrak{N}^{4}):=&\,\operatorname{span}\left\{\left[0,0,0,\vartheta^{b}(x_{4})v_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4})\right]^{T}\right\}\\ &\oplus\operatorname{span}\left\{\begin{bmatrix}\Psi^{b,r}(x_{1},x_{2},x_{3})w^{r}_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\\ 0\end{bmatrix}\right\},\\ &\forall p_{m}(x_{4})\in P^{k-2}(\mathfrak{T}^{1}),\quad v_{ij\ell}(x_{1},x_{2},x_{3})\in dP^{k-1}(\mathfrak{T}^{3}),\\ &\forall v_{m}(x_{4})\in dP^{k-1}(\mathfrak{T}^{1}),\quad w^{r}_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-2}(\mathfrak{T}^{3}).\end{split} \tag{5.4d}\]

### Degrees of Freedom on the Reference Tetrahedral Prism, \(\mathfrak{N}^{4}\)

Our objective is to construct degrees of freedom for the Nedelec-Raviart-Thomas-based sequence (Eqs. (5.2a)-(5.2e)) on the reference tetrahedral prism \(\mathfrak{N}^{4}\). We recall from Table 1 that the reference tetrahedral prism has 8 vertices, 16 edges, 8 triangular faces, 6 quadrilateral faces, 2 tetrahedral facets, and 4 triangular-prismatic facets. The degrees of freedom on these vertices, edges, faces, and facets of the tetrahedral prism are inherited directly from the degrees of freedom for lower-dimensional entities in 0, 1, 2, and 3 dimensions, respectively. Therefore, it remains for us to construct degrees of freedom for the interior of the tetrahedral prism. We will construct explicit expressions for these degrees of freedom for 0-, 1-, 2-, 3-, and 4-forms in what follows.

### Dofs for 0-forms on \(\mathfrak{N}^{4}\)

The polynomial 0-forms on the tetrahedral prism, \(V_{k}\Lambda^{0}(\mathfrak{N}^{4})\), have the following total dimension
\[\dim\left(V_{k}\Lambda^{0}(\mathfrak{N}^{4})\right)=\dim(\Sigma^{k,0}(\mathfrak{N}^{4}))=\dim(CG^{k}\left(\mathfrak{T}^{1}\right)\times CG^{k}\left(\mathfrak{T}^{3}\right))=\frac{1}{6}(k+1)^{2}(k+2)(k+3).\]
The 0-forms have vertex, edge, face, and facet traces in accordance with Eqs. (2.2), (2.4), (2.7), (2.10), and (2.14).
As a result, the dimension of the trace degrees of freedom, \(\Sigma^{k,0}_{trace}(\mathfrak{N}^{4})\), can be computed as follows \[\begin{split}\dim(\Sigma^{k,0}_{trace}(\mathfrak{N}^{4}))& =8+16\dim\left(P^{k-2}(\mathfrak{T}^{1})\right)+8\dim\left(P^{k-3}( \mathfrak{T}^{2})\right)+6\dim\left(Q^{k-2,k-2}(\mathfrak{H}^{2})\right)\\ &+2\dim\left(P^{k-4}(\mathfrak{T}^{3})\right)+4\left(\dim(Q^{k-2 }(\mathfrak{H}^{1}))\times\dim(P^{k-3}(\mathfrak{T}^{2}))\right)\\ &=8+16(k-1)+\frac{8}{2}(k-2)(k-1)+6(k-1)^{2}\\ &+\frac{2}{6}(k-3)(k-2)(k-1)+\frac{4}{2}(k-2)(k-1)^{2}\\ &=\frac{1}{3}k(7k^{2}+17).\end{split}\] In addition, the volumetric degrees of freedom for the 0-form proxy \(u\) are given by \[\Sigma^{k,0}_{vol}(\mathfrak{N}^{4}):=\left\{u\to\int_{\mathfrak{N}^{4}}uq\, \vartheta^{b}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4}),\qquad q\in P^{k-2}( \mathfrak{T}^{1})\times P^{k-4}(\mathfrak{T}^{3})\right\}, \tag{5.5}\] and \[\dim(\Sigma^{k,0}_{vol}(\mathfrak{N}^{4}))=\frac{1}{6}(k-3)(k-2)(k-1)^{2}.\] In a natural fashion, one can show that the total number of degrees of freedom on the tetrahedral prism is equal to the sum of the trace and volumetric degrees of freedom \[\dim(\Sigma^{k,0}(\mathfrak{N}^{4}))=\dim(\Sigma^{k,0}_{trace}(\mathfrak{N}^{ 4}))+\dim(\Sigma^{k,0}_{vol}(\mathfrak{N}^{4})).\] It remains for us to prove unisolvency. **Lemma 5.1**.: _Let \(u\in V_{k}\Lambda^{0}(\mathfrak{N}^{4})\) be a polynomial 0-form for which all the degrees of freedom \(\Sigma^{k,0}(\mathfrak{N}^{4})\) vanish. Then \(u\equiv 0\)._ Proof.: Since all the trace degrees of freedom of the form given by Eqs. (2.2), (2.4), (2.7), (2.10), and (2.14) vanish, then we conclude that \(u\) resides in the bubble space, i.e. \(u\in\hat{V}_{k}\Lambda^{0}(\mathfrak{N}^{4})\). Therefore, we can express \(u\) in accordance with Eq. (5.4a) as follows \[u=\sum_{ij\ell m}\widehat{u}_{ij\ell m}\,p_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_ {4})\vartheta^{b}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4}),\] where \[p_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-4}(\mathfrak{T}^{3}),\quad p_{m}(x_{4}) \in P^{k-2}(\mathfrak{T}^{1}).\] We complete the proof by substituting \(u\) (from above) and \[q=\sum_{ij\ell m}\widehat{u}_{ij\ell m}\,p_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x _{4}),\] into Eq. (5.5). Under these circumstances, the only way the volumetric degrees of freedom are guaranteed to vanish, is if \(u\) vanishes. ### Dofs for 1-forms on \(\mathfrak{N}^{4}\) The polynomial 1-forms on the tetrahedral prism, \(V_{k}\Lambda^{1}(\mathfrak{N}^{4})\), have the following total dimension \[\dim(V_{k}\Lambda^{1}(\mathfrak{N}^{4})) =\dim(\Sigma^{k,1}(\mathfrak{N}^{4}))\] \[=\dim\left(CG^{k}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix} N^{k-1}\left(\mathfrak{T}^{3}\right)\\ 0\end{bmatrix}\right)\] \[+\dim\left(\left[0,0,0,DG^{k-1}\left(\mathfrak{T}^{1}\right) \right]^{T}\times CG^{k}\left(\mathfrak{T}^{3}\right)\right)\] \[=\frac{1}{2}k(k+1)(k+2)(k+3)+\frac{1}{6}k(k+1)(k+2)(k+3)\] \[=\frac{2}{3}k(k+1)(k+2)(k+3).\] The 1-forms have edge, face, and facet traces in accordance with Eqs. (2.3), (2.5), (2.8), (2.11), and (2.15). 
As a result, the dimension of the trace degrees of freedom, \(\Sigma^{k,1}_{trace}(\mathfrak{N}^{4})\), can be computed as follows
\[\dim\left(\Sigma^{k,1}_{trace}(\mathfrak{N}^{4})\right)=16\dim\left(P^{k-1}(\mathfrak{T}^{1})\right)+8\dim\left((P^{k-2}(\mathfrak{T}^{2}))^{2}\right)\]
\[+6\left(\dim(Q^{k-2,k-1}(\mathfrak{H}^{2}))+\dim(Q^{k-1,k-2}(\mathfrak{H}^{2}))\right)+2\dim\left((P^{k-3}(\mathfrak{T}^{3}))^{3}\right)\]
\[+4\left(2\dim(Q^{k-2}(\mathfrak{H}^{1}))\times\dim(P^{k-2}(\mathfrak{T}^{2}))+\dim(Q^{k-1}(\mathfrak{H}^{1}))\times\dim(P^{k-3}(\mathfrak{T}^{2}))\right)\]
\[=16k+8(k-1)k+6(2(k-1)k)+\frac{2}{2}(k-2)(k-1)k+4\left(k(k-1)^{2}+\frac{1}{2}(k-2)(k-1)k\right)=k(7k^{2}+3k+6).\]
In addition, the volumetric degrees of freedom for the 1-form proxy \(E\) are as follows
\[\Sigma^{k,1}_{vol,1}(\mathfrak{N}^{4}):=\Bigg\{E\to\int_{\mathfrak{N}^{4}}E\cdot\left(\sum_{r}q^{r}\Phi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})\begin{bmatrix}N^{r}(x_{1},x_{2},x_{3})\\ 0\end{bmatrix}\right),\qquad q^{r}\in P^{k-2}(\mathfrak{T}^{1})\times P^{k-3}(\mathfrak{T}^{3})\Bigg\},\qquad r=1,2,3, \tag{5.6}\]
\[\Sigma^{k,1}_{vol,2}(\mathfrak{N}^{4}):=\Bigg\{E\to\int_{\mathfrak{N}^{4}}E\cdot\left[0,0,0,q^{4}\vartheta^{b}(x_{1},x_{2},x_{3})\right]^{T},\qquad q^{4}\in dP^{k-1}(\mathfrak{T}^{1})\times P^{k-4}(\mathfrak{T}^{3})\Bigg\}. \tag{5.7}\]
Thereafter, we define
\[\Sigma^{k,1}_{vol}(\mathfrak{N}^{4}):=\Sigma^{k,1}_{vol,1}(\mathfrak{N}^{4})\cup\Sigma^{k,1}_{vol,2}(\mathfrak{N}^{4}),\]
and
\[\dim\left(\Sigma^{k,1}_{vol}(\mathfrak{N}^{4})\right)=\dim\left(\Sigma^{k,1}_{vol,1}(\mathfrak{N}^{4})\right)+\dim\left(\Sigma^{k,1}_{vol,2}(\mathfrak{N}^{4})\right)=\frac{1}{2}k(k-1)^{2}(k-2)+\frac{1}{6}(k-3)(k-2)(k-1)k.\]
Evidently, the total number of degrees of freedom is the sum of the trace and volumetric degrees of freedom:
\[\dim\left(\Sigma^{k,1}(\mathfrak{N}^{4})\right)=\dim\left(\Sigma^{k,1}_{trace}(\mathfrak{N}^{4})\right)+\dim\left(\Sigma^{k,1}_{vol}(\mathfrak{N}^{4})\right).\]
It then remains for us to prove unisolvency.

**Lemma 5.2**.: _Let \(E\in V_{k}\Lambda^{1}(\mathfrak{N}^{4})\) be a polynomial 1-form for which all the degrees of freedom \(\Sigma^{k,1}(\mathfrak{N}^{4})\) vanish. Then \(E\equiv 0\)._

Proof.: Since all the trace degrees of freedom of the form given by Eqs. (2.3), (2.5), (2.8), (2.11), and (2.15) vanish, the polynomial 1-form \(E\) has zero traces, and is hence in \(\hat{V}_{k}\Lambda^{1}(\mathfrak{N}^{4})\). It therefore has the form in Eq. (5.4b), i.e.,
\[E=\begin{bmatrix}\sum_{ij\ell mr}\hat{E}^{r}_{ij\ell m}\,g^{r}_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4})\Phi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})N^{r}(x_{1},x_{2},x_{3})\\ \sum_{ij\ell m}\hat{E}^{4}_{ij\ell m}\,p_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})\vartheta^{b}(x_{1},x_{2},x_{3})\end{bmatrix},\]
where
\[p_{m}(x_{4})\in P^{k-2}(\mathfrak{T}^{1}),\quad g^{r}_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-3}(\mathfrak{T}^{3}),\]
\[v_{m}(x_{4})\in dP^{k-1}(\mathfrak{T}^{1}),\quad p_{ij\ell}(x_{1},x_{2},x_{3})\in P^{k-4}(\mathfrak{T}^{3}).\]
The proof follows immediately by choosing test functions,
\[q^{r}=\sum_{ij\ell m}\hat{E}^{r}_{ij\ell m}\,g^{r}_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4}),\qquad q^{4}=\sum_{ij\ell m}\hat{E}^{4}_{ij\ell m}\,p_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4}),\]
and thereafter substituting these test functions and \(E\) (from above) into Eqs.
(5.6) and (5.7). Under these conditions, the vanishing of the associated volumetric degrees of freedom is only possible if \(E\) vanishes. ### Dofs for 2-forms on \(\mathfrak{N}^{4}\) The polynomial 2-forms on the tetrahedral prism, \(V_{k}\Lambda^{2}(\mathfrak{N}^{4})\), have the following total dimension \[\dim\left(V_{k}\Lambda^{2}(\mathfrak{N}^{4})\right) =\dim\left(\Sigma^{k,2}(\mathfrak{N}^{4})\right)\] \[=\dim\left(DG^{k-1}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix} \begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}&N^{k-1}\left(\mathfrak{T}^{3}\right)\\ 0&0&0\\ \ast&0\end{bmatrix}\right)\] \[+\dim\left(CG^{k}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix} 0&RT_{3}^{k-1}(\mathfrak{T}^{3})&-RT_{2}^{k-1}(\mathfrak{T}^{3})&0\\ \ast&0&RT_{1}^{k-1}(\mathfrak{T}^{3})&0\\ \ast&\ast&0&0\\ 0&0&0&0\end{bmatrix}\right)\] \[=\frac{1}{2}k^{2}(k+2)(k+3)+\frac{1}{2}k(k+1)^{2}(k+3).\] The 2-forms have face and facet traces in accordance with Eqs. (2.6), (2.9), (2.12), and (2.16). As a result, the dimension of the trace degrees of freedom, \(\Sigma^{k,2}_{trace}(\mathfrak{N}^{4})\), can be computed as follows \[\dim\left(\Sigma^{k,2}_{trace}(\mathfrak{N}^{4})\right) =8\dim(P^{k-1}(\mathfrak{T}^{2}))+6\dim(Q^{k-1,k-1}(\mathfrak{H}^ {2}))+2\dim\left((P^{k-2}(\mathfrak{T}^{3}))^{3}\right)\] \[+4\left(2\dim(Q^{k-1}(\mathfrak{H}^{1}))\times\dim(P^{k-2}( \mathfrak{T}^{2}))+\dim(Q^{k-2}(\mathfrak{H}^{1}))\times\dim(P^{k-1}( \mathfrak{T}^{2}))\right)\] \[=\frac{8}{2}k(k+1)+6k^{2}+\frac{2}{2}(k-1)k(k+1)+4\left((k-1)k^{2 }+\frac{1}{2}(k-1)k(k+1)\right)\] \[=k(7k^{2}+6k+1).\] In addition, the volumetric degrees of freedom for the 2-form proxy \(F\) are as follows \[\Sigma^{k,2}_{vol,1}(\mathfrak{N}^{4}) :=\left\{F\rightarrow\int_{\mathfrak{N}^{4}}F:\mathcal{L} \begin{pmatrix}\begin{bmatrix}0\\ 0\\ \end{bmatrix}&0\\ \begin{bmatrix}\sum_{r}q^{r}\Phi^{b,r}(x_{1},x_{2},x_{3})N^{r}(x_{1},x_{2},x_ {3})\end{bmatrix}_{1}\\ 0\\ \begin{bmatrix}\sum_{r}q^{r}\Phi^{b,r}(x_{1},x_{2},x_{3})N^{r}(x_{1},x_{2},x_ {3})\end{bmatrix}_{2}\\ \begin{bmatrix}\sum_{r}q^{r}\Phi^{b,r}(x_{1},x_{2},x_{3})N^{r}(x_{1},x_{2},x_ {3})\end{bmatrix}_{3}\end{bmatrix}\right)\] \[q^{r}\in dP^{k-1}(\mathfrak{T}^{1})\times P^{k-3}(\mathfrak{T}^ {3})\Bigg{\}},\qquad r=1,2,3, \tag{5.8}\] \[\Sigma^{k,2}_{vol,2}(\mathfrak{N}^{4}) :=\left\{F\rightarrow\int_{\mathfrak{N}^{4}}F:\mathcal{L} \begin{pmatrix}\begin{bmatrix}\sum_{r}q^{r}\Psi^{b,r}(x_{1},x_{2},x_{3})\vartheta ^{b}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\end{bmatrix}_{3}\\ -\begin{bmatrix}\sum_{r}q^{r}\Psi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4}) \mathcal{N}^{r}(x_{1},x_{2},x_{3})\end{bmatrix}_{2}\\ 0\\ \begin{bmatrix}\sum_{r}q^{r}\Psi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4}) \mathcal{N}^{r}(x_{1},x_{2},x_{3})\end{bmatrix}_{1}\\ 0\\ 0\\ 0\\ 0\end{bmatrix}\right)\] \[q^{r}\in P^{k-2}(\mathfrak{T}^{1})\times P^{k-2}(\mathfrak{T}^ {3})\Bigg{\}},\qquad r=1,2,3. 
\tag{5.9}\]
Thereafter, we define
\[\Sigma_{vol}^{k,2}(\mathfrak{N}^{4}):=\Sigma_{vol,1}^{k,2}(\mathfrak{N}^{4})\cup\Sigma_{vol,2}^{k,2}(\mathfrak{N}^{4}),\]
and
\[\dim\left(\Sigma_{vol}^{k,2}(\mathfrak{N}^{4})\right)=\dim\left(\Sigma_{vol,1}^{k,2}(\mathfrak{N}^{4})\right)+\dim\left(\Sigma_{vol,2}^{k,2}(\mathfrak{N}^{4})\right)=\frac{1}{2}k^{2}(k-1)(k-2)+\frac{1}{2}(k-1)^{2}k(k+1).\]
Evidently, we can show that the following holds
\[\dim\left(\Sigma^{k,2}(\mathfrak{N}^{4})\right)=\dim\left(\Sigma_{trace}^{k,2}(\mathfrak{N}^{4})\right)+\dim\left(\Sigma_{vol}^{k,2}(\mathfrak{N}^{4})\right).\]
It then remains for us to prove unisolvency.

**Lemma 5.3**.: _Let \(F\in V_{k}\Lambda^{2}(\mathfrak{N}^{4})\) be a polynomial 2-form for which all the degrees of freedom \(\Sigma^{k,2}(\mathfrak{N}^{4})\) vanish. Then \(F\equiv 0\)._

Proof.: Since all the trace degrees of freedom of the form given by Eqs. (2.6), (2.9), (2.12), and (2.16) vanish, the polynomial 2-form \(F\) has zero traces, and is hence in \(\hat{V}_{k}\Lambda^{2}(\mathfrak{N}^{4})\). It therefore has the form in Eq. (5.4c), i.e.,
\[F=\mathcal{L}\left(\begin{bmatrix}\left[\sum_{ij\ell mr}\widetilde{F}^{r}_{ij\ell m}w^{r}_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4})\Psi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\right]_{3}\\ -\left[\sum_{ij\ell mr}\widetilde{F}^{r}_{ij\ell m}w^{r}_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4})\Psi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\right]_{2}\\ \left[\sum_{ij\ell mr}\widehat{F}^{r}_{ij\ell m}g^{r}_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})\Phi^{b,r}(x_{1},x_{2},x_{3})N^{r}(x_{1},x_{2},x_{3})\right]_{1}\\ \left[\sum_{ij\ell mr}\widetilde{F}^{r}_{ij\ell m}w^{r}_{ij\ell}(x_{1},x_{2},x_{3})p_{m}(x_{4})\Psi^{b,r}(x_{1},x_{2},x_{3})\vartheta^{b}(x_{4})\mathcal{N}^{r}(x_{1},x_{2},x_{3})\right]_{1}\\ \left[\sum_{ij\ell mr}\widehat{F}^{r}_{ij\ell m}g^{r}_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})\Phi^{b,r}(x_{1},x_{2},x_{3})N^{r}(x_{1},x_{2},x_{3})\right]_{2}\\ \left[\sum_{ij\ell mr}\widehat{F}^{r}_{ij\ell m}g^{r}_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4})\Phi^{b,r}(x_{1},x_{2},x_{3})N^{r}(x_{1},x_{2},x_{3})\right]_{3}\end{bmatrix}\right),\]
where the \(\widehat{F}^{r}_{ij\ell m}\) and \(\widetilde{F}^{r}_{ij\ell m}\) are the coefficients of the two families of bubble functions, and
\[\forall v_{m}(x_{4})\in dP^{k-1}(\mathfrak{T}^{1}),\quad g_{ij\ell}^{r}(x_{1},x_{2},x_{3})\in P^{k-3}(\mathfrak{T}^{3}),\]
\[\forall p_{m}(x_{4})\in P^{k-2}(\mathfrak{T}^{1}),\quad w_{ij\ell}^{r}(x_{1},x_{2},x_{3})\in P^{k-2}(\mathfrak{T}^{3}).\]
The proof follows immediately by choosing the following test functions
\[q^{r}=\sum_{ij\ell m}\widehat{F}^{r}_{ij\ell m}\,g_{ij\ell}^{r}(x_{1},x_{2},x_{3})v_{m}(x_{4}),\qquad q^{r}=\sum_{ij\ell m}\widetilde{F}^{r}_{ij\ell m}\,w_{ij\ell}^{r}(x_{1},x_{2},x_{3})p_{m}(x_{4}),\]
and thereafter substituting these test functions and \(F\) (from above) into Eqs. (5.8) and (5.9), respectively. Under these conditions, the vanishing of the associated volumetric degrees of freedom is only possible if \(F\) vanishes.
### Dofs for 3-forms on \(\mathfrak{N}^{4}\) The polynomial 3-forms on the tetrahedral prism, \(V_{k}\Lambda^{3}(\mathfrak{N}^{4})\), have the following total dimension \[\dim\left(V_{k}\Lambda^{3}(\mathfrak{N}^{4})\right) =\dim(\Sigma^{k,3}(\mathfrak{N}^{4}))\] \[=\dim\left(\left[0,0,0,CG^{k}\left(\mathfrak{T}^{1}\right) \right]^{T}\times DG^{k-1}\left(\mathfrak{T}^{3}\right)\right)\] \[+\dim\left(DG^{k-1}\left(\mathfrak{T}^{1}\right)\times\begin{bmatrix} RT^{k-1}\left(\mathfrak{T}^{3}\right)\\ 0\end{bmatrix}\right)\] \[=\frac{1}{6}k(k+1)^{2}(k+2)+\frac{1}{2}k^{2}(k+1)(k+3).\] The 3-forms only have facet traces in accordance with Eqs. (2.13) and (2.17). As a result, the dimension of the trace degrees of freedom, \(\Sigma^{k,3}_{trace}(\mathfrak{N}^{4})\), can be computed as follows \[\dim\left(\Sigma^{k,3}_{trace}(\mathfrak{N}^{4})\right) =2\dim\left(P^{k-1}(\mathfrak{T}^{3})\right)+4\left(\dim(Q^{k-1} (\mathfrak{T}^{1}))\times\dim(P^{k-1}(\mathfrak{T}^{2}))\right)\] \[=\frac{2}{6}k(k+1)(k+2)+\frac{4}{2}k^{2}(k+1)\] \[=\frac{1}{3}k(7k^{2}+9k+2).\] In addition, the volumetric degrees of freedom for the 3-form proxy \(G\) are as follows \[\Sigma^{k,3}_{vol,1}(\mathfrak{N}^{4}):= \Bigg{\{}G\to\int_{\mathfrak{N}^{4}}G\cdot\left(\sum_{r}q^{r} \Phi^{b,r}(x_{1},x_{2},x_{3})\begin{bmatrix}\mathcal{N}^{r}(x_{1},x_{2},x_{3} )\\ 0\end{bmatrix}\right),\] \[q^{r}\in dP^{k-1}(\mathfrak{T}^{1})\times P^{k-2}(\mathfrak{T}^{ 3})\Bigg{\}},\qquad r=1,2,3, \tag{5.10}\] \[\Sigma^{k,3}_{vol,2}(\mathfrak{N}^{4}):= \Bigg{\{}G\to\int_{\mathfrak{N}^{4}}G\cdot\left[0,0,0,q^{4} \phi^{b}(x_{4})\right]^{T},\] \[q^{4}\in P^{k-2}(\mathfrak{T}^{1})\times dP^{k-1}(\mathfrak{T}^ {3})\Bigg{\}}. \tag{5.11}\] Thereafter, we define \[\Sigma^{k,3}_{vol}(\mathfrak{N}^{4}):=\Sigma^{k,3}_{vol,1}(\mathfrak{N}^{4}) \cup\Sigma^{k,3}_{vol,2}(\mathfrak{N}^{4}),\] and \[\dim\left(\Sigma^{k,3}_{vol}(\mathfrak{N}^{4})\right) =\dim\left(\Sigma^{k,3}_{vol,1}(\mathfrak{N}^{4})\right)+\dim \left(\Sigma^{k,3}_{vol,2}(\mathfrak{N}^{4})\right)\] \[=\frac{1}{2}(k-1)k^{2}(k+1)+\frac{1}{6}(k-1)k(k+1)(k+2).\] Evidently, we can show that the following holds \[\dim\left(\Sigma^{k,3}(\mathfrak{N}^{4})\right)=\dim\left(\Sigma^{k,3}_{trace }(\mathfrak{N}^{4})\right)+\dim\left(\Sigma^{k,3}_{vol}(\mathfrak{N}^{4}) \right).\] It then remains for us to prove unisolvency. **Lemma 5.4**.: _Let \(G\in V_{k}\Lambda^{3}(\mathfrak{N}^{4})\) be a polynomial 3-form for which all the degrees of freedom \(\Sigma^{k,3}(\mathfrak{N}^{4})\) vanish. Then \(G\equiv 0\)._ Proof.: The proof is straightforward, as it directly follows the proofs of Lemmas 5.1, 5.2, and 5.3 with \(G\), \(q^{r}\), and \(q^{4}\) constructed using the definition of the bubble space, \(\tilde{V}_{k}\Lambda^{3}(\mathfrak{N}^{4})\), in Eq. (5.4d). ### Dofs for 4-forms on \(\mathfrak{N}^{4}\) The polynomial 4-forms on the tetrahedral prism, \(V_{k}\Lambda^{4}(\mathfrak{N}^{4})\), have the following total dimension \[\dim\left(V_{k}\Lambda^{4}(\mathfrak{N}^{4})\right) =\dim\left(\Sigma^{k,4}(\mathfrak{N}^{4})\right)\] \[=\dim\left(DG^{k-1}\left(\mathfrak{T}^{1}\right)\times DG^{k-1} \left(\mathfrak{T}^{3}\right)\right)\] \[=\frac{1}{6}k^{2}(k+1)(k+2).\] The 4-forms have no facet degrees of freedom. As a result, all the degrees of freedom are volumetric. The volumetric degrees of freedom for the 4-form proxy \(q\) can be expressed as follows \[\Sigma^{k,4}_{vol}(\mathfrak{N}^{4}):=\left\{q\to\int_{\mathfrak{N}^{4}}qp, \qquad p\in dP^{k-1}(\mathfrak{T}^{1})\times dP^{k-1}(\mathfrak{T}^{3}) \right\}. 
\tag{5.12}\] It then remains for us to prove unisolvency. **Lemma 5.5**.: _Let \(q\in V_{k}\Lambda^{4}(\mathfrak{N}^{4})\) be a polynomial 4-form for which all the degrees of freedom \(\Sigma^{k,4}(\mathfrak{N}^{4})\) vanish. Then \(q\equiv 0\)._ Proof.: We begin by choosing a generic \(q\), such that \[q=\sum_{ij\ell m}q_{ij\ell m}v_{ij\ell}(x_{1},x_{2},x_{3})v_{m}(x_{4}),\] where \[v_{ij\ell}(x_{1},x_{2},x_{3})\in dP^{k-1}(\mathfrak{T}^{3}),\quad v_{m}(x_{4}) \in dP^{k-1}(\mathfrak{T}^{1}).\] The proof follows immediately by setting \(p=q\) in Eq. (5.12). Under these circumstances, the degrees of freedom are only guaranteed to vanish if \(q\) vanishes. ## 6 Conclusion This paper has introduced fully explicit H(skwGrad)-, H(curl)-, and H(div)-conforming finite element spaces, basis functions, and degrees of freedom for pentatopes and tetrahedral prism elements. This exercise has been performed with the aid of FEEC. In order to facilitate the implementation of these methods, whenever possible, we have dispensed with the language of differential forms and instead used the language of linear algebra in order to simplify the presentation. We hope that the resulting finite elements will be used extensively in space-time finite element methods. In particular, the finite element spaces on the pentatope should help facilitate space-time methods on both partially-unstructured and fully-unstructured pentatopic meshes. In addition, the finite element spaces on tetrahedral prisms will help facilitate space-time methods on extruded, partially-unstructured, tetrahedral-prismatic meshes. Looking ahead, there are many opportunities for continued development of four-dimensional finite elements. For example, we note that the recent paper of Petrov et al. [33] develops four-dimensional, hybrid meshes composed of cubic pyramids, bipentatopes, and tesseracts. To our knowledge, finite element spaces on the cubic pyramid and bipentatope have not been investigated outside of Petrov et al.'s work, (which itself only covers the H1- and L2-conforming cases for cubic pyramids). We anticipate that FEEC techniques can be used to construct a broader range of conforming finite element spaces on these non-simplicial elements in the near future. ## Declaration of Competing Interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
2302.04414
Cosmology with the Galaxy Bispectrum Multipoles: Optimal Estimation and Application to BOSS Data
We present a framework for self-consistent cosmological analyses of the full-shape anisotropic bispectrum, including the quadrupole $(\ell=2)$ and hexadecapole $(\ell=4)$ moments. This features a novel window-free algorithm for extracting the latter quantities from data, derived using a maximum-likelihood prescription. Furthermore, we introduce a theoretical model for the bispectrum multipoles (which does not introduce new free parameters), and test both aspects of the pipeline on several high-fidelity mocks, including the PT Challenge suite of gigantic cumulative volume. This establishes that the systematic error is significantly below the statistical threshold, both for the measurement and modeling. As a realistic example, we extract the large-scale bispectrum multipoles from BOSS DR12 and analyze them in combination with the power spectrum data. Assuming a minimal $\Lambda$CDM model, with a BBN prior on the baryon density and a \textit{Planck} prior on $n_s$, we can extract the remaining cosmological parameters directly from the clustering data. The inclusion of the unwindowed higher-order $(\ell>0)$ large-scale bispectrum multipoles is found to moderately improve one-dimensional cosmological parameter posteriors (at the $5\%-10\%$ level), though these multipoles are detected only in three out of four BOSS data segments at $\approx 5\sigma$. Combining information from the power spectrum and bispectrum multipoles, the real space power spectrum, and the post-reconstructed BAO data, we find $H_0 = 68.2\pm 0.8~\mathrm{km}\,\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$, $\Omega_m =0.33\pm 0.01$ and $\sigma_8 = 0.736\pm 0.033$ (the tightest yet found in perturbative full-shape analyses). Our estimate of the growth parameter $S_8=0.77\pm 0.04$ agrees with both weak lensing and CMB results.
Mikhail M. Ivanov, Oliver H. E. Philcox, Giovanni Cabass, Takahiro Nishimichi, Marko Simonović, Matias Zaldarriaga
2023-02-09T03:01:25Z
http://arxiv.org/abs/2302.04414v1
# Cosmology with the Galaxy Bispectrum Multipoles: ###### Abstract We present a framework for self-consistent cosmological analyses of the full-shape anisotropic bispectrum, including the quadrupole (\(\ell=2\)) and hexadecapole (\(\ell=4\)) moments. This features a novel window-free algorithm for extracting the latter quantities from data, derived using a maximum-likelihood prescription. Furthermore, we introduce a theoretical model for the bispectrum multipoles (which does not introduce new free parameters), and test both aspects of the pipeline on several high-fidelity mocks, including the PT Challenge suite of gigantic cumulative volume. This establishes that the systematic error is significantly below the statistical threshold, both for the measurement and modeling. As a realistic example, we extract the large-scale bispectrum multipoles from BOSS DR12 and analyze them in combination with the power spectrum data. Assuming a minimal \(\Lambda\)CDM model, with a BBN prior on the baryon density and a _Planck_ prior on \(n_{s}\), we can extract the remaining cosmological parameters directly from the clustering data. The inclusion of the unwindowed higher-order (\(\ell>0\)) large-scale bispectrum multipoles is found to moderately improve one-dimensional cosmological parameter posteriors (at the \(5\%-10\%\) level), though these multipoles are detected only in three out of four BOSS data segments at \(\approx 5\sigma\). Combining information from the power spectrum and bispectrum multipoles, the real space power spectrum, and the post-reconstructed BAO data, we find \(H_{0}=68.2\pm 0.8\) km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{m}=0.33\pm 0.01\) and \(\sigma_{8}=0.736\pm 0.033\) (the tightest yet found in perturbative full-shape analyses). Our estimate of the growth parameter \(S_{8}=0.77\pm 0.04\) agrees with both weak lensing and CMB results. The estimators and data used in this work have been made publicly available. ###### Contents * 1 Introduction * 2 Summary of the Main Results * 3 The Bispectrum Multipoles * 3.1 Definition * 3.2 Idealized Estimators * 4 Window-Free Bispectrum Estimators * 4.1 Motivation * 4.2 Binned Bispectrum Components * 4.3 Maximum-Likelihood Estimators * 5 Theory Model Overview * 5.1 Idealized Form * 5.2 Observational Effects * 6 Data and Likelihood * 6.1 PT Challenge * 6.2 Nseries * 6.3 BOSS * 6.4 Codes & Priors * 7 Tests on Mock Catalogs * 7.1 PT Challenge * 7.2 Nseries * 8 Analysis of the BOSS data * 9 Discussion and Conclusions * A Gaussian Covariance for Bispectrum Multipoles * B Full constraints and parameter tables ## 1 Introduction The large scale structure (LSS) traced by the distribution of galaxies, has become one of the primary cosmological observables, allowing for precision tests of our theoretical models and numerical simulations. A key feature of this distribution is its statistical non-Gaussianity, induced by non-linear gravitational evolution. Any analysis aimed at maximizing the information yield of a galaxy survey should therefore include non-Gaussian statistics, the simplest of which is the three-point correlation function of the galaxy overdensity field, or its Fourier image, known as the bispectrum. Spectroscopic surveys observe the galaxy distribution in three dimensions, with the radial axis contaminated by line-of-sight velocities, through the phenomena of redshift space distortions (RSD). 
This anisotropy propagates to summary statistics such as the bispectrum [1; 2], and is a valuable probe of cosmological information encoded in the peculiar velocity field. To date, most bispectrum analyses have considered only the angle-averaged galaxy bispectrum, also called the bispectrum monopole moment [e.g., 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. This moment, however, is only the first term of an infinite expansion in angular moments needed to capture the entire anisotropic clustering information present within the bispectrum [1; 17]. Including this information in analysis pipelines requires a systematic and efficient treatment, taking careful account of effects such as analytical modeling, robust statistical estimation, the impact of survey geometry, and discreteness effects. In this work, we present the first such analysis carried out on publicly available data, using the twelfth data release of the Baryon Oscillation Spectroscopic Survey (BOSS) [18]. A number of previous works have studied the galaxy bispectrum beyond the monopole moment, including Refs. [19; 20; 21; 22; 23; 24; 25; 26; 27] (see also Refs. [13; 14; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47] for other bispectrum analyses). Using a combination of Fisher forecasts and simulated data, several of these works have demonstrated that anisotropic bispectrum multipoles may lead to a significant tightening of our constraints on cosmological and astrophysical parameters of interest; for example, Ref. [25] studied the information content in the idealized setting of periodic box geometries with tree-level perturbation theory and derived cosmological parameters such as \(f\sigma_{8}(z)\). Here, our goal is to extend these studies by considering their application both to actual data (including all relevant observational effects and covariances) and to the measurement of underlying \(\Lambda\)CDM cosmology parameters, thus discovering whether the purported gains can be practically realized. An important step towards this was performed in Ref. [26], which analyzes observational data from the BOSS bispectrum quadrupole using tree-level theory. This work finds more modest improvements from the redshift-space information, with only a small (\(<10\%\)) posterior shrinkage observed for \(\omega_{\rm cdm}\) (and \(\Omega_{m}\)). Here, we go beyond the former work by including a more detailed treatment of survey geometry effects (_i.e._ window-function convolution) and by testing the pipeline on high-quality large-volume simulations, ensuring that our results remain applicable to future high-precision surveys. In summary, our goal is to perform a systematic, consistent, and efficient analysis of the large-scale galaxy bispectrum quadrupole and hexadecapole, as applied to realistic survey data. In this vein, we will address several key issues that have previously complicated anisotropic galaxy bispectrum analyses. First, we validate our perturbative theoretical model for the bispectrum multipoles (based on [15]) on the high-fidelity PT Challenge simulation dataset [48]. This allows us to test our fitting pipeline in the unprecedented conditions that correspond to a cumulative volume of \(566\,h^{-3}{\rm Gpc}^{3}\), which significantly exceeds the volume of upcoming and even futuristic surveys.
To robustly account for the mixing of modes and multipoles induced by the survey geometry, we will construct new 'window-free' estimators for the bispectrum multipoles, based on the maximum-likelihood approaches outlined in [49; 50].1 This approach is tested using a suite of Nseries mocks, designed for precision tests of the official BOSS analysis pipeline [51]. Our new window-free estimator enables straightforward comparison of theory and data without the need to forward-model the effect of the window function on the former [11]. This allows us to avoid making simplified assumptions about the window function's action, which have led to the excision of large-scale modes in [26]; this could severely limit analyses of primordial non-Gaussianity. Whilst analytic methods for bispectrum convolution now exist (at least for the monopole, see [e.g., 52; 53] for recent progress), this route still leads to a significant amplification in model complexity, which may make typical Markov Chain Monte Carlo (MCMC) analyses (with \(\sim 10^{6}\) steps [54]) infeasible. Our efforts herein are a natural extension of our previous full-shape BOSS analyses of the galaxy power spectrum [55; 56; 57], BAO [58], real-space power spectrum proxy [59], and bispectrum monopole [60; 61; 16], based on the effective field theory of large-scale structure (EFTofLSS; [62; 63; 64; 65]). Alternative BOSS full-shape analyses have been carried out in Refs. [66, 67, 68, 69, 70, 71, 72, 73, 74, 75]. Throughout this work, we focus on the bispectrum multipoles on large scales, _i.e._ considering only modes with \(k<0.08~{}h\,{\rm Mpc}^{-1}\). For this reason, we use only the tree-level bispectrum likelihood, though extensions to higher \(k\) with the one-loop theory of [76] may prove interesting. Having extensively tested our pipeline on various mock data, we apply it to the BOSS DR12 anisotropic clustering measurements. Our overall conclusion is that the BOSS bispectrum multipoles do not carry a significant signal, but their inclusion in the analysis allows one to slightly improve constraints on cosmological parameters. In particular, using priors on the primordial power spectrum tilt \(n_{s}\) from _Planck_ 2018 [77] and a BBN prior on the physical baryon density \(\omega_{b}\), we find the Hubble constant \(H_{0}=68.2\pm 0.8~{}{\rm km\,s^{-1}Mpc}^{-1}\), the matter density fraction \(\Omega_{m}=0.33\pm 0.01\), and the late-time mass clustering amplitude \(\sigma_{8}=0.736\pm 0.033\). The latter two measurements can be combined into a growth parameter \(S_{8}\equiv\sigma_{8}(\Omega_{m}/0.3)^{0.5}=0.77\pm 0.04\), which agrees well with other independent estimates from weak lensing and cosmic microwave background surveys. Our paper is structured as follows. We begin in §2 by summarizing our main results and placing them in the context of other cosmological parameter estimates. In §3 we define the bispectrum multipoles and present idealized estimators, before considering their optimal unwindowed form in §4. Then, §5 reviews our theory model for the redshift-space bispectrum multipoles at tree-level order in perturbation theory. Our data and likelihood are discussed in detail in §6, and the pipeline is validated on mock clustering data from PT Challenge and Nseries simulations in §7. Finally, we present our analysis of the BOSS survey data in §8 before concluding with a discussion in §9.

## 2 Summary of the Main Results

We begin with a summary of our cosmological results.
In this work, we have developed new window-free estimators for the bispectrum multipoles and applied them to the BOSS DR12 luminous red galaxy sample [51] (in two redshift bins and hemispheres), computing the monopole, quadrupole, and hexadecapole (\(\ell=0,2,4\)) of both the redshift-space power spectrum and bispectrum. We additionally analyze the Alcock-Paczynski parameters from reconstructed power spectrum (following Ref. [58]), and the real-space power spectrum proxy \(Q_{0}\)[59] (see also Refs. [78, 79, 80]). Our dataset matches that of our previous analysis [16], but supplemented with the bispectrum quadrupole and hexadecapole moments. For all the bispectrum moments used in this work, we focus on large-scale \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline **Dataset** & \(\omega_{\rm cdm}\) & \(H_{0}\) & \(\ln\left(10^{10}A_{s}\right)\) & \(n_{s}\) & \(S_{8}\) & \(\Omega_{m}\) & \(\sigma_{8}\) \\ \hline \(P_{\ell}+Q_{0}+B_{0}\) & \(0.140^{+0.010}_{-0.013}\) & \(69.3\pm 1.1\) & \(2.60\pm 0.13\) & \(0.872\pm 0.066\) & \(0.734\pm 0.039\) & \(0.339^{+0.016}_{-0.018}\) & \(0.691^{+0.035}_{-0.035}\) \\ \hline \(P_{\ell}+Q_{0}+B_{\ell}\) & \(0.1444^{+0.0098}_{-0.012}\) & \(69.19^{+0.98}_{-1.1}\) & \(2.60\pm 0.12\) & \(0.869\pm 0.060\) & \(0.760\pm 0.039\) & \(0.349^{+0.015}_{-0.017}\) & \(0.704^{+0.034}_{-0.039}\) \\ \hline \(P_{\ell}+Q_{0}+B_{0}\) & \(0.1262^{+0.0052}_{-0.0058}\) & \(68.32\pm 0.83\) & \(2.741\pm 0.095\) & – & \(0.745\pm 0.039\) & \(0.3197\pm 0.0096\) & \(0.722^{+0.032}_{-0.035}\) \\ \hline \(P_{\ell}+Q_{0}+B_{\ell}\) & \(0.1303\pm 0.0055\) & \(68.19\pm 0.78\) & \(2.740\pm 0.091\) & – & \(0.771\pm 0.039\) & \(0.3296\pm 0.0095\) & \(0.736\pm 0.033\) \\ \hline \hline \end{tabular} \end{table} Table 1: Marginalized constraints on \(\Lambda\)CDM cosmological parameters from the BOSS power spectrum multipoles, the real-space power spectrum proxy, and the bispectrum. We include BAO information from reconstructed power spectra in all cases. The first and third columns correspond to the likelihood with the bispectrum monopole only, whilst the second and fourth also contain the bispectrum quadrupole and hexadecapole. In each case, we display the mean value and the 68% confidence intervals. All results are obtained assuming the BBN prior on \(\omega_{b}\), with the lower two rows including the _Planck_ prior on \(n_{s}\). The final three parameters in each row are derived from the MCMC samples and not sampled directly. Figure 1: Bispectrum monopole, quadrupole, and hexadecapole extracted from the PT Challenge dataset (points), along with the best-fitting theory model curves (lines). We highlight squeezed and equilateral configurations as a function of wavenumber in the top panels, and show all configurations as a function of the triangle index in the lower panel. The errorbars shown correspond to the diagonal elements of the Gaussian tree-level covariance matrix (see Appendix A), which matches the total simulation volume of 566 (\(h^{-1}\)Gpc\()^{3}\). We note that the extension of the theory model to bispectrum multipoles does not add new parameters. Corresponding detection significances are given in Tab. 2. modes with \(k_{\rm max}^{B}=0.08~{}h\,{\rm Mpc}^{-1}\), and limit ourselves with \(k_{\rm min}^{B}=0.01~{}h\,{\rm Mpc}^{-1}\) to mitigate large-scale observation systematics. The power spectrum and bispectrum multipoles are measured with new maximum-likelihood estimators, as derived in SS4 (building on Refs. [49, 50]). 
These allow for robust comparison of theory and data without the need for window convolution. In terms of theory, we use a tree-level perturbative model for the bispectrum multipoles (in the form introduced in Ref. [15], and later used in Refs. [16, 60, 61], see also Refs. [26, 73, 75]). We consistently fit the BOSS bispectrum multipole data, recomputing the theoretical templates for each set of cosmological parameters sampled in our MCMC chains. We focus on the minimal \(\Lambda\)CDM model and assume a BBN prior on the physical baryon density \(\omega_{b}\)[56, 81, 82], with all other parameters fit directly from the BOSS data. Before analyzing the BOSS data, we test our fitting pipeline and estimators on a set of high-quality simulated galaxy catalogs, including the PT challenge mocks [48]. Our fits match these data well and we recover the true cosmological parameters in these cases, as shown in Fig. 1 for the PT challenge data and the best-fit theory model. This implies that our pipeline for the bispectrum multipoles is adequate at the percent precision level, which even exceeds the statistical power of futuristic surveys. Our main results are shown in Fig. 2 and Tab. 1. For comparison, we also display the constraints obtained from our previous BOSS likelihood that included only the bispectrum monopole (\(\ell=0\)) moment [16]. The inclusion of the bispectrum multipole moments is found to have only a marginal effect on the cosmological parameter posteriors. Considering the \(\Omega_{m}-\sigma_{8}\) plane, we find a slight reduction in the errorbars and a small posterior shift, which drives the clustering amplitude parameter \(S_{8}\equiv\sigma_{8}(\Omega_{m}/0.3)^{0.5}\) (at \(z=0\)) upwards by \(\approx 0.6\sigma\). The largest effect can be seen in the marginalized \(n_{s}\)-posterior, which narrows by \(\approx 10\%\) from the inclusion of \(\ell=2,4\) galaxy bispectrum moments. All other one-dimensional posteriors on cosmological parameters typically shrink by \(\lesssim 5\%\). These modest gains are a consequence of the relatively low signal-to-noise of the large-scale BOSS galaxy bispectrum multipoles. As shown in Fig. 3 and in Tab. 2, we could detect the higher order large-scale bispectrum multipoles only at \(\approx 5\sigma\) in three out of the four BOSS data chunks. In comparison, the bispectrum monopole moment is detected typically at more than \(10\sigma\) in all of the regions. This occurs due to the larger noise and reduced signal intrinsic to higher-order moments. We caution, however, that this \(\Delta\chi^{2}\) detection metric does not fully reflect the impact on parameter constraints, for which one should use appropriate Fisher derivatives. We further note that we do not detect the higher order multipoles in the high-z SGC data chunk (which is small in volume), with the anisotropic clustering signal even being disfavored at around \(2\sigma\). Whilst not significant, this result may be driven by neglecting the correlation with the power spectrum in our Figure 2: Constraints on \(\Lambda\)CDM cosmological parameters from the BOSS DR12 dataset. We compare results from the combined power spectrum, BAO, and bispectrum monopole (\(\ell=0\)) dataset (blue) and those adding the \(\ell=2,4\) bispectrum multipoles (red). The inclusion of bispectrum multipoles is found to tighten parameter constraints only slightly, with most significant variation found in \(n_{s}\) and \(\Omega_{m}\). Figure 3: Comparison of the measured and theoretical galaxy bispectrum multipoles. 
We show the BOSS NGC high-z (\(z=0.61\)) data, along with the best-fit theory curves from our MCMC analysis. The top, middle, and bottom panels show the monopole, quadrupole, and hexadecapole respectively. Data are shown for \(k_{\rm max}=0.08\)\(h\,{\rm Mpc}^{-1}\) with all elements stacked (with smallest scales shown on the right). Errorbars correspond to diagonal elements of the covariance matrix, estimated from mocks. Though the signal of the higher-order BOSS multipoles is relatively small (see Tab. 2), the model provides an excellent fit to the data, as evidenced by the simulation results in Fig. 1. estimate, or by a statistical fluctuation. In addition, we remind that the particular one-dimensional parameter projections may not completely reflect changes in the full multi-dimensional posterior. In particular, the impact of the higher order multipole moments may be larger in extended cosmological models, analogous to the improvements found for the power spectrum [57]. The parameter improvements continue to be modest when we include a _Planck_ prior on the primordial power spectrum tilt \(n_{s}\), as shown in the lower rows of Tab. 1. Finally, it is worth stressing that the inclusion of the new data sets such as reconstructed power spectra, \(Q_{0}\), and \(B_{\ell}\) (\(\ell=0,2,4\)) yields significant improvements over the usual power spectrum-alone analysis. Indeed, our final constraints on \(\sigma_{8}\) are \(\approx 30\%\) tighter than those from BOSS \(P_{\ell}(k)\) alone, cf. [16]. To place our results in context, let us compare the optimal value of \(S_{8}\) from our chains with those from other measurements. The direct measurements of this parameter from various weak lensing and galaxy clustering surveys (KIDS-1000 [83], DESY3 [84; 85; 86], HSC [87], unWISE+_Planck_[88], DESI+_Planck_[89]) are summarized in Fig. 4. We particularly focus our attention on the full-shape anisotropic galaxy clustering probes in redshift space [16; 67; 69; 70; 91; 71]. For comparison, we also show there the prediction of the \(\Lambda\)CDM fit to the primary _Planck_[77] and ACT+WMAP CMB [92] data, which may be considered an indirect probe of \(S_{8}\). Our notation and choice of data sets follow those of Ref. [70]. Our measurement is fully consistent with those of other BOSS full-shape analyses, obtained both using perturbation theory [67; 70] and simulation-based frameworks [69]. We find a small (and relatively insignificant) tension between the \(S_{8}\) measurements from ELG [90] and QSO samples [91] of the eBOSS survey [93], which may be either due to residual systematics, or simply a statistical fluctuation. Finally, we point out that our \(S_{8}\) posterior is broadly consistent with both CMB and various weak lensing probes. The latter two probes are in some \(\sim 2\sigma\) disagreement with each other, which is often known as the \(S_{8}\) tension (see Ref. [94] for a recent review). We conclude that our measurement does not yield evidence for this tension. ## 3 The Bispectrum Multipoles ### Definition The galaxy bispectrum is defined as the three-point expectation of the overdensity, \(\delta_{g}\): \[(2\pi)^{3}\delta_{\rm D}\left(\mathbf{k}_{123}\right)B_{\rm ggg}(\mathbf{k}_{1},\mathbf{k }_{2},\mathbf{k}_{3})\equiv\langle\delta_{g}(\mathbf{k}_{1})\delta_{g}(\mathbf{k}_{2}) \delta_{g}(\mathbf{k}_{3})\rangle\,, \tag{1}\] [e.g., 95], writing \(\mathbf{k}_{123}\equiv\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}\) for Dirac delta \(\delta_{\rm D}\). 
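To make this definition concrete: for a Gaussian field the expectation in (1) vanishes, whereas a quadratic ("gravity-like") non-linearity generates a non-zero bispectrum. A minimal one-dimensional toy demonstration (our own illustration, independent of the analysis pipeline; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
N, n_real, eps = 256, 5000, 0.3
i1, i2 = 7, 19                      # integer mode indices; i3 = -(i1 + i2) closes the triangle
acc_G = acc_NG = 0.0

for _ in range(n_real):
    delta_G = rng.standard_normal(N)                 # Gaussian (white) field, unit variance
    delta_NG = delta_G + eps * (delta_G**2 - 1.0)    # add a quadratic non-linearity, keeping zero mean
    dG, dNG = np.fft.fft(delta_G), np.fft.fft(delta_NG)
    acc_G += (dG[i1] * dG[i2] * dG[-(i1 + i2)]).real
    acc_NG += (dNG[i1] * dNG[i2] * dNG[-(i1 + i2)]).real

# With <d(k) d(-k)> = N for this white-noise field, <d(k1) d(k2) d(k3)> = N * B, so divide by N:
print("Gaussian field:  B ~ %.2f (expect ~ 0)" % (acc_G / n_real / N))
print("Quadratic field: B ~ %.2f (tree-level expectation 6*eps = %.2f)" % (acc_NG / n_real / N, 6 * eps))
```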
In real-space, symmetry under translations and rotations forces the bispectrum to be a function only of three variables (usually chosen to be the side lengths \(k_{i}\equiv|\mathbf{k}_{i}|\)); this implies \(B_{\rm ggg}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})\to B_{\rm ggg}(k_{1},k_{2},k_{3})\). Redshift-space distortions break symmetry with respect to the line-of-sight \(\hat{\mathbf{n}}\) (hereafter LoS), affording an additional two degrees of freedom to the bispectrum. Whilst this can be parametrize in a number of ways, a particularly well-motivated choice of variables are the angle of the triangle plane to the LoS, and the orientation of the triangle within the plane [e.g., 96; 17; 97]. In this approach, one can expand the bispectrum as a spherical harmonic series: \[B_{\rm ggg}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})=\sum_{\ell=0}^{\infty}\sum_{m=- \ell}^{\ell}B_{\ell m}(k_{1},k_{2},k_{3})Y_{\ell m}(\theta_{\mathbf{k}},\phi_{\bm {k}}), \tag{2}\] where \(\theta_{\mathbf{k}}\) and \(\phi_{\mathbf{k}}\) specify the aforementioned orientation. Though this basis is complete, measuring \(B_{\ell m}\) is difficult, since the spherical harmonic cannot be separably decomposed into \(\mathbf{k}_{1}\), \(\mathbf{k}_{2}\), and \(\mathbf{k}_{3}\) pieces, yielding a non-factorizable estimator. This is not a problem for theoretical forecasts [e.g., 22; 24], but severely limits application to observational data. Consequently, several works [e.g., 17; 25] have considered only the \(m=0\) moment (independent of \(\phi\)), and set \(\cos\theta\equiv\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}}\), additionally fixing \(k_{1}\leq k_{2}\leq k_{3}\). This corresponds to representing the bispectrum as a Legendre series in \(\theta\): \[B_{\rm ggg}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})\approx\sum_{\ell=0}^{\infty}B_{ \ell}(k_{1},k_{2},k_{3})\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}}), \qquad(k_{1}\leq k_{2}\leq k_{3}) \tag{10}\] where \(\mathcal{L}_{\ell}\) is a Legendre polynomial and \(B_{\ell}\) the corresponding coefficient.2 We note that (10) is not a strict equality, since the bispectrum contains higher-order moments (with \(m\neq 0\)) not captured within its formalism; in the below, we will instead define the multipoles directly as integrals over \(\theta,\phi\). Footnote 2: Some works [e.g., 22] have instead expanded the bispectrum as a _double_ Legendre series in the two angles. A separable choice would be to expand in, say, \(\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{2}\cdot\hat{\mathbf{n}})\) and \(\mathcal{L}_{\ell^{\prime}}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}})\); however, the corresponding coefficients are generally difficult to estimate robustly, since the two angles are not independent once the side-lengths are specified. Figure 4: A compilation of some direct and indirect measurements of the growth parameter \(S_{8}\), from spectroscopic surveys, weak lensing, and the CMB. Errorbars shown approximately correspond to the 68% CL, and our measurement is shown in the top row. Further detail is given in Ref. [70] and the main text. ### Idealized Estimators The decomposition of (3.1) can be used to construct estimators for the bispectrum multipoles, \(B_{\ell}\). 
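Before turning to estimators for data, note that on the theory side the coefficients of the Legendre expansion above follow from a one-dimensional quadrature over \(\mu_{3}\equiv\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}}\), together with an average over the azimuthal angle. A minimal quadrature sketch, assuming a user-supplied callable `B_model(k1, k2, k3, mu3, phi)` for the full anisotropic bispectrum (a hypothetical function; binning and coordinate-distortion corrections are omitted):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def bispectrum_multipole(B_model, k1, k2, k3, ell, n_mu=16, n_phi=16):
    """B_ell(k1,k2,k3) = (2 ell + 1)/2 * int_{-1}^{1} dmu3 L_ell(mu3) <B_model>_phi."""
    mu_nodes, mu_weights = leggauss(n_mu)                      # Gauss-Legendre rule on [-1, 1]
    phi = 2.0 * np.pi * (np.arange(n_phi) + 0.5) / n_phi       # uniform grid for the phi average
    total = 0.0
    for mu3, w in zip(mu_nodes, mu_weights):
        phi_avg = np.mean([B_model(k1, k2, k3, mu3, p) for p in phi])
        total += w * eval_legendre(ell, mu3) * phi_avg
    return 0.5 * (2 * ell + 1) * total
```

The corresponding integral actually used for the theory predictions, including the Alcock-Paczynski rescaling, is given in the Observational Effects subsection below.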
For an idealized periodic-box geometry (such as an \(N\)-body simulation), the conventional estimator for the bispectrum multipoles is given by \[\widehat{B}_{\ell}^{abc}\Big{|}_{\text{periodic}} \equiv \frac{2\ell+1}{N_{T}^{abc}}\int_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}}(2\pi)^{3}\delta_{\text{D}}\left(\mathbf{k}_{123}\right)\Theta^{a}(k_{1})\Theta^{b}(k_{2})\Theta^{c}(k_{3})\] \[\times\,\delta_{g}(\mathbf{k}_{1})\delta_{g}(\mathbf{k}_{2})\delta_{g}(\mathbf{k}_{3})\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}}), \tag{3.1}\] where \(\int_{\mathbf{k}}\equiv(2\pi)^{-3}\int d\mathbf{k}\) [17]. Here, \(a\leq b\leq c\) specify a triplet of \(k\)-bins of finite radius, defined by \(\Theta^{i}(k)\), which is unity if \(k\) is in bin \(i\), and zero otherwise. (3.1) is simply an integral over three copies of the density field weighted by the Legendre polynomial in the longest side \(\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}})\), with translation invariance enforced by the Dirac delta. This is normalized by the isotropic bin volume, defined by \[N_{T}^{abc}=\int_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}}(2\pi)^{3}\delta_{\text{D}}\left(\mathbf{k}_{123}\right)\Theta^{a}(k_{1})\Theta^{b}(k_{2})\Theta^{c}(k_{3}). \tag{3.2}\] In this work, we regard (3.1) as the _definition_ of the binned bispectrum multipoles (rather than the approximate Legendre-series relation of §3.1). Theoretical predictions for the bispectrum multipoles can be similarly computed from the expectation of (3.1): \[B_{\ell}^{abc}\big{|}_{\text{theory}} \equiv \frac{2\ell+1}{N_{T}^{abc}}\int_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}}(2\pi)^{3}\delta_{\text{D}}\left(\mathbf{k}_{123}\right)\Theta^{a}(k_{1})\Theta^{b}(k_{2})\Theta^{c}(k_{3})\] \[\times\,B_{ggg}^{\text{theory}}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}}),\] for some theory model \(B_{\text{theory}}\) which is not yet averaged over angles. This will be discussed in §5. In practice, we implement (3.1) by factorizing in \(\mathbf{k}_{i}\), following Ref. [17]. This is realized by rewriting the Dirac function as an exponential, yielding the asymmetric expression \[\widehat{B}_{\ell}^{abc}\Big{|}_{\text{periodic}}=\frac{2\ell+1}{N_{T}^{abc}}\int d\mathbf{x}\,F_{0}^{a}(\mathbf{x})F_{0}^{b}(\mathbf{x})F_{\ell}^{c}(\mathbf{x}),\quad N_{T}^{abc}=\int d\mathbf{x}\,D^{a}(\mathbf{x})D^{b}(\mathbf{x})D^{c}(\mathbf{x}), \tag{3.3}\] using the definitions \[F_{\ell}^{i}(\mathbf{x})\equiv\int_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{x}}\Theta^{i}(k)\delta(\mathbf{k})\mathcal{L}_{\ell}(\hat{\mathbf{k}}\cdot\hat{\mathbf{n}}),\qquad D^{i}(\mathbf{x})\equiv\int_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{x}}\Theta^{i}(k). \tag{3.4}\] Each piece can be straightforwardly evaluated using fast Fourier transforms (FFTs) with \(N_{g}\log N_{g}\) complexity for \(N_{g}\) grid points. If we had defined the redshift-space components using \(Y_{\ell m}(\theta_{\mathbf{k}},\phi_{\mathbf{k}})\) rather than \(\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}})\) (or some other choice), the expression would not factorize in the above manner, and computation would scale as \(\mathcal{O}(N_{g}^{3})\). In realistic surveys, the LoS is not fixed, but varies depending on which galaxies are being considered.3 In this case, we can adopt the 'Yamamoto' prescription [17; 100], fixing \(\hat{\mathbf{n}}\) to the direction vector of the galaxy associated to \(\mathbf{k}_{3}\).
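To illustrate how the factorized form (3.3) and (3.4) is evaluated in practice, the following is a minimal NumPy sketch for a periodic box with a fixed line of sight \(\hat{\mathbf{n}}=\hat{z}\) (a simplified illustration rather than the production estimator; overall FFT-normalization and volume factors, which depend on convention, are omitted). The Yamamoto line-of-sight generalization is described immediately below.

```python
import numpy as np

def bispectrum_multipole_box(delta, L, kbins, ell, a, b, c):
    """Binned B_ell^{abc} for a periodic box via (3.3): (2 ell + 1)/N_T * sum_x F_0^a F_0^b F_ell^c,
    with a fixed line of sight n = z. `delta` is the overdensity on a cubic grid of box size L,
    `kbins` is an array of bin edges, a <= b <= c are bin indices, and ell must be 0, 2, or 4.
    Normalization factors from the FFT convention are omitted for brevity."""
    N = delta.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    KX, KY, KZ = np.meshgrid(kx, kx, kx, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)
    mu = np.divide(KZ, kmag, out=np.zeros_like(kmag), where=kmag > 0)
    legendre = {0: np.ones_like(mu),
                2: 0.5 * (3 * mu**2 - 1),
                4: (35 * mu**4 - 30 * mu**2 + 3) / 8.0}[ell]

    dk = np.fft.fftn(delta)

    def field(i, weight):
        # F_ell^i(x) or D^i(x) of (3.4): inverse FFT of the bin-restricted, weighted modes
        mask = (kmag >= kbins[i]) & (kmag < kbins[i + 1])
        return np.fft.ifftn(np.where(mask, weight, 0.0)).real

    F0a, F0b, Flc = field(a, dk), field(b, dk), field(c, dk * legendre)
    Da, Db, Dc = field(a, 1.0), field(b, 1.0), field(c, 1.0)
    N_T = np.sum(Da * Db * Dc)
    return (2 * ell + 1) * np.sum(F0a * F0b * Flc) / N_T
```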
This corresponds to the replacement Footnote 3: Strictly, a separate line-of-sight is required for each galaxy. The effects of assuming a single line-of-sight are small for typical survey sizes however [59; 98]. \[F_{\ell}^{i}(\mathbf{x}) \rightarrow \int_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{x}}\Theta^{i}(k)\int d\mathbf{r}\,e^ {i\mathbf{k}\cdot\mathbf{r}}\delta(\mathbf{r})\mathcal{L}_{\ell}(\hat{\mathbf{k}}\cdot\hat{\bm {r}})\] \[\equiv \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell}\int_{\mathbf{k}}e^{-i\mathbf{k }\cdot\mathbf{x}}\Theta^{i}(k)Y_{\ell m}(\hat{\mathbf{k}})\int d\mathbf{r}\,e^{i\mathbf{k} \cdot\mathbf{r}}\delta(\mathbf{r})Y_{\ell m}^{*}(\hat{\mathbf{r}}),\] with the latter equality allowing for fast estimation using the spherical harmonic addition theorem. Window-Free Bispectrum Estimators ### Motivation When applying the estimators described in SS3 to observational data, we must specify the density field \(\delta_{g}\). Usually, this is modelled by the pixelized field of "data-minus-randoms"; \(\delta_{g}(\mathbf{r})\propto n_{g}(\mathbf{r})-\alpha\,n_{r}(\mathbf{r})\), where \(n_{g}\) is the observed galaxy density field and \(n_{r}(\mathbf{r})\) is the random catalog (containing \(1/\alpha\) times more particles than the galaxy catalog). Since both data and randoms are multiplied by the survey mask, conventional estimators will measure only the _windowed_ bispectrum, \(B^{\rm win}_{ggg}\), rather than the true underlying statistic, \(B_{ggg}\). Before bin integration, the two are related by the following convolution integral: \[B^{\rm win}_{\rm ggg}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}) = \int_{\mathbf{p}_{1}\mathbf{p}_{2}\mathbf{p}_{3}}(2\pi)^{3}\delta_{\rm D}\left( \mathbf{p}_{123}\right)\] \[\times W(\mathbf{k}_{1}-\mathbf{p}_{1})W(\mathbf{k}_{2}-\mathbf{p}_{2})W(\mathbf{k}_{3}-\mathbf{ p}_{3})B_{\rm ggg}(\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3}).\] To compare theory and data, we should similarly convolve the theory model. Due to its oscillatory nature, this is a difficult and time-consuming numerical operation (though see Ref. [52] for a possible \(\ell=0\) approach), thus the effect is often ignored or heavily simplified [e.g., 10, 11, 102, 73, 75, 101, 10]. This may lead to biases in data-analysis when large-scale modes (relevant to primordial non-Gaussianity studies) are included. A major goal of this work is the estimation of _unwindowed_ bispectrum multipoles. These are unbiased by the window function and can be robustly compared to theory models without the need to window-convolve the latter (via 4.1). Our approach follows Refs. [49, 50] for the power spectrum and \(\ell=0\) monopole (as well as Ref. [103] for the higher-point CMB correlators), themselves inspired by early work on the subject in [104, 105, 106]. ### Binned Bispectrum Components To define unwindowed estimators, we must first express the true bispectrum \(B_{ggg}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})\) in terms of the quantity of interest: the set of bispectrum coefficients \(b_{\alpha}\equiv B_{\ell}^{abc}\) (using \(\alpha\) to denote the radial bin indices and multipole). This relation will then be used to form an estimator for \(b_{\alpha}\) via maximum-likelihood methods. As an _ansatz_, we will assume \[B_{\rm ggg}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})=\sum_{\alpha}\frac{b_{\alpha}}{ \Delta_{\alpha}}\left[\Theta^{a}(k_{1})\Theta^{b}(k_{2})\Theta^{c}(k_{3}) \mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}})+5\ \text{perms.}\right]. 
\tag{4.2}\] This is similar in form to the Legendre decomposition of (3.3), but is defined for all arbitrary ordering of \(\{\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}\}\), with the binning functions picking out the relevant permutation, such that we can represent the full bispectrum in terms of its binned components \(b_{\alpha}\) with \(a\leq b\leq c\). (4.2) includes a bin-specific normalization factor \(\Delta_{\alpha}\); this takes a simple form for \(\ell=0\) as in Ref. [50] but is more complex in general, as we show below, due to the omitted \(\phi\) integrals and exchange symmetry. Inserting (4.2) into the expectation of our idealized estimator (3.4) gives \[\left\langle\widehat{B}_{\ell}^{abc}\right\rangle = \frac{2\ell+1}{N_{T}^{abc}}\int_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}} (2\pi)^{3}\delta_{\rm D}\left(\mathbf{k}_{123}\right)\Theta^{a}(k_{1})\Theta^{b}( k_{2})\Theta^{c}(k_{3})\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}})\] \[\times\,\sum_{\beta}\frac{b_{\beta}}{\Delta_{\beta}}\left[\Theta^ {a^{\prime}}(k_{1})\Theta^{b^{\prime}}(k_{2})\Theta^{c^{\prime}}(k_{3})L_{ \ell^{\prime}}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}})+5\ \text{perms.}\right],\] where \(\beta\equiv\{a^{\prime},b^{\prime},c^{\prime},\ell^{\prime}\}\). Assuming non-overlapping bins, the integral will be non-zero only when \(\{a^{\prime},b^{\prime},c^{\prime}\}\) is some permutation of \(\{a,b,c\}\) (again restricting to \(a^{\prime}\leq b^{\prime}\leq c^{\prime}\)). Invoking global rotational invariance, we can average over the LoS, making use of the relation: \[\int\frac{d\hat{\mathbf{n}}}{4\pi}\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{i} \cdot\hat{\mathbf{n}})\mathcal{L}_{\ell^{\prime}}(\hat{\mathbf{k}}_{j}\cdot\hat{\mathbf{n}}) =\frac{\delta_{\mathrm{K}}^{\ell\ell^{\prime}}}{2\ell+1}\mathcal{L}_{\ell}( \hat{\mathbf{k}}_{i}\cdot\hat{\mathbf{k}}_{j}). \tag{4.4}\] Writing out the permutations explicitly, this gives \[\left\langle\hat{B}^{abc}_{\ell}\right\rangle = \frac{1}{N_{\ell}^{abc}}\frac{b_{\alpha}}{\Delta_{\alpha}}\int_{ \mathbf{k}_{1},\mathbf{k}_{2}\mathbf{k}_{3}}(2\pi)^{3}\delta_{\mathrm{D}}\left(\mathbf{k}_{12 3}\right)\Theta^{a}(k_{1})\Theta^{b}(k_{2})\Theta^{c}(k_{3})\] \[\times\left\{\left[\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{1}\cdot\hat {\mathbf{k}}_{3})\left[\delta_{\mathrm{K}}^{bb^{\prime}}\delta_{\mathrm{K}}^{ca^{ \prime}}+\delta_{\mathrm{K}}^{ba^{\prime}}\delta_{\mathrm{K}}^{cb^{\prime}} \right]\delta_{\mathrm{K}}^{ac^{\prime}}+\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{2} \cdot\hat{\mathbf{k}}_{3})\left[\delta_{\mathrm{K}}^{aa^{\prime}}\delta_{\mathrm{K }}^{cb^{\prime}}\delta_{\mathrm{K}}^{ab^{\prime}}\delta_{\mathrm{K}}^{ca^{ \prime}}\right]\delta_{\mathrm{K}}^{bc^{\prime}}\right.\] \[\left.\qquad\qquad+\,\delta_{\mathrm{K}}^{aa^{\prime}}\delta_{ \mathrm{K}}^{bb^{\prime}}+\delta_{\mathrm{K}}^{ab^{\prime}}\delta_{\mathrm{K }}^{ba^{\prime}}\right]\delta_{\mathrm{K}}^{cc^{\prime}}\right\}.\] The Kronecker deltas demarcate four scenarios: (1) \(a\neq b\neq c\), (2) \(a=b\neq c\), (3) \(a\neq b=c\), (4) \(a=b=c\). The latter two are more complex since they involve additional Legendre polynomials of two different \(\mathbf{k}\) vectors. 
To simplify these, we define the term: \[N_{\ell}^{abc} \equiv \int_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}}(2\pi)^{3}\delta_{\mathrm{D}}\left(\mathbf{k}_{123}\right)\Theta^{a}(k_{1})\Theta^{b}(k_{2})\Theta^{c}(k_{3})\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{2}\cdot\hat{\mathbf{k}}_{3})\] \[= \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell}\int d\mathbf{x}\,\left[\int_{\mathbf{k}_{1}}e^{-i\mathbf{k}_{1}\cdot\mathbf{x}}\Theta^{a}(k_{1})\right]\left[\int_{\mathbf{k}_{2}}e^{-i\mathbf{k}_{2}\cdot\mathbf{x}}\Theta^{b}(k_{2})Y_{\ell m}(\hat{\mathbf{k}}_{2})\right]\] \[\qquad\times\,\left[\int_{\mathbf{k}_{3}}e^{-i\mathbf{k}_{3}\cdot\mathbf{x}}\Theta^{c}(k_{3})Y_{\ell m}^{*}(\hat{\mathbf{k}}_{3})\right],\] rewriting the Dirac function as an exponential in the second line, allowing expression in terms of Fourier transforms. We note that \(N_{0}^{abc}\) is just the isotropic bin volume \(N_{T}^{abc}\). With the above definitions, we obtain the desired result \(\left\langle\hat{B}^{abc}_{\ell}\right\rangle=b_{\alpha}\) (_i.e._ an unbiased estimator) subject to the following definition: \[\Delta_{\alpha} \equiv \begin{cases}1&a\neq b\neq c\\ 2&a=b\neq c\\ \left(1+N_{\ell}^{abc}/N_{T}^{abc}\right)&a\neq b=c\\ 2\left(1+2N_{\ell}^{abc}/N_{T}^{abc}\right)&a=b=c.\end{cases} \tag{4.7}\] For \(\ell=0\), this reduces to the symmetry factors used in [50] (1 for scalene, 2 for isosceles, 6 for equilateral). This calculation generalizes the standard bispectrum definition (3.3) to the binned bispectrum beyond the narrow bin limit (whence \(a\neq b\neq c\) is guaranteed).

### Maximum-Likelihood Estimators

We now consider the estimation of bispectrum coefficients \(b_{\alpha}\), given their relation to the full bispectrum \(B_{\mathrm{ggg}}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})\). Following Refs. [49, 50, 103], our pathway to this will be:

1. Write down the likelihood for the observed pixelized data-minus-randoms field \(\mathbf{d}\) in terms of the pixel correlators \(\mathsf{C}_{ij}\equiv\left\langle d_{i}d_{j}\right\rangle\), \(\mathsf{B}_{ijk}\equiv\left\langle d_{i}d_{j}d_{k}\right\rangle\), _et cetera_, where \(i,j,\cdots\in[1,N_{\mathrm{pix}}]\) are pixel indices.
2. Express the relevant correlator (here \(\mathsf{B}_{ijk}\)) in terms of the coefficients of interest, _i.e._ the binned bispectrum multipoles \(b_{\alpha}\).
3. Maximize the log-likelihood with respect to \(b_{\alpha}\), forming a quasi-optimal estimator.
4. Simplify the resulting form such that it can be efficiently implemented on data using FFTs.

In the weakly non-Gaussian regime, the likelihood of the data is given by the Edgeworth expansion [e.g., 107] \[-\log L[\mathbf{d}]=-\log L_{G}[\mathbf{d}]-\frac{1}{3!}\mathsf{B}^{ijk}\left(h_{i}h_{j}h_{k}-h_{i}\mathsf{C}_{jk}^{-1}-h_{j}\mathsf{C}_{ik}^{-1}-h_{k}\mathsf{C}_{ij}^{-1}\right)+\cdots \tag{4.8}\] where \(L_{G}\) is the Gaussian piece (which we do not need here), and \(h_{i}\equiv\mathsf{C}_{ij}^{-1}d^{j}\) is the Wiener-filtered data.
In this formalism, the optimal estimator for \(b_{\alpha}\) (which enters linearly in \(\mathsf{B}^{ijk}\)) is given by \[\widehat{b}_{\alpha}=\sum_{\beta}\left(F^{-1}\right)_{\alpha\beta}\widehat{b}_ {\beta}^{\text{num}}, \tag{4.9}\] defining the numerator and normalization: \[\widehat{b}_{\alpha}^{\text{num}} = \frac{1}{6}\frac{\partial\mathsf{B}^{ijk}}{\partial b_{\alpha}} \left[h_{i}h_{j}h_{k}-\left(h_{i}\mathsf{C}_{jk}^{-1}+2\text{ perms.}\right)\right] \tag{4.10}\] \[F_{\alpha\beta} = \frac{1}{6}\frac{\partial\mathsf{B}^{ijk}}{\partial b_{\alpha}} \mathsf{C}_{il}^{-1}\mathsf{C}_{jm}^{-1}\mathsf{C}_{kn}^{-1}\frac{\partial \mathsf{B}^{lmn}}{\partial b_{\beta}}.\] This is just the maximum likelihood solution of (4.8). In our case, the three-point function can be written as a Fourier-transform of the full redshift-space bispectrum \(B_{ggg}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})\), noting that \(d_{i}\equiv n(\mathbf{r}_{i})\delta_{g}(\mathbf{r}_{i})\) for background density \(n(\mathbf{r})\): \[\mathsf{B}^{ijk}=n(\mathbf{r}_{i})n(\mathbf{r}_{j})n(\mathbf{r}_{k})\int_{\mathbf{k}_{1}\mathbf{k} _{2}\mathbf{k}_{3}}e^{i\mathbf{k}_{1}\cdot\mathbf{r}_{i}+i\mathbf{k}_{2}\cdot\mathbf{r}_{j}+i\mathbf{k }_{3}\cdot\mathbf{r}_{k}}(2\pi)^{3}\delta_{\text{D}}\left(\mathbf{k}_{123}\right)B_{ggg }(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}), \tag{4.11}\] Inserting (4.2), we can write the cumulant derivative as \[\frac{\partial\mathsf{B}^{ijk}}{\partial b_{\alpha}} = \frac{n(\mathbf{r}_{i})n(\mathbf{r}_{j})n(\mathbf{r}_{k})}{\Delta_{\alpha}} \int_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}}\Big{[}\Theta^{a}(k_{1})\Theta^{b}(k_{2}) \Theta^{c}(k_{3})\mathcal{L}_{\ell}(\hat{\mathbf{k}}_{3}\cdot\hat{\mathbf{n}})+5\text{ perms.}\Big{]}\] \[\times e^{i\mathbf{k}_{1}\cdot\mathbf{r}_{i}+i\mathbf{k}_{2}\cdot\mathbf{r}_{j}+i \mathbf{k}_{3}\cdot\mathbf{r}_{k}}(2\pi)^{3}\delta_{\text{D}}\left(\mathbf{k}_{123}\right).\] Under the Yamamoto approximation, we fix the LoS to be \(\hat{\mathbf{n}}=\hat{\mathbf{r}}_{3}\), as above. Inserting the above results into (4.10), the numerator of the bispectrum estimator is found to be: \[\widehat{b}_{\alpha}^{\text{num}} = \frac{1}{\Delta_{\alpha}}\int d\mathbf{r}\left[g_{0}^{a}[\mathbf{d}](\bm {r})g_{0}^{b}[\mathbf{d}](\mathbf{r})g_{\ell}^{c}[\mathbf{d}](\mathbf{r})-\left(g_{0}^{a}[ \mathbf{d}](\mathbf{r})\left\langle g_{0}^{b}[\mathbf{a}](\mathbf{r})\tilde{g}_{\ell}^{c}[ \mathbf{a}](\mathbf{r})\right\rangle+2\text{ perms.}\right)\right] \tag{4.13}\] subject to the definitions \[g_{\ell}^{a}[\mathbf{y}](\mathbf{r}) = \int_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{r}}\Theta^{a}(k)\int d\mathbf{r}^{ \prime}e^{i\mathbf{k}\cdot\mathbf{r}^{\prime}}n(\mathbf{r}^{\prime})[\mathsf{H}^{-1}\mathbf{ y}](\mathbf{r}^{\prime})\mathcal{L}_{\ell}(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}^{\prime}) \tag{4.14}\] \[\tilde{g}_{\ell}^{a}[\mathbf{y}](\mathbf{r}) = \int_{\mathbf{k}}e^{-i\mathbf{k}\cdot\mathbf{r}}\Theta^{a}(k)\int d\mathbf{r}^{ \prime}e^{i\mathbf{k}\cdot\mathbf{r}^{\prime}}n(\mathbf{r}^{\prime})[\mathsf{A}^{-1}\mathbf{ y}](\mathbf{r}^{\prime})\mathcal{L}_{\ell}(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}^{\prime}).\] \(g_{0}^{a}\) is equal to the \(g^{a}\) function of Ref. [50]. This is closely linked to the \(F_{\ell}\) functions found in the ideal estimator (3.8), but now includes the survey mask and custom weighting functions. 
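Schematically, once the \(g\) and \(\tilde{g}\) fields have been computed, assembling the numerator of (4.13) for a single bin triplet reduces to a few array operations. A minimal sketch (the inputs are placeholders; the precomputed Monte Carlo averages correspond to the one-point terms of (4.13)):

```python
import numpy as np

def bispectrum_numerator(g0a_d, g0b_d, glc_d, mc_bc, mc_ac, mc_ab, delta_alpha, d3r):
    """Numerator of Eq. (4.13) for one bin triplet alpha = (a, b, c; ell).

    g0a_d, g0b_d, glc_d : real-space g-fields of (4.14), evaluated on the data d
    mc_bc, mc_ac, mc_ab : Monte Carlo averages < g g-tilde > over random maps a,
                          one for each pairing appearing in the one-point term
    delta_alpha         : symmetry factor Delta_alpha of (4.7)
    d3r                 : grid volume element, so that the sum approximates the integral
    """
    cubic = g0a_d * g0b_d * glc_d
    linear = g0a_d * mc_bc + g0b_d * mc_ac + glc_d * mc_ab
    return np.sum(cubic - linear) * d3r / delta_alpha
```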
Two points are of note: (a) we replace the \(\mathsf{C}^{-1}\) Wiener filtering by a more general weighting \(\mathsf{H}^{-1}\); (b) we introduce a set of random maps \(\mathbf{a}\) with known covariance \(\mathsf{A}\) following Ref. [108]. The former allows for a simple-to-implement estimator (since the full pixel covariance is difficult to compute and harder still to invert), and the latter allows one to compute the one-point terms via Monte Carlo summation (removing the need for a direct sum which has a prohibitive \(\mathcal{O}(N_{\text{pix}}^{2})\) scaling). Exploiting spherical harmonic factorizations, the two terms in (4.1) can be written in terms of forward and reverse Fourier-transforms \(\mathcal{F}\) and \(\mathcal{F}^{-1}\): \[g^{a}_{\ell}[\boldsymbol{y}](\boldsymbol{r}) \equiv \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell}\mathcal{F}^{-1}\left[ \Theta^{a}(k)Y^{*}_{\ell m}(\hat{\boldsymbol{k}})\mathcal{F}\left[n\mathsf{H}^ {-1}\boldsymbol{y}\,Y_{\ell m}\right](\boldsymbol{k})\right](\boldsymbol{r}) \tag{4.15}\] \[\tilde{g}^{a}_{\ell}[\boldsymbol{y}](\boldsymbol{r}) \equiv \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell}\mathcal{F}^{-1}\left[ \Theta^{a}(k)Y^{*}_{\ell m}(\hat{\boldsymbol{k}})\mathcal{F}\left[n\mathsf{A} ^{-1}\boldsymbol{y}\,Y_{\ell m}\right](\boldsymbol{k})\right](\boldsymbol{r}).\] The second part of the estimator is a data-independent normalization (or Fisher) matrix, \(F_{\alpha\beta}\). This acts to remove correlations between bins and multipoles and can be efficiently estimated via Monte Carlo methods. In the limit of ideal weighting (\(\mathsf{H}^{-1}\to\mathsf{C}^{-1}\)) and vanishing non-Gaussianity, the bispectrum covariance is equal to \(F^{-1}\). As in [50], this takes the form \[F_{\alpha\beta}=\frac{1}{12}\left(\left\langle\phi^{i}_{\alpha}\mathsf{H}^{-1} _{il}\tilde{\phi}^{i}_{\beta}\right\rangle-\left\langle\phi^{i}_{\alpha} \right\rangle\mathsf{H}^{-1}_{il}\left\langle\tilde{\phi}^{i}_{\beta}\right\rangle \right), \tag{4.16}\] with \(\phi^{i}_{\alpha}[\boldsymbol{a}]=\mathsf{B}^{ijk}_{,\alpha}\mathsf{H}^{-1}_{ jj^{\prime}}\mathsf{H}^{-1}_{kk^{\prime}}a^{j^{\prime}}a^{k^{\prime}}\) and analogously for \(\tilde{\phi}\) with \(\mathsf{H}^{-1}\to\mathsf{A}^{-1}\). (4.16) can be implemented by applying the linear map \(\mathsf{H}^{-1}\) to \(\tilde{\phi}\) then summing the result (multiplied by \(\phi\)) in pixel-space. Once again, the expectations can be computed by summation over Monte Carlo realizations \(\boldsymbol{a}\) with known covariance \(\mathsf{A}\) (e.g., Gaussian random fields). 
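The Monte Carlo evaluation of (4.16) can likewise be organized as a short loop over random realizations. A schematic sketch (the callables `make_random`, `compute_phi`, `compute_phi_tilde`, and `apply_Hinv` are placeholders standing in for survey-specific machinery, not the public code's actual interface; \(\mathsf{H}^{-1}\) is assumed symmetric):

```python
import numpy as np

def fisher_matrix(make_random, compute_phi, compute_phi_tilde, apply_Hinv, n_bins, n_mc=100):
    """Monte Carlo estimate of the normalization (Fisher) matrix of Eq. (4.16).

    make_random()        -> random map a with known covariance A (e.g. a Gaussian realization)
    compute_phi(a)       -> array (n_bins, n_pix) of phi_alpha[a] maps
    compute_phi_tilde(a) -> as above, but built with A^{-1} weights (phi-tilde)
    apply_Hinv(m)        -> the weighting H^{-1} applied to a pixel map m
    """
    cross = np.zeros((n_bins, n_bins))
    mean_phi = 0.0
    mean_phit_H = 0.0
    for _ in range(n_mc):
        a = make_random()
        phi = compute_phi(a)                                            # (n_bins, n_pix)
        phit_H = np.stack([apply_Hinv(m) for m in compute_phi_tilde(a)])
        cross += phi @ phit_H.T                                         # phi_alpha^i H^{-1}_{il} phit_beta^l
        mean_phi = mean_phi + phi
        mean_phit_H = mean_phit_H + phit_H
    cross /= n_mc
    mean_phi = mean_phi / n_mc
    mean_phit_H = mean_phit_H / n_mc
    return (cross - mean_phi @ mean_phit_H.T) / 12.0
```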
With the above form for the cumulant derivative (4.12), we can write the \(\phi\) field explicitly in terms of Fourier transforms: \[\phi^{i}_{\alpha}[\boldsymbol{a}] = \mathsf{B}^{ijk}_{,\alpha}\mathsf{H}^{-1}_{jj^{\prime}}\mathsf{H}^{-1}_{kk^{\prime}}a^{j^{\prime}}a^{k^{\prime}}\] \[= \frac{n(\boldsymbol{r}_{i})}{\Delta_{\alpha}}\int d\boldsymbol{r}\int_{\boldsymbol{k}_{1}\boldsymbol{k}_{2}\boldsymbol{k}_{3}}e^{i\boldsymbol{k}_{1}\cdot\boldsymbol{r}_{i}}\left[\Theta^{a}(k_{1})\Theta^{b}(k_{2})\Theta^{c}(k_{3})\mathcal{L}_{\ell}(\hat{\boldsymbol{k}}_{3}\cdot\hat{\boldsymbol{n}})+5\,\text{perms.}\right]\] \[\qquad\qquad\times e^{-i(\boldsymbol{k}_{123})\cdot\boldsymbol{r}}[n\mathsf{H}^{-1}\boldsymbol{a}](\boldsymbol{k}_{2})[n\mathsf{H}^{-1}\boldsymbol{a}](\boldsymbol{k}_{3})\] \[= \frac{2n(\boldsymbol{r}_{i})}{\Delta_{\alpha}}\left\{\mathcal{F}^{-1}\left[\Theta^{a}(k)\mathcal{F}\left[g^{b}_{0}[\boldsymbol{a}]g^{c}_{\ell}[\boldsymbol{a}]\right](\boldsymbol{k})\right](\boldsymbol{r}_{i})+(a\leftrightarrow b)\right.\] \[\qquad\qquad\left.+\,\mathcal{F}^{-1}\left[\Theta^{c}(k)\mathcal{L}_{\ell}(\hat{\boldsymbol{k}}\cdot\hat{\boldsymbol{r}}_{i})\mathcal{F}\left[g^{a}_{0}[\boldsymbol{a}]g^{b}_{0}[\boldsymbol{a}]\right](\boldsymbol{k})\right](\boldsymbol{r}_{i})\right\},\] with an analogous form for \(\tilde{\phi}_{\alpha}\) involving the \(\tilde{g}_{\ell}\) functions. The final term involves a Legendre polynomial; using spherical harmonic decompositions, this can be simplified to yield the form: \[\left.\phi^{i}_{\alpha}[\boldsymbol{a}]\right|_{\rm III}=\frac{2n(\boldsymbol{r}_{i})}{\Delta_{\alpha}}\frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell}Y_{\ell m}(\hat{\boldsymbol{r}}_{i})\mathcal{F}^{-1}\left[\Theta^{c}(k)Y^{*}_{\ell m}(\hat{\boldsymbol{k}})\mathcal{F}\left[g^{a}_{0}[\boldsymbol{a}]g^{b}_{0}[\boldsymbol{a}]\right](\boldsymbol{k})\right](\boldsymbol{r}_{i}). \tag{4.18}\] Collecting results, the full estimator for the bispectrum is given by \[\widehat{b}_{\alpha}=\sum_{\beta}F^{-1}_{\alpha\beta}\widehat{b}^{\rm num}_{\beta}. \tag{4.19}\] This is unbiased for any choice of \(\mathsf{H}^{-1}\), unwindowed, and, for \(\mathsf{H}^{-1}\approx\mathsf{C}^{-1}\), close-to optimal (partly due to the inclusion of a linear term [cf. 108]). These properties are derived formally in [50]. Both the numerator and Fisher matrix can be efficiently computed using \(N_{\rm mc}\) Monte Carlo simulations, with the finite number of simulations incurring an error proportional to \(\sqrt{1+1/N_{\rm mc}}\). Whilst the latter is computationally expensive (requiring \(\mathcal{O}(N_{\rm bins})\) Fourier transforms), it only has to be estimated once for a given survey geometry. We will discuss the specifics of our implementation in §6. A public Python implementation can be found online.4 Footnote 4: GitHub.com/OliverPhilcox/Spectra-Without-Windows.

## 5 Theory Model Overview

### Idealized Form

To model the galaxy bispectrum multipoles, we will use the tree-level theory introduced in Ref.
[15] (see also [109; 110; 111; 112; 113; 114; 9; 46]). The first two ingredients of this model are the deterministic tree-level contribution, built from the linear power spectrum \(P_{11}\) and the redshift-space galaxy kernels (the linear kernel \(Z_{1}\) also enters the stochastic term below), and a counterterm contribution; this combination is adequate only on large scales.
As one moves to shorter scales, a full set of counterterms becomes necessary, along with the appropriate one-loop corrections, as demonstrated in Ref. [76]. The third piece of our model is the stochastic contribution \[B_{\rm stoch}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})=Z_{1}(\mathbf{k}_{1})\frac{P_{11}(k_{1})}{\bar{n}}\left(b_{1}B_{\rm shot}+f\mu^{2}(1+P_{\rm shot})\right)+\frac{1+A_{\rm shot}}{\bar{n}^{2}}\,, \tag{10}\] where \(\bar{n}\) is the galaxy number density, and \(A_{\rm shot},B_{\rm shot},P_{\rm shot}\) are free \(\mathcal{O}(1)\) shot-noise parameters that capture deviations from Poissonian stochasticity. Note that mathematical consistency requires that the \(P_{\rm shot}\) parameter is the same as that appearing in the power spectrum model. We additionally note that, in contrast to [15], we do not make any assumptions on \(A_{\rm shot}\), and keep this parameter free in the fit. The last purely theoretical ingredient of our model is infrared (IR) resummation, which captures the non-linear evolution of baryon acoustic oscillations [123; 124; 125]. This is implemented using the prescription outlined in Refs. [126; 127; 128; 15], developed within the context of time-sliced perturbation theory [129]. ### Observational Effects Two practical effects must also be taken into account in our model. The first is the coordinate distortion imprinted by the assumption of a fiducial cosmology (known as the Alcock-Paczynski effect, when applied to the shifts of the BAO peak [130]). The relationship between the true underlying wavenumbers and angles \((q,\nu)\) and the observed wavenumbers and angles \((k,\mu)\) is given by \[\begin{split}& q^{2}=k^{2}\left[\alpha_{\parallel}^{-2}\mu^{2}+\alpha_{\perp}^{-2}(1-\mu^{2})\right]\,,\\ &\nu^{2}=\alpha_{\parallel}^{-2}\mu^{2}\left[\alpha_{\parallel}^{-2}\mu^{2}+\alpha_{\perp}^{-2}(1-\mu^{2})\right]^{-1}\,,\end{split} \tag{11}\] where \[\alpha_{\parallel}=\frac{H_{\rm fid}(z)}{H_{\rm true}(z)}\,\frac{H_{0,\rm true}}{H_{0,\rm fid}}\,,\quad\alpha_{\perp}=\frac{D_{\rm true,A}(z)}{D_{\rm fid,A}(z)}\,\frac{H_{0,\rm true}}{H_{0,\rm fid}}\,, \tag{12}\] for angular diameter distance \(D_{\rm A}\) and Hubble parameter \(H\). Note that we have explicitly taken into account that wavenumbers are measured in units of \(h\,{\rm Mpc}^{-1}\), yielding additional factors \(H_{0,\rm true}/H_{0,\rm fid}\). The bispectrum multipoles in physical redshift space are then given by [36] (see §3) \[\begin{split}& B_{\ell}(k_{1},k_{2},k_{3})\\ &=\frac{2\ell+1}{2\alpha_{\parallel}^{2}\alpha_{\perp}^{4}}\int_{0}^{2\pi}\frac{d\phi}{2\pi}\int_{-1}^{1}d\mu_{3}\ \mathcal{L}_{\ell}(\mu_{3})\ B_{\rm gge}(q_{1}[k_{1},\mu_{1}],q_{2}[k_{2},\mu_{2}],q_{3}[k_{3},\mu_{3}],\nu_{1}[\mu_{1}],\nu_{2}[\mu_{2}],\nu_{3}[\mu_{3}])\,,\end{split} \tag{13}\] where \(\mu_{1},\mu_{2}\) are defined by \(\mu_{3}\) and \(\phi\), and the observed angles are subject to (11). In what follows we will focus on the \(\ell=0,2,4\) moments. Higher order moments are also present, but they generate negligible signal on large scales, and can thus be ignored for the purposes of this paper. The last observational effect is related to the discrete sampling of Fourier modes. We account for this effect following Ref. [15] (with alternative binning methods discussed in Refs. [47; 25; 114; 46]). Our method consists of two steps.
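Before turning to those two steps, it may help to see the coordinate mapping of Eqs. (11)-(12) spelled out numerically. The following minimal sketch is our own illustration (not part of the published pipeline); the input values are placeholders chosen only to show the behaviour of the mapping.

```python
import numpy as np

def true_from_observed(k, mu, alpha_par, alpha_perp):
    """Map observed (k, mu) to true (q, nu) via the Alcock-Paczynski distortion, Eqs. (11)-(12)."""
    # q^2 = k^2 [alpha_par^-2 mu^2 + alpha_perp^-2 (1 - mu^2)]
    factor = mu**2 / alpha_par**2 + (1.0 - mu**2) / alpha_perp**2
    q = k * np.sqrt(factor)
    # nu^2 = (mu^2 / alpha_par^2) / factor, keeping the sign of mu
    nu = (mu / alpha_par) / np.sqrt(factor)
    return q, nu

# A 1% shift along the line of sight leaves transverse modes (mu = 0) untouched:
print(true_from_observed(k=0.05, mu=0.0, alpha_par=1.01, alpha_perp=1.0))  # (0.05, 0.0)
print(true_from_observed(k=0.05, mu=1.0, alpha_par=1.01, alpha_perp=1.0))  # (~0.0495, 1.0)
```

For \(\alpha_{\parallel}=\alpha_{\perp}=1\) the mapping reduces to the identity, as expected.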
As a first step (known as the "continuum approximation"), one assumes that there is an infinitely dense continuum of Fourier modes, in which case the binning effects simplify to an integration of the bispectrum model over the chosen wavenumber bins. As a second step, deviations from the continuum approximation are taken into account by means of "discreteness weights", defined as the ratio between the true binned bispectrum built out of discrete Fourier modes, and its continuous approximation, _i.e._ \[w=\frac{\hat{B}_{\ell,\text{disc}}}{\hat{B}_{\ell,\text{int}}}\,, \tag{10}\] where \(\hat{B}_{\ell,\text{int}}\) is the bin-integrated bispectrum, and \(\hat{B}_{\ell,\text{disc}}\) is the explicitly-computed bispectrum model calculated on a discrete \(k\)-grid. Note that the angular integral (9) is replaced with a discrete sum over the available angular modes in this case. The discreteness weights \(w\) (which are expensive to compute) are defined for some fiducial cosmology that is consistent with the data. The residual cosmology-dependence of the weights is quite weak, and in principle, can be taken into account iteratively [15]. All in all, our theory model is given by \[B_{\ell}^{\text{th}}=w_{\ell}(k_{1},k_{2},k_{3})B_{\ell}^{\text{int}}(k_{1},k_ {2},k_{3})\,. \tag{11}\] ## 6 Data and Likelihood This paper uses three different types of data and corresponding likelihoods. First, we will analyze mock galaxy clustering data from the PT Challenge and Nseries mocks, with the former boasting huge volume and the latter including BOSS observational effects. In the second part of the paper, we analyze the observed BOSS DR12 LRG clustering data. ### PT Challenge The PT Challenge simulation suite was created to test analytic modeling of the large-scale clustering of BOSS-like galaxies at the per-mile level [48], covering a cumulative volume of 566 (\(h^{-1}\text{Gpc}\))\({}^{3}\). These are periodic box simulations that are free of many observational effects, such as those of the lightcone (radial selection), window function, and fiber collisions. The mocks, however, include the Alcock-Paczynski effect. The publicly available simulation suite consists of 10 independent realizations with three snapshots at \(z=0.38,0.51,0.61\). In this work, we will focus on a single snapshot at \(z=0.61\), which matches the properties of the "high-z" BOSS DR12 data chunk. This dataset has been used to validate various analyses of EFT-based theoretical models for the galaxy power spectra and bispectra in Refs. [15, 48, 57, 59, 76, 131]. Here, we extend these analyses to the galaxy bispectrum multipole moments. Our full data vector is given by \[\{P_{0},P_{2},P_{4},Q_{0},B_{0},B_{2},B_{4}\}\,, \tag{12}\] where \(P_{\ell}\) (\(\ell=0,2,4\)) are the galaxy power spectrum multipoles with \(k_{\text{max}}^{P}=0.16\)\(h\,\text{Mpc}^{-1}\), \(Q_{0}\equiv P_{0}-\frac{1}{2}P_{2}+\frac{3}{8}P_{4}\) is the real space galaxy power spectrum proxy (taken for \(k_{\text{min}}^{Q}=0.16\)\(h\,\text{Mpc}^{-1}\) and \(k_{\text{max}}^{Q}=0.4\)\(h\,\text{Mpc}^{-1}\)), and \(B_{\ell}\) (\(\ell=0,2,4\)) are the bispectrum multipole moments taken for \(k_{\text{min}}^{B}=0.01\)\(h\,\text{Mpc}^{-1}\) and \(k_{\text{max}}^{B}=0.08\)\(h\,\text{Mpc}^{-1}\), and estimated using the periodic-box estimators of (4). The power spectrum likelihood for \(P_{\ell}\) and \(Q_{0}\) has been discussed in detail in [59], with that of the tree-level bispectrum monopole considered in [15]. 
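As a concrete illustration of how these ingredients enter a numerical likelihood, the short sketch below (our own, with made-up numbers) builds the real-space proxy \(Q_{0}=P_{0}-\tfrac{1}{2}P_{2}+\tfrac{3}{8}P_{4}\) from measured power spectrum multipoles and applies pre-computed discreteness weights to a bin-integrated bispectrum, following the relation \(B_{\ell}^{\rm th}=w_{\ell}B_{\ell}^{\rm int}\); computing the weights themselves requires the discrete \(k\)-grid sums described above.

```python
import numpy as np

def q0_proxy(P0, P2, P4):
    """Real-space power spectrum proxy Q0 = P0 - P2/2 + 3 P4/8, evaluated bin by bin."""
    return P0 - 0.5 * P2 + 0.375 * P4

def apply_discreteness_weights(B_int, weights):
    """Final binned prediction: bin-integrated bispectrum times precomputed discreteness weights."""
    return weights * B_int

# Toy example with made-up numbers (three triangle bins, monopole only):
B_int   = np.array([1.2e9, 8.5e8, 6.1e8])   # bin-integrated model, (Mpc/h)^6
weights = np.array([1.03, 0.98, 1.01])      # ratio of discrete to continuum prediction
print(apply_discreteness_weights(B_int, weights))
```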
Note that these scale cuts have been chosen by requiring the parameter estimation from PT Challenge mocks to be unbiased. In principle, one could measure the scale cut \(k_{\text{max}}\) without knowing the true underlying cosmology, e.g., using the theoretical error approach [54, 30]. In this work, we assume a Gaussian likelihood for the data vector (12) with the covariance matrix computed in the Gaussian tree-level approximation, as verified for the power spectrum and the tree-level bispectrum likelihood in Ref. [15] (see also [132, 133, 134, 135]). In particular, it has been found that the cross-covariance between the power spectrum and the bispectrum is negligible for our scale cuts. For the bispectrum multipoles, we also compute their covariances in the Gaussian tree-level approximation, as detailed in Appendix A. Note that the correlation between various multipoles appears already in this approximation (similar to the correlation between different \(P_{\ell}\) multipoles), though we ignore the correlation between the bispectrum multipoles and the power spectrum, as before. Based on the results of [15], this approximation is adequate for our choice of \(k_{\rm max}^{B}\). ### Nseries The second type of simulation data we consider is the Nseries mock suite [51; 93] (see also [136; 137]). This suite consists of 84 pseudo-independent realizations of the BOSS-like halo occupation distribution-based galaxies, covering a cumulative effective volume of approximately 235 (\(h^{-1}{\rm Gpc}\))\({}^{3}\) (see Footnote 5). The Nseries mocks include all necessary observational effects present in the actual BOSS CMASS sample: the redshift distribution, fiber collisions, and the survey window function. As such, these mocks are appropriate to test our window-free estimator, as well as our galaxy clustering model. These mocks were used for validating the official BOSS DR12 data analysis pipeline. Footnote 5: This value is based on the CMASS NGC effective sky area and redshift range given in [138]. The effective redshift of the Nseries mocks is \(z_{\rm eff}=0.55\) and we analyze the same dataset as in (6.1) but with \(k_{\rm max}^{P_{\ell}}=0.2\ h\,{\rm Mpc}^{-1}\), and \(k_{\rm min}^{Q_{0}}=0.2\ h\,{\rm Mpc}^{-1}\), consistent with the analysis of Ref. [16]. The power spectrum and bispectrum multipoles are measured with the unwindowed estimator described in §4. This uses 100 Monte Carlo realizations to compute the Fisher matrix and one-point terms. For the pixel weighting, we assume the FKP limit \({\sf H}^{-1}\to\delta_{\rm D}({\mathbf{r}}_{i}-{\mathbf{r}}_{j})n^{-1}({\mathbf{r}})[1+n({\mathbf{r}})P_{\rm FKP}]^{-1}\) for \(P_{\rm FKP}=10^{4}\,h^{-3}{\rm Mpc}^{3}\), with the window function \(n({\mathbf{r}})\) computed from the survey mask and redshift distribution. Our initial bispectra are computed with \(k_{\rm max}^{B}=0.11\ h\,{\rm Mpc}^{-1}\) and then trimmed to \(k_{\rm max}^{B}=0.08\ h\,{\rm Mpc}^{-1}\) to minimize window-function-induced correlations with modes not included in the analysis. In the final data vector, we use 62 bispectrum bins with \(\Delta k=0.01\ h\,{\rm Mpc}^{-1}\) for each multipole. Here, we assume the likelihood for the dataset to be Gaussian (valid since we limit to quasi-linear scales).
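Evaluating a Gaussian likelihood of this kind reduces to a single quadratic form. The sketch below is a generic illustration rather than the actual analysis code (the data, model, and covariance values are toys); it computes the log-likelihood for a data vector given a model vector and a fixed covariance, using a Cholesky factorization for numerical stability.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_loglike(data, model, cov):
    """ln L = -0.5 [ (d - t)^T C^{-1} (d - t) + ln|C| + n ln 2pi ], with C factorized once."""
    resid = data - model
    factor = cho_factor(cov)
    chi2 = resid @ cho_solve(factor, resid)
    logdet = 2.0 * np.sum(np.log(np.diag(factor[0])))
    return -0.5 * (chi2 + logdet + len(data) * np.log(2.0 * np.pi))

# Toy usage with a diagonal covariance:
d = np.array([1.0, 2.0, 3.0])
t = np.array([1.1, 1.9, 3.2])
C = np.diag([0.1**2, 0.1**2, 0.2**2])
print(gaussian_loglike(d, t, C))
```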
Since the window function induces non-negligible correlations between the power spectrum and bispectrum (which enter the covariance but not the mean datavector), we cannot use the analytic approximations described above; instead, we use the empirical covariance extracted from the NGC MultiDark Patchy CMASS mocks [139; 140]. This set of approximate mocks has a selection function and geometry closely matching that of the BOSS CMASS sample. We use 2048 mocks in our covariance estimator, which guarantees that the sampling noise is heavily suppressed (though see [134] for compression-based approaches). We stress that all our consistency checks are carried out on realistic mocks such as PT Challenge and Nseries, which are based on exact N-body simulations. The MultiDark Patchy mocks, which are generated with approximate gravity solvers, are used only to build covariance matrices. ### BOSS Finally, we analyze real clustering data, from the twelfth data release (DR12, 2016) of BOSS [51]. The data is split into four different chunks depending on the redshift coverage and sky position, denoted NGCz1, SGCz1, NGCz3, and SGCz3, where SGC and NGC refer to South and North Galactic Cap survey regions, and z1\(=0.38\) and z3\(=0.61\) are the sample effective redshifts. The power spectrum and bispectrum multipoles are computed using the window-free estimator described in §4 (see also [49; 50]). We supplement the data vector (6.1) with BAO measurements from the reconstructed power spectrum measurements, condensed into Alcock-Paczynski parameters \(\alpha_{\parallel}\), \(\alpha_{\perp}\). These are extracted for each data chunk as described in Ref. [58]. The likelihood for the full data vector for each of the four BOSS data samples, \[\{P_{0},P_{2},P_{4},Q_{0},B_{0},B_{2},B_{4},\alpha_{\parallel},\alpha_{\perp}\}\,, \tag{109}\] is assumed to be Gaussian, with the empirical covariance obtained from the suite of MultiDark Patchy mocks generated separately for each data sample. Note that the bispectrum covariance is very close to the one computed in the Gaussian tree-level approximation, _i.e._ the window function effects are small when using our window-free estimator (though not guaranteed to be zero). ### Codes & Priors We evaluate our theoretical predictions for the power spectrum and bispectrum with the open source CLASS-PT code [141] (see also [142; 72]). MCMC chains are computed with the Montepython code [143; 144]. Finally, let us discuss priors on nuisance parameters. As far as the power spectrum is concerned, we adopt the same priors as in previous BOSS EFT full-shape analyses, detailed in Refs. [16; 76; 141] (with conventions described in Appendix D of [15]). For the bispectrum nuisance parameters, we assume \[A_{\rm shot}\sim\mathcal{N}(0,1^{2})\,,\quad B_{\rm shot}\sim\mathcal{N}(1,1^{2})\,,\quad c_{1}\sim\mathcal{N}(0,5^{2})\,, \tag{110}\] which are motivated by naturalness, _i.e._ the expectation that the EFT parameters should be \(\mathcal{O}(1)\) (after removing their physical scalings). ## 7 Tests on Mock Catalogs In this section we test our analysis pipeline on the realistic mock catalogs described above, starting with the PT Challenge mocks. These cover a huge effective volume, and do not contain survey systematics effects, thereby allowing clear tests of our theory model for the anisotropic bispectrum.
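Coming back to the mock-based covariance described above: an empirical covariance of this type is simply the sample covariance of the mock data vectors. The sketch below is a generic illustration with random stand-in data (the actual estimator may also include debiasing factors for the finite number of mocks, which are omitted here).

```python
import numpy as np

def empirical_covariance(mock_datavectors):
    """Sample covariance over mocks; rows = mock realizations, columns = data-vector entries."""
    return np.cov(mock_datavectors, rowvar=False)

# Stand-in for 2048 mock measurements of a 62-bin bispectrum multipole:
rng = np.random.default_rng(0)
mocks = rng.normal(size=(2048, 62))
C = empirical_covariance(mocks)
print(C.shape)  # (62, 62)
```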
After this, we will proceed to the Nseries mock suite, which cover a somewhat smaller volume, are not exactly independent (the 84 mocks in the suite are based on only 7 independent N-body realizations), but include all necessary observational effects present in the actual data, and are thus analyzed using window-free estimators. In both cases, we will fit for the cosmological parameters of the minimal \(\Lambda\)CDM model. These are the Hubble constant \(H_{0}\), the physical dark matter density \(\omega_{cdm}\), the primordial power spectrum amplitude \(A_{s}\) and tilt \(n_{s}\). We also consider the derived parameters \(\Omega_{m}\) and \(\sigma_{8}\). The CMB temperature \(T_{0}\) is kept fixed to the FIRAS value [77].6 The physical baryon fraction, \(\omega_{b}\), is kept fixed to the true value of the mocks in order to simulate the effect of the \(\omega_{b}\) prior from either Big Bang Nucleosynthesis (BBN) [81; 82] or the CMB. Finally, the neutrino masses are set to zero, as in the simulations. We will find that our pipeline successfully recovers the input cosmological parameters from both types of mocks in this setup. Footnote 6: This parameter is not relevant for the LSS data. We require it here only to convert the measured baryon-to-photon and dark-matter-to-photon ratios into \(\omega_{b}\) and \(\omega_{cdm}\)[145]. ### PT Challenge We begin by considering the likelihood of the PT Challenge power spectrum and bispectrum multipoles. For comparison, we also present results obtained from the bispectrum monopole likelihood, _i.e._ that excluding higher-order angular moments. The latter results are equivalent to those present in Ref. [15]. The posteriors of cosmological, linear and quadratic bias parameters extracted from the PT Challenge simulation data are displayed in Fig. 5, with the one-dimensional marginalized limits given in Tab. 3. Since the PT challenge is still on-going, the presented cosmological parameters are normalized to their true values that we keep unknown to the reader. A similar logic holds for the linear bias parameter, \(b_{1}\), whose ground truth value is taken from fits to the real-space one-loop galaxy power spectrum and bispectrum datasets [76]. For the quadratic bias parameters, we instead display \(\Delta b_{2}=b_{2}-b_{2}^{\rm truth}\), \(\Delta b_{\mathcal{G}_{2}}=b_{\mathcal{G}_{2}}-b_{\mathcal{G}_{2}}^{\rm truth}\), where the ground truth values are adapted from [15]. Looking at Fig. 5 and Tab. 3, we see that our fitting pipeline successfully recovers the cosmological and main nuisance parameters from the PT Challenge data. The second relevant observation is that the addition of the bispectrum multipoles does not have a strong impact on the cosmological parameter recovery. One can notice some \(\lesssim 0.5\sigma\) shifts in the posterior means for some cosmological parameters, and a modest shrinking of the errorbars. The largest effect is on \(\sigma_{8}\) (and \(b_{1}\)), whose posteriors narrow by \(\lesssim 10\%\). In contrast to cosmological parameters, the effect on the quadratic bias parameters is more pronounced, with \(b_{2}\) and \(b_{\mathcal{G}_{2}}\) posteriors shrinking by \(30\%\) and \(10\%\), respectively. The best-fitting theory models for the bispectrum multipoles are shown in Fig. 1. Here, we display the full bispectrum dataset as a function of the triangle index, as well as squeezed and equilateral configurations as functions of relevant wavenumbers of the bin centers. 
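For reference, the derived parameters mentioned above follow from the sampled ones in a standard way. The sketch below is our own illustration with placeholder inputs (in practice the linear power spectrum would come from a Boltzmann code such as CLASS-PT); it computes \(\Omega_{m}\) from the physical densities and \(\sigma_{8}\) from a tabulated linear matter power spectrum via the usual top-hat window integral.

```python
import numpy as np

def omega_m(omega_cdm, omega_b, H0, omega_ncdm=0.0):
    """Omega_m = (omega_cdm + omega_b + omega_ncdm) / h^2, with h = H0 / 100 (neutrinos set to zero here)."""
    h = H0 / 100.0
    return (omega_cdm + omega_b + omega_ncdm) / h**2

def sigma8(k, Pk, R=8.0):
    """sigma_8^2 = (1 / 2 pi^2) * integral dk k^2 P(k) W^2(kR), with a top-hat window and R in Mpc/h."""
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    integrand = k**2 * Pk * W**2 / (2.0 * np.pi**2)
    return np.sqrt(np.trapz(integrand, k))

# Placeholder inputs (not the mock or data values):
print(omega_m(omega_cdm=0.12, omega_b=0.022, H0=68.0))  # ~0.307
# sigma8(k, Pk) would be called with k and P(k) arrays from a Boltzmann code.
```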
As expected, we find excellent agreement between theory and data for all multipoles considered. ### Nseries Let us now move to the Nseries mocks. Our results for this dataset are shown in Fig. 6 and in Tab. 4. As before, we observe that our pipeline successfully recovers the input cosmological parameters used in the simulation, thus validating the window-free estimators of §4. Once again, the bispectrum multipoles have the strongest impact on the \(\sigma_{8}\) posteriors, which are \(\approx 5\%\) narrower than those from the bispectrum monopole likelihood. In addition, the \(b_{2}\) and \(b_{\mathcal{G}_{2}}\) posteriors shrink by \(20\%\) and \(5\%\) respectively. Overall, the improvements found for the Nseries mocks are somewhat smaller than those seen in the PT Challenge case. We believe that this difference is caused by the Gaussian tree-level approximation for the bispectrum likelihood used in the PT Challenge case. For the Nseries dataset we use the full covariance extracted from mocks, which is more reliable than the naive Gaussian approximation, and accounts for mode-coupling induced by non-linear clustering. All in all, we conclude that our pipeline is capable of unbiased recovery of cosmological parameters from the actual data. We have demonstrated that the theory model works well on both high-fidelity periodic box data, as well as on mocks with realistic survey geometry and observational effects. \begin{table} \begin{tabular}{c|c c|c c} \hline \hline **Dataset** & \(\Delta\omega_{\rm cdm}/\omega_{\rm cdm}\) & \(\Delta H_{0}/H_{0}\) & \(\Delta A_{s}/A_{s}\) & \(\Delta n_{s}/n_{s}\) \\ \hline \(\overline{P_{\ell}}+Q_{0}+B_{0}\) & \(-0.004\pm 0.010\) & \(-0.0007\pm 0.0017\) & \(0.007\pm 0.019\) & \(0.0085\pm 0.0077\) \\ \(P_{\ell}+Q_{0}+B_{\ell}\) & \(0.0011\pm 0.0099\) & \(-0.0001\pm 0.0017\) & \(-0.017\pm 0.017\) & \(0.0064\pm 0.0077\) \\ \hline \hline **Dataset** & \(\Delta\Omega_{m}/\Omega_{m}\) & \(\Delta\sigma_{8}/\sigma_{8}\) & \(\Delta b_{1}/b_{1}\) & \(\Delta b_{2}\) & \(\Delta b_{\mathcal{G}_{2}}\) \\ \hline \(\overline{P_{\ell}}+Q_{0}+B_{0}\) & \(-0.0021\pm 0.0068\) & \(0.0040\pm 0.0069\) & \(-0.0026\pm 0.0072\) & \(-0.111\pm 0.079\) & \(0.025\pm 0.024\) \\ \(P_{\ell}+Q_{0}+B_{\ell}\) & \(0.0011\pm 0.0067\) & \(-0.0056\pm 0.0063\) & \(0.0102\pm 0.0063\) & \(0.053\pm 0.058\) & \(0.043\pm 0.022\) \\ \hline \hline \end{tabular} \end{table} Table 3: One-dimensional marginalized constraints on cosmology and low-order bias parameters extracted from the PT Challenge dataset. The top table shows directly sampled cosmological parameters whilst the bottom shows derived parameters and biases. In each case, we give results including both the bispectrum monopole and multipoles. Our tests on Nseries mocks additionally imply that our window-free estimator robustly recovers the true bispectrum of anisotropic galaxy clustering. It is also important to estimate the impact of effects arising from our choice of Gaussian priors, since these may shift the posteriors of a Bayesian analysis away from the true values [16, 57, 55]. To this end we repeat our Nseries analysis, but using a covariance corresponding to the BOSS cumulative volume of 6 (\(h^{-1}\)Gpc)\({}^{3}\), with the datavector still given by a mean over 84 Nseries realizations. This set-up simulates the situation where we analyze separately 84 (semi)-independent
realizations (with the BOSS covariance each), and average over our results instead of combining them (changing the ratio of likelihood to prior relative to the above test). Figure 5: Posteriors on cosmological and main bias parameters extracted from the power spectrum and bispectrum of the PT Challenge simulation. All parameters are normalized to their true values (or their proxy for bias coefficients). The power spectrum data is the same in both analyses. Blue contours correspond to the bispectrum monopole, whilst those in red result from the addition of the bispectrum quadrupole and hexadecapole moments. We find only small shifts in cosmological parameters, consistent with the errors, and a slight posterior shrinkage. In what follows we will call the covariance corresponding to the true cumulative simulation volume the "true covariance," and the covariance rescaled to match the BOSS volume the "BOSS covariance." The outcome of this analysis is shown in Fig. 7 and Tab. 4. We see that the mean value of \(\sigma_{8}\) from the analysis with the BOSS covariance is lower than that from the analysis with the true covariance of 84 realizations (emulating a much larger survey). Since both likelihoods are identical except for an overall multiplication of the covariance, we interpret the observed shifts as a result of prior volume (marginalization) effects. The maximum-likelihood (but not maximum a posteriori) value of \(\sigma_{8}\) remains the same in both analyses as it is not affected by the rescaling of the covariance matrix. Let us denote the one-dimensional marginalized errorbar on \(\sigma_{8}\) from the BOSS analysis as \(\sigma_{\rm BOSS}\). From the true-covariance results, we find that the best-fit is biased up by \(\approx 2\%\) with respect to the true value of \(\sigma_{8}\), or by \(0.4\sigma_{\rm BOSS}\). This may be interpreted as a true systematic error, although it is small enough that we cannot robustly rule out the possibility that it is a statistical fluctuation. The average mean value resulting from the BOSS covariance analysis is shifted by \(0.4\sigma_{\rm BOSS}\) away from the actual input value and \(0.8\sigma_{\rm BOSS}\) from the best-fit (which nearly coincides with the mean of the analysis with the true covariance). However, the actual metric we are interested in is the shift of the average mean with respect to the true fiducial value, which is well below the errorbars. We thus conclude that the prior volume effects are not significant for our analysis. ## 8 Analysis of the BOSS data We now present parameter constraints from the BOSS DR12 dataset and estimate the information content of the galaxy bispectrum multipoles, see Tab. 2. The full constraint table including the nuisance parameters is presented in Appendix B. We begin by considering the actual measurements from the data, obtained using the unwindowed estimators of §4. In Fig. 3 we present the window-free galaxy bispectrum multipoles extracted from the NGCz3 data chunk.
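The covariance rescaling used in the prior-volume test above can be illustrated with a one-line operation. The sketch below is our own schematic; it relies on the assumption, implicit in the test described in the text, that the Gaussian covariance scales inversely with the cumulative survey volume, and the numbers are placeholders.

```python
import numpy as np

def rescale_covariance(cov, volume_from, volume_to):
    """Gaussian covariances scale as 1/V, so C(V_to) = C(V_from) * V_from / V_to."""
    return cov * (volume_from / volume_to)

# Example: rescale a covariance matching the full Nseries volume (~235 (Gpc/h)^3)
# to the BOSS cumulative volume (~6 (Gpc/h)^3); errorbars grow by sqrt(235/6).
C_true = np.eye(4) * 1e-6
C_boss = rescale_covariance(C_true, volume_from=235.0, volume_to=6.0)
print(np.sqrt(np.diag(C_boss) / np.diag(C_true)))  # ~6.26 per element
```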
\begin{table} \begin{tabular}{|c|c c c c|} \hline \hline **Dataset** & \(\omega_{\rm cdm}\) & \(H_{0}\) & \(\ln\left(10^{10}A_{s}\right)\) & \(n_{s}\) \\ \hline \(P_{\ell}+Q_{0}+B_{0}\) & \(0.1158\pm 0.0021\) & \(70.09\pm 0.21\) & \(3.103\pm 0.033\) & \(0.986\pm 0.014\) \\ \(P_{\ell}+Q_{0}+B_{\ell}\) & \(0.1153\pm 0.0020\) & \(70.09\pm 0.20\) & \(3.114\pm 0.032\) & \(0.986\pm 0.013\) \\ \(P_{\ell}+Q_{0}+B_{\ell},V_{\rm BOSS}\) & \(0.1198^{+0.0092}_{-0.012}\) & \(70.4^{+1.0}_{-1.2}\) & \(2.99\pm 0.16\) & \(0.959\pm 0.067\) \\ \hline \hline **Dataset** & \(\Omega_{m}\) & \(\sigma_{8}\) & \(b_{1}\) & \(\Delta b_{2}\) & \(\Delta b_{\mathcal{G}_{2}}\) \\ \hline \(P_{\ell}+Q_{0}+B_{0}\) & \(0.2825\pm 0.0032\) & \(0.838\pm 0.010\) & \(1.980\pm 0.024\) & \(-0.27\pm 0.11\) & \(-0.252\pm 0.050\) \\ \(P_{\ell}+Q_{0}+B_{\ell}\) & \(0.2815\pm 0.0031\) & \(0.8407\pm 0.0097\) & \(1.968\pm 0.023\) & \(-0.312\pm 0.091\) & \(-0.207\pm 0.045\) \\ \(P_{\ell}+Q_{0}+B_{\ell},V_{\rm BOSS}\) & \(0.288^{+0.015}_{-0.018}\) & \(0.801^{+0.043}_{-0.052}\) & \(2.07\pm 0.12\) & \(-0.07^{+0.41}_{-0.47}\) & \(-0.16\pm 0.22\) \\ \hline \hline \end{tabular} \end{table} Table 4: Marginalized constraints on cosmology and low-order bias parameters extracted from the Nseries dataset. As in Tab. 3, we show sampled cosmological parameters in the first table and derived parameters and low-order biases in the second. The first and second rows show results for the 84 Nseries mocks with the single mock covariance divided by 84 to match the true cumulative volume, whilst the third row gives results for the same mean data vector, but with the covariance rescaled to match the BOSS volume \(V_{\rm BOSS}\approx 6\)\(h^{-3}\)Gpc\({}^{3}\), thus probing prior-volume effects. The true cosmological parameter values are given by \(\omega_{cdm}=0.11711\), \(H_{0}=70\) km s\({}^{-1}\)Mpc\({}^{-1}\), \(n_{s}=0.96\), \(\ln(10^{10}A_{s})=3.0657\), \(\Omega_{m}=0.286\), and \(\sigma_{8}=0.82\). Our first relevant observation is that only the monopole moment carries a high signal, _i.e._ it is detected at \(\approx 20\sigma\). The quadrupole is detected at a relatively lower significance, \(\approx 5\sigma\), whilst the hexadecapole contribution is not detected at all. Although the detection significance of the large-scale bispectrum multipoles is lower than that of the monopole, it does not mean that they are devoid of cosmological information. Indeed, what is relevant for actual cosmological constraints is not the signal-to-noise _per se_, but the amplitude of Fisher derivatives. In other words, the bispectrum multipoles may still be useful, e.g. in the breaking of certain parameter degeneracies. To check this, we proceed now to the actual MCMC analysis of our likelihood containing the bispectrum multipole moments. In this vein, we will compare the parameter constraints from our likelihood including the bispectrum multipoles to that containing only the bispectrum monopole. We begin with the _Planck_-independent \(\Lambda\)CDM analysis, _i.e._ that with free tilt \(n_{s}\). Our results are displayed in Fig. 2 and Tab. 1, showing results for the cosmological parameters only. Figure 6: As Fig. 5, but for the Nseries dataset. We give one-dimensional posteriors in Tab. 4. We find that the bispectrum multipoles narrow the posteriors only marginally, by \(\lesssim 10\%\), with the largest effect on \(n_{s}\), whose errorbar has shrunk by \(10\%\). We also find a (broadly insignificant) \(\approx 0.2\sigma\) upward shift in the \(\Omega_{m}-\sigma_{8}\) plane.
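A common way to quote detection significances like those above is via the \(\chi^{2}\) of the measured multipole against the null hypothesis of zero signal. The sketch below is a generic illustration (not the estimator actually used, and the input vector is a stand-in): it returns \(\sqrt{d^{T}C^{-1}d}\) in units of \(\sigma\).

```python
import numpy as np

def detection_significance(data, cov):
    """Significance of rejecting the zero-signal hypothesis: sqrt(d^T C^{-1} d)."""
    chi2 = data @ np.linalg.solve(cov, data)
    return np.sqrt(chi2)

# Stand-in example: 62 bins with unit variance and a constant 2.5-sigma-per-bin signal
cov = np.eye(62)
data = np.full(62, 2.5)
print(detection_significance(data, cov))  # 2.5 * sqrt(62) ~ 19.7
```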
Imposing the _Planck_ prior on \(n_{s}\) does not qualitatively change the situation: we observe marginal improvements on all cosmological parameters in addition to a small upward shift of the \(\Omega_{m}-\sigma_{8}\) posterior, see Fig. 2. To investigate the origin of this shift, we have repeated our analysis with the same data, but with a covariance matrix in which we have artificially removed the correlation between the \(P_{\ell}\) and \(B_{\ell}\) data sets. In this case, we find that the mean values do not noticeably shift with respect to the \(P_{\ell}+Q_{0}\)+BAO+\(B_{0}\) analysis. In particular, we find \(\Omega_{m}=0.3156^{+0.0094}_{-0.0099}\), \(H_{0}=68.21^{+0.85}_{-0.86}\)\(\mathrm{km\,s^{-1}Mpc^{-1}}\), \(\sigma_{8}=0.7262^{+0.032}_{-0.036}\) (cf. Tab. 1). Figure 7: As Fig. 6, but comparing Nseries constraints between analyses using a covariance matching the entire Nseries volume (\(\approx 235\ h^{-3}\)Gpc\({}^{3}\)) and that of BOSS (\(\approx 6\ h^{-3}\)Gpc\({}^{3}\)). Whilst there is some evidence of prior volume effects (such as in \(\sigma_{8}\)), the corresponding shifts are subdominant compared to the errorbars. Further investigation reveals that certain elements of the \(P_{\ell}-B_{\ell}\) correlation matrix are enhanced relative to the linear theory Gaussian approximation, which may be a result of the non-trivial survey window function geometry, or a limitation of the (approximate) Patchy simulations. Our study suggests that it is this correlation that produces the apparent \(\sim 0.5\sigma\) shift in the \(\Omega_{m}-\sigma_{8}\) plane. We leave further investigation of this effect for future work. We note that the addition of the bispectrum multipoles leads to a significantly more Gaussian posterior for \(\sigma_{8}\): we find \(\sigma_{8}=0.736\pm 0.033\). In addition, our result is now in greater harmony with the _Planck_ 2018 \(\Lambda\)CDM constraint \(\sigma_{8}=0.811\pm 0.006\) [77]. We close by noting that our final \(\sigma_{8}\) result is nominally the strongest of all previously reported full-shape measurements based on the EFTofLSS. Figure 8: As Fig. 2, but for an analysis with \(n_{s}\) fixed to the _Planck_ best-fit value. ## 9 Discussion and Conclusions In this work we have performed a cosmological analysis of the BOSS galaxy power spectrum and bispectrum, which for the first time self-consistently includes the large-scale (\(k<0.08\ h\,{\rm Mpc}^{-1}\)) bispectrum quadrupole and hexadecapole. The BOSS bispectrum moments are extracted using a novel window-free estimator, derived within a maximum-likelihood formalism. This allows us to reconstruct the underlying anisotropic bispectrum (_i.e._ that unconvolved with the survey window function), and significantly simplifies subsequent data analyses, since our measurements can be directly compared with theory. Our pipeline has been validated using two sets of mocks, which have established that the method's systematic errors are significantly below the statistical ones. In particular, we have analyzed the multipole moments of the PT Challenge simulation suite, which covers a gigantic volume of \(566\ h^{-3}{\rm Gpc}^{3}\). We obtained an excellent fit of theory and simulation, and were able to recover the true cosmological parameters without bias in all our tests. This implies that our pipeline matches the precision requirements of future surveys such as DESI [146] and Euclid [147; 148; 149].
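The covariance manipulation described above amounts to zeroing the off-diagonal block that couples the power spectrum and bispectrum parts of the data vector. A minimal sketch (our own illustration, with placeholder block sizes) is:

```python
import numpy as np

def remove_cross_covariance(cov, n_power):
    """Zero the P_ell x B_ell cross blocks, keeping the two auto-covariance blocks intact."""
    out = cov.copy()
    out[:n_power, n_power:] = 0.0
    out[n_power:, :n_power] = 0.0
    return out

# Placeholder: 120 power spectrum entries followed by 186 bispectrum entries in the data vector
full_cov = np.eye(306) + 0.01
decoupled = remove_cross_covariance(full_cov, n_power=120)
print(np.allclose(decoupled[:120, 120:], 0.0))  # True
```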
Assuming the minimal \(\Lambda\)CDM model, we have found that the inclusion of the higher galaxy bispectrum multipoles narrows the constraints only moderately (with typical improvements for the one-dimensional posterior distributions at the level of \((5-10)\%\)). The main reason for this is that the higher bispectrum multipoles contain much less signal and much larger noise than the large-scale power spectrum and bispectrum monopole. This is consistent with previous work [26], which showed that the addition of the large-scale BOSS bispectrum quadrupole data only improved the constraint on \(\Omega_{m}\) by \(\sim 10\%\). Nevertheless, taking into account the information in the bispectrum monopole as well, these results imply that the total improvement from the redshift-space bispectrum compared to the power spectrum alone can be significant, and as large as \(\sim 20\%\). It is also worth commenting on Ref. [25], which found some noticeable improvement on \(f\sigma_{8}(z)\) from the bispectrum multipoles. Our analysis is principally different from [25] in that we analyze the bispectrum multipoles in conjunction with the power spectrum and BAO data. Our results suggest that for this type of analysis the \(f\sigma_{8}(z)\) constraints are largely dominated by the power spectrum likelihood, and the impact of the bispectrum multipoles is somewhat modest. The information gain may be bigger if one pushes the analysis to smaller scales, which would require either a one-loop perturbative model [26; 76] or a simulation-based emulator [13; 33]. We plan to explore the first option in the future. Another important caveat is that our analysis has been performed only for the minimal \(\Lambda\)CDM model. One might hope that the relative improvement from the bispectrum multipoles is larger for extended cosmological models (as observed for the power spectrum multipoles, e.g., [57; 74; 150; 151], see also [27] for the bispectrum quadrupole in the context of interacting dark energy models). In particular, the bispectrum is a sensitive probe of early universe physics [28; 29; 32; 34; 60; 61; 152; 153] and hypothetical violations of the equivalence principle [43] that are motivated, for example, by Lorentz-violating dark matter models [154; 155], long-range forces in the dark sector [156] or non-trivial dark energy theories [44; 45]. In addition, it would be interesting to understand if the bispectrum multipoles can sharpen full-shape constraints on other non-minimal dark matter models [157; 158; 159; 160; 161; 162], additional long-range interactions in the dark sector [156] or some non-minimal dark energy theories [44; 45]. We leave the exploration of these interesting possibilities to future work. **Acknowledgments.** We would like to thank Kazuyuki Akitsu and Shi-Fan Chen for useful discussions. The work of MMI has been supported by NASA through the NASA Hubble Fellowship grant #HST-HF2-51483.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. OHEP is a Junior Fellow of the Simons Society of Fellows and thanks the Simons Foundation for support. OHEP also acknowledges the Institute for Advanced Study for their hospitality and venison selection. GC acknowledges support from the Institute for Advanced Study. MZ is supported by the Canadian Institute for Advanced Research (CIFAR) program on Gravity and the Extreme Universe and the Simons Foundation Modern Inflationary Cosmology initiative.
This work was supported in part by MEXT/JSPS KAKENHI Grant Number JP19H00677, JP20H05861, JP21H01081 and JP22K03634. We also acknowledge financial support from Japan Science and Technology Agency (JST) AIP Acceleration Research Grant Number JP20317829. The simulation data analysis was performed partly on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. Data analysis was partly performed on the Helios cluster at the Institute for Advanced Study, Princeton, and partly using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing Division. ## Appendix A Gaussian Covariance for Bispectrum Multipoles In this section we present analytic formulae for the Gaussian tree-level bispectrum multipole covariance in the narrow bin approximation, \(\Delta k\ll k\)[17]. As in (3.4), the ideal estimator for the bispectrum multipole \(\ell\) is given by \[\hat{B}_{\ell}(k_{1},k_{2},k_{3})=\frac{(2\ell+1)}{N_{T}^{123}}\prod_{i=1}^{3 }\int_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}}(2\pi)^{3}\delta_{D}^{(3)}(\mathbf{k}_{ 123})\delta_{g}(\mathbf{k}_{1})\delta_{g}(\mathbf{k}_{2})\delta_{g}(\mathbf{ k}_{3})\mathcal{L}_{\ell}(\hat{\mathbf{z}}\cdot\hat{\mathbf{k}}_{3})\,,\] (A.1) where \(N_{T}^{123}=8\pi^{2}k_{1}k_{2}k_{3}\Delta k^{3}V^{2}/(2\pi)^{6}\) (in the thin-bin limit), \(V=(2\pi)^{3}k_{f}^{-3}\), and \(k_{f}\) is the fundamental wavenumber. At linear order, the galaxy density can be written \(\delta_{g}(\mathbf{k})=\delta(\mathbf{k})(1+\beta\mu^{2})+\epsilon\)[163], where \(\beta\equiv f/b_{1}\) and \(\epsilon\) is the stochastic density component, whose power spectrum we assume to be equal to \(\bar{n}^{-1}\). Using Eq. 
(A.1), we obtain the bispectrum covariance between triangle configurations \(T\) and \(T^{\prime}\), \[\langle\hat{B}_{\ell}(k_{1},k_{2},k_{3})\hat{B}_{\ell^{\prime}}( k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime})\rangle=C_{TT^{\prime}}^{\ell \ell^{\prime}}=(2\ell+1)(2\ell^{\prime}+1)\frac{(2\pi)^{3}\pi}{k_{1}k_{2}k_{3} \Delta k^{3}V}\delta_{TT^{\prime}}\] (A.2) \[\times\left(F_{\ell\ell^{\prime}}(k_{1},k_{2},k_{3})\prod_{i=1}^ {3}P_{11}(k_{i})+\frac{1}{\bar{n}}\sum_{i<j,i=1}^{j=3}P_{11}(k_{i})P_{11}(k_{j })G_{\ell\ell^{\prime}}(k_{i},k_{j})\right.\] \[+\left.\frac{1}{\bar{n}^{2}}\sum_{n=1}^{3}P_{11}(k_{n})H_{\ell \ell^{\prime}}(k_{n})+J_{\ell\ell^{\prime}}\frac{1}{\bar{n}^{3}}\right),\] where the multipole-dependent form factors for the purely continuous part are given by (defining writing the \(\mu_{1}\),\(\mu_{2}\) angles in terms of \(\mu\equiv\mu_{3}\) and \(\phi\)) \[F_{\ell\ell^{\prime}}^{\text{general}}= \int_{0}^{2\pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\;(1+\beta\mu^{2 })^{2}(1+\beta\mu_{1}(\mu,\phi)^{2})^{2}(1+\beta\mu_{2}(\mu,\phi)^{2})^{2} \mathcal{L}_{\ell}(\mu)\mathcal{L}_{\ell^{\prime}}(\mu)\,,\] (A.3) \[F_{\ell\ell^{\prime}}^{\text{isosceles I}}= 2F_{\ell\ell^{\prime}}^{\text{general}}\,,\] \[F_{\ell\ell^{\prime}}^{\text{isosceles II}}= \int_{0}^{2\pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\;(1+\beta\mu^{2 })^{2}(1+\beta\mu_{1}(\mu,\phi)^{2})^{2}(1+\beta\mu_{2}(\mu,\phi)^{2})^{2}\] \[\times \mathcal{L}_{\ell}(\mu)(\mathcal{L}_{\ell^{\prime}}(\mu)+\mathcal{ L}_{\ell^{\prime}}(\mu_{1}))\,,\] \[F_{\ell\ell^{\prime}}^{\text{equilateral}}= \int_{0}^{2\pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\;(1+\beta\mu^{2 })^{2}(1+\beta\mu_{1}(\mu,\phi)^{2})^{2}(1+\beta\mu_{2}(\mu,\phi)^{2})^{2}\] \[\times 2\mathcal{L}_{\ell}(\mu)(\mathcal{L}_{\ell^{\prime}}(\mu)+ \mathcal{L}_{\ell^{\prime}}(\mu_{1})+\mathcal{L}_{\ell^{\prime}}(\mu_{2}))\,,\] the continuous \(\times\) stochastic terms are (assuming \(i=1,2,\ j=2,3,j>i\)): \[\begin{split} G_{\ell\ell^{\prime}}^{\text{general}}&= \int_{0}^{2\pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ (1+\beta\mu_{i}(\mu,\phi)^{2})^{2}(1+\beta\mu_{j}(\mu,\phi)^{2})^{2}\mathcal{L}_ {\ell}(\mu)\mathcal{L}_{\ell^{\prime}}(\mu)\,,\\ G_{\ell\ell^{\prime}}^{\text{isosceles I}}&=2G_{ \ell\ell^{\prime}}^{\text{general}}\,,\\ G_{\ell\ell^{\prime}}^{\text{isosceles II}}&=\int_{0 }^{2\pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ (1+\beta\mu_{i}(\mu,\phi)^{2})^{2}(1+\beta\mu_{j}(\mu,\phi)^{2})^{2}\mathcal{L }_{\ell}(\mu)(\mathcal{L}_{\ell^{\prime}}(\mu)+\mathcal{L}_{\ell^{\prime}}( \mu_{1}))\,,\\ G_{\ell\ell^{\prime}}^{\text{equilateral}}&=2\int_{0 }^{2\pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ (1+\beta\mu_{i}(\mu,\phi)^{2})^{2}(1+\beta\mu_{j}(\mu,\phi)^{2})^{2}\mathcal{L }_{\ell}(\mu)(\mathcal{L}_{\ell^{\prime}}(\mu)+\mathcal{L}_{\ell^{\prime}}( \mu_{1})+\mathcal{L}_{\ell^{\prime}}(\mu_{2}))\,,\end{split} \tag{100}\] and (\(n=1,2,3\)) \[\begin{split}& H_{\ell\ell^{\prime}}^{\text{general}}=\int_{0}^{2 \pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ (1+\beta\mu_{n}(\mu,\phi)^{2})^{2}\mathcal{L}_{\ell}(\mu)\mathcal{L}_{\ell^{ \prime}}(\mu)\,,\\ & H_{\ell\ell^{\prime}}^{\text{isosceles I}}=2H_{\ell\ell^{ \prime}}^{\text{general}}\,,\\ & H_{\ell\ell^{\prime}}^{\text{isosceles II}}=\int_{0}^{2\pi} \frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ (1+\beta\mu_{n}(\mu,\phi)^{2})^{2}\mathcal{L}_{\ell}(\mu)(\mathcal{L}_{\ell^{ \prime}}(\mu)+\mathcal{L}_{\ell^{\prime}}(\mu_{1})\,,\\ & H_{\ell\ell^{\prime}}^{\text{equilateral}}=2\int_{0}^{2\pi} \frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ 
(1+\beta\mu_{n}(\mu,\phi)^{2})^{2}\mathcal{L}_{\ell}(\mu)(\mathcal{L}_{\ell^{ \prime}}(\mu)+\mathcal{L}_{\ell^{\prime}}(\mu_{1})+\mathcal{L}_{\ell^{\prime}} (\mu_{2}))\,,\end{split} \tag{101}\] whilst the purely stochastic contributions are \[\begin{split}& J_{\ell\ell^{\prime}}^{\text{general}}=\int_{0}^{2 \pi}\frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ \mathcal{L}_{\ell}(\mu)\mathcal{L}_{\ell^{\prime}}(\mu)\propto\delta_{\ell\ell^{ \prime}}\,,\\ & J_{\ell\ell^{\prime}}^{\text{isosceles I}}=2J_{\ell\ell^{ \prime}}^{\text{general}}\,,\\ & J_{\ell\ell^{\prime}}^{\text{isosceles II}}=\int_{0}^{2\pi} \frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ \mathcal{L}_{\ell}(\mu)(\mathcal{L}_{\ell^{\prime}}(\mu)+\mathcal{L}_{\ell^{ \prime}}(\mu_{1}))\,,\\ & J_{\ell\ell^{\prime}}^{\text{equilateral}}=2\int_{0}^{2\pi} \frac{d\phi}{2\pi}\int_{0}^{1}d\mu\ \mathcal{L}_{\ell}(\mu)(\mathcal{L}_{\ell^{\prime}}(\mu)+\mathcal{L}_{\ell^{ \prime}}(\mu_{1})+\mathcal{L}_{\ell^{\prime}}(\mu_{2}))\,,\end{split} \tag{102}\] where we recall that we have chosen \(k_{1}\leq k_{2}\leq k_{3}\) without loss of generality and defined \[\begin{split}\text{general:}& k_{1}<k_{2}<k_{3}\,,\\ \text{equilateral:}& k_{1}=k_{2}=k_{3}\,,\\ \text{isosceles I:}& k_{1}=k_{2}<k_{3}\,,\\ \text{isosceles II:}& k_{1}<k_{2}=k_{3}\,.\end{split} \tag{103}\] In the absence of the AP distortions, the integrals in the form factors \(F,G,H,J\) can be evaluated analytically. Since the AP effect is typically quite weak, \(\mathcal{O}(1\%)\), we ignore it when evaluating the covariance matrix. Finally, we note that we use the Gaussian covariance for bispectrum multipoles only in the analysis of the PT challenge data. For the Nseries mocks and the BOSS data we use the covariance estimated from the Multi-Dark Patchy mocks, allowing us to incorporate the effects of window functions and non-linear gravity. ## Appendix B Full constraints and parameter tables In Tabs. 5 & 6, we display one-dimensional marginalized constraints on cosmological and nuisance parameters for the \(\Lambda\)CDM fits to the BOSS data with, respectively, free \(n_{s}\) and \(n_{s}\) fixed to the _Planck_ best-fit value. In the left and right panels we show results before and after adding the bispectrum multipoles.
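To illustrate how the form-factor integrals of Appendix A can be evaluated numerically (without AP distortions they also admit analytic expressions, as noted above), the sketch below is our own illustration and not the paper's code: the relation between \(\mu_{1},\mu_{2}\) and \(\mu_{3},\phi\) is reconstructed from triangle closure, and conventions such as the azimuthal phase are assumptions of this sketch. It computes the purely continuous factor \(F_{\ell\ell^{\prime}}\) for a general (scalene) triangle.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import eval_legendre

def mu1_mu2(k1, k2, k3, mu3, phi):
    """Line-of-sight angles of k_1 and k_2 given mu_3 and azimuth phi, for a closed
    triangle k_1 + k_2 + k_3 = 0 (sign/phase conventions assumed in this sketch)."""
    cos13 = (k2**2 - k1**2 - k3**2) / (2.0 * k1 * k3)      # from |k_2|^2 = |k_1 + k_3|^2
    sin13 = np.sqrt(max(0.0, 1.0 - cos13**2))
    mu1 = mu3 * cos13 - np.sqrt(max(0.0, 1.0 - mu3**2)) * sin13 * np.cos(phi)
    mu2 = -(k1 * mu1 + k3 * mu3) / k2                      # closure projected on the z-axis
    return mu1, mu2

def F_general(ell, ellp, beta, k1, k2, k3):
    """Continuous form factor F_{ell ell'} for a general triangle, cf. Eq. (A.3)."""
    def integrand(mu3, phi):
        mu1, mu2 = mu1_mu2(k1, k2, k3, mu3, phi)
        kaiser = ((1 + beta * mu3**2) * (1 + beta * mu1**2) * (1 + beta * mu2**2))**2
        return kaiser * eval_legendre(ell, mu3) * eval_legendre(ellp, mu3) / (2.0 * np.pi)
    value, _ = dblquad(integrand, 0.0, 2.0 * np.pi, lambda phi: 0.0, lambda phi: 1.0)
    return value

# Example: monopole auto term for beta = 0.4 and a scalene triangle (wavenumbers in h/Mpc)
print(F_general(0, 0, 0.4, 0.03, 0.05, 0.07))
```

For \(\beta=0\) and \(\ell=\ell^{\prime}=0\) the integral reduces to unity, which provides a quick sanity check of the implementation.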
2308.11724
MolSieve: A Progressive Visual Analytics System for Molecular Dynamics Simulations
Molecular Dynamics (MD) simulations are ubiquitous in cutting-edge physio-chemical research. They provide critical insights into how a physical system evolves over time given a model of interatomic interactions. Understanding a system's evolution is key to selecting the best candidates for new drugs, materials for manufacturing, and countless other practical applications. With today's technology, these simulations can encompass millions of unit transitions between discrete molecular structures, spanning up to several milliseconds of real time. Attempting to perform a brute-force analysis with data-sets of this size is not only computationally impractical, but would not shed light on the physically-relevant features of the data. Moreover, there is a need to analyze simulation ensembles in order to compare similar processes in differing environments. These problems call for an approach that is analytically transparent, computationally efficient, and flexible enough to handle the variety found in materials based research. In order to address these problems, we introduce MolSieve, a progressive visual analytics system that enables the comparison of multiple long-duration simulations. Using MolSieve, analysts are able to quickly identify and compare regions of interest within immense simulations through its combination of control charts, data-reduction techniques, and highly informative visual components. A simple programming interface is provided which allows experts to fit MolSieve to their needs. To demonstrate the efficacy of our approach, we present two case studies of MolSieve and report on findings from domain collaborators.
Rostyslav Hnatyshyn, Jieqiong Zhao, Danny Perez, James Ahrens, Ross Maciejewski
2023-08-22T18:30:53Z
http://arxiv.org/abs/2308.11724v2
# MolSieve: A Progressive Visual Analytics System for Molecular Dynamics Simulations ###### Abstract Molecular Dynamics (MD) simulations are ubiquitous in cutting-edge physio-chemical research. They provide critical insights into how a physical system evolves over time given a model of interatomic interactions. Understanding a system's evolution is key to selecting the best candidates for new drugs, materials for manufacturing, and countless other practical applications. With today's technology, these simulations can encompass millions of unit transitions between discrete molecular structures, spanning up to several milliseconds of real time. Attempting to perform a brute-force analysis with data-sets of this size is not only computationally impractical, but would not shed light on the physically-relevant features of the data. Moreover, there is a need to analyze simulation ensembles in order to compare similar processes in differing environments. These problems call for an approach that is analytically transparent, computationally efficient, and flexible enough to handle the variety found in materials-based research. In order to address these problems, we introduce MolSieve, a progressive visual analytics system that enables the comparison of multiple long-duration simulations. Using MolSieve, analysts are able to quickly identify and compare regions of interest within immense simulations through its combination of control charts, data-reduction techniques, and highly informative visual components. A simple programming interface is provided which allows experts to fit MolSieve to their needs. To demonstrate the efficacy of our approach, we present two case studies of MolSieve and report on findings from domain collaborators. Molecular dynamics, time-series analysis, visual analytics ## 1 Introduction Molecular dynamics (MD) simulations allow scientists to observe how systems of atoms evolve over time using a potential energy function that calculates interatomic forces. Understanding the nanoscale behavior of matter has widespread applications, from guiding protein mutations in bio-medical research [24] to validating the robustness of a material in engineering contexts [32]. A large family of software packages have been developed in order to generate MD simulations, such as GROMACS [8] for biological simulations, LAMMPS [48] for materials modeling, as well as countless others, e.g. [27, 37]. A recently introduced simulation management tool called ParSplice [34] has enabled MD simulations to span time-scales reaching into the hundreds of thousands of nano-seconds (milliseconds), two orders of magnitude larger than simulations typically performed with biological systems. The time-scales ParSplice is able to simulate typically contain millions of discrete transitions between molecular configurations. Some of these systems suffer from the heterogeneous energy barrier problem [34], a prevalent issue in long MD trajectories [35, 36]. Trajectories with a heterogeneous energy barrier distribution are difficult to analyze since relevant regions within a trajectory are buried amongst a myriad of repetitive transitions within so-called super-states. While calculating all of the energy paths between every state and visualizing them seems like a solution at first glance, not only are the computational costs involved impractical, but the results generated by this method are impossible to sift through manually. 
To further compound the problem, ParSplice generates trajectories as ensembles because MD is inherently a stochastic process; attempting to generalize the behavior of a system from an individual simulation could lead to brittle conclusions. These issues dictate the need to develop an analysis tool that highlights the essential components of a trajectory (i.e., its transition regions), while understating the parts of a trajectory where there is little to no change in the structure of the system (i.e., its super-states), as well as facilitating comparisons between trajectories. A number of visual analytics systems enable the exploration of molecular dynamics simulations, e.g. [11, 14, 23, 31, 51]. However, most existing systems focus on biological simulations, which typically do not involve the same time-scales as their inorganic counterparts, rendering them impractical for analyzing the data-sets produced by ParSplice. To address this gap, we worked closely with domain experts to develop MolSieve, a visual analytics system that aggressively reduces molecular dynamics simulations to their essential components (super-states and transition regions) to facilitate their analysis and comparison. To evaluate the efficacy of MolSieve, we performed two case studies with materials science experts on data-sets from their daily workflows. They demonstrate that our system is not only efficient in extracting insight but is also adaptable to an expert's needs. This work contributes: * A novel combination of coordinated multiple views consisting of temporal charts for examining long sequences by distinguishing regions of interest and uninteresting regions; * A novel state space chart for visualizing discrete temporal events in a limited screen space while outlining their general trend; * An efficient, scalable, and customizable progressive visual analytics system that supports analyzing large materials MD trajectory ensembles in real-time with the aforementioned visual designs. ## 2 Related Work In this section, we review various methods to analyze long-duration molecular dynamics simulations. We also discuss the visualization techniques and analytical methods that inspired our system. ### _Molecular Dynamics Analysis Approaches_ Many approaches exist for exploring long-duration molecular dynamics trajectories which utilize various methods of reducing the data-set to a size tractable for real-time analysis. We found that these approaches are typically tailored for specific analyses of biological systems. For example, PyContact [43] enables the exploration of non-covalent interactions within molecular dynamics trajectories. It aims to provide access to points of interest within the trajectory by filtering on the amount of contact molecules within the simulation at any given time-step. However, PyContact requires the calculation of every molecular contact before the data-set can be analyzed, which can be time-consuming. VIA-MD [46] allows the exploration of long duration biological molecular systems through a combination of linked 2D and 3D views, which work together to highlight events of interest in both the spatial and temporal domains. Our proposed solution differs in locating regions of interest due to the difference in scale - VIA-MD was tested on a biological simulation that spanned twenty-three nano-seconds, while our case studies average five thousand nano-seconds. 
To extract insights from data-sets of this size, we developed a unique data simplification scheme based on the internal dynamics of the simulation. To the best of our knowledge, this simplification scheme has not yet been explored. ExaViz [14] enables the in-situ analysis of biological molecular systems. This in-situ approach reduces the data-set by allowing experts to decide what portions of the trajectory are relevant before saving them for long-term storage, which requires a tremendous amount of computing power and tedious manual analysis. Byska et al. [11] built a focus+context visual analytics system that tied statistical properties of simulations to their 3D renders. Building on this work, sMolBoxes [51] utilized a data-flow model embedded in CAVER [23] to identify important snapshots within long duration bio-molecular simulations. sMolBoxes identifies important snapshots (states) within a trajectory by relying on domain specific information provided by analysts, e.g., using the root-mean-square deviation (RMSD) between states to identify abnormal structural changes in proteins. Analysts are able to select individual parts of a protein to track throughout the trajectory. Unfortunately, this powerful interaction is inherently coupled with the spatial dimension of the data, which reduces its scope to biological systems. Duran et al. [15] explore building a similar system using traditional statistical charts and linking them to a 3D visualization of the protein being studied. Non-biological systems do not behave in the same manner as proteins, reducing the effectiveness of these approaches as a general solution to identifying regions of interest within a molecular dynamics simulation. Chae et al. [12] used a deep learning model to reduce the dimensionality of a molecular dynamics simulation to a 3D space for easier exploration, using multiple views to display the original data alongside the 3D embedding. LaSCA [49] is a visual analytics system which identifies crystalline structures within large molecular systems in great detail; however, the system does not support analyzing these structures within the context of a MD trajectory. Wu et al. [54] proposed a visualization pipeline to identify point defects in nuclear materials - as with LaSCA, this approach does not consider the trajectory as a whole. To the best of our knowledge, the visual analytics systems currently available do not offer an efficient method to identify and compare analyst-defined regions of interest within MD simulations of materials. A number of programming tool-kits also provide solutions for MD trajectories [10, 28, 38, 47]. However, these tool-kits cannot identify regions of interest within a trajectory without being integrated into a larger framework. Blindly applying these tool-kits to long simulations without a scheme to filter and organize their output will simply produce large bodies of data that are difficult to interpret. ### _Visual Analytics Methods for Time-series Exploration_ In this section, we discuss several works that directly inspired views in MolSieve. Tominski et al. [50] developed a multi-attribute temporal view for a spatial trajectory by stacking horizon charts representing each attribute. This stacked trajectory chart is then rendered on top of 3D map data to facilitate a spatio-temporal analysis of the data-set. 
DQNVis [52] also took a similar approach to visualize multi-variate sequence data by stacking line charts, bar charts, and area charts on top of each other to provide a multi-dimensional view of the behavior of a machine learning model. Additionally, their approach provides methods to identify and compare patterns within the trajectory using segment mining and dynamic time warping. MolSieve does not use dynamic time warping for comparing sequences, as the structure of a system is far too complex to be modeled by dynamic time warping; instead, we use domain-specific methods to compare analyst-defined regions. Our approach combines the visual elements of the aforementioned systems and uses trajectory information to generate and arrange charts based on the detected importance of a region. SignalLens [26] uses a distorted scale where interesting parts of an electronic signal are magnified while uninteresting regions are minimized in their sequence view. Regardless of the level of distortion, context is maintained, which is essential to navigating long time-series on a screen limited by size. MolSieve distorts the trajectory's sequence to emphasize transition regions while minimizing super-states. For a comprehensive review of time series visualization techniques, we refer to Aigner et al. [5]. ## 3 Analytical Tasks, Requirements, and Definitions In this section, we define tasks for MD analysis, the requirements for an analytical tool, and domain-specific definitions. ### _Definitions_ ParSplice simulations typically generate tens to hundreds of thousands of unique configurations of the system being simulated, with each discrete configuration being referred to as a _state_. Each configuration has its own state ID. These states contain meta-data about the system being simulated at a given point in time, such as the positions, chemical species, velocities, etc., of its atoms. This meta-data can be used to calculate properties that characterize its structure and geometry. A _trajectory_ is a sequence of states, and a single trajectory describes one of the many possible ways a system can evolve. To _transition_ between two states, the system must overcome the _energy barrier_ between them; therefore, state transitions that have a low energy barrier tend to occur exponentially more frequently than state transitions associated with larger barriers. This causes states to repeat throughout a trajectory, since structurally similar states are easier to transition to than radically different ones. Each transition has a discrete time-step associated with it to organize it temporally; transitions take a variable amount of time, but they usually occur in the span of hundreds of picoseconds. The frequency of low energy transitions causes trajectories to often get trapped in so-called _super-states_, subsets of states connected together by low energy barriers, separated from outside regions by high energy barriers. ParSplice simulations tend to visit these super-states for long periods of time before transitioning to another super-state. These movements between super-states are referred to as _transition regions_, which typically contain the most important kinetic information of a system because they control its long-term behavior. Transition regions are often short compared to the time spent trapped within super-states, while intra-super-state transitions occur very frequently.
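As a toy illustration of the distinction between super-states and transition regions, the sketch below (our own simplification, not the segmentation method used by MolSieve; the window size and distinct-state threshold are arbitrary illustration values) labels each time-step of a state-ID sequence according to whether the trajectory has recently been rattling around a small set of states.

```python
from collections import deque

def label_regions(states, window=50, max_distinct=5):
    """Label a time-step 'super-state' when the trailing window contains only a few
    distinct state IDs (dwelling), and 'transition' otherwise."""
    labels = []
    recent = deque(maxlen=window)
    for s in states:
        recent.append(s)
        labels.append("super-state" if len(set(recent)) <= max_distinct else "transition")
    return labels

# Toy trajectory: a long dwell in states {0,1,2}, a burst of new states, then a dwell in {7,8}.
traj = [0, 1, 2, 1, 0, 2] * 40 + [3, 4, 5, 6, 9, 11, 12] + [7, 8] * 60
labels = label_regions(traj)
print(labels.count("super-state"), labels.count("transition"))
```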
When analyzing the structure of molecules, experts often investigate the neighbors of each atom and determine the shapes that these neighborhoods form in order to characterize a system. Mutations in the shape and crystalline structure of a system have a strong influence on its properties. There are seven main types of crystalline structures commonly found in materials, and our case studies are focused around analyzing cubic (face-centered cubic - FCC, body-centered cubic - BCC) and icosahedral (ICO) structures as they commonly occur in nano-particles; please refer to Misra [33] for a thorough discussion. ### _Analytical Tasks_ We adopted an iterative design process to develop MolSieve with two domain experts who work in computational materials science; one of them has over twenty years of experience, and the other has more than six. We met bi-weekly for two years, using the feedback from these meetings to refine MolSieve's functionality and visual design. Through the design process, we identified a set of analytical tasks that are essential for gaining insight into long duration molecular dynamics simulations. Simplifying these tasks became one of the core design objectives of MolSieve (Figure 2). **T1: Classify super-states and transition regions in individual trajectories.** The first step in analyzing large simulations is to identify super-states and the transition regions that separate them, which are not known _a priori_. Transition regions are critical because they control how rapidly the system will experience significant changes that could affect its properties. This separation reduces the data-set to a manageable size and allows experts to concentrate their analysis on transition regions. **T2: Identify critical sub-regions, relevant patterns and motifs within transition regions.** There are a number of patterns and motifs to be discovered within the transition regions of a trajectory. Patterns of state transitions often signify the presence of a structural change, but they can also be misleading due to the nature of long duration simulations, where repeated behavior is often due to the system making rapid low-energy transitions between states. The challenge lies in identifying patterns and sub-regions within transition regions where meaningful changes occur while ignoring low information density portions. The analysis of these sub-regions is the crux of molecular dynamics research; understanding how the structure of a material changes allows domain experts to make decisions on whether or not to use a certain material in an engineering application. **T3: Compare regions of interest between trajectories.** MD trajectories are generated in a stochastic manner, so it is unlikely that two trajectories will contain the same behavior and physical structures. Therefore, there is a need to develop flexible methods that can differentiate robust features of the dynamics that are common to many simulations. ### _Requirements_ After identifying the primary tasks found in MD analysis, we derived the following set of requirements for a visual analytics system. _R1: Guide the analyst to transition regions._ Analysts should be guided to regions that are most likely to reveal significant changes in a system's structure. _R2: Automatic calculation of analyst-defined properties._ The trajectory should be populated with automatically calculated properties that can be defined by an analyst. Time should only be spent computing properties for regions that are potentially interesting. 
The results should be stored in a data-base for future use. _R3: Highlight potentially interesting sub-regions._ Once the expert-defined properties are rendered, the analyst should be guided towards sub-regions within transition regions that potentially express a change in the system's behavior. While **R2** focuses on calculating properties, guided visual exploration is another crucial aspect that accelerates the discovery process.

Fig. 2: MolSieve is designed to extract insight from MD simulations in three stages. First, an analyst uses a modal window to set the system's exploratory parameters. When the initial simplification completes, analyst-defined properties are mined on portions of the trajectory and progressively rendered in a Trajectory Component. While the properties are being calculated, the analyst uses the Trajectory Component's embedded views to interactively identify regions of interest (**T1**). If no regions of interest are found, the exploratory parameters can be reconfigured. If the analyst finds a sub-region of interest, they can select and examine it in detail using the Sub-Sequence Component and the State Detail Widget (**T2**). The analyst can use the comparison interactions provided by MolSieve to explore other trajectories in the context of their new discovery (**T3**).

_R4: Select, compare, and inspect regions of interest in detail._ Integrating **R1-3** should enable the analyst to effectively select and refine regions of interest in a responsive manner, as well as allowing them to inspect a set of customized properties through expressive visualizations. _R5: On-demand calculation of detailed analyses._ The selection process detailed in **R4** generates sub-regions that may include states that express behaviors of interest. Understanding their behavior requires physically grounded analyses which can be computationally expensive. The analyst should be able to request these analyses on demand and be able to continue exploring the trajectory. _R6: Extensibility._ An intuitive extension of **R2** is the ability to define new properties. The solution should accommodate a broad spectrum of simulation types, enabling analysts to provide customized scripts for calculating system-specific properties. By providing this amount of flexibility, analysts can define properties which typically denote changes in a system. They can then use the visualizations and interactions provided by the solution to quickly identify regions of interest based on these properties. _R7: Ease of use and performance._ The analyst should be able to easily navigate and discern patterns within trajectories. Additionally, the proposed solution must remain responsive during computationally intensive tasks and progressively render partially calculated data while waiting for results. The analyst should receive feedback regarding the progress of complex calculations as well as any errors that may occur, with the ability to adjust or cancel them as needed.

## 4 MolSieve

MolSieve is a visual analytics system implemented using a FastAPI [1] back-end, and an interface powered by D3 [9], React [2], and Redux [3]. The back-end provides a powerful method for simplifying dense MD trajectories; its results are mapped to the views in the interface (Figure 1). The interface is designed to quickly guide analysts to potential regions of interest within MD trajectories (**T1**) and provides tools to interactively verify (**T2**) and compare (**T3**) multiple data-sets.
Due to the tremendous amount of data that needs to be processed and stored on the fly, we designed our approach based on the progressive visual analytics paradigm [16]. To support a wide range of simulations, MolSieve automatically executes, stores, and renders the results of analyst-defined Python scripts (**R2, R6**). This feature enables analysts to specify properties that indicate a region of interest for the simulation they are studying. These scripts are provided access to Atomic Simulation Environment (ASE) [28] representations of each state, which can be leveraged to calculate physically relevant properties of dynamic systems, e.g., the Common Neighbor Analysis (CNA) [19] counts for atomic structures. These \(n\) properties are calculated and assigned to each state within the trajectory (Figure 1). To further accelerate the process of discovery, these properties are calculated and rendered progressively, allowing analysts to gather insights throughout the data-set without having to wait for computations to finish (**R7**).

**Background - Trajectory Simplification** We used Generalized Perron Cluster Analysis (GPCCA) [41] as implemented by pyGPCCA [40] as the basis for MolSieve's simplification scheme; GPCCA is a generalization of the robust Perron Cluster Cluster Analysis (PCCA+) [13]. PCCA+ has been proven to accurately simplify MD trajectories by clustering together groups of kinetically linked states [20, 21]. GPCCA can be applied to simulations where transitions are modeled as a Markov chain. MolSieve simplifies the trajectory by dividing it into tentative transition regions and super-states. This is achieved by first running GPCCA on the trajectory, which divides it into \(N\) dominant super-states, referred to as _clusters_. Here, dominant super or macro-states denote meta-stable states, in the case of reversible dynamics, or, e.g., cyclic states, in the case of non-reversible dynamics [39]. GPCCA assigns a vector of \(N\) _cluster membership probabilities_ to each individual state which describes how strongly it belongs to each cluster (Figure 2). Then, each individual state's membership probability is compared to a threshold set by the analyst (Figure 3); if its maximum membership probability is **above** the threshold, it is considered part of a _super-state_; otherwise, it is considered to be part of a possible _transition region_ (i.e., it occurs in regions where the trajectory moves between clusters). If the simplification threshold is set to its maximum value of 1.0, no portion of the trajectory will be simplified, and every state will be considered a transition region. When initially loading a trajectory, analysts have the opportunity to set a range for the GPCCA clusterings they are interested in, as GPCCA is not guaranteed to yield results for all numbers of clusters. The back-end uses the range to determine and return the optimal GPCCA clustering for the trajectory and then simplifies it using the simplification threshold. Simultaneously, analyst-defined properties (\(P_{1}\) to \(P_{n}\)) are calculated and assigned to each state within the trajectory. The optimal clustering may not always reveal the best possible splits between transition regions and super-states, so analysts are free to adjust the GPCCA cluster counts as well as the simplification threshold within the interface. The simplification threshold is set to a default value of 0.75 and the GPCCA clustering range to 2-20, which provides a reasonable starting point for exploration. A simplification threshold value of 0.75 tends to reveal sets of states that are weakly clustered, regardless of the GPCCA cluster count. The default GPCCA clustering range is set wide enough to ensure a clustering is found. Once a trajectory is simplified, its results are directly mapped to MolSieve's Trajectory Components (Figure 2 right).

Figure 3: The simplification scheme employed by MolSieve and its relation to the visual components in the system. (1) displays a portion of a sample trajectory's sequence, where each rectangle represents a state, the capital letter represents its state ID, and \(P_{1}\) to \(P_{r}\) represent its analyst-defined properties. These properties are not required for the simplification and can be calculated and assigned afterward. (2) GPCCA is performed on this sequence and it yields the maximum cluster membership probability for each state. Then, the simplification (3) is applied using an analyst-defined threshold (75% by default). States with a maximum cluster membership probability **above** this percentage are rendered as **super-states**, and states **below** are rendered as **transition regions**. These regions are mapped to views in a Trajectory Component which consists of the Timeline View and statistical views. The Timeline View (4) provides temporal context for the statistical views below. Regions drawn with dashed outlines are transition regions, while regions without outlines are super-states. The statistical views (5) are arranged temporally and split vertically into partitions, with each partition corresponding to a single property; we include axes to indicate the relative scale for each property. Super-State Views display small multiples of violin plots that outline the distributions of each property within a super-state. Transition Region Views are small multiples of control charts for each property that are accompanied by a State Space Chart. These charts collaborate to describe the most frequently occurring states within evenly divided segments; the number of states in a segment directly correlates to the number of unique states visited. Segments with large numbers of states usually indicate a structural change is occurring.

### Trajectory Components

Trajectory Components adopt a focus+context approach [17] to assist analysts in identifying regions of interest through the use of a variable number of Transition Region and Super-State Views (Figure 2.11). Each trajectory belongs to a separate component, organized on the main area of the screen. The Timeline View (Figure 3.4) provides temporal context and a means of control for the statistical views (Figure 4).

**Timeline View** The Timeline View (Figure 4) displays the regions that are currently being rendered as statistical views (Figure 4) and allows experts to adjust which regions are visible to focus their analysis. Regions are colored according to the GPCCA cluster they are assigned; transition regions are rendered with a dashed outline and super-states with no outline and a slightly lighter color in order to differentiate between them. We colored the clusters with a color scheme adapted from ColorBrewer's [18] qualitative set. Hovering over either type of statistical view highlights its corresponding region in the Timeline View. Brushing the view adjusts the visible extent of the trajectory, saturating regions that are outside of the brush's extent and reorganizing the statistical views.
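The thresholding step described above is straightforward to express in code. The following sketch is not the MolSieve back-end and its names are ours; it takes a membership matrix of the kind produced by GPCCA (e.g., via pyGPCCA) and splits a visiting sequence into super-state and transition regions.

```python
import numpy as np

def split_trajectory(memberships: np.ndarray, sequence: np.ndarray,
                     threshold: float = 0.75):
    """Split a state sequence into super-state and transition regions.

    Minimal sketch. `memberships` is an (n_states x N) matrix of GPCCA
    cluster membership probabilities, `sequence` holds the state ID visited
    at each time-step, and `threshold` mirrors MolSieve's default of 0.75.
    """
    is_super = memberships.max(axis=1) > threshold   # per state ID
    cluster_of = memberships.argmax(axis=1)          # dominant cluster per state ID

    def kind(i):
        s = sequence[i]
        return ("super", int(cluster_of[s])) if is_super[s] else ("transition", -1)

    regions, start = [], 0                           # (kind, cluster, start, end)
    for t in range(1, len(sequence) + 1):
        if t == len(sequence) or kind(t) != kind(start):
            regions.append((*kind(start), start, t))
            start = t
    return regions

# Hypothetical usage: 5 state IDs, 2 clusters, a short visiting sequence.
M = np.array([[0.95, 0.05], [0.9, 0.1], [0.55, 0.45], [0.1, 0.9], [0.2, 0.8]])
seq = np.array([0, 0, 1, 2, 2, 3, 4, 3])
print(split_trajectory(M, seq))
```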
Double clicking the view zooms it in on the currently brushed region, which allows analysts to view regions that may have been rendered too small initially. There are two additional interactions provided by buttons next to each Timeline View.

**Super-State View** Super-states revealed by the simplification algorithm tend to constitute the majority of ParSplice simulations. To maximize performance, we elected to use aggregate statistical charts [53] when representing super-states. A Super-State View (Figures 1.3a, 1.3b, Figure 3.5) is a small multiple of violin plots that describes the overall distribution of each property. They highlight the evolution of each property throughout a simulation in a compact manner. We originally used box-plots to display the distributions of these properties, but we found that they were highly cluttered due to the small amount of screen space allotted to them and they did not capture the variance of each distribution as well as violin plots. Each violin plot is constructed using the property values from the so-called **dominant** states of the region plus a randomly selected 1%. Dominant states within the super-state are states that occur with larger than median frequency. Using a small randomly sampled portion of the region provides a reasonable overview without having to compute a prohibitive amount of data. In order to ensure that important details about states are not hidden from analysts, we implemented an expansion feature that allows experts to explore super-states in more detail. Double clicking any of the control charts causes the Transition Region View to "expand" (Figure 6b), revealing its state space and the moving averages of its neighbors. Expansion occurs 100 time-steps at a time to avoid loading unnecessary data.

**Interactions** The toolbar above all Trajectory Components provides interactions that enhance the analyst's ability to examine a trajectory in detail (Figure 1, top left). Analysts are able to construct multivariate control charts [30] with the properties they provided by clicking the (\(\boxplus\)Add multi-variable chart) button which opens a modal window (Figure 6c). These multi-variate charts (Figures 1.3a, 1.3b) are dynamically added to each Transition Region View, allowing the analyst to combine various properties to generate more powerful control charts that highlight synchronized movements across property values that are difficult to detect using single variable charts, fulfilling **R5**. The (\(\boxplus\)Swap trajectory) button allows analysts to swap the vertical positions of Trajectory Components to facilitate direct comparisons. The (\(\boxplus\)Clear selection) button allows experts to undo a selection they are currently making if they decide they want to abort the process.

### Sub-Sequence Component

Since the State Space charts within Transition Region Views only provide an overview, there is a need to look at sub-regions in more detail. Sub-Sequence Components (Figure 2.T2) are added to the bottom of the screen once an analyst completes a selection in a Transition Region View using the (\(\boxplus\)Select sub-region) button (Figure 6a). They are designed to fulfill **R4** and **R5**, as they allow experts to glean additional insight from regions that they deem to be interesting, and correspond to the abstract/elaborate interaction category in Yi et al. [55].
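The sampling rule behind each violin plot (dominant states plus a random 1% of the remaining visits) can be sketched as follows; this is an illustration of the rule described above, not MolSieve's implementation, and the function name is ours.

```python
import numpy as np
from collections import Counter

def violin_sample(region_sequence, properties, rng=None):
    """Sketch of the Super-State View sampling rule (not MolSieve's code).

    `region_sequence` lists the state IDs visited inside one super-state
    region; `properties` maps state ID -> property value. Dominant states
    (visited more often than the median frequency) are always included,
    plus a random 1% of the remaining visits.
    """
    rng = rng or np.random.default_rng(0)
    counts = Counter(region_sequence)
    median_freq = np.median(list(counts.values()))
    dominant = {s for s, c in counts.items() if c > median_freq}

    rest = np.asarray([s for s in region_sequence if s not in dominant])
    extra = rng.choice(rest, size=max(1, len(rest) // 100), replace=False) if len(rest) else []

    sampled_states = list(dominant) + list(extra)
    return np.array([properties[s] for s in sampled_states])
```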
Each Sub-Sequence Component provides a small multiple of 3D state visualizations, which serves as an overview of the structural changes occurring within the selection. To generate the overview, we developed a greedy search algorithm that uses the _Frobenius norm_ (provided by ASE [28]) of the spatial distance between all atomic coordinates. A high distance between states indicates that they are structurally different. The algorithm iterates over the selection and takes the distance between the state being queried and the rest. To find states that are as different as possible, we start at the initial state of the selection, find its most dissimilar counterpart, and start the search again at this state until we reach a maximum iteration count or the end of the selection. At the bottom of each Sub-Sequence Component is a traditional state ID vs. time-step plot of the selection's constituent states (Figure 5 top). The Sub-Sequence Component also supports running the Nudged Elastic Band calculation [22] using the (\(\boxplus\)RunNE) button. Clicking the button (Figure 2.T2) opens a modal window that allows analysts to adjust the parameters of the calculation and make a selection on the sub-sequence that will be used in the calculation. The results from the calculation are used to generate a potential energy graph which shows the minimum energy pathways for the selection they made, fulfilling **R5**. Potential energy graphs are commonly used by analysts to determine if a sequence of states constitutes a structural change in the simulation. An exceedingly high potential energy barrier between any two pairs of states in the sequence followed by any number of low energy barriers, usually indicates a transition. This is because particles are known to move towards their lowest energy configurations. **State Detail Widget** Whenever a state is clicked throughout the UI (e.g., within State Space Charts, Sub-Sequence Components etc.), the State Detail Widget (Figure 2.T2) is updated. It displays a static 3D visualization of the state, inspired by guidelines outlined in Byska et al. [11] that suggest linking 3D visualizations of a system to its properties. Additionally, a table is shown below the 3D render displaying the properties of the state that was selected. Since states are all colored consistently throughout the visual interface, we included a bar under all 3D renders that displays the selected state's color, making it easy to visually link the state to other visualizations. The (\(\boxplus\)Modify 3D render) button in the trajectory toolbar allows analysts to change the way states are rendered in 3D throughout the interface by Python scripts inside the vis_scripts folder in the source code (**R6**). Experts pick the visualization script they want to use with a pop-up menu that is populated with the contents of the vis_scripts folder. Analysts are expected to define a function that takes an OVITO [47] rendering pipeline object as a parameter which they can modify to suit their needs. Figure 2.T2 demonstrates an example: the default view is swapped for a visualization of crystalline structure neighborhoods where each atom is colored according to its structural classification (see Section 3.1). Customizing the visualization gives analysts an additional method to verify their conclusions made from the 2D charts in MolSieve and is integral to certain types of analyses (Section 5.2). 
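A minimal version of the greedy overview selection could look like the sketch below. It assumes a consistent atom ordering between states so that the Frobenius norm of the coordinate difference is meaningful (coordinates could come from, e.g., `ase.Atoms.get_positions()`); function and variable names are ours, not MolSieve's.

```python
import numpy as np

def overview_states(positions, max_iters=6):
    """Greedy pick of structurally dissimilar states for a 3D overview.

    Sketch of the idea described above, not MolSieve's implementation.
    `positions` is a list of (n_atoms x 3) coordinate arrays; the distance
    between two states is the Frobenius norm of their coordinate difference.
    """
    picked, current = [0], 0
    for _ in range(max_iters):
        remaining = range(current + 1, len(positions))
        if not remaining:
            break
        # Most dissimilar later state relative to the current one.
        nxt = max(remaining,
                  key=lambda j: np.linalg.norm(positions[j] - positions[current]))
        picked.append(nxt)
        current = nxt
        if current == len(positions) - 1:
            break
    return picked
```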
### Comparison Widgets and Interactions

MD ensembles are practically impossible to analyze due to the amount of data that needs to be compared. To address this, we included a variety of comparison interactions that quantify the difference between regions of interest from multiple trajectories (Figure 2.T3 and Figure 6). The (\(\boxplus\)Compare regions/selections) button allows experts to select regions or sub-sequences they want to compare directly. When two are selected, a Region Comparison Widget is placed at the bottom of the screen which contains asymmetrical violin plots that compare the distributions of each property (Figure 6d). Transition Region Views can also be selected with this interaction, making it easy to compare them with Super-State Views; MolSieve uses the properties from the dominant states in transition regions to compute the distribution of each property, allowing for a fair comparison. Comparing regions this way reduces the cognitive load of having to look back and forth between two distributions that are visually separated. When two Sub-Sequence Components are selected, a Sub-Sequence Comparison Widget is generated, which displays a state similarity heat-map (Figure 6e). State similarity is defined as the inverse of the distance used in the 3D overview for Sub-Sequence Components, see Section 4.2. The (\(\boxplus\)Find similar regions) button lets an expert select a Transition Region View to quickly compare to all other Transition Region Views that are currently selected using the Timeline View's brush, which corresponds to a Connect interaction in Yi et al. [55]. Once the selection is complete, MolSieve computes the difference between their state distributions and then displays the result with a tooltip rendered above each region (Figure 6f). This computation provides a crude preview of similarities between two transition regions, which can be used to narrow down which regions require an in-depth comparison. Clicking the (\(\boxplus\)State clustering) button clusters all of the states in the transition regions visible on the screen based on their properties (Figure 6g). MolSieve uses the OPTICS clustering algorithm [6] to generate clusters to color the states by. Clustering states together based on their properties provides a slow, but flexible method to directly compare trajectories. These interactions were designed to replace one of the inherent visual features of chord diagrams in an earlier design, where regions could be rendered as arcs on a circle and linked together based on similarity. When we attempted to implement chord diagrams in MolSieve, we found them to be cluttered and confusing when trying to interpret the temporal structure of the data. Clicking anywhere in the Super-State View updates the State Detail Widget with the state that occurs most frequently within the region, referred to as the region's _characteristic state_. Characteristic states describe the general properties of these regions [44].

Fig. 5: The original design (top) for state ID vs. time-step charts was flawed, as it attempted to render a large amount of data in a small space. States would often occlude each other, and each render would be computationally costly, as each rectangle is an SVG element. Our final design (bottom) underlines the behavior of a sub-region succinctly and makes it easier to read and interpret when many states are in a region. Note the highlighted segment, which captures the transition between two minor repetitive sub-regions.
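Two of the comparison primitives above are easy to prototype: coloring states by OPTICS clusters over their property vectors, and a crude overlap score between the state sets of two regions. The sketch below uses scikit-learn's OPTICS; the specific overlap metric is our assumption, since the text only states that the difference between state distributions is computed.

```python
import numpy as np
from sklearn.cluster import OPTICS

def cluster_states_by_properties(property_matrix: np.ndarray, min_samples: int = 5):
    """Cluster states by their property vectors (one row per state).

    Sketch only; MolSieve uses OPTICS [6] for this, but the parameters here
    are illustrative. Returns a cluster label per state (-1 means noise).
    """
    return OPTICS(min_samples=min_samples).fit(property_matrix).labels_

def region_similarity(states_a, states_b) -> float:
    """Crude preview of how similar two regions are, expressed as the
    fraction of shared unique state IDs (an assumed metric)."""
    a, b = set(states_a), set(states_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0
```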
Thus, this interaction allows analysts to quickly determine if any structural changes occurred between super-states. Once a change has been identified, analyst can seek more detailed information about the change within the Transition Region View between the two differing super-states. ## 4 Case Studies We demonstrate the efficacy of MolSieve by presenting two case studies in which we conducted pairwise analysis [7] with our domain experts E1 and E2. The first case study involves analyzing two long-duration trajectories of platinum nano-particles, first by determining sub-regions in each trajectory where the particle undergoes a structural change and then comparing them. The second case study focuses on atom vacancy analysis, where a reference atomic configuration is compared to states within the trajectory. In atom vacancy analyses, experts typically look for regions within a simulation where the "missing" atoms begin to displace in tandem. ### _Platinum Nano-particles_ Our nano-particle expert (E1) aimed to identify and characterize significant fluctuations in the shape of a platinum nano-particle subjected to high temperatures. E1 began the case study by loading a simulation of a platinum nano-particle at 750 kelvins, which consists of approximately eighteen million transitions and twenty-five thousand unique states (Table I), with each state representing different configurations of a nano-particle with 147 platinum atoms. Based on a prior study of nano-particles [21], the analyst decided that the best properties to analyze this simulation were the Common Neighbor Analysis (CNA) [19], Ackland-Jones [4] (AJ), and Polyhedral Template Matching [29] (PTM) atom characterization counts. These analyses attempt to characterize the structure of a nano-particle based on descriptors of the local environment around each component atom and have been found to be strong indicators of transition regions. The analyst wrote a script that used OVITO [47] to compute these properties and loaded them into MolSieve (**R2**). Since it was difficult to tell what was occurring to the nano-particle from the default 3D render, our analyst wrote a visualization script that highlights CNA counts within states (Figure 1.6a, 1.6b, 1.7, 1.11a, 1.11b, 1.14). The CNA visualization script renders HCP atoms as red, ICO atoms as yellow, and FCC atoms as green. **Identify Transition Regions (T1):** E1 decided to load the trajectory with a GPCCA clustering range of 2-20 and a simplification threshold of 0.75. GPCCA split the trajectory into two clusters, yielding a small red cluster in between a dominant real cluster (Figure 1.1) which E1 zoomed in on using the Timeline View. This revealed a busy region with many possible transitions; however, the Super-State Views showed that the super-state distributions did not vary greatly between each other, so the analyst increased the number of clusters to 4, hoping to reveal more fine-grained super-states (Figure 1.2). Once the simplification was rendered, they found that there were a number of transition regions between super states where the ICO and HCP counts of the nano-particle were rising (**R1**; (Figure 1.3a and 1.3b). **Analyze Transition Patterns (T2):** The analyst added a multi-variate control chart using the ICO counts from all three analyses to see if they would all point towards the same regions (Figure 1.4a and 1.4b). 
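An analyst-defined property script (R2/R6) receives an ASE representation of each state. The stand-in below computes a simple coordination-number summary rather than the CNA/AJ/PTM counts used in this case study (those were computed with OVITO), so it should be read as a hedged template for such a script, with names and the cutoff chosen by us.

```python
import numpy as np
from ase.neighborlist import neighbor_list

def property_script(atoms, cutoff=3.0):
    """Illustrative analyst-defined property script (not the one from the case study).

    `atoms` is the ASE Atoms object MolSieve hands to property scripts; the
    returned dictionary entries become per-state properties in the interface.
    """
    # Neighbor pairs within the cutoff (in Angstrom); 'i' gives the first-atom index.
    i = neighbor_list('i', atoms, cutoff)
    coordination = np.bincount(i, minlength=len(atoms))
    return {
        "mean_coordination": float(coordination.mean()),
        "n_surface_like": int((coordination < 9).sum()),   # loosely coordinated atoms
    }
```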
The analyst then found two sub-regions within a Transition Region View where the control charts indicated that the structure of the nano-particle changed (Figures 1.5a, 1.5b; **R3**). Next, E1 clicked on the Super-State Views (Figure 1.3a, 1.3b) surrounding that Transition Region View to get an understanding of how the nano-particle changed from the first super-state to the second; the characteristic states of each super-state are shown in Figures 1.6a and 1.6b. Since it was difficult to tell what was occurring to the nano-particle from the default 3D render, the analyst changed the 3D view to highlight CNA counts. This revealed a sudden change in the ICO count, where the two green atoms in Figure 1.3a disappear. To verify that the sudden change in ICO count was not a random event, they double-clicked the Transition Region View to expand it. This confirmed that the nano-particle stays in the same configuration for some time before suddenly undergoing a drastic change in the FCC and ICO counts. Satisfied, they made a selection in the region where the ICO count suddenly changed from zero to one (Figure 1.5a), which rendered a Sub-Sequence Component. Then, they clicked through the states in the Sub-Sequence Component to get a detailed look at what was occurring to the particle (**R4**). This revealed that the trajectory was undergoing a transformation (Figure 1.7) within the region the analyst selected (Figure 1.5a); the nano-particle started the transition with two FCC atoms (Figure 1.6a) and lost them (Figure 1.6b). They ignored the other sub-region where the ICO count dropped (Figure 1.5b), stating that "This is normal behavior in simulations with a heterogeneous energy barrier: the system tries to escape its configuration but is not able to, causing it to change before returning to its previous configuration; this is why I wanted to check the region on the left." Once the transition was found, they decided to run Nudged Elastic Band (NEB) calculations on both ends of the suspected transition region (**R5**). The NEBs confirmed that the transition to and from the suspected transition region took a large amount of energy, thus demonstrating that our system is a significant improvement in terms of detecting regions of interest in large molecular dynamics simulations.

Fig. 6: MolSieve's unique interactions. (a) lets analysts select sub-regions of interest within a Transition Region View to create a Sub-Sequence Component. (b) Double clicking a Transition Region View causes it to expand into its neighbors, which makes it possible to view parts of super-states in detail. (c) allows experts to create multi-variate charts. The (\(\boxplus\)Compare regions/selections) button can compare any Transition Region or Super-State View by creating a Region Comparison Widget (d) which is a small multiple of asymmetrical violin plots detailing the distributions from each selected region. The button also works with Sub-Sequence Components, creating a Sub-Sequence Comparison Widget (e) that contains a heat-map detailing the similarities between the selections. (f) allows experts to select a single Transition Region View, which MolSieve uses to compare with all other visible transition regions, automatically highlighting their similarity on the Timeline View. (g) recolors all of the states in the interface according to the OPTICS clustering algorithm using analyst-defined properties.
**Ensemble Analysis (T3):** Once the transition was confirmed, the analyst decided to load another platinum nano-particle trajectory at 800K. E1aimed to determine if the structural changes they observed in the particle at 750K were similar to the ones observed at 800. The 800K simulation contains thirteen million transitions and fifty-three thousand unique states (Table 1). Once the simulation was loaded, the analyst used a similar workflow to determine where the transition regions occurred in the trajectory by carefully adjusting the simplification threshold until a suitable number of possible transition regions were displayed. Starting at the simplification threshold's default value of 0.75 did not yield any transition regions; however, it led the analyst to zoom into a sequence of super-states where the ICO count was changing from zero to one. Increasing the simplification threshold to 0.85 revealed super-states undergoing a transition similar to the 750K trajectory (Figure 1.8). The state IDs overlapped between the two trajectories, and many regions that contained the same 4 states that the 750K simulation spent large amounts of time in (Figure 1.12a and 1.12b; **R1, R3**). The analyst then decided to click the [B Find similar regions] button and select the transition region they discovered in the 750K trajectory. This revealed many regions shared a large portion of states with the selection, a region which scored 12% similarity based on the set of unique states present in each region, which can be seen on the Timeline View (Figure 1.9) (**R4**). A similarity of this magnitude is significant due to the fact that simulations are unlikely to contain the same states in a small temporal region. They then used the [B Compare regions/selections] button to examine the difference in distributions between the regions that scored highest on similarity and the original transition region they discovered (Figure 1.10). While the region that was 12% similar did not have the same transition characteristics, the analyst found a region that had a similar shift in its ICO and HCP count. Moreover, when the analyst clicked on the two Super-State Views surrounding the region, they found that the first super-state had the same characteristic state as the first in the previously found region, and the second super-state was a rotation of the previous tailing super-state (Figure 1.12a, 1.12b). While the nature of the transition was similar based on the control charts, E1 wondered if the states were truly structurally similar, so they went to use the [O State clustering] button. Recoloring the states based on their structural cluster revealed that the region shared many states, particularly around the sub-regions where the analyst believed a transition was occurring (Figures 1.12a and 1.12b). The analyst also used the Sub-Sequence Comparison Widget to compare the two selections (Figure 1.13), which verified that the transitions were similar in nature as the states were rotational analogs of each other. The combination of these comparisons reassured the analyst in their conclusion that these transitions were of a similar nature (**R4**). While not identical to the one found in the 750K simulation, the sub-region found by the analyst also describes how the nano-particle loses FCC atoms and gains an ICO atom (Figure 1.14); this slight difference is to be expected due to the fact that MD simulations are stochastic by nature. 
This discovery demonstrates that our system is effective in not only detecting regions of interest in one long-duration simulation but is also capable of detecting similar physical occurrences in multiple simulations. ### Bulk Tungsten Defect Analysis The goal of a defect analysis is to understand the way the point defects in a crystalline structure evolve over the course of a simulation; these defects determine the properties of a given material. Typically, analysts use the Wigner-Seitz cell method [56] to visualize the difference between a state in a defect simulation and a reference structure that does not have any defects. Our analyst, who specializes in cell defects (E2), provided a reference Tungsten lattice with 2,000 atoms, which represented a perfect, defect-free crystalline structure, as well as a Python script from his daily workflow that compares a state and the reference structure using the Wigner-Seitz analysis. The script they provided outputs the defective atoms in each state and displays them, which the analyst used as the state view for the case study, seen in the renders for Figures 7.A and 7.B. Additionally, the analyst used the output from the script to create three properties which described the center of mass of the atoms that were defective (**R6**). To begin the case study, the analyst loaded their scripts and a simulation of a Tungsten crystalline lattice being subject to various deformations at 1000 kelvin. This data-set was considerably smaller than the nano-particle case study, having only approximately 800 transitions and only 50 unique states (Table 1). However, the size of each state was considerably larger, as each state represented a Tungsten lattice with 1996 atoms. **Analyze Transition Patterns (T2):** MolSieve initially classified the entire trajectory as 3 super-states, which meant that the GPCCA simplification was not useful for this data-set. This prompted E2 to set the simplification threshold to 1.0, and rendering all of the GPCCA clusters as transition regions, allowing the analyst to see the control charts for each property. Once it was re-rendered, the analyst noticed that the moving average time period for each Transition Region View was very high, obscuring potentially interesting transitions, so they set the moving average time period for each transition region to 10. Once the system was configured properly, the control charts exposed regions where the center of mass changed rapidly in all three dimensions. MolSieve immediately identified diffusive transitions (Figure 7.A), highlighting them among the numerous repetitive thermal vibrational motions (Figure 7.B) that were composed of single vacancies moving back and forth (**R1, R3**). E2 then selected several regions highlighted by the control charts and was able to identify and follow the chain of events for several diffuse transitions. This case study was able to demonstrate that MolSieve is effective in finding regions of interest in diverse analysis scenarios. ### Domain Expert Feedback To evaluate MolSieve, we conducted an hour-long semi-structured interview session with E1 and E2. During the interview, we asked them to compare their daily workflow to using MolSieve and solicited their suggestions on improving the system. A typical workflow for a molecular dynamics analyst consists of running scripts for several days on simulation data and sifting through states manually. 
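A rough stand-in for E2's Wigner-Seitz script is sketched below: atoms are assigned to their nearest reference lattice sites, unoccupied sites are treated as vacancies, and their center of mass yields three values analogous to the tracked properties. This ignores periodic boundaries and interstitials, and it is not the script used in the case study.

```python
import numpy as np
from scipy.spatial import cKDTree

def vacancy_center_of_mass(reference_sites: np.ndarray, positions: np.ndarray):
    """Rough Wigner-Seitz-style occupancy check (a sketch, not E2's script).

    `reference_sites` holds the defect-free lattice positions and `positions`
    the atomic coordinates of the current state. Sites left without an atom
    are treated as vacancies; their center of mass is returned.
    """
    tree = cKDTree(reference_sites)
    _, site_of_atom = tree.query(positions)                 # nearest reference site per atom
    occupancy = np.bincount(site_of_atom, minlength=len(reference_sites))
    vacancies = reference_sites[occupancy == 0]
    if len(vacancies) == 0:
        return np.zeros(3)
    return vacancies.mean(axis=0)
```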
They typically visualize the states in OVITO [47] and then click frame-by-frame to get an idea of what changes the system is going through. The greatest challenge in analyzing simulations this way is the amount of data that needs to be processed, which makes it difficult to keep track of transitions and one's temporal context within the trajectory. E1, our nano-particle expert, remarked that "The overall layout of MolSieve makes it easy to analyze these data-sets. It is very easy to understand where you are in the trajectory, just by looking at the Timeline View. This helps me think about what is going on in the simulation as a whole, and I don't feel like I have tunnel vision while examining data." They continued their reflection on the system by comparing the experience of examining regions of interest in MolSieve with their daily workflow, specifically praising the 3D overview within Sub-Sequence Components, saying that "The 3D overview [within the Sub-Sequence Component] provides a very nice, pictorial, visual effect that gives a preview of what the particle is going through. I don't have to waste time clicking back and forth between states to get an idea of what I'm looking at." E2, our defect analysis expert, reflected on MolSieve's visual design by saying, "The combination of the control charts and the aggregate state space chart make it easy to find regions of interest within a transition region. The aggregate state chart also tells me which regions to avoid selecting, since it's so easy to see where the simulation gets stuck jumping between a small set of states." The experts found that MolSieve was efficient, providing a massive productivity increase over their accustomed workflows. E1 said, "The system is exciting, as it takes an unimaginable amount of data and makes it interpretable. The nano-particle simulations we examined could take several lifetimes to sift through, and MolSieve manages to make it look trivial, with near real-time performance." E2 added, "The amount of data I was able to comb through with MolSieve would have normally taken a few weeks to do, and I managed to do this in just a few minutes," which indicates that we fulfilled **R7**. The customizability of the system was a major selling point, as E2 stated, "That is what really makes it come to life - this makes it applicable for a wide array of applications and will save us a considerable amount of time in the future." E2 continued the discussion by suggesting that analysts should be able to customize the simplification algorithm. This idea stems from the results of the atom vacancy case study (Section 5.2), where the simplification algorithm failed to produce transition regions. To get around this, E2 increased the simplification threshold to include the entire trajectory. E2 warned that, "In principle, the simplification scheme in MolSieve should work on most data-sets but molecular dynamics simulations are often analyzed in various modalities, some of which are not captured by dividing the trajectory using GPCCA." Thus, allowing experts to customize how the trajectory is simplified could make it easier to find relevant regions for various analyses. E2 added that the distance metric used in both the overview in the Sub-Sequence Component (Section 4.2) and the heat-map in the Sub-Sequence Comparison Widget did not effectively describe the difference between two states. This was due to the fact that we were studying the **absence** of atoms within a state.
To make these comparisons more useful, they suggested that the distance functions in MolSieve should also be customizable. Finally, E1 felt that the MolSieve was lacking a feature for comparing multiple individual states. We focused on comparing sub-regions within trajectories and did not consider the importance of being able to easily compare two or more states. The Sub-Sequence Comparison Widget supports this to a limited extent, but E1 suggested an interaction that could "save" states and show them on demand. ## 5 Conclusion In this work, we present MolSieve, a visual analytics system for long-duration molecular dynamics simulations modeled by discrete Markov chains. Through the use of multiple coordinated visualizations powered by a data simplification scheme unique to MD simulations, MolSieve makes it possible to analyze previously unexplored simulation data-sets. The comparison interactions offered by the system provide support for analyzing simulation ensembles. Additionally, MolSieve's Python programming interface lets it accommodate a wide variety of simulations. To demonstrate the effectiveness of MolSieve's design, we analyzed three simulations alongside our domain experts: two nano-particle simulations and one atom vacancy simulation. Table 1 provides a detailed look at the efficiency of the system. However, it became apparent that some of its components need to support further customization. We found that the simplification algorithm would sometimes return many regions in a trajectory, which led to the screen being highly cluttered. This would require the analyst to zoom in using the Timeline View to get a better idea of the general trend within the trajectory. This can be mitigated by reformulating the way regions are rendered to only show large regions until the zoom level is appropriate. We also found that some visual design elements must be adjusted; these issues are particularly prevalent in the color encodings of the interface. The analysts found that coloring states by their IDs made it difficult to distinguish them from one another once a large number of states were rendered on the screen, which we attempted to remedy by implementing the state clustering function. However, the state clustering function would sometimes also have color overlap, which could be reduced by mapping the number of clusters to a set of salient colors. Alternatively, we could explore using different visual encodings to distinguish a large number of classes. Another limitation is the inability to view a list of the most frequently occurring states within a Super-State. This can be addressed by adding a widget that shows all of the most frequently occurring states in a region. In the future, we plan to address some of the limitations of the system, including the ramped visual encoding space and the need for extra customization. Providing additional support for exploring biological simulations would be of particular interest, as this could lead to a truly general MD region-of-interest visual analytics system. To continue scaling, we plan to switch the rendering engine to use WebGL instead of SVG, allowing MolSieve to take advantage of the current innovations in consumer graphics technology. Moreover, a number of techniques have yet to be integrated into our system - improving the 3D rendering pipeline will allow MolSieve to support a number of novel analyses (e.g., [54, 55, 25] and rendering techniques [42]. 
Future work will also include a method to recall expert selections, a direct state comparison view, and better 3D rendering support.

Table 1: Several simulations that were tested on MolSieve are presented here. This table displays the total time it took to generate each simulation in ParSplice, the length of time the simulation represents in nanoseconds, the number of discrete timesteps in the simulation, the number of unique states, the time it takes to load the simulation when cached, and how long it takes to load each simulation.

Figure 7: Results of the defect analysis case study. (A) Demonstrates an example of the diffusive transitions discovered within the simulation, which are a set of unique structural changes occurring to the defective region within the Tungsten crystalline lattice. (B) Demonstrates an example of "fluttering", where the defects within the lattice move back and forth between two configurations, one atom at a time. These kinds of transitions are the predominant transformations occurring to the lattice throughout the trajectory. The dashed rectangles represent the colors of each state's ID, and demonstrate that the State Space Charts were effective in capturing the regions of interest.

## Supplementary Materials

We included a demo video that showcases the first case study and an instruction manual for MolSieve as supplementary material. MolSieve's source code is available at [https://github.com/rostymh/MolSieve](https://github.com/rostymh/MolSieve).

## Acknowledgments

This project was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the US Department of Energy Office of Science and the National Nuclear Security Administration, and the U.S. Department of Homeland Security under Grant Award Number 17STQ/AC00001-06-04. Los Alamos National Laboratory is operated by Triad National Security LLC, for the National Nuclear Security Administration of the U.S. DOE under Contract No. 89233218CNA000001. We graciously acknowledge computing resources from the Los Alamos National Laboratory. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security. We also would like to acknowledge Jiayi Hong and Andrew Garmon for their discussions and contributions to the paper.
2306.11288
Simulations of spin/polarization-resolved laser-plasma interactions in the nonlinear QED regime
Strong-field quantum electrodynamics (SF-QED) plays a crucial role in ultraintense laser matter interactions, and demands sophisticated techniques to understand the related physics with new degrees of freedom, including spin angular momentum. To investigate the impact of SF-QED processes, we have introduced spin/polarization-resolved nonlinear Compton scattering, nonlinear Breit-Wheeler and vacuum birefringence processes into our particle-in-cell (PIC) code. In this article, we will provide details of the implementation of these SF-QED modules and share known results that demonstrate exact agreement with existing single particle codes. By coupling normal PIC with spin/polarization-resolved SF-QED processes, we create a new theoretical platform to study strong field physics in currently running or planned petawatt or multi-petawatt laser facilities.
Feng Wan, Chong Lv, Kun Xue, Zhen-Ke Dou, Qian Zhao, Mamutjan Ababekri, Wen-Qing Wei, Zhong-Peng Li, Yong-Tao Zhao, Jian-Xing Li
2023-06-20T05:02:41Z
http://arxiv.org/abs/2306.11288v3
# Simulations of spin/polarization-resolved laser-plasma interactions in the nonlinear QED regime

###### Abstract

Strong-field quantum electrodynamics (SF-QED) plays a crucial role in ultraintense laser-matter interactions, and demands sophisticated techniques to understand the related physics with new degrees of freedom, including spin angular momentum. To investigate the impact of SF-QED processes, we have introduced spin/polarization-resolved nonlinear Compton scattering, nonlinear Breit-Wheeler and vacuum birefringence processes into our particle-in-cell (PIC) code. In this article, we will provide details of the implementation of these SF-QED modules and share known results that demonstrate exact agreement with existing single particle codes. By coupling normal PIC with spin/polarization-resolved SF-QED processes, we create a new theoretical platform to study strong field physics in currently running or planned petawatt or multi-petawatt laser facilities.

+ Footnote †: preprint: AIP/123-QED

## I Introduction

Laser-matter interactions can trigger strong-field quantum-electrodynamics (SF-QED) processes when the laser intensity \(I_{0}\) reaches or exceeds \(10^{22}\) W/cm\({}^{2}\) [1; 2]. For example, when the laser intensity is on the order of \(10^{21}\)-\(10^{22}\) W/cm\({}^{2}\), i.e., the normalized peak laser field strength parameter \(a_{0}\equiv eE_{0}/m_{e}c\omega_{0}\sim 10\), electrons can be accelerated to GeV energies [3; 4] (with Lorentz factor \(\gamma_{e}\sim 10^{3}\) or higher) in a centimeter-long gas plasma, where \(-e,m_{e}\) are the charge and mass of the electron, respectively, \(E_{0},\omega_{0}\) are the field strength and angular frequency of the laser, respectively, and \(c\) is the light speed in vacuum (here, for convenience, \(\omega_{0}=\frac{2\pi c}{\lambda_{0}}\) and the wavelength of the laser \(\lambda_{0}=1\,\mu\)m are assumed). When the laser is reflected by a plasma mirror and collides with the accelerated electron bunch, the transverse electromagnetic (EM) field in the electron's instantaneous frame can reach the order of \(a^{\prime}\simeq 2\gamma a_{0}\sim 10^{4}\)-\(10^{5}\). Such a field strength is close to the QED critical field strength (Schwinger critical field strength) \(E_{\rm Sch}\equiv\frac{m_{e}^{2}c^{3}}{e\hbar}\), i.e., \(a_{\rm Sch}=\frac{m_{e}c^{2}}{\hbar\omega_{0}}\simeq 4.1\times 10^{5}\), within one or two orders of magnitude. In this regime, the probabilities of nonlinear QED processes are comparable to those of linear ones, and depend on three parameters as \(W(\chi,f,g)\), with \(\chi\equiv\frac{e\sqrt{(F_{\mu\nu}p^{\nu})^{2}}}{m^{3}}\sim a^{\prime}/a_{\rm Sch}\), \(f\equiv\frac{e^{2}F_{\mu\nu}F^{\mu\nu}}{4m^{4}}\sim\frac{(\alpha_{E}^{2}-\alpha_{B}^{2})}{4a_{\rm Sch}^{2}}\) and \(g\equiv\frac{e^{2}F_{\mu\nu}\tilde{F}^{\mu\nu}}{4m^{4}}\sim\frac{\alpha_{E}\cdot\alpha_{B}}{4a_{\rm Sch}^{2}}\) (here, \(\tilde{F}^{\mu\nu}\) is the dual field tensor and \(\alpha_{E,B}\) denote the normalized field strengths of the electric and magnetic components, respectively) [5; 6]. For most cases under the weak-field condition (\(a_{0}\ll a_{\rm Sch}\)), \(f,g\ll\chi^{2}\), and \(W(\chi,f,g)\sim W(\chi)\), i.e., the probability only depends on a single parameter \(\chi\).
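The orders of magnitude quoted above can be checked with a few lines of arithmetic. The sketch below is illustrative only; it evaluates \(a_{\rm Sch}\) and the quantum parameter \(\chi\approx a^{\prime}/a_{\rm Sch}\) for a head-on collision at the stated parameters.

```python
import numpy as np

# Rough numerical check of the parameters quoted above (illustrative only).
hbar_omega0_eV = 1.24          # photon energy of a 1-um laser, in eV
me_c2_eV = 0.511e6             # electron rest energy, in eV

a_sch = me_c2_eV / hbar_omega0_eV          # ~4.1e5, normalized Schwinger field strength
gamma_e, a0 = 1.0e3, 10.0                  # GeV-class electrons, a0 ~ 10
a_prime = 2 * gamma_e * a0                 # field seen in the electron frame (head-on)
chi = a_prime / a_sch                      # quantum nonlinearity parameter

print(f"a_Sch ~ {a_sch:.2e}, a' ~ {a_prime:.1e}, chi ~ {chi:.2f}")
```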
For electrons/positrons, the nonlinear Compton scattering (NCS, \(e+n\omega_{L}\to e^{\prime}+\omega_{\gamma}\)) is the dominant nonlinear QED process in the strong field regime, and for photons, the nonlinear Breit-Wheeler pair production (NBW, \(\omega_{\gamma}+n\omega_{L}\to e^{+}+e^{-}\)) is the dominant one, where \(\omega_{L},\omega_{\gamma}\) denote the laser photon and the emitted \(\gamma\) photon, respectively, and \(n\) is the photon absorption number. Apart from these kinetic effects, the spin/polarization effects also arise with the possibility of generating polarized high-energy particle beams or when particles traverse large-scale intense transient fields in laser-plasma interactions. Classically, the spin of a charged particle will precess around the instantaneous magnetic field, i.e. \({\rm d}{\bf s}/{\rm d}t\propto{\bf B}\times{\bf s}\), where \({\bf s}\) denotes the classical spin vector[7]. In storage rings, due to the radiation reaction, the spin of an electron/positron will flip to the direction parallel/antiparallel to the external magnetic field, i.e., the Sokolov-Ternov effect (an unpolarized electron beam will be polarized to a degree of \(\sim 92.5\%\))[8], and a similar process also occurs in the NCS[9; 10; 11]. Some recent studies have shown that with specific configurations, for example, when elliptically or linearly polarized lasers scatter with high-energy electron bunches (or plasmas), the polarization degree of the electrons can reach 90% and be used to diagnose the transient fields in plasmas [12; 13]. Meanwhile, the photons created by NCS can be polarized, and when these polarized photons decay into electron/positron pairs, the contribution to the probability from polarization can reach \(\sim 30\%\)[14], and will be inherited by the subsequent QED cascade. For example, in the laser-plasma/beam interactions, the polarization degree for linearly polarized (LP) photons is about 60% or higher and for circular polarized (CP) \(\gamma\) photons can reach 59% when employing longitudinally polarized primaries [11; 15; 16; 17]. Analytical solutions in ultraintense laser-matter interactions are scarce due to the high nonlinearity and complexity of the problem. Moreover, the micro-level processes such as ionization, recombination and Coulomb collisions, etc., coupled with the complicated configurations of lasers and plasmas make the explicit derivation almost impossible. Fortunately, computer simulation methods provide alternative and more robust tools to study those unsolvable processes even in more realistic situations [18]. In general, simulation methods for laser-plasma (ionized matter) interactions can be categorized as kinetic or fluid simulations, and specifically kinetic methods include the Fokker-Planck equation (F-P) (or the Vlasov equation for the collisionless case) and the particle-in-cell (PIC) method, and the fluid method mainly uses the magnetohydrodynamic equations (MHD) [19]. Among these methods, both F-P and MHD discretize the momentum space of particles and are prone to the nonphysical multi-stream instability, which may obscure the real physics, such as the emergence of turbulence, physical instabilities, etc. In comparison, the PIC method can provide much more detailed information on the discrete nature and intrinsic statistical fluctuations of the system, regardless of the stiffness of the problem. Therefore, the PIC method has been widely used in the simulation of ultraintense laser-plasma interactions [18; 19; 20]. 
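As a simple illustration of the classical precession \({\rm d}{\bf s}/{\rm d}t\propto{\bf B}\times{\bf s}\) mentioned above, the following sketch rotates a unit spin vector about a static magnetic field. It deliberately ignores the full T-BMT treatment (electric fields, anomalous magnetic moment, radiative spin flips) and is not part of the code described in this paper.

```python
import numpy as np

def precess(s, B, dt, steps):
    """Precess a classical spin vector s about a static field B (sketch only).

    Works in normalized units with the precession vector taken proportional
    to B; a production code would integrate the full T-BMT equation.
    """
    omega = np.asarray(B, dtype=float)
    for _ in range(steps):
        s = s + dt * np.cross(omega, s)     # explicit Euler step of ds/dt = Omega x s
        s /= np.linalg.norm(s)              # keep |s| = 1
    return s

s0 = np.array([1.0, 0.0, 0.0])              # spin initially along x
print(precess(s0, B=[0.0, 0.0, 1.0], dt=1e-3, steps=1000))  # ~1 rad of precession
```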
Thanks to the emerging PIC simulation methods, the development of parallelism and large-scale cluster deployment, simulations of laser-plasma wakefield acceleration, laser-ion acceleration, THz radiation and also SF-QED, etc., have become accessible for general laser-plasma scientists [18; 21; 22; 23; 24]. However, the spin and polarization properties of the plasma particles and QED products are not widely introduced to the mainstream due to the lack of appropriate algorithms. In some recent studies, the spin and polarization resolved SF-QED processes have been investigated in the laser-beam colliding configurations and have shown that these processes are prominent in generating polarized beams [10; 11; 14; 16; 17; 25]. And these local-constant-approximated version of these probabilities can be readily introduced into any PIC code. In this paper, we briefly review the common PIC simulation algorithms and present some recent implementations in spin/polarization averaged/summed QED. The formulas and algorithms for the spin/polarization-dependent SF-QED processes are given in detail and have been coded into our PIC code SLIPs (which stands for "Spin-resolved **L**aser **I**nteraction with **P**lasma **s**imulation code"). These formulas and algorithms presented in this paper, especially the polarized version, can be easily adopted by any other PIC code and used to simulate the ultraintense laser-matter interactions that are already available or will be achievable in the near future multi-petawatt (PW) to exawatt (EW) laser facilities [26], such as Apollo [27; 28], ELI [29], SULF [30] and SEL, etc. Throughout the paper, Gaussian units will be adopted, and all quantities are normalized as follows: time \(t\) with \(1/\omega\) (i.e., \(t^{\prime}\equiv t/(1/\omega)=\omega t\)), position \(x\) with \(1/k=\frac{\lambda}{2\pi}\), momentum \(p\) with \(m_{e}c\), velocity \(v\) with \(c\), energy \(\varepsilon\) with \(m_{e}c^{2}\), EM field \(E,B\) with \(\frac{m_{e}c\omega}{\epsilon}\), force \(F\) with \(m_{e}c\omega\), charge \(q\) with \(e\), charge density \(\rho\) with \(k^{3}e\), current density \(J\) to \(k^{3}ec\), where \(\lambda\) and \(\omega=\frac{2\pi c}{\lambda}\) are the reference wavelength and frequency, respectively. ## II PIC algorithm The simulation of laser-plasma interactions requires two essential components: the evolution of the EM field and the motion of particles. The corresponding governing equations are the Maxwell equations (either with **A**-\(\phi\) or **E**-**B** formulations) and the Newton-Lorentz equations. Therefore, the fundamentals of PIC codes consist of four kernel parts: force depositing to particles, particle pushing, particles depositing to charge and current densities, and solving Maxwell equations; see Figure. 1. Here, we review each part briefly (these algorithms are used in the SLIPs) and refer to the standard literature or textbooks for more details [18; 19]. Figure 1: Standard particle-in-cell (PIC) loop with four kernel parts. 
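For reference, the normalizations listed above can be wrapped in small helper functions. The sketch below is not part of SLIPs; the \(a_{0}\)-from-intensity estimate is the standard linear-polarization formula and should be treated as an approximation rather than a definition from this paper.

```python
import numpy as np

def a0_from_intensity(intensity_W_cm2: float, wavelength_um: float = 1.0) -> float:
    """Standard estimate a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18 W/cm^2) (assumption)."""
    return 0.85 * wavelength_um * np.sqrt(intensity_W_cm2 / 1.0e18)

def normalize_time(t_seconds: float, wavelength_um: float = 1.0) -> float:
    """t' = omega * t with omega = 2*pi*c / lambda, the reference frequency."""
    omega = 2.0 * np.pi * 2.998e8 / (wavelength_um * 1e-6)   # rad/s
    return omega * t_seconds

print(a0_from_intensity(1e22))   # ~85 for a 1-um laser at 1e22 W/cm^2
```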
### Particle pushing When radiation reaction is weak (the radiation power is much smaller than the energy gain power), the motion of charged particles is governed by the Newton-Lorentz equation: \[\frac{d\mathbf{p}}{dt} = \frac{q}{m}(\mathbf{E}+\mathbf{\beta}\times\mathbf{B}), \tag{1}\] \[\frac{d\mathbf{x}}{dt} = \frac{\mathbf{p}}{\gamma}, \tag{2}\] where \(\mathbf{p}\equiv\gamma m\mathbf{v}\), \(\mathbf{x}\), \(q\), \(m\), \(\gamma\), \(\mathbf{v}\), and \(\mathbf{\beta}\equiv\mathbf{v}/c\) are the momentum, position, charge, mass, Lorentz factor, velocity, and normalized velocity of the particle, respectively. These coupled equations are discretized using a leapfrog algorithm as \[\frac{\mathbf{p}^{n+1/2}-\mathbf{p}^{n-1/2}}{\Delta t} = \frac{q}{m}\left(\mathbf{E}^{n}+\frac{\mathbf{p}^{n}}{\gamma^{n}} \times\mathbf{B}^{n}\right), \tag{3}\] \[\frac{\mathbf{x}^{n+1}-\mathbf{x}^{n}}{\Delta t} = \mathbf{v}^{n+1/2}, \tag{4}\] and solved using the standard Boris rotation [31; 32; 33]: \[\mathbf{p}^{n-1/2} = \mathbf{p}^{-}-\frac{q\Delta t}{2m}\mathbf{E}^{n}, \tag{5}\] \[\mathbf{p}^{n+1/2} = \mathbf{p}^{+}+\frac{q\Delta t}{2m}\mathbf{E}^{n},\] (6) \[\mathbf{p}^{\prime} = \mathbf{p}^{-}+\mathbf{p}^{-}\times\mathbf{\tau},\] (7) \[\mathbf{p}^{+} = \mathbf{p}^{-}+\mathbf{p}^{\prime}\times\varsigma,\] (8) \[\mathbf{\tau} = \frac{q\Delta t}{2m\gamma^{n}}\mathbf{B}^{n},\] (9) \[\varsigma = \frac{2\mathbf{\tau}}{1+\tau^{2}}, \tag{10}\] where \(\gamma^{n}=\sqrt{1+(\mathbf{p}^{-})^{2}}=\sqrt{1+(\mathbf{p}^{+})^{2}}\). The update in momentum and position are asynchronized by half a time step, i.e., a leapfrog algorithm is used here. This leapfrog algorithm ensures the self-consistency of the momentum and position evolution. ### Field solving In the ultraintense laser-plasma interactions, the plasma particles are assumed to be distributed in the vacuum and immersed in the EM field. Therefore, the field evolution is governed by the Maxwell equations in vacuum with sources. After normalization, the Maxwell equations in differential form are given by \[\nabla\cdot\mathbf{E} = \rho \tag{11}\] \[\nabla\cdot\mathbf{B} = 0\] (12) \[\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}\] (13) \[\nabla\times\mathbf{B} = \frac{\partial\mathbf{E}}{\partial t}+\mathbf{J}. \tag{14}\] The standard finite-difference method in the time domain for the Maxwell equations is to discretize field variables on the spatial grid and advance forward in time. Here, following the well-known Yee-grid [34], we put \(\mathbf{E},\mathbf{B}\) as in Figure. 2 (a), which automatically satisfies the two curl equations. For lower-dimension simulations, extra dimensions are squeezed, as shown in a 2D example in Figure. 2 (b). In the dimension-reduced simulations, field components in the disappeared dimension can be seen as uniform, i.e., the gradient is 0. By using Esierkepov's method of current deposition [35], the current is calculated from the charge density via charge conservation, i.e., \(\partial_{t}\rho+\nabla\cdot\mathbf{J}=0\). Once the initial condition obeys Gauss's law, \(\nabla\cdot\mathbf{E}=\rho\), Gauss's law is automatically embedded. This can be verified with a gradient on Eq. (14) \(0=\nabla\cdot(\nabla\times\mathbf{B})=\partial_{t}(\nabla\cdot\mathbf{E})+ \nabla\cdot\mathbf{J}=\partial_{t}(\nabla\cdot\mathbf{E}-\rho)\), i.e., the temporal variation in the violation of Gauss's law is 0. Therefore, in the field solver, only the two curl equations are solved. 
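Returning to the particle push, the Boris rotation of Eqs. (5)-(10) can be written compactly in code. The sketch below assumes the normalized units defined in the introduction and a constant field sampled at the particle position; it is an illustration rather than the SLIPs implementation.

```
// Minimal Boris rotation push, Eqs. (3)-(10), in the normalized units of this paper
// (q in units of e, m in units of m_e, p in m_e c, E and B in m_e c omega / e, t in 1/omega).
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double,3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
}

// One momentum update p^{n-1/2} -> p^{n+1/2}, with E^n, B^n interpolated to the particle.
Vec3 boris_push(const Vec3& p, const Vec3& E, const Vec3& B, double q, double m, double dt) {
    const double h = q*dt/(2.0*m);
    Vec3 pm { p[0]+h*E[0], p[1]+h*E[1], p[2]+h*E[2] };              // p^-: first half electric kick
    const double gamma = std::sqrt(1.0 + pm[0]*pm[0] + pm[1]*pm[1] + pm[2]*pm[2]);
    const Vec3 tau { h*B[0]/gamma, h*B[1]/gamma, h*B[2]/gamma };    // Eq. (9)
    const double tau2 = tau[0]*tau[0] + tau[1]*tau[1] + tau[2]*tau[2];
    const Vec3 vs { 2.0*tau[0]/(1.0+tau2), 2.0*tau[1]/(1.0+tau2), 2.0*tau[2]/(1.0+tau2) };  // Eq. (10)
    const Vec3 c1 = cross(pm, tau);
    const Vec3 pprime { pm[0]+c1[0], pm[1]+c1[1], pm[2]+c1[2] };    // Eq. (7)
    const Vec3 c2 = cross(pprime, vs);
    Vec3 pplus { pm[0]+c2[0], pm[1]+c2[1], pm[2]+c2[2] };           // Eq. (8)
    for (int i = 0; i < 3; ++i) pplus[i] += h*E[i];                 // second half electric kick
    return pplus;                                                   // p^{n+1/2}
}

int main() {
    Vec3 p {5.0, 0.0, 0.0}, E {0.0, 0.0, 0.0}, B {0.0, 0.0, 1.0};
    for (int n = 0; n < 200; ++n) p = boris_push(p, E, B, -1.0, 1.0, 0.05);  // electron gyrating in B_z
    std::printf("|p| = %.6f\n", std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]));
}
```

Because the magnetic part of the update is a pure rotation, the momentum magnitude is conserved exactly when the electric field vanishes, which the small test in main illustrates.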
Figure 2: (a) and (b): Yee-grid and position of each field component in 3D and 2D case, respectively. In (b), the \(z\) direction is squeezed.

Here, we take the \(E_{y}\) and \(B_{z}\) components as examples.

1D case (squeezing the \(y,z\) directions):

\[\frac{E_{y}^{n+1}-E_{y}^{n}}{\Delta t}\bigg{|}_{i+1/2}= -\frac{B_{i+1}-B_{i}}{\Delta x}\bigg{|}_{z}^{n+1/2}-J_{y,i+1/2}^{n+1/2}, \tag{15}\]
\[\frac{B_{z}^{n+1/2}-B_{z}^{n-1/2}}{\Delta t}\bigg{|}_{i}= -\frac{E_{i+1/2}-E_{i-1/2}}{\Delta x}\bigg{|}_{y}^{n}.\]

2D case (squeezing the \(z\) direction):

\[\frac{E_{y}^{n+1}-E_{y}^{n}}{\Delta t}\bigg{|}_{i+1/2,j}= -\frac{B_{i+1,j}-B_{i,j}}{\Delta x}\bigg{|}_{z}^{n+1/2}-J_{y,i+1/2,j}^{n+1/2}, \tag{16}\]
\[\frac{B_{z}^{n+1/2}-B_{z}^{n-1/2}}{\Delta t}\bigg{|}_{i,j}= -\frac{E_{i+1/2,j}-E_{i-1/2,j}}{\Delta x}\bigg{|}_{y}^{n}+\frac{E_{i,j+1/2}-E_{i,j-1/2}}{\Delta y}\bigg{|}_{x}^{n}.\]

3D case:

\[\frac{E_{y}^{n+1}-E_{y}^{n}}{\Delta t}\bigg{|}_{i+1/2,j,k+1/2}= -\frac{B_{i+1,j,k+1/2}-B_{i,j,k+1/2}}{\Delta x}\bigg{|}_{z}^{n+1/2}+\frac{B_{i+1/2,j,k+1}-B_{i+1/2,j,k}}{\Delta z}\bigg{|}_{x}^{n+1/2}-J_{y,i+1/2,j,k+1/2}^{n+1/2}, \tag{17}\]
\[\frac{B_{z}^{n+1/2}-B_{z}^{n-1/2}}{\Delta t}\bigg{|}_{i,j,k+1/2}= -\frac{E_{i+1/2,j,k+1/2}-E_{i-1/2,j,k+1/2}}{\Delta x}\bigg{|}_{y}^{n}+\frac{E_{i,j+1/2,k+1/2}-E_{i,j-1/2,k+1/2}}{\Delta y}\bigg{|}_{x}^{n},\]

where the subscripts \(i,j,k\) denote the spatial discretization and the superscript \(n\) indicates the time discretization. The time indices are assigned using the leapfrog algorithm; see Sec. II.6.

### Current deposition

We calculate the charge current density using Esirkepov's method, which conserves charge by construction, satisfying the continuity equation [36]

\[\partial_{t}\rho+\nabla\cdot\mathbf{J}=0, \tag{18}\]

and removes the need for a Coulomb (Poisson) correction [19]. This algorithm computes the charge density at time steps \(t-\frac{1}{2}\Delta t\) and \(t+\frac{1}{2}\Delta t\) on each grid cell from the particle positions and velocities, i.e.,

\[\rho_{i,j,k}^{n+1/2} = \frac{1}{\Delta V}\sum_{r}W(\mathbf{x}_{r}^{n}+\frac{1}{2}\mathbf{v}^{n+1/2}\Delta t)q_{r}, \tag{19}\]
\[\rho_{i,j,k}^{n-1/2} = \frac{1}{\Delta V}\sum_{r}W(\mathbf{x}_{r}^{n}-\frac{1}{2}\mathbf{v}^{n}\Delta t)q_{r}, \tag{20}\]
\[\delta^{n}\rho = \rho^{n+1/2}-\rho^{n-1/2}, \tag{21}\]

where \(r\) denotes the particle index, \(|\mathbf{x}_{r}-\mathbf{x}_{i,j,k}|\leq(\Delta x,\Delta y,\Delta z)\), and \(\Delta V=\Delta x\Delta y\Delta z\) is the cell volume. The density variation \(\delta^{n}\rho\) is then decomposed onto the current grid points to obtain the current density; see Ref. [36] for more details.

### Force deposition

We deposit the updated field variables from the Maxwell solver to the particles, for calculating the acceleration or further SF-QED processes. The field deposition to the particles follows a similar procedure as the charge density deposition. For each particle at position \(\mathbf{x}_{r}\), we find its nearest grid point \((i,j,k)^{g}=\mathrm{floor}\left(\frac{\mathbf{x}_{r}}{\Delta\mathbf{x}}+\frac{1}{2}\right)\) and its nearest half grid point \((i,j,k)^{h}=\mathrm{floor}\left(\frac{\mathbf{x}_{r}}{\Delta\mathbf{x}}\right)\), where \(\Delta\mathbf{x}=(\Delta x,\Delta y,\Delta z)\) is the spatial grid size. We then weight the field to the particle by summing over all nontrivial terms of \(W(i,j,k)\cdot F(i,j,k)\), where \(W(i,j,k)\) is the particle weighting function (see Sec. II.5 for more details) on the grid (half grid) point \((i,j,k)\) and \(F(i,j,k)\) is the field component of \(\mathbf{E}\) or \(\mathbf{B}\) on the spatial grid, with the proper staggering according to Figure. 2.

### Particle shape function

The weighting function \(W\) in the current and force deposition is determined by the form factor (shape factor) of the macro-particle, which is a key concept in modern PIC code algorithms. The form factor gives the macro-particle (which represents thousands of real particles) a finite size and reduces the nonphysical collisions [19]. Various particle shape function models have been proposed, such as the Nearest Grid Point (NGP) and Cloud-in-Cell (CIC) methods, which take the nearest one and the nearest two grid values as the full contribution, respectively. Higher-order particle shape functions suppress the unphysical noise and produce smoother results. We use the triangle shape function (triangular shape cloud, TSC) in each dimension [36]

\[W_{\rm spline}=\left\{\begin{array}{ll}\frac{3}{4}-\delta^{2},&\mbox{for }j,\\ \frac{1}{2}\left(\frac{1}{2}\pm\delta\right)^{2},&\mbox{for }j\pm 1,\end{array}\right. \tag{22}\]

where \(\delta=\frac{x-X_{j}}{\Delta x}\), \(x\) is the particle position, \(j\) the nearest grid/half-grid number and \(X_{j}\equiv j\Delta x\). Higher-dimensional weights are obtained by multiplying the 1D shape functions in each dimension: \(W_{\rm 2D}(i,j)=W_{x}(i)W_{y}(j)\) and \(W_{\rm 3D}(i,j,k)=W_{x}(i)W_{y}(j)W_{z}(k)\).

### Time ordering

In SLIPs, the simplest forward method is used to discretize all differential equations, which are first reduced to first order with respect to time [18]. To minimize the errors introduced by the discretization, some variables are updated at integer time steps and others at half-integer time steps. For example, the EM field variables **E** and **B** are updated alternately at integer and half-integer time steps, and the position **x** and momentum **p** of the particles are updated alternately as well; see Figure. 3. The leapfrog updating is also applied to the current deposition and the field interpolation. Figure 3: Leap-frog algorithm of particle pushing and field advancing.

## III QED algorithm

This section presents the SF-QED processes (in their unpolarized/polarized versions) that are relevant for laser-plasma interactions. The classical and quantum radiation corrections to the Newton-Lorentz equation, namely the Landau-Lifshitz (LL) equation and the modified Landau-Lifshitz (MLL) equation, and their discretized algorithms are reviewed first. The classical and quantum-corrected equations of motion (EOM) for the spin, namely the Thomas-Bargmann-Michel-Telegdi (T-BMT) equation and the radiative T-BMT equation, and their discretized algorithms are reviewed next. NCS in its unpolarized and polarized versions and the corresponding Monte-Carlo (MC) algorithms are then reviewed, and NBW in its unpolarized and polarized versions and its MC implementation are presented as well. Finally, the implementations of high-energy bremsstrahlung and of vacuum birefringence in the regime of weak pair production (\(\chi_{\gamma}\lesssim 0.1\)) are briefly discussed.

### Radiative particle pusher

Charged particles moving in strong fields can emit either classical fields or quantum photons. This leads to energy/momentum loss and braking of the particles, i.e., radiation reaction. A well-known radiative equation of motion (EOM) for charged particles is the Lorentz-Abraham-Dirac (LAD) equation [37].
However, this equation suffers from the runaway problem, as the radiation reaction terms involve the derivative of the acceleration. To overcome this issue, several alternative formalisms have been proposed, among which the Landau-Lifshitz (LL) version is widely adopted [38]. The LL equation can be obtained from the LAD equation by applying iterative and order-reduction procedures [39; 40], which are valid when the radiation force is much smaller than the Lorentz force. More importantly, in the limit of \(\hbar\to 0\), the QED results in the planewave background field are consistent with both the LAD equation and LL equation [41; 42]. Depending on the value of the quantum nonlinear parameter \(\chi_{e}\) (defined in Sec. III.1.1), the particle dynamics can be governed by either the LL equation or its quantum-corrected version [1; 23; 38; 43]. #### iii.1.1 Landau-Lifshitz equation The Landau-Lifshitz equation can be employed when the radiation is relatively weak (weak radiation reaction, \(\chi_{e}\ll 10^{-2}\)) [38]: \[\begin{split}\mathbf{F}_{\text{RR,classical}}&= \frac{2e^{3}}{3mc^{3}}\bigg{\{}\gamma\bigg{[}\bigg{(}\frac{\partial}{\partial t }+\frac{\mathbf{p}}{\gamma m}\cdot\nabla\bigg{)}\mathbf{E}+\frac{\mathbf{p}}{ \gamma mc}\times\bigg{(}\frac{\partial}{\partial t}+\frac{\mathbf{p}}{\gamma m }\cdot\nabla\bigg{)}\mathbf{B}\bigg{]}\\ &+\frac{e}{mc}\bigg{[}\mathbf{E}\times\mathbf{B}+\frac{1}{\gamma mc }\mathbf{B}\times(\mathbf{B}\times\mathbf{p})+\frac{1}{\gamma mc}\mathbf{E}( \mathbf{p}\cdot\mathbf{E})\bigg{]}\\ &-\frac{e\gamma}{m^{2}c^{2}}\mathbf{p}\bigg{[}\bigg{(}\mathbf{E }+\frac{\mathbf{p}}{\gamma mc}\times\mathbf{B}\bigg{)}^{2}-\frac{1}{\gamma^{2} m^{2}c^{2}}(\mathbf{E}\cdot\mathbf{p})^{2}\bigg{]}\bigg{\}},\end{split} \tag{23}\] where all quantities are given in Gaussian units, and the dimensionless one is \[\mathbf{F}_{\text{RR,classical}} =\frac{2}{3}\alpha_{f}\xi_{L}\bigg{\{}\gamma[\bigg{(}\frac{\partial }{\partial t}+\frac{\mathbf{p}}{\gamma}\cdot\nabla\bigg{)}\mathbf{E}+\frac{ \mathbf{p}}{\gamma}\times\bigg{(}\frac{\partial}{\partial t}+\frac{\mathbf{p}} {\gamma}\cdot\nabla\bigg{)}\mathbf{B}\bigg{]} \tag{24}\] \[+\bigg{[}\mathbf{E}\times\mathbf{B}+\frac{1}{\gamma}\mathbf{B} \times(\mathbf{B}\times\mathbf{p})+\mathbf{E}(\mathbf{p}\cdot\mathbf{E}) \bigg{]}\] \[-\gamma\mathbf{p}\bigg{[}\bigg{(}\mathbf{E}+\frac{\mathbf{p}}{ \gamma}\times\mathbf{B}\bigg{)}^{2}-\frac{1}{\gamma^{2}}(\mathbf{E}\cdot \mathbf{p})^{2}\bigg{]}\bigg{\}},\] with \(\alpha_{f}=\frac{e^{2}}{c\hbar}\) is the fine structure constant and \(\xi_{L}=\frac{\hbar\omega}{m_{e}c^{2}}\) is the normalized reference photon energy. In the ultra-intense laser interacting with plasmas, the dominant contribution comes from the last two terms [44]. In the ultrarelativisitc limits, only the third term dominates the contribution, and the radiation reaction force can be simplified as \[\mathbf{F}_{\text{RR,cl}}\simeq\frac{2}{3}\alpha_{f}\frac{\chi_{e}^{2}}{\xi_ {L}}\mathbf{\beta}, \tag{25}\] where \(\chi_{e}=\frac{c\hbar}{m^{3}c^{4}}\sqrt{|F^{\mu\nu}p_{\nu}|^{2}}\equiv\xi_{L} \gamma_{e}\sqrt{(\mathbf{E}+\mathbf{\beta}\times\mathbf{B})^{2}-(\mathbf{\beta}\cdot( \mathbf{\beta}\cdot\mathbf{E}))^{2}}\simeq\gamma_{e}E_{\perp}\xi_{L}(1-\cos\theta)\) is a nonlinear quantum parameter signifies the strength of the NCS, with \(\theta\) denote the angle between electron momentum and EM field wavevector, and \(E_{\perp}\) denotes the perpendicular component of electric field. 
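For reference, the quantum parameter \(\chi_{e}\) defined above can be evaluated directly from the normalized fields and momentum. The sketch below uses the standard invariant form \(\chi_{e}=\xi_{L}\gamma_{e}\sqrt{(\mathbf{E}+\mathbf{\beta}\times\mathbf{B})^{2}-(\mathbf{\beta}\cdot\mathbf{E})^{2}}\); the function and variable names are illustrative, and the field values in main are an assumed head-on collision example rather than data from this paper.

```
// Evaluation of the electron quantum parameter chi_e in normalized units:
// chi_e = xi_L * gamma * sqrt( (E + beta x B)^2 - (beta . E)^2 ),
// with E, B in m_e c omega / e, p in m_e c, and xi_L = hbar*omega / (m_e c^2).
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double,3>;

double chi_e(const Vec3& p, const Vec3& E, const Vec3& B, double xi_L) {
    const double gamma = std::sqrt(1.0 + p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
    const Vec3 beta { p[0]/gamma, p[1]/gamma, p[2]/gamma };
    const Vec3 bxB { beta[1]*B[2]-beta[2]*B[1],
                     beta[2]*B[0]-beta[0]*B[2],
                     beta[0]*B[1]-beta[1]*B[0] };
    double f2 = 0.0, bE = 0.0;
    for (int i = 0; i < 3; ++i) {
        const double fi = E[i] + bxB[i];   // transverse (acceleration) field component
        f2 += fi*fi;
        bE += beta[i]*E[i];
    }
    return xi_L * gamma * std::sqrt(std::max(f2 - bE*bE, 0.0));
}

int main() {
    // head-on collision of a gamma_e ~ 1000 electron with a wave of a0 = 100 (1 um laser)
    const double xi_L = 1.55 / 0.511e6;
    Vec3 p {0.0, 0.0, -1000.0}, E {100.0, 0.0, 0.0}, B {0.0, 100.0, 0.0};
    std::printf("chi_e = %.3f\n", chi_e(p, E, B, xi_L));   // ~ 2 * gamma_e * a0 * xi_L
}
```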
This reduced form gives the importance of the radiation reaction when one estimates the ratio between \(F_{\text{RR}}\) and Lorentz force \(F_{L}\): \[\mathcal{R}\equiv|F_{\text{RR}}|/|F_{\text{L}}|\sim\frac{2}{3}\alpha_{f} \gamma_{e}\chi_{e}\simeq 2\times 10^{-8}a_{0}\gamma_{e}^{2}\text{ (for wavelength = 1 $\mu$m)}, \tag{26}\] apparently, once \(\gamma_{e}^{2}a_{0}\gtrsim 10^{6}\), the radiation reaction force should be considered. #### ii.2.2 Modified Landau-Lifshitz equation The LL equation is only applicable when the radiation reaction force is much weaker than the Lorentz force, or, the radiation per laser period is much smaller than \(m_{e}c\)[245]. Once \(\chi_{e}\) is larger than \(10^{-2}\) and above, the quantum nature of the radiation dominates the process. On one hand, the radiation spectra will be suppressed and deviate from the radiation force in LL equation; on the other hand, the radiation will be stochastic and discontinuous. However, when the stochasticity is not relevance for the detection and one only cares about the average effect (integrated spectra), a correction to the radiation force can be made, i.e., quantum correction [46; 47; 48; 49] \[\mathbf{F}_{\text{RR,quantum}}=q(\chi)\mathbf{F}_{\text{RR,classical}}, \tag{27}\] where \[q(\chi) = \frac{I_{\rm QED}}{I_{\rm C}}, \tag{28}\] \[I_{\rm QED} = mc^{2}\int c(k\cdot k^{\prime})\frac{dW_{fi}}{d\eta dr_{0}}dr_{0},\] (29) \[I_{\rm C} = \frac{2e^{4}E^{\prime 2}}{3m^{2}c^{3}}, \tag{30}\] with \(W_{fi}\) being the radiation probability [50], \(\eta=k_{0}z-\omega_{0}t\), \(r_{0}=\frac{2(k\cdot k^{\prime})}{3\chi(k\cdot p_{i})}\), and \(E^{\prime}\) the electric fields in the instantaneous frame of the electron. \(p_{i}\) is the four-momentum of the electron before radiation. \(k\) and \(k^{\prime}\) are the four-wavevector of local EM field, and the radiated photon, respectively. See \(q(\chi)\) in Figure. 4. Here, the ratio between the QED radiation power and the classical one, i.e., the re-scaling factor \(q(\chi)\), is the same with the factor in Ref. [51]: \[q(\chi_{e})\approx\frac{1}{\left[1+4.8(1+\chi_{e})\ln(1+1.7\chi_{e})+2.44\chi _{e}^{2}\right]^{2/3}}, \tag{31}\] or \[q(\chi_{e})\approx\frac{1}{(1+8.93\chi_{e}+2.41\chi_{e}^{2})^{2/3}}. \tag{32}\] In the ultrarelativistic limit, an alternative formula can be employed as [52, 53] \[{\bf F}_{\rm RR,quantum}=q(\chi)P_{\rm cl}\chi_{e}^{2}\beta/\beta^{2}c. \tag{33}\] Apparently, once \(\chi\gtrsim 10^{-2}\), the quantum corrected version should be used. #### iii.1.3 **Algorithms of the Radiative Pusher** Here, we plug the radiative correction (either classical or quantum corrected version) into the standard Boris pusher as follows [44]: \[\frac{\mathbf{p}^{n+1/2}-\mathbf{p}^{n-1/2}}{\Delta t}=\mathbf{F}^{n}=\mathbf{F} ^{n}_{L}+\mathbf{F}^{n}_{R}. \tag{34}\] First we use the Boris step \[\frac{\mathbf{p}^{n+1/2}_{L}-\mathbf{p}^{n-1/2}_{L}}{\Delta t}=\mathbf{F}^{n}_ {L}, \tag{35}\] and then use the radiative correction push \[\frac{\mathbf{p}^{n+1/2}_{R}-\mathbf{p}^{n-1/2}_{R}}{\Delta t}=\mathbf{F}^{n}_ {R}, \tag{36}\] where \(\mathbf{p}^{n-1/2}_{L}=\mathbf{p}^{n-1/2}_{R}=\mathbf{p}^{n-1/2}\), and the final momentum is given by \[\mathbf{p}^{n+1/2}=\mathbf{p}^{n+1/2}_{L}+\mathbf{p}^{n+1/2}_{R}-\mathbf{p}^{ n-1/2}=\mathbf{p}^{n+1/2}_{L}+\mathbf{F}^{n}_{R}\Delta t. \tag{37}\] With this algorithm, the Boris pusher is attained. See a comparison between different solver calculated dynamics in Figure. 5. 
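A minimal sketch of the split push of Eqs. (34)-(37) is given below: the momentum is first advanced by the pure Boris step, and the radiation-reaction kick is then added. Here the classical force is assumed to reduce to its dominant term, Eq. (25), rescaled by the quantum factor \(q(\chi_{e})\) of Eq. (32), and the kick is applied against the velocity direction (radiative damping); names and the parameters in main are illustrative only.

```
// Quantum-corrected radiation-reaction kick added after the Boris step, Eqs. (34)-(37):
// p^{n+1/2} = p_L^{n+1/2} + F_RR * dt, with |F_RR| ~ (2/3) alpha_f q(chi_e) chi_e^2 / xi_L
// and the kick directed opposite to beta (radiative damping).
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double,3>;
constexpr double kAlphaF = 1.0/137.035999;

double q_quantum(double chi) {                            // fit of Eq. (32)
    return 1.0 / std::pow(1.0 + 8.93*chi + 2.41*chi*chi, 2.0/3.0);
}

// pL: momentum already advanced by the Lorentz (Boris) step; chi: quantum parameter this step.
Vec3 radiative_kick(Vec3 pL, double chi, double xi_L, double dt) {
    const double gamma = std::sqrt(1.0 + pL[0]*pL[0] + pL[1]*pL[1] + pL[2]*pL[2]);
    const double frr   = (2.0/3.0) * kAlphaF * q_quantum(chi) * chi*chi / xi_L;  // |F_RR|
    for (int i = 0; i < 3; ++i) pL[i] -= frr * (pL[i]/gamma) * dt;               // opposite to beta
    return pL;
}

int main() {
    const double xi_L = 1.55 / 0.511e6;                   // 1 um reference laser
    Vec3 p {0.0, 0.0, -2000.0};                           // ultrarelativistic electron
    p = radiative_kick(p, 1.0, xi_L, 0.05);               // chi_e = 1 during this step
    std::printf("|p| after kick = %.1f\n", std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]));
}
```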
For the Lorentz equation without radiation, the particle momentum and energy in a plane-wave field are analytically given by [54]

\[\mathbf{p}(\tau) = \mathbf{p}_{0}-\mathbf{A}(\tau)+\hat{k}\frac{\mathbf{A}^{2}(\tau)-2\mathbf{p}_{0}\cdot\mathbf{A}(\tau)}{2(\gamma_{0}-\mathbf{p}_{0}\cdot\hat{k})}, \tag{38}\]
\[\gamma(\tau) = \gamma_{0}+\frac{\mathbf{A}^{2}(\tau)-2\mathbf{p}_{0}\cdot\mathbf{A}(\tau)}{2(\gamma_{0}-\mathbf{p}_{0}\cdot\hat{k})}, \tag{39}\]

where \(\mathbf{A}(\tau)=-\int_{\tau_{0}}^{\tau}\mathbf{E}(\tau^{\prime})\,d\tau^{\prime}\) is the vector potential of the external field, \(\tau\) is the proper time, \(\hat{k}\) is the normalized wavevector of the field, and \(\gamma,\mathbf{p}\) and \(\gamma_{0},\mathbf{p}_{0}\) are the instantaneous and initial (subscript 0) Lorentz factor and momentum of the particle, respectively. For a plane wave with a temporal profile, the net momentum and energy gains vanish since \(\mathbf{A}(\infty)=\mathbf{A}(-\infty)=0\). The plane-wave solution with radiation reaction can be found in Ref. [55]. However, no explicit solution exists when the quantum correction term is included, as shown in Fig. 5.

### Spin dynamics

Beyond the particle kinetics, the electron/positron spin must be considered when the plasma electrons are polarized or when an ultrastrong EM field interacts with electrons/positrons and \(\gamma\) photons. The significance of this aspect has been highlighted in recent literature, particularly in the context of relativistic charged particles in EM waves and laser-matter interactions [56; 57]. This issue can be addressed either by employing a computational Dirac solver [58] or by utilizing the Foldy-Wouthuysen transformation and the quantum operator formalism, such as the reduction of the Heisenberg equation to a classical precession equation [59; 60]. However, these approaches are not directly applicable to many-particle systems. Here and throughout this paper, the spin is defined as a unit vector \(\mathbf{S}\). In the absence of radiation, the electron/positron spin precesses around the magnetic field in the rest frame and can be described by the classical Thomas-Bargmann-Michel-Telegdi (T-BMT) equation. This equation is equivalent to the quantum-mechanical Heisenberg equation of motion for the spin operator or the polarization vector of the system [7; 59; 60]. When radiation becomes significant, the electron/positron spin also undergoes flips with respect to a quantization axis, typically aligned with the magnetic field in the rest frame. Neglecting stochasticity, this effect can be accounted for by incorporating a radiative correction into the T-BMT equation, analogous to the quantum correction of the Landau-Lifshitz equation.

#### iii.2.1 T-BMT equation

The non-radiative spin dynamics of an electron is given by

\[\begin{split}\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}t}\right)_{T}=\mathbf{S}\times\mathbf{\Omega}\equiv\mathbf{S}\times\left[-\left(\frac{g}{2}-1\right)\frac{\gamma_{e}}{\gamma_{e}+1}\left(\mathbf{\beta}\cdot\mathbf{B}\right)\mathbf{\beta}\right.\\ \left.+\left(\frac{g}{2}-1+\frac{1}{\gamma_{e}}\right)\mathbf{B}-\left(\frac{g}{2}-\frac{\gamma_{e}}{\gamma_{e}+1}\right)\mathbf{\beta}\times\mathbf{E}\right],\end{split} \tag{40}\]

where \(\mathbf{E}\) and \(\mathbf{B}\) are the normalized electric and magnetic fields, and \(g\) is the electron Lande factor.
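As an illustration, the precession vector \(\mathbf{\Omega}\) entering Eq. (40) can be assembled directly from the normalized fields and the particle momentum. The sketch below uses illustrative names and the default electron \(g\)-factor; it is not the SLIPs implementation.

```
// Assembling the precession vector Omega of Eq. (40) from the normalized fields;
// the spin then evolves as dS/dt = S x Omega.
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double,3>;

Vec3 omega_tbmt(const Vec3& p, const Vec3& E, const Vec3& B, double g = 2.00232) {
    const double gamma = std::sqrt(1.0 + p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
    const Vec3 beta { p[0]/gamma, p[1]/gamma, p[2]/gamma };
    const double bdotB = beta[0]*B[0] + beta[1]*B[1] + beta[2]*B[2];
    const double a = g/2.0 - 1.0;                         // anomalous part of the moment
    const Vec3 bxE { beta[1]*E[2]-beta[2]*E[1],
                     beta[2]*E[0]-beta[0]*E[2],
                     beta[0]*E[1]-beta[1]*E[0] };
    Vec3 Omega{};
    for (int i = 0; i < 3; ++i)
        Omega[i] = -a*gamma/(gamma + 1.0)*bdotB*beta[i]
                 + (a + 1.0/gamma)*B[i]
                 - (g/2.0 - gamma/(gamma + 1.0))*bxE[i];
    return Omega;
}

int main() {
    const Vec3 O = omega_tbmt({0.0, 0.0, 100.0}, {0.0, 0.0, 0.0}, {0.0, 1.0, 0.0});
    std::printf("Omega = (%.4e, %.4e, %.4e)\n", O[0], O[1], O[2]);
}
```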
Since this equation is a pure rotation around the precession frequency of \(\mathbf{\Omega}\), the Boris rotation is much more preferable than other solver for ordinary differential equations (for instance, the Runge-Kutta, etc.). Here, \(\mathbf{\Omega}\) plays the role of \(\mathbf{B}/\gamma\) in the Eqs. (3) and (5-10). For other particle species, both the charge, mass and Lande factor for that species should be employed. #### iii.2.2 Radiative T-BMT equation When the radiation damping is no longer negligible, the radiation can also affect the spin dynamics. In the weakly radiation regime, this radiation induced modification of the spin dynamics can be handled with a similar way as in the Landau-Lifshitz equation. Here, the modified version of T-BMT equation or the radiative T-BMT equation is thus given by \[\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}t}=\left(\frac{\mathrm{d}\mathbf{S}}{ \mathrm{d}t}\right)_{T}+\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}t}\right) _{R}, \tag{41}\] with the first (labeled with "T") and second (labeled with "R") terms denote the non-radiative precession in Eq. (40), and radiative correction, respectively. The radiative term is given by \[\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}t}\right)_{R}=-P\left[\psi_{1}( \chi)\mathbf{S}+\psi_{2}(\chi)(\mathbf{S}\cdot\mathbf{\beta})\mathbf{\beta}+\psi_{3} (\chi)\mathbf{\hat{n}}_{B}\right], \tag{42}\] where, \(P=\frac{\alpha_{f}}{\sqrt{3}\pi\gamma_{e}\xi_{L}}\), \(\psi_{1}(\chi_{e})=\int_{0}^{\infty}u^{\prime\prime}\mathrm{d}uK_{\frac{3}{2} }(u^{\prime})\), \(\psi_{2}(\chi_{e})=\int_{0}^{\infty}u^{\prime\prime}\mathrm{d}u\int_{u^{ \prime}}^{\infty}\mathrm{d}x\mathrm{K}_{\frac{1}{2}}(x)\)-\(\psi_{1}(\chi_{e})\), \(\psi_{3}(\chi)=\int_{0}^{\infty}u^{\prime\prime}\mathrm{d}uK_{\frac{1}{2}}(u^ {\prime})\), \(u^{\prime}=2u/3\chi_{e}\), \(u^{\prime\prime}=u^{2}/(1+u)^{3}\), and \(\mathrm{K}_{n}\) is the \(n\)th-order modified Bessel function of the second kind, \(\mathbf{\hat{n}}_{B}=\mathbf{\beta}\times\mathbf{\hat{a}}\), \(\mathbf{\beta}\) and \(\mathbf{\hat{a}}\) denote the normalized velocity and acceleration vector [61; 62]. #### iii.2.3 Algorithms of simulating the spin precession The simulation algorithms of spin precession are quite similar to the cases of EOM, i.e., Lorentz equation and radiative EOM, i.e., LL/MLL equations. Therefore, the T-BMT equation is simulated via the Boris rotation without the pre- and post- acceleration term, but with only the rotation term \(\mathbf{\Omega}\). In SLIPs, a standard Boris algorithm is used \[\mathbf{S}^{\prime} = \mathbf{S}^{n-1/2}+\mathbf{S}^{n-1/2}\times\mathbf{t}, \tag{43}\] \[\mathbf{S}^{n+1/2} = \mathbf{S}^{n-1/2}+\mathbf{S}^{\prime}\times\mathbf{o},\] (44) \[\mathbf{t} = \frac{q\Delta t}{2}\mathbf{\Omega}^{n},\] (45) \[\mathbf{o} = \frac{2\mathbf{t}}{1+t^{2}}. \tag{46}\] For the radiative T-BMT equation, there will be an extra term \(\left(\frac{d\mathbf{S}}{dt}\right)_{R}\) which is equivalent to the electric field term in the Lorentz equation. Therefore, the straightforward algorithm is given by \[\mathbf{S}_{T}^{n-1/2}=\mathbf{S}^{n-1/2}+\frac{\Delta t}{2}\left(\frac{d \mathbf{S}}{dt}\right)_{R}, \tag{47}\] Boris T-BMT Eqs. (43-46), \[\mathbf{S}^{n+1/2}=\mathbf{S}_{T}^{n+1/2}+\frac{\Delta t}{2}\left(\frac{d \mathbf{S}}{dt}\right)_{R}. \tag{48}\] Figure. 
6 shows the comparison between the T-BMT and radiative T-BMT equations for different cases: Lorentz equation + BMT equation ("A"), Lorentz equation + M-BMT equation ("B"), LL equation + M-BMT equation ("C"), and MLL equation + M-BMT equation ("D"). The evolution of each spin component depends on different terms. In our setup, the magnetic field is along the \(z\) direction, so the spin precession occurs in the \(x\)-\(y\) plane, affecting \(S_{x}\) and \(S_{y}\). The radiation reaction mainly affects \(S_{z}\). In the case without radiation reaction, case "A", \(S_{x}\) and \(S_{y}\) oscillate due to the precession and are conserved in Fig. 6(d). In the case with only the spin radiation reaction, case "B", \(S_{x}\) is strongly damped by the term \(\left(\frac{d\mathbf{S}}{dt}\right)_{R}\). \(S_{y}\) and \(S_{z}\) oscillate due to the combined effects of precession and radiation reaction, as shown in Figs. 6(a) and (b). When both spin and momentum radiation reactions are included, case "C" (LL equation), the particle's momentum and energy decrease, i.e., \(\gamma_{e}\) decreases, which lowers the spin radiation reaction term \(\left(\frac{d\mathbf{S}}{dt}\right)_{R}(\chi_{e})\) and the damping of \(S_{x}\) and \(S_{z}\) (see Fig. 6(c) for the comparison of "B", "D" and "C" in terms of \(S_{z}\) amplitude). Simultaneously, the precession term \(\left(\frac{d\mathbf{S}}{dt}\right)_{T}\propto B/\gamma_{e}\) grows with decreasing \(\gamma_{e}\), which amplifies the oscillation of \(S_{y}\), as shown by the contrast of "B" (Lorentz), "D" (MLL) and "C" (LL) in Fig. 6(b). ### Nonlinear Compton scattering When the radiation is strong (\(\chi_{e}\gtrsim 0.1\)), the stochastic nature of the radiation can no longer be neglected in the laser-beam/plasma interactions. And the photon dynamics should be taken into account. In this regime, the full stochastic quantum process is required to describe the strong radiation, i.e., the nonlinear Compton scattering (NCS) [63; 64; 2]. Therefore, the radiation reaction and photon emission process will be calculated via the MC simulation based on the NCS probabilities. Besides, the spin of electron/positron and polarization of the NCS photons will be also included in the MC simulations. #### iv.3.1 Spin-resolved/summed nonlinear Compton scattering When the laser intensity \(a_{0}\) and the electron energy \(\gamma_{e}\) permits the local-constant-cross field approximate (LCFA), i.e., \(a_{0}\gg 1,\chi_{e}\gtrsim 1\), the polarization- and spin-resolved emission rate for the NCS is given by [65; 12; 15] \[\frac{\mathrm{d}^{2}W_{fi}}{\mathrm{d}u\mathrm{d}t}=\frac{W_{R}}{2}\left(F_{0} +\xi_{1}F_{1}+\xi_{2}F_{2}+\xi_{3}F_{3}\right), \tag{49}\] where, the photon polarization is represented by the Stokes parameters (\(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\)), defined with respect to the axes \(\hat{\mathbf{P}}_{1}=\hat{\mathbf{a}}-\hat{\mathbf{n}}(\hat{\mathbf{n}}\cdot \hat{\mathbf{a}})\) and \(\hat{\mathbf{P}}_{2}=\hat{\mathbf{n}}\times\hat{\mathbf{P}}_{1}\)[66], with the photon emission direction \(\hat{\mathbf{n}}=\mathbf{p}_{e}/|\mathbf{p}_{e}|\) along the momentum \(\mathbf{p}_{e}\) of the ultrarelativistic electron. The variables introduced in Eq. 
(49) read: \[F_{0} = -(2+u)^{2}\left[\mathrm{IntK}_{\frac{1}{3}}(u^{\prime})-2\mathrm{ K}_{\frac{3}{3}}(u^{\prime})\right](1+S_{if})+u^{2}(1-S_{if}) \tag{50}\] \[\left[\mathrm{IntK}_{\frac{1}{3}}(u^{\prime})+2\mathrm{K}_{\frac{ 3}{3}}(u^{\prime})\right]+2u^{2}S_{if}\mathrm{IntK}_{\frac{1}{3}}(u^{\prime})- (4u+2u^{2})\] \[(\mathbf{S}_{f}+\mathbf{S}_{i})\cdot\left[\hat{\mathbf{n}}\times \hat{\mathbf{a}}\right]\mathrm{K}_{\frac{1}{3}}(u^{\prime})-2u^{2}(\mathbf{S} _{f}-\mathbf{S}_{i})\cdot\left[\hat{\mathbf{n}}\times\hat{\mathbf{a}}\right] \mathrm{K}_{\frac{1}{3}}(u^{\prime})\] \[-4u^{2}\left[\mathrm{IntK}_{\frac{1}{3}}(u^{\prime})-\mathrm{K}_{ \frac{3}{3}}(u^{\prime})\right](\mathbf{S}_{i}\cdot\hat{\mathbf{n}})(\mathbf{ S}_{f}\cdot\hat{\mathbf{n}}),\] \[F_{1} = -2u^{2}\mathrm{IntK}_{\frac{1}{3}}(u^{\prime})\left\{(\mathbf{S} _{i}\cdot\hat{\mathbf{a}})\mathbf{S}_{f}\cdot\left[\hat{\mathbf{n}}\times \hat{\mathbf{a}}\right]+(\mathbf{S}_{f}\cdot\hat{\mathbf{a}})\mathbf{S}_{i} \cdot\left[\hat{\mathbf{n}}\times\hat{\mathbf{a}}\right]\right\}+ \tag{51}\] \[4u\left[(\mathbf{S}_{i}\cdot\hat{\mathbf{a}})(1+u)+(\mathbf{S}_ {f}\cdot\hat{\mathbf{a}})\right]\mathrm{K}_{\frac{1}{3}}(u^{\prime})+\] \[2u(2+u)\hat{\mathbf{n}}\cdot[\mathbf{S}_{f}\times\mathbf{S}_{i }]\mathrm{K}_{\frac{2}{3}}(u^{\prime}),\] \[F_{2} = -\left\{2u^{2}\left\{({\bf S}_{i}\cdot\hat{\bf n}){\bf S}_{f}\cdot [\hat{\bf n}\times\hat{\bf a}]+({\bf S}_{f}\cdot\hat{\bf n}){\bf S}_{i}\cdot[ \hat{\bf n}\times\hat{\bf a}]\right\}+2u(2+u)\right. \tag{52}\] \[\left.\hat{\bf a}\cdot\left[{\bf S}_{f}\times{\bf S}_{i}\right] \right\}{\rm K}_{\frac{1}{3}}(u^{\prime})-4u\left[({\bf S}_{i}\cdot\hat{\bf n })+({\bf S}_{f}\cdot\hat{\bf n})(1+u)\right]\] \[{\rm IntK}_{\frac{1}{3}}(u^{\prime})+4u(2+u)\left[({\bf S}_{i} \cdot\hat{\bf n})+({\bf S}_{f}\cdot\hat{\bf n})\right]{\rm K}_{\frac{2}{3}}(u^ {\prime}),\] \[F_{3} = 4\left[1+u+(1+u+\frac{u^{2}}{2})S_{if}-\frac{u^{2}}{2}({\bf S}_ {i}\cdot\hat{\bf n})({\bf S}_{f}\cdot\hat{\bf n})\right]{\rm K}_{\frac{2}{3}}(u ^{\prime}) \tag{53}\] \[+2u^{2}\left\{{\bf S}_{i}\cdot[\hat{\bf n}\times\hat{\bf a}]\,{ \bf S}_{f}\cdot[\hat{\bf n}\times\hat{\bf a}]-({\bf S}_{i}\cdot\hat{\bf a})({ \bf S}_{f}\cdot\hat{\bf a})\right\}{\rm IntK}_{\frac{1}{3}}(u^{\prime})\] \[-4u\left[(1+u){\bf S}_{i}\,[\hat{\bf n}\times\hat{\bf a}]+{\bf S }_{f}\,[\hat{\bf n}\times\hat{\bf a}]\right]{\rm K}_{\frac{1}{3}}(u^{\prime}),\] where \(W_{R}=\alpha_{f}/\left[8\sqrt{3}\pi\xi_{L}(1+u)^{3}\right]\), \(u^{\prime}=2u/3\chi\), \(u=\omega_{\gamma}/\left(\varepsilon_{i}-\omega_{\gamma}\right)\), \({\rm IntK}_{\frac{1}{3}}(u^{\prime})\equiv\int_{u^{\prime}}^{\infty}{\rm d}z{ \rm K}_{\frac{1}{3}}(z)\), \(\omega_{\gamma}\) the emitted photon energy, \(\varepsilon_{i}\) the electron energy before radiation, \(\hat{\bf a}={\bf a}/|{\bf a}|\) the direction of the electron acceleration \({\bf a}\), \({\bf S}_{i}\) and \({\bf S}_{f}\) denote the electron spin vectors before and after radiation, respectively, \(|{\bf S}_{i,f}|=1\), and \(S_{if}\equiv{\bf S}_{i}\cdot{\bf S}_{f}\). 
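The spectral factors \(\mathrm{K}_{1/3}\), \(\mathrm{K}_{2/3}\) and \(\mathrm{IntK}_{1/3}\) appearing in \(F_{0}\)-\(F_{3}\) (and in the summed rates below) are conveniently pre-tabulated. A minimal numerical sketch using the C++17 special math functions (available in libstdc++) is shown below; the integration cutoff and grid are numerical choices, not values taken from this paper.

```
// Numerical evaluation of the factors K_{1/3}(x), K_{2/3}(x) and
// IntK_{1/3}(x) = \int_x^\infty K_{1/3}(z) dz used in the NCS/NBW rates.
// In practice these would be tabulated once on a logarithmic grid.
#include <cmath>
#include <cstdio>

double K13(double x) { return std::cyl_bessel_k(1.0/3.0, x); }
double K23(double x) { return std::cyl_bessel_k(2.0/3.0, x); }

double IntK13(double x, int n = 2000, double cutoff = 60.0) {
    // simple trapezoidal rule; K_{1/3} decays exponentially, so a finite upper limit suffices
    const double upper = x + cutoff, h = (upper - x)/n;
    double sum = 0.5*(K13(x) + K13(upper));
    for (int i = 1; i < n; ++i) sum += K13(x + i*h);
    return sum*h;
}

int main() {
    for (double x : {0.1, 1.0, 5.0})
        std::printf("x=%.1f  K13=%.4e  K23=%.4e  IntK13=%.4e\n", x, K13(x), K23(x), IntK13(x));
}
```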
Summing over the photon polarization, the electron spin-resolved emission probability can be written as [12; 15; 67]: \[\frac{{\rm d}^{2}W_{fi}}{{\rm d}u{\rm d}t}=W_{R}\left\{-(2+u)^{2} \left[{\rm IntK}_{\frac{1}{3}}(u^{\prime})-2{\rm K}_{\frac{2}{3}}(u^{\prime}) \right](1+S_{if})+\right.\] \[\left.u^{2}\left[{\rm IntK}_{\frac{1}{3}}(u^{\prime})+2{\rm K}_{ \frac{2}{3}}(u^{\prime})\right](1-{\bf S}_{if})+2u^{2}S_{if}{\rm IntK}_{\frac{ 1}{3}}(u^{\prime})-\right.\] \[\left.(4u+2u^{2})({\bf S}_{f}+{\bf S}_{i})\,[{\bf n}\times\hat{ \bf a}]\,{\rm K}_{\frac{1}{3}}(u^{\prime})-2u^{2}({\bf S}_{f}-{\bf S}_{i})\,[{ \bf n}\times\hat{\bf a}]\right.\] \[\left.{\rm K}_{\frac{1}{3}}(u^{\prime})-4u^{2}\left[{\rm IntK}_{ \frac{1}{3}}(u^{\prime})-{\rm K}_{\frac{2}{3}}(u^{\prime})\right]({\bf S}_{i} \cdot{\bf n})({\bf S}_{f}\cdot{\bf n})\right\}. \tag{54}\] Summing over the final states \({\bf S}_{f}\), the initial spin-resolved radiation probability is obtained: \[\frac{{\rm d}^{2}\overline{W}_{fi}}{{\rm d}u{\rm d}t} = 8W_{R}\left\{-(1+u){\rm IntK}_{\frac{1}{3}}(u^{\prime})+(2+2u+u^ {2}){\rm K}_{\frac{2}{3}}(u^{\prime})\right. \tag{55}\] \[\left.-u{\bf S}_{i}\cdot[{\bf n}\times\hat{\bf a}]\,{\rm K}_{\frac{ 1}{3}}(u^{\prime})\right\}.\] And by averaging the electron initial spin, one obtains the widely used radiation probability for the unpolarized initial particles [5; 68; 46]. During the photon emission simulation, electron/positron spin transitions to either a parallel or antiparallel orientation with respect to the spin quantized axis (SQA), depending on the occurrence of emission. Upon photon emission, the SQA is choosen to obtains the maximum transition probabily, which is along the energy-resolved average polarization \[{\bf S}_{\rm f}^{\rm R}=\frac{{\bf g}}{(w+{\bf f}\cdot{\bf S}_{i})}. \tag{56}\] These are obtained by summing over the photon polarization and keeps the dependence on initial and final spin of electrons: \[\frac{\mathrm{d}^{2}W_{\mathrm{rad}}}{\mathrm{d}u\mathrm{d}t}\ =\ W_{\mathrm{r}}(w+ \mathbf{f}\cdot\mathbf{S}_{i}+\mathbf{g}\cdot\mathbf{S}_{f}), \tag{57}\] where \[w =\ -(1+u)\mathrm{K}_{\frac{1}{3}}(\rho^{\prime})+(2+2u+u^{2}) \mathrm{K}_{\frac{3}{3}}(\rho^{\prime}),\] \[\mathbf{f} =\ u\mathrm{Int}\mathrm{K}_{\frac{1}{3}}(\rho^{\prime})\mathbf{ \hat{v}}\times\mathbf{\hat{a}},\] \[\mathbf{g} =\ -(1+u)\left[\mathrm{K}_{\frac{1}{3}}(\rho^{\prime})-2\mathrm{K}_{ \frac{3}{3}}(\rho^{\prime})\right]\mathbf{S}_{i}-(1+u)u\] \[\times\mathrm{Int}\mathrm{K}_{\frac{1}{3}}(\rho^{\prime})\mathbf{ \hat{v}}\times\mathbf{\hat{a}}-u^{2}\left[\mathrm{K}_{\frac{1}{3}}(\rho^{ \prime})-\mathrm{K}_{\frac{3}{3}}(\rho^{\prime})\right](\mathbf{S}_{i}\cdot \mathbf{\hat{v}})\mathbf{\hat{v}}.\] Conversely, without emission, the SQA aligns with another SQA [12; 69]. In both cases, the final spin is determined by assessing the probability density for alignment, either parallel or antiparallel, with the SQA. We account for the stochastic spin flip during photon emission using four random numbers \(r_{1,2,3,4}\in[0,1)\). The procedure is as follows: First, at each simulation time step \(\Delta t\), a photon with energy \(\omega_{\gamma}=r_{1}\gamma_{e}\) is emitted if the spin-dependent radiation probability in Eq.(55), \(P\equiv\mathrm{d}^{2}\overline{W}_{fi}(\chi_{e},r_{1},\gamma_{e},\mathbf{S}_ {i})/\mathrm{d}u\mathrm{d}t\cdot\Delta t\), meets or exceeds \(r_{2}\), following the "von Neumann's rejection method". 
The final momenta of the electron and photon are given by \(\mathbf{p}_{f}=(1-r_{1})\mathbf{p}_{i}\) and \(\hbar\mathbf{k}=r_{1}\mathbf{p}_{i}\), respectively. Next, the electron spin flips either parallel (spin-up) or antiparallel (spin-down) to the SQA, with probabilities \(P_{\mathrm{flip}}\equiv W_{fi}^{\uparrow}/P\) and \(W_{fi}^{\downarrow}/P\), respectively, where \(W_{fi}^{\uparrow,\downarrow}\equiv\mathrm{d}^{2}W_{fi}^{\uparrow,\downarrow}/\mathrm{d}u\mathrm{d}t\cdot\Delta t\) from Eq. (57). In other words, the final spin \(\mathbf{S}_{f}\) flips parallel to the SQA if \(r_{3}<P_{\mathrm{flip}}\), and antiparallel otherwise; see the flow chart of the NCS in Figure. 7. If no photon is emitted, the average final spin is given by \(\overline{\mathbf{S}}_{f}=\frac{\mathbf{S}_{f}(1-W\Delta t)-\mathbf{f}\Delta t}{1-(W+\mathbf{f}\cdot\mathbf{S}_{i})\Delta t}\), where \(W\equiv 16W_{R}(-(1+u)\mathrm{Int}\mathrm{K}_{1/3}(u^{\prime})+(2+2u+u^{2})\mathrm{K}_{2/3}(u^{\prime}))\) and \(\mathbf{f}\equiv-16W_{R}(\mathbf{n}\times\mathbf{\hat{a}}\,\mathrm{K}_{1/3}(u^{\prime}))\) [12; 69]. The SQA is then given by \(\overline{\mathbf{S}}_{f}/|\overline{\mathbf{S}}_{f}|\), and the probability for the aligned case is \(|\overline{\mathbf{S}}_{f}|\), and \(1-|\overline{\mathbf{S}}_{f}|\) for the antiparallel case. Finally, the polarization of the emitted photon is determined by considering that the average polarization is in a mixed state. The basis for the emitted photon is chosen as two orthogonal pure states with the Stokes parameters \(\mathbf{\hat{\xi}}^{\pm}\equiv\pm(\overline{\xi}_{1},\,\overline{\xi}_{2},\,\overline{\xi}_{3})/\overline{\xi}_{0}\), where \(\overline{\xi}_{0}\equiv\sqrt{(\overline{\xi}_{1})^{2}+(\overline{\xi}_{2})^{2}+(\overline{\xi}_{3})^{2}}\). The probabilities for photon emission in these states, \(W_{fi}^{\pm}\), are given by Eq. (49). A stochastic procedure is defined using the fourth random number \(r_{4}\): if \(W_{fi}^{+}/\overline{W}_{fi}\geq r_{4}\), the polarization state \(\mathbf{\hat{\xi}}^{+}\) is chosen; otherwise, the polarization state is assigned as \(\mathbf{\hat{\xi}}^{-}\). Here, \(\overline{W}_{fi}\equiv W_{R}F_{0}\) and \(W_{fi}^{\pm}\equiv W_{R}(F_{0}+\sum_{j=1}^{3}\xi_{j}^{\pm}F_{j})\). Between photon emissions, the electron dynamics in the external laser field is described by the Lorentz equation, \(d\mathbf{p}/dt=-e(\mathbf{E}+\mathbf{\beta}\times\mathbf{B})\), and is simulated using the Boris rotation method, as shown in Eqs. (5-10). Due to the small emission angle of an ultrarelativistic electron, the photon is assumed to be emitted along the parental electron velocity, i.e., \(\mathbf{p}_{f}\approx(1-\omega_{\gamma}/|\mathbf{p}_{i}|)\mathbf{p}_{i}\). Besides, in this simulation, interference effects between emissions in adjacent coherence lengths (\(l_{f}\simeq\lambda_{L}/a_{0}\)) are negligible when the employed laser intensity is ultrastrong, i.e., \(a_{0}\gg 1\); the photon emissions happening in each coherence length are therefore independent of each other. Examples of the electron dynamics and spin can be seen in Figure. 8; the average values match the MLL equation for the dynamics and the MLL + M-BMT equations for the spin. The beam evolution is shown in Figure. 9, and the energy spectra of electrons and photons, as well as the photon polarization, can be observed in Figure. 10.
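The emission procedure above maps onto a compact Monte-Carlo kernel. In the sketch below, the differential rate \(\mathrm{d}^{2}W/\mathrm{d}u\mathrm{d}t\) is passed in as a callable, so that the spin-averaged Eq. (55) or the fully resolved Eq. (49) could be plugged in; the type and function names are illustrative, and the rate used in main is a placeholder rather than the physical expression.

```
// Monte-Carlo skeleton of one NCS emission attempt (von Neumann rejection), following
// the procedure described above. The differential probability d^2W/du dt is a callable;
// plugging in Eq. (55) or Eq. (49) recovers the spin-/polarization-resolved versions.
#include <cstdio>
#include <functional>
#include <random>

struct Emission { bool emitted = false; double photon_energy = 0.0; };

Emission try_emit(double gamma_e, double chi_e, double dt,
                  const std::function<double(double /*r1*/, double /*chi*/)>& d2W_dudt,
                  std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double r1 = uni(rng);                      // candidate photon energy fraction
    const double r2 = uni(rng);                      // rejection variable
    const double P  = d2W_dudt(r1, chi_e) * dt;      // emission probability in this step
    Emission out;
    if (P >= r2) {                                   // accept: photon carries r1 * gamma_e
        out.emitted = true;
        out.photon_energy = r1 * gamma_e;            // parent keeps (1 - r1) of its momentum
        // the spin flip (parallel/antiparallel to the SQA) and the photon Stokes vector
        // would be sampled here with two further random numbers r3, r4 as described above
    }
    return out;
}

int main() {
    std::mt19937 rng(42);
    auto dummy_rate = [](double r1, double chi) { return 0.1*chi*(1.0 - r1); };  // placeholder only
    int n_emitted = 0;
    for (int i = 0; i < 10000; ++i)
        n_emitted += try_emit(2000.0, 1.0, 0.05, dummy_rate, rng).emitted;
    std::printf("emitted in %d of 10000 attempts\n", n_emitted);
}
```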
Figure 7: Flowchart of the spin- and polarization-resolved NCS.

Figure 8: Dynamics of 1000 electrons via the stochastic NCS with the simulation parameters as those in Figure. 6. Blue lines are 10 sampled electrons, and black ones are the average value over 1000 sample particles.

#### iv.2.2 Definition and Transformation of Stokes Parameters

In the context of NCS and the subsequent nonlinear Breit-Wheeler (NBW) pair production, the polarization state of a photon can be characterized by the polarization unit vector \(\mathbf{\hat{P}}\), which plays the role of the spin component of the photon wave function. An arbitrary polarization \(\mathbf{\hat{P}}\) can be represented as a superposition of two orthogonal basis vectors [70]:

\[\mathbf{\hat{P}}=\cos(\theta_{\alpha})\mathbf{\hat{P}}_{1}+\sin(\theta_{\alpha})e^{i\theta_{\beta}}\mathbf{\hat{P}}_{2}, \tag{58}\]

where \(\theta_{\alpha}\) denotes the angle between \(\mathbf{\hat{P}}\) and \(\mathbf{\hat{P}}_{1}\), while \(\theta_{\beta}\) represents the relative phase. In quantum mechanics, the photon polarization state corresponding to \(\mathbf{\hat{P}}\) can be described by the density matrix

\[\rho=\frac{1}{2}\left(1+\mathbf{\xi}\cdot\mathbf{\sigma}\right)=\frac{1}{2}\begin{pmatrix}1+\xi_{3}&\xi_{1}-i\xi_{2}\\ \xi_{1}+i\xi_{2}&1-\xi_{3}\end{pmatrix}, \tag{59}\]

where \(\mathbf{\sigma}\) represents the Pauli matrices and \(\mathbf{\xi}=(\xi_{1},\xi_{2},\xi_{3})\) denotes the Stokes parameters, with \(\xi_{1}=\sin(2\theta_{\alpha})\cos(\theta_{\beta})\), \(\xi_{2}=\sin(2\theta_{\alpha})\sin(\theta_{\beta})\), and \(\xi_{3}=\cos(2\theta_{\alpha})\). The calculation of the pair-creation probability requires the transformation of the Stokes parameters from the initial frame of the photon (\(\mathbf{\hat{P}}_{1}\), \(\mathbf{\hat{P}}_{2}\), \(\mathbf{\hat{n}}\)) to the frame of pair production (\(\mathbf{\hat{P}}_{1}^{\prime}\), \(\mathbf{\hat{P}}_{2}^{\prime}\), \(\mathbf{\hat{n}}\)). The vector \(\mathbf{\hat{P}}_{1}^{\prime}\) is given by \([\mathbf{E}-\mathbf{\hat{n}}(\mathbf{\hat{n}}\cdot\mathbf{E})+\mathbf{\hat{n}}\times\mathbf{B}]/|\mathbf{E}-\mathbf{\hat{n}}(\mathbf{\hat{n}}\cdot\mathbf{E})+\mathbf{\hat{n}}\times\mathbf{B}|\), and the vector \(\mathbf{\hat{P}}_{2}^{\prime}\) is obtained by taking the cross product of \(\mathbf{\hat{n}}\) and \(\mathbf{\hat{P}}_{1}^{\prime}\). Here, \(\mathbf{\hat{n}}\) represents the direction of propagation of the photon, and \(\mathbf{E}\) and \(\mathbf{B}\) denote the electric and magnetic fields, respectively. The two sets of polarization vectors are connected via a rotation by the angle \(\psi\):

\[\mathbf{\hat{P}}_{1}^{{}^{\prime}} = \mathbf{\hat{P}}_{1}{\rm cos}(\psi)+\mathbf{\hat{P}}_{2}{\rm sin}(\psi), \tag{60}\] \[\mathbf{\hat{P}}_{2}^{{}^{\prime}} = -\mathbf{\hat{P}}_{1}{\rm sin}(\psi)+\mathbf{\hat{P}}_{2}{\rm cos}(\psi).
\tag{61}\] Thus, the Stokes parameters with respect to the vectors \(\mathbf{\hat{P}}_{1}^{{}^{\prime}}\), \(\mathbf{\hat{P}}_{2}^{{}^{\prime}}\) and \(\mathbf{\hat{n}}\) are the follows: \[\xi_{1}^{{}^{\prime}} = \xi_{1}{\rm cos}(2\psi)-\xi_{3}{\rm sin}(2\psi),\] \[\xi_{2}^{{}^{\prime}} = \xi_{2},\] \[\xi_{3}^{{}^{\prime}} = \xi_{1}{\rm sin}(2\psi)+\xi_{3}{\rm cos}(2\psi), \tag{62}\] which is equivalent to a rotation[71, 72]: \[\left(\begin{array}{c}\xi_{1}^{\prime}\\ \xi_{2}^{\prime}\\ \xi_{3}^{\prime}\end{array}\right)=\left(\begin{array}{ccc}\cos 2\psi&0&- \sin 2\psi\\ 0&1&0\\ \sin 2\psi&0&\cos 2\psi\end{array}\right)\left(\begin{array}{c}\xi_{1}\\ \xi_{2}\\ \xi_{3}\end{array}\right)\equiv{\rm ROT}(\psi)\cdot\mathbf{\xi}. \tag{63}\] Figure 9: Dynamics of an electron beam (particle number \(N_{e}=10^{4}\)) with colors denote the number density in arbitrary units and logarithm scale (a. u.), other parameters are the same as those in Figure. 6. ### Nonlinear Breit-Wheeler pair production When the energy of a photon exceeds the rest mass of an electron-positron pair, i.e., \(\omega_{\gamma}\geq 2m_{e}c^{2}\), and is subjected to an ultraintense field of \(a_{0}\gg 1\), the related nonlinear quantum parameter \(\chi_{\gamma}\) can reach unity. Here, \(\chi_{\gamma}\equiv\frac{c\hbar}{m^{2}c^{3}}\sqrt{|F^{\mu\nu}k_{\nu}|^{2}}\) and is approximately equal to \(2\omega_{\gamma}a_{0}/m_{e}^{2}\) in the colliding geometry. In this scenario, the photon can decay into an electron-positron pair through the nonlinear Breit-Wheeler pair production (NBW) process (\(\omega_{\gamma}+n\omega_{L}\to e^{+}+e^{-}\))\({}^{2}\). Refs. [73; 74; 75; 25; 76] proposed the spin- and polarization-resolved NBW MC method, and we followed the detailed methods in Ref. [74]. #### iv.4.1 NBW Probability The polarization-resolved NBW probability rate with dependence on the positron energy is given by \[\frac{\mathrm{d}^{2}W_{\mathrm{pair}}^{\pm}}{\mathrm{d}\varepsilon_{+}\mathrm{ d}t}=\frac{1}{2}(G_{0}+\xi_{1}G_{1}+\xi_{2}G_{2}+\xi_{3}G_{3}), \tag{64}\] where the polarization-independent term \(G_{0}\) and polarization-related terms \(G_{1,2,3}\) are given by Figure 10: (a) Energy spectra of the scattered electrons (black line) and generated photons (red line), respectively. (b) Energy-dependent Stokes parameter \(\tilde{\xi}_{2},\tilde{\xi}_{3}\), i.e., circular and linear polarization with respect to \(y\)-\(z\) axis. Simulation parameters are the same as those in Figure. 6. \[G_{0} = \frac{W_{0}}{2}\left\{{\rm IntK}_{\frac{1}{3}}(\rho)+\frac{\varepsilon _{-}^{2}+\varepsilon_{+}^{2}}{\varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{2}{ 3}}(\rho)+\left[{\rm IntK}_{\frac{1}{3}}(\rho)-2{\rm K}_{\frac{2}{3}}(\rho) \right]({\bf S}_{-}\cdot{\bf S}_{+})+\right. 
\tag{65}\] \[\left.{\rm K}_{\frac{1}{3}}(\rho)\left[-\frac{\varepsilon_{\gamma} }{\varepsilon_{+}}\left({\bf S}_{+}\cdot\hat{\bf b}_{+}\right)+\frac{ \varepsilon_{\gamma}}{\varepsilon_{-}}\left({\bf S}_{-}\cdot\hat{\bf b}_{+} \right)\right]+\left[\frac{\varepsilon_{-}^{2}+\varepsilon_{+}^{2}}{ \varepsilon_{-}\varepsilon_{+}}{\rm IntK}_{\frac{1}{3}}(\rho)\right.\right.\] \[\left.\left.-\frac{(\varepsilon_{+}-\varepsilon_{-})^{2}}{ \varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{2}{3}}(\rho)\right]({\bf S}_{+ }\cdot\hat{\bf v}_{+})({\bf S}_{-}\cdot\hat{\bf v}_{+})\right\},\] \[G_{1} = \frac{W_{0}}{2}\left\{{\rm K}_{\frac{1}{3}}(\rho)\left[-\frac{ \varepsilon_{\gamma}}{\varepsilon_{-}}({\bf S}_{+}\cdot\hat{\bf a}_{+})+\frac{ \varepsilon_{\gamma}}{\varepsilon_{+}}({\bf S}_{-}\cdot\hat{\bf a}_{+}) \right]+\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{2\varepsilon_{-} \varepsilon_{+}}{\rm K}_{\frac{2}{3}}(\rho)({\bf S}_{-}\times{\bf S}_{+}) \cdot\hat{\bf v}_{+}\right.\] (66) \[\left.-\frac{\varepsilon_{\gamma}^{2}}{2\varepsilon_{-}\varepsilon _{+}}{\rm IntK}_{\frac{1}{3}}(\rho)\left[({\bf S}_{+}\cdot\hat{\bf a})({\bf S }_{-}\cdot\hat{\bf b})+({\bf S}_{-}\cdot\hat{\bf a}_{+})({\bf S}_{+}\cdot\hat {\bf b}_{+})\right]\right\},\] \[G_{2} = \frac{W_{0}}{2}\left\{\frac{\varepsilon_{\gamma}^{2}}{2 \varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{1}{3}}(\rho)({\bf S}_{-}\times{ \bf S}_{+})\cdot\hat{\bf a}_{+}+\right.\] (67) \[\left.\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{2 \varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{1}{3}}(\rho)\left[({\bf S}_{-} \cdot\hat{\bf v}_{+})({\bf S}_{+}\cdot\hat{\bf b}_{+})+({\bf S}_{+}\cdot\hat {\bf v}_{+})({\bf S}_{-}\cdot\hat{\bf b}_{+})\right]+\right.\] \[\left.\left[\frac{\varepsilon_{\gamma}}{\varepsilon_{-}}{\rm IntK }_{\frac{1}{3}}(\rho)-\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{ \varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{2}{3}}(\rho)\right]({\bf S}_{-} \cdot\hat{\bf v}_{+})+\right.\] \[\left.\left.\left[\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}{ \rm IntK}_{\frac{1}{3}}(\rho)+\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{ \varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{2}{3}}(\rho)\right]({\bf S}_{+} \cdot\hat{\bf v}_{+})\right\},\] \[G_{3} = \frac{W_{0}}{2}\left\{-{\rm K}_{\frac{2}{3}}(\rho)+\frac{\varepsilon _{-}^{2}+\varepsilon_{+}^{2}}{2\varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{ 2}{3}}(\rho)({\bf S}_{-}\cdot\hat{\bf S}_{+})-\right.\] (68) \[\left.{\rm K}_{\frac{1}{3}}(\rho)\left[\frac{\varepsilon_{\gamma} }{\varepsilon_{+}}({\bf S}_{-}\cdot\hat{\bf b}_{+})-\frac{\varepsilon_{\gamma} }{\varepsilon_{-}}({\bf S}_{+}\cdot\hat{\bf b}_{+})\right]+\right.\] \[\left.\frac{\varepsilon_{\gamma}^{2}}{2\varepsilon_{-}\varepsilon _{+}}{\rm IntK}_{\frac{1}{3}}(\rho)\left[({\bf S}_{+}\cdot\hat{\bf b}_{+})({\bf S }_{-}\cdot\hat{\bf b}_{+})\right.\right.\right.\] \[\left.\left.({\bf S}_{+}\cdot\hat{\bf a}_{+})({\bf S}_{-}\cdot \hat{\bf a}_{+})\right]-\frac{(\varepsilon_{+}-\varepsilon_{-})^{2}}{2 \varepsilon_{-}\varepsilon_{+}}{\rm K}_{\frac{2}{3}}(\rho)({\bf S}_{+}\cdot\hat{ \bf v}_{+})({\bf S}_{-}\cdot\hat{\bf v}_{+})\right\},\] where \(W_{0}=\alpha/\left(\sqrt{3}\pi\omega_{\gamma}^{2}\right)\), \(\omega_{\gamma}^{\prime}=\varepsilon_{\gamma}/m_{e}c^{2}\), \(\rho=2\varepsilon_{\gamma}^{2}/\left(3\chi_{\gamma}\varepsilon_{-}\varepsilon_{+ }\right)=2/\left[3\delta(1-\delta)\right]\), \(\delta=\varepsilon_{+}/\varepsilon_{\gamma}\), \({\rm IntK}_{\frac{1}{3}}(\rho)\equiv\int_{\rho}^{\infty}{\rm d}z{\rm K}_{ \frac{1}{3}}(z)\), \({\rm K}_{n}\) is the \(n\)-order 
modified Bessel function of the second kind, \(\alpha\) the fine structure constant, \(\varepsilon_{\gamma}\), \(\varepsilon_{-}\) and \(\varepsilon_{+}\) the energies of parent photon, created electron and positron, respectively, \(\hat{\bf v}_{+}={\bf v}_{+}/|{\bf v}_{+}|\) with the positron velocity \({\bf v}_{+}\), \(\hat{\bf a}_{+}={\bf a}_{+}/|{\bf a}_{+}|\) with the positron acceleration \({\bf a}_{+}\) in the rest frame of positron, \(\hat{\bf b}_{+}={\bf v}_{+}\times{\bf a}_{+}/|{\bf v}_{+}\times{\bf a}_{+}|\), \(\xi_{1}\), \(\xi_{2}\) and \(\xi_{3}\) are the Stokes parameters of \(\gamma\) photon, and \({\bf S}_{+}\) (\({\bf S}_{-}\)) denotes the positron (electron) spin vector. Note that the Stokes parameters must be transformed from the photon initial frame (\(\hat{\bf P}_{1}\), \(\hat{\bf P}_{2}\), \(\hat{\bf n}\)) to the pair production frame (\(\hat{\bf P}_{1}^{\prime}\), \(\hat{\bf P}_{2}^{\prime}\), \(\hat{\bf n}\)); see transformations of the Stokes parameters in Sec. III.3.2. Summing over the electron spin, the pair production depending on the positron spin \({\bf S}_{+}\) and the photon polarization \(\xi\) is obtained: \[\frac{\mathrm{d}^{2}W_{\mathrm{pair}}^{+}}{\mathrm{d}\varepsilon_{+} \mathrm{d}t}=W_{0}\bigg{\{}\mathrm{IntK}_{\frac{1}{3}}(\rho)+\frac{\varepsilon_{-} ^{2}+\varepsilon_{+}^{2}}{\varepsilon_{-}\varepsilon_{+}}\mathrm{K}_{\frac{2}{ 3}}(\rho)-\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}\mathrm{K}_{\frac{1}{3} }(\rho)(\mathbf{S}_{+}\cdot\hat{\mathbf{b}}_{+})\] \[-\xi_{1}\bigg{[}\frac{\varepsilon_{\gamma}}{\varepsilon_{-}} \mathrm{K}_{\frac{1}{3}}(\rho)(\mathbf{S}_{+}\cdot\hat{\mathbf{a}}_{+})\bigg{]} +\xi_{2}\bigg{[}\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{\varepsilon_{-} \varepsilon_{+}}\mathrm{K}_{\frac{2}{3}}(\rho)+\frac{\varepsilon_{\gamma}}{ \varepsilon_{+}}\mathrm{IntK}_{\frac{1}{3}}(\rho)\bigg{]}\times\] \[(\mathbf{S}_{+}\cdot\hat{\mathbf{v}}_{+})\bigg{]}-\xi_{3}\bigg{[} \mathrm{K}_{\frac{1}{3}}(\rho)-\frac{\varepsilon_{\gamma}}{\varepsilon_{-}} \mathrm{K}_{\frac{1}{3}}(\rho)(\mathbf{S}_{+}\cdot\hat{\mathbf{b}}_{+})\bigg{]} \bigg{\}}. \tag{69}\] It can be rewritten as: \[\frac{\mathrm{d}^{2}W_{\mathrm{pair}}^{+}}{\mathrm{d}\varepsilon_{+} \mathrm{d}t}\ =\ W_{0}(C+\mathbf{S}_{+}\cdot\mathbf{D}), \tag{70}\] where \[C = \mathrm{IntK}_{\frac{1}{3}}(\rho)+\frac{\varepsilon_{-}^{2}+ \varepsilon_{+}^{2}}{\varepsilon_{-}\varepsilon_{+}}\mathrm{K}_{\frac{2}{3}}( \rho)-\xi_{3}\mathrm{K}_{\frac{2}{3}}(\rho), \tag{71}\] \[\mathbf{D} = -\bigg{(}\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}-\xi_{3} \frac{\varepsilon_{\gamma}}{\varepsilon_{-}}\bigg{)}\mathrm{K}_{\frac{1}{3}}( \rho)\hat{\mathbf{b}}_{+}-\xi_{1}\frac{\varepsilon_{\gamma}}{\varepsilon_{-}} \mathrm{K}_{\frac{1}{3}}(\rho)\hat{\mathbf{a}}_{+}+\] (72) \[\xi_{2}\bigg{[}\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{ \varepsilon_{-}\varepsilon_{+}}\mathrm{K}_{\frac{2}{3}}(\rho)+\frac{ \varepsilon_{\gamma}}{\varepsilon_{+}}\mathrm{IntK}_{\frac{1}{3}}(\rho)\bigg{]} \hat{\mathbf{v}}_{+}.\] When a photon decays to pair, the positron spin state is instantaneously collapsed into one of its basis states defined by the instantaneous SQA, along the energy-resolved average polarization \(\mathbf{S}_{+}^{(e_{+})}=\mathbf{D}/C\). 
Similarly, summing over the positron spin, the pair production probability depending on the electron spin \(\mathbf{S}_{-}\) and the photon polarization is obtained:

\[\frac{\mathrm{d}^{2}W_{\mathrm{pair}}^{-}}{\mathrm{d}\varepsilon_{+}\mathrm{d}t}=W_{0}(C+\mathbf{S}_{-}\cdot\mathbf{D}^{{}^{\prime}}), \tag{73}\]
\[\mathbf{D}^{{}^{\prime}}=\bigg(\frac{\varepsilon_{\gamma}}{\varepsilon_{-}}-\xi_{3}\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}\bigg)\mathrm{K}_{\frac{1}{3}}(\rho)\hat{\mathbf{b}}_{+}+\xi_{1}\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}\mathrm{K}_{\frac{1}{3}}(\rho)\hat{\mathbf{a}}_{+}-\xi_{2}\bigg[\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{\varepsilon_{-}\varepsilon_{+}}\mathrm{K}_{\frac{2}{3}}(\rho)-\frac{\varepsilon_{\gamma}}{\varepsilon_{-}}\mathrm{IntK}_{\frac{1}{3}}(\rho)\bigg]\hat{\mathbf{v}}_{+}. \tag{74}\]

The pair production probability relying solely on the photon polarization is obtained by summing over both the positron and electron spins:

\[\frac{\mathrm{d}^{2}W_{\mathrm{pair}}}{\mathrm{d}\varepsilon_{+}\mathrm{d}t}=2W_{0}\left\{\mathrm{IntK}_{\frac{1}{3}}(\rho)+\frac{\varepsilon_{-}^{2}+\varepsilon_{+}^{2}}{\varepsilon_{-}\varepsilon_{+}}\mathrm{K}_{\frac{2}{3}}(\rho)-\xi_{3}\mathrm{K}_{\frac{2}{3}}(\rho)\right\}. \tag{75}\]

#### iv.2.2 MC algorithm

The algorithm for simulating pair creation with polarization is illustrated in Figure. 11. At every simulation step \(\Delta t\), a pair is generated with positron energy \(\varepsilon_{+}=r_{1}\varepsilon_{\gamma}\) when the probability density \(P\equiv\mathrm{d}^{2}W_{\mathrm{pair}}/\mathrm{d}\varepsilon_{+}\mathrm{d}t\cdot\Delta t\) of pair production is greater than or equal to a random number \(r_{2}\) within the range [0,1). Here, \(\mathrm{d}^{2}W_{\mathrm{pair}}/\mathrm{d}\varepsilon_{+}\mathrm{d}t\) is computed using Equation (75). The momentum of the created positron (electron) is parallel to that of the parent photon, and the energy of the electron is determined as \(\varepsilon_{-}=\varepsilon_{\gamma}-\varepsilon_{+}\). The final spin states of the electron and positron are determined by the four probability densities \(P_{1,2,3,4}\), each representing a spin parallel or antiparallel to the SQA, where \(P_{1,2,3,4}\) are computed from Equation (64). Finally, a random number \(r_{3}\) is used to sample the final spin states of the electron and positron. Note that all random numbers here are sampled uniformly from \([0,1)\), as in the NCS algorithm. An example of the production of secondary electrons and positrons resulting from a collision between a laser and an electron beam is illustrated in Figure. 12.

Figure 11: Flowchart of the spin- and polarization-resolved nonlinear Breit-Wheeler (NBW) pair production process.

### High-energy bremsstrahlung

High-energy bremsstrahlung is another important emission mechanism, which can also be modeled using an MC collision model [76]. The MC collision model was tested against the Geant4 code [77], and the results are presented in the following section. The bremsstrahlung emission is described by the cross-section of Ref. [78]
\[\begin{split}\frac{d\sigma_{eZ}}{d\omega}(\omega,y)=&\frac{\alpha r_{0}^{2}}{\omega}\Big\{\left(\frac{4}{3}-\frac{4}{3}y+y^{2}\right)\\ &\times\left[Z^{2}\left(\phi_{1}-\frac{4}{3}\ln Z-4f\right)+Z\left(\psi_{1}-\frac{8}{3}\ln Z\right)\right]\\ &+\frac{2}{3}(1-y)\left[Z^{2}(\phi_{1}-\phi_{2})+Z(\psi_{1}-\psi_{2})\right]\Big\},\end{split} \tag{76}\]

where \(y=\hbar\omega/E_{e}\) is the ratio of the emitted photon energy to the incident electron energy, \(r_{0}\) is the classical electron radius, the functions \(\phi_{1,2}\) and \(\psi_{1,2}\) account for the screening of the nuclear potential by the atomic electrons, and the Coulomb correction term is denoted by \(f\). When the atomic number of the target is greater than 5, we use Eqs. (3.38-3.41) from Ref. [78] to calculate these functions. However, for targets with \(Z<5\), the approximated screening functions are unsuitable and require modification. The PENELOPE code [79] utilizes another method that involves tabulated data from Ref. [80]. This method transforms the "scaled" bremsstrahlung differential cross-section (DCS) into the differential cross-section by using the following equation [79]:

\[\frac{d\sigma_{br}}{d\omega}=\frac{Z^{2}}{\beta^{2}}\frac{1}{\omega}\chi(Z,E_{e},y), \tag{77}\]

where \(\beta=v/c\) is the normalized electron velocity. Integrating this expression over the photon frequencies yields a tabulated total cross-section \(\sigma_{br}(E_{e})\), so that direct sampling can be used in the MC simulation. The DCS for electrons and positrons are related by

\[\frac{d\sigma_{br}^{+}}{d\omega}=F_{p}(Z,E_{e})\frac{d\sigma_{br}^{-}}{d\omega}, \tag{78}\]

where \(F_{p}(Z,E_{e})\) is an analytical approximation factor that can be found in Ref. [79]; it reproduces the data of Ref. [81] to within approximately 0.5%.

Figure 12: (a) Normalized energy spectra (black solid line) and energy-resolved longitudinal spin polarization (red solid line) of positrons. (b) Statistics of the longitudinal spin components of generated positrons. The laser and electron beam parameters are consistent with those in Figure. 9.

The bremsstrahlung implementation is based on direct MC sampling. Given an incident electron with energy \(E_{e}\) and velocity \(\mathbf{v}\), the probability of triggering a bremsstrahlung event during a time interval \(\Delta t\) is calculated as \(P_{br}=1-e^{-\Delta s/\lambda}\), where \(\Delta s=v\Delta t\), \(v=|\mathbf{v}|\) is the incident particle velocity, \(\lambda=1/n\sigma(E_{e})\) is the mean free path, \(n\) is the target particle density, and \(\sigma(E_{e})\) is the total cross-section. A random number \(r_{1}\) is then generated and compared to \(P_{br}\). If \(r_{1}<P_{br}\), a bremsstrahlung event is triggered. The energy of the resulting photon is determined by generating another random number \(r_{2}\) and solving \(\sigma(y,E_{e})=\sigma(E_{e})r_{2}\) for the energy ratio \(y\). Finally, a photon with energy \(\hbar\omega=E_{e}y\) and momentum direction \(\vec{k}/|\vec{k}|=\mathbf{v}/|\mathbf{v}|\) is generated. To improve computational efficiency, low-energy photons are discarded by setting a minimum energy threshold. This probabilistic approach is similar to the method used to calculate the random free path [79]. The implementation of Bethe-Heitler pair production follows a similar process. The implementation of the bremsstrahlung emission was tested using the Geant4 software [77], which is widely used for modeling high-energy particle scattering in detectors.
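Before turning to the Geant4 benchmark, a minimal sketch of the event sampling just described is given below. It assumes the total and cumulative cross-sections have already been tabulated; the names and the numerical values used in main are placeholders, not actual cross-section data.

```
// Direct MC sampling of a bremsstrahlung event: trigger with P_br = 1 - exp(-ds/lambda),
// then invert the tabulated cumulative cross-section to draw the photon energy fraction y.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <random>
#include <vector>

struct BremsEvent { bool triggered = false; double y = 0.0; };   // y = hbar*omega / E_e

BremsEvent sample_brems(double n_target, double sigma_tot, double v, double dt,
                        const std::vector<double>& y_grid,            // ascending y values
                        const std::vector<double>& cumulative_sigma,  // sigma integrated up to y_grid[i]
                        std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double lambda = 1.0 / (n_target * sigma_tot);          // mean free path
    const double P_br   = 1.0 - std::exp(-v*dt / lambda);        // interaction probability
    BremsEvent ev;
    if (uni(rng) < P_br) {
        ev.triggered = true;
        const double target = uni(rng) * sigma_tot;              // solve sigma(y) = sigma_tot * r2
        auto it = std::lower_bound(cumulative_sigma.begin(), cumulative_sigma.end(), target);
        const std::size_t i = std::min<std::size_t>(it - cumulative_sigma.begin(), y_grid.size() - 1);
        ev.y = y_grid[i];                                         // emitted photon energy = y * E_e
    }
    return ev;
}

int main() {
    std::mt19937 rng(7);
    std::vector<double> y   {0.01, 0.1, 0.3, 0.6, 0.9};
    std::vector<double> cum {0.4, 0.7, 0.85, 0.95, 1.0};          // illustrative values, sigma_tot = 1
    auto ev = sample_brems(/*n=*/1.0, /*sigma_tot=*/1.0, /*v=*/1.0, /*dt=*/0.1, y, cum, rng);
    std::printf("triggered=%d  y=%.2f\n", int(ev.triggered), ev.y);
}
```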
In this study, we utilized electron bunches of 1 GeV and 100 MeV with \(10^{5}\) primaries to collide with a 5 mm Au target with \(Z=79\), \(\rho=19.3\) g/cm\({}^{3}\) and a 5 mm Al target with \(Z=13\), \(\rho=2.7\) g/cm\({}^{3}\). We disabled the field updater and weighting procedure in the PIC code, and only enabled the particle pusher and Bremsstrahlung MC module. The electron and photon spectra were found to be in good agreement with the Geant4 results, except for a slightly higher photon emission in the high energy tail (which is due to the difference in the cross-section data). Figure. 13 displays the spectra of electrons and photons from a 100 MeV electron bunch normally incident onto the aluminum and gold slabs, while similar distributions for a 1 GeV electron bunch are shown in Figure. 14. ### Vacuum birefringence Another important process for polarized photons in ultraintense laser matter interactions is vacuum birefringence (VB), in addition to the NBW processes. In this paper, we utilize Eq.(4.26) from Ref. [82] to calculate the refractive index \(n\) for a photon with arbitrary energy \(\omega\) (wavelength \(\lambda\)) in a constant weak EM field [\(|E|(|B|)\ll E_{cr}\)]. We include the electric field and assume relativistic units \(c=\hbar=1\). The resulting expression is: \[n\approx 1-\frac{\alpha\chi_{\gamma}^{2}m^{2}}{16\pi\omega^{2}}\int_{-1}^{1}d \upsilon(1-\upsilon^{2})\left\{\begin{matrix}\frac{1}{2}(1+\frac{1}{3} \upsilon^{2})\\ 1-\frac{1}{3}\upsilon^{2}\end{matrix}\right\}\left[\pi x^{4/3}\text{Gi}^{ \prime}(x^{2/3})-i\frac{x^{2}}{\sqrt{3}}\text{K}_{2/3}\left(\frac{2}{3}x \right)\right]\,, \tag{79}\] Figure 14: (color online). Bremsstrahlung of 1 GeV electrons. (a) for the scattered electron spectra, and (b) for the yield photon spectra. Solid lines represent PIC results and dashed lines represent Geant4 results. These figures are obtained from Ref. [76]. Figure 13: (color online). Bremsstrahlung of 100 MeV electrons. (a) for the scattered electron spectra, and (b) for the yield photon spectra. Solid lines represent PIC results and dashed lines represent Geant4 results. These figures are obtained from Ref. [76]. where \(\alpha\) is the fine structure constant, \(m\) is the electron mass, \(\chi_{\gamma}\) is the nonlinear quantum parameter as defined before, \(x=\frac{4}{(1-\nu^{2})\chi_{\gamma}}\), \(\mathrm{Gi}^{\prime}(x)\) is the derivative of the Scorer's function, and \(\mathrm{K}_{n}(x)\) is the \(n\)th-order modified Bessel function of the second kind [38]. \(\mathbf{E}_{\mathrm{red,\perp}}=\mathbf{E}_{\perp}+\hat{k}\times\mathbf{B}_{\perp}\) is the transverse reduced field (acceleration field for electrons). The first and second columns correspond to the eigenmodes parallel and perpendicular to the reduced field, respectively. By extracting a factor of \[\mathcal{D}=\frac{\alpha}{90\pi}\left(\frac{e|\mathbf{E}_{\mathrm{red,\perp}}| }{m^{2}}\right)^{2}\equiv\frac{\alpha}{90\pi}\frac{\chi_{\gamma}^{2}}{\omega^ {2}/m^{2}}.\] Eq. (79) arrives at \[\mathrm{Re}(n) = \tag{80}\] \[\mathrm{Im}(n) = \tag{81}\] In the weak field limit of \(\chi_{\gamma}\ll 1\), the imaginary part associated with pair production, is negligible. 
Now we define
\[\begin{split} M(\chi_{\gamma})&=-\frac{45}{4}\int_{0}^{1}d\upsilon\,(1-\upsilon^{2})\left\{\begin{matrix}\frac{1}{2}(1+\frac{1}{3}\upsilon^{2})\\ 1-\frac{1}{3}\upsilon^{2}\end{matrix}\right\}\pi x^{4/3}\,\text{Gi}^{\prime}(x^{2/3}),\\ \text{yielding}\quad\mathrm{Re}(n)&=1+M(\chi_{\gamma})\,\mathcal{D}\equiv 1+M(\chi_{\gamma})\,\frac{\alpha}{90\pi}\frac{\chi_{\gamma}^{2}}{\omega^{2}/m^{2}}.\end{split} \tag{82}\]
The numerical results of \(M(\chi_{\gamma})\) and a comparison with the low-energy-limit (\(\omega_{\gamma}\ll m\)) constants are given in Figure 15. In the limit of \(\chi_{\gamma}\ll 1\), the real part simplifies to
\[\mathrm{Re}(n_{\pm})=1+\mathcal{D}\begin{cases}4&(+\ \text{mode}),\\ 7&(-\ \text{mode}),\end{cases} \tag{83}\]
which can be used to simulate the VB effect with good accuracy for \(\chi_{\gamma}\ll 1\). Note that these results are identical to those in Refs. [82; 83; 84]. For large \(\chi_{\gamma}\), the two interpolated refractive indices are used. The phase retardation between the two orthogonal components is given by \(\delta\phi=\phi_{+}-\phi_{-}=\Delta n\frac{2\pi l}{\lambda}=-3\mathcal{D}\frac{2\pi l}{\lambda}\), where \(l\) denotes the propagation length, and the VB effect is equivalent to a rotation of the Stokes parameters:
\[\begin{pmatrix}\xi_{1}^{\prime}\\ \xi_{2}^{\prime}\\ \xi_{3}^{\prime}\end{pmatrix}=\begin{pmatrix}\cos\delta\phi&-\sin\delta\phi&0\\ \sin\delta\phi&\cos\delta\phi&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}\xi_{1}\\ \xi_{2}\\ \xi_{3}\end{pmatrix}\equiv\mathrm{QED}(\delta\phi)\cdot\mathbf{\xi}. \tag{84}\]
The VB effect of the probe photons in the Particle-In-Cell code is simulated with the following algorithm [85]:

Figure 15: (a): \(M(\chi_{\gamma})\) and the corresponding low-energy-limit constant, with red and blue dash-dotted lines equal to 4 and 7, respectively. (b): Relative error between \(M(\chi_{\gamma})\) and the low-energy-limit constant.

```
Initialization part:
  PIC initialization;
  foreach photon in photonList do
      photon.xi = xi_0;
      photon.a_± = (x̂, ŷ);
  end foreach

Evolution part:
  while not final step do
      do PIC loop ...;
      foreach photon do
          get E, B;
          get θ, and â_+(θ) ∥ E_red,⊥, â_-(θ) = k̂ × â_+(θ);
          rotate ξ from â_± to â_±(θ) via Eqs. (63);
          calculate the new ξ via Eq. (84);
          update the polarization basis: photon.â_± = â_±(θ).
      end foreach
  end while

Post-processing:
  select a detection plane (polarization basis), for instance (x̂, ŷ);
  foreach (x, y) in the detector do
      foreach photon in this area do
          rotate ξ from â_± to (x̂, ŷ) via Eqs. (63);
      end foreach
      average all ξ.
  end foreach
```
**Algorithm 1** VB effect in SLIPs

## IV Framework of SLIPs

These physical processes have been incorporated into the spin-resolved laser-plasma interaction simulation code, known as SLIPs. The data structure and framework layout are illustrated in Figs. 17 and 18. As depicted in Figure 17, SLIPs utilizes a TOML file to store simulation information, which is then parsed into a SimInfo structure that includes domainInfo, speciesInfo, boundaryInfo, laserInfo, pusherInfo, and other metadata.
Subsequently, this metadata is employed to generate a SimBox that comprises all ParticleList and Fields, and define the FieldSolver, EOMSolver and initialize QED processes. The internal data structure of SLIPs is constructed using the open-source numerical library, Figure 17: Data structure of the SLIPs. Armadillo C++[86; 87]. String expressions are parsed using the Exprtk library[88]. The data is then dumped using serial-hdf5 and merged with external Python scripts to remove ghost cells. The spin-resolved processes, i.e., tagged as Spin-QED in the diagram in Figure. 18, are implemented in conjunction with the Lorentz equation. In the coding, the Spin-QED part is arranged as a sequential series of processes. For example, Lorentz and T-BMT are followed by radiative correction, VB, NBW, and NCS with Bremsstrahlung: Lorentz & T-BMT Radiative correction\(\Rightarrow\)VB\(\Rightarrow\)NBW\(\Rightarrow\)NCS & Bremss. ## V Polarized particles simulations In this section, we present known results that were calculated from the single-particle mode using the SLIPs. The spin-resolved NCS/NBW are evaluated by generating spin-polarized electrons/positrons. The simulation setups used in this study are identical to those described in Refs.[10] and[67]. ### Polarized electron/positron simulation To simulate the generation of spin-polarized electrons, we utilized an elliptically polarized laser with an intensity of \(a_{0}=30\), a wavelength of \(\lambda_{0}=1\mu\)m, and an ellipticity of \(a_{y,0}/a_{x,0}=3\%\). This laser was directed towards an ultrarelativistic electron bunch with an energy of 10 GeV, which was Figure 18: Framework of the SLIPs. produced through laser-wakefield acceleration. The resulting polarized electrons are depicted in Figure. V.1, and shows good agreement with the previously published results in Ref. [25]. ### Polarized \(\gamma\)-photons via NCS The polarization state of emitted photons can be determined in the spin/polarization-resolved NCS. Here, follow Ref. [25], we utilized a linearly polarized (LP) laser to collide with an unpolarized electron bunch to generate LP \(\gamma\)-photons. Additionally, we used an LP laser to collide with a longitudinally polarized electron bunch to generate circularly polarized (CP) \(\gamma\)-photons, which were also observed in a previous study (Ref. [12]). The final polarization states of LP and CP \(\gamma\)-photons are presented in Figs. 20 and 21, respectively. ### Laser-plasma interactions Finally, we present a simulation result demonstrating the interaction between an ultra-intense laser with a normalized intensity of \(a_{0}=1000\) and a fully ionized 2\(\mu\)m-thick aluminum target. Note, this configuration, previously examined in Ref. [89] with a thickness of 1\(\mu\)m, employs a thicker target in the present study to enhance the SF-QED processes. When the laser is directed towards a solid target, the electrons experience acceleration and heating due to the laser and plasma fields. As high-energy electrons travel through the background field, they can emit \(\gamma\) photons via NCS. The EM field distribution and number density of target electrons, NBW positrons, and NCS \(\gamma\) photons are shown in Figure. 22, both of which show good consistency with Ref. [89]. The laser is Figure 19: Generation of polarized electrons: (a) number density \(dN/d\theta_{x}d\theta_{y}\), (b) spin polarization \(S_{x}\). 
linearly polarized along the \(y\) direction, indicating that the polarization frame is mainly in the \(y\)-\(z\) plane with two polarization bases, \(\mathbf{e}_{1}\equiv\mathbf{\beta}\times\mathbf{\hat{\beta}}\) and \(\mathbf{e}_{2}\equiv\mathbf{\hat{n}}\times\mathbf{e}_{1}\), where \(\mathbf{\hat{n}}\) denotes the momentum direction of the photon. The polarization angle-dependence observed in this study is consistent with the prior literature, while the average linear polarization degree is approximately 60% (\(\bar{\xi}_{3}\simeq 0.6\)), as illustrated in Figs. 23(b) and (d). Notably, low-energy photons contribute primarily to the polarization, as demonstrated in Figs. 23(a) and (c). Additionally, during the subsequent NBW process, the self-generated strong magnetic field coupled with the laser field dominates the positrons' SQA. As a result, the positrons' polarization is aligned with the \(z\) direction, contingent on their momentum direction, as shown in Figure 24. These findings constitute a novel contribution to the investigation of polarization-resolved laser-plasma interactions.

## VI Outlook

Computer simulation techniques for laser and plasma interactions are constantly evolving, not only in the accuracy of high-order or explicit/implicit algorithms but also in the complexity of new physics with more degrees of freedom. The rapid development of ultraintense laser techniques not only provides opportunities for experimental verification of SF-QED processes in the high-energy-density regime (which serves as a micro-astrophysics lab) but also presents challenges for theoretical analysis. The introduction of Spin-QED into widely accepted PIC algorithms may address this urgent demand and pave the way for studies in laser-QED physics, laser-nuclear physics (astrophysics), and even physics beyond the standard model.

Figure 22: The laser-plasma interaction via 2D simulation: spatial distribution of (a) \(E_{x}\), (b) \(E_{y}\) and (c) \(B_{z}\); and the target electron distribution (d), generated NBW positron (e) and NCS \(\gamma\) photon (f).

## VII Acknowledgement

The work is supported by the National Natural Science Foundation of China (Grants No. 12275209, 12022506, and U2267204), Open Foundation of Key Laboratory of High Power Laser and Physics, Chinese Academy of Sciences (SGKF202101), the Foundation of Science and Technology on Plasma Physics Laboratory (No. JCKYS2021212008), and the Shaanxi Fundamental Science Research Project for Mathematics and Physics (Grant No. 22JSY014).
2305.08289
Variational quantum metrology for multiparameter estimation under dephasing noise
We present a hybrid quantum-classical variational scheme to enhance precision in quantum metrology. In the scheme, both the initial state and the measurement basis in the quantum part are parameterized and optimized via the classical part. It enables the maximization of information gained about the measured quantity. We discuss specific applications to 3D magnetic field sensing under several dephasing noise modes. Indeed, we demonstrate its ability to simultaneously estimate all parameters and surpass the standard quantum limit, making it a powerful tool for metrological applications.
Trung Kien Le, Hung Q. Nguyen, Le Bin Ho
2023-05-15T01:09:58Z
http://arxiv.org/abs/2305.08289v2
# Variational quantum metrology for multiparameter estimation under dephasing noise ###### Abstract We present a hybrid quantum-classical variational scheme to enhance precision in quantum metrology. In the scheme, both the initial state and the measurement basis in the quantum part are parameterized and optimized via the classical part. It enables the maximization of information gained about the measured quantity. We discuss specific applications to 3D magnetic field sensing under several dephasing noise modes. Indeed, we demonstrate its ability to simultaneously estimate all parameters and surpass the standard quantum limit, making it a powerful tool for metrological applications. ## I Introduction Quantum metrology is an estimation process that utilizes unique quantum phenomena such as entanglement and squeezing to improve the precision of estimation beyond classical limits [6; 15; 44]. Recent development in quantum computing leads to numerous optimal algorithms for enhancing precision in single-parameter estimation, such as adaptive measurements [3; 9; 13; 54], quantum error correction [25; 56], and optimal quantum control [50; 36; 51]. So far, a variational algorithm has been demonstrated by combining the advantages of both quantum and classical systems for quantum-enhanced metrology [21; 27; 29; 51]. A similar protocol for spin systems was also introduced [22; 55]. More recently, such a variational toolbox for multiparameter estimation was proposed [32], which is a generalization from the previous work mentioned above [29]. While using variational schemes is promising, their potential significance in multiparameter quantum metrology has yet to fully understand, even in principle. Furthermore, determining the optimal quantum resources and measurement strategy to extract maximum information about all parameters is challenging due to the trade-offs in estimating incompatible observables [39; 57]. Therefore, a suitable method for precisely estimating of multiparameter remains a thriving area of research in quantum metrology. In this work, we propose a variational scheme to enhance the precision of multiparameter estimation in the presence of dephasing noise. The basic idea is to use a quantum computer to prepare a trial state (an ansatz) that depends on a set of trainable variables. The state is subjected to a series of control operations, representing unknown multiparameter and noise, and then is measured through observables determined by other trainable variables. The measurement results are used to update the trainable variables and optimize the estimation of the unknown multiparameter. Optimizing both the initial quantum state and the measurement operators allows us to identify suitable conditions for the quantum probe to increase sensitivity and achieve the ultimate quantum limit for all parameters. In numerical simulations, we estimate a 3D magnetic field under a dephasing noise model and find that sensitivity for all parameters can simultaneously reach the ultimate quantum bound. We also examine a time-dependent Ornstein-Uhlenbeck model [45] and observe results surpassing the standard quantum limit by increasing the probe's number of particles. This approach holds promise for a wide range of metrological applications, including external field sensing, precision spectroscopy, gravitational wave detection, and others. 
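Before presenting the results, here is a schematic Python sketch of the hybrid loop described above (prepare a trainable state, imprint the parameters and noise, measure in a trainable basis, and update the variables classically). The functions `cost` and `numerical_gradient` are generic placeholders, not part of any specific library, and the finite-difference gradient only stands in for the parameter-shift rule used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def numerical_gradient(f, x, eps=1e-3):
    """Central finite-difference gradient (a stand-in for the parameter-shift rule)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        g[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return g

def variational_metrology(cost, n_theta, n_mu, steps=200, lr=0.2):
    """Jointly train the preparation variables theta and measurement variables mu
    by minimizing a scalar cost computed from the measurement statistics."""
    theta = rng.uniform(0, 2 * np.pi, n_theta)
    mu = rng.uniform(0, 2 * np.pi, n_mu)
    for _ in range(steps):
        grad_theta = numerical_gradient(lambda t: cost(t, mu), theta)
        grad_mu = numerical_gradient(lambda m: cost(theta, m), mu)
        theta -= lr * grad_theta   # classical update of the state-preparation circuit
        mu -= lr * grad_mu         # classical update of the measurement circuit
    return theta, mu
```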
## II Results

### Variational quantum metrology

The goal of multiparameter estimation is to evaluate a set of unknown parameters \(\mathbf{\phi}=(\phi_{1},\phi_{2},\cdots,\phi_{M})^{\intercal}\), which are imprinted onto a quantum probe via a unitary evolution \(\mathbf{U}(\mathbf{\phi})=\exp(-it\mathbf{H}\mathbf{\phi})=\exp(-it\sum_{k=1}^{M}H_{k}\phi_{k})\), where \(\mathbf{H}=(H_{1},H_{2},\cdots,H_{M})\) are non-commuting Hermitian Hamiltonians. The precision of the estimated parameters \(\mathbf{\phi}\) is evaluated using a mean square error matrix (MSEM) \(V=\sum_{m}p(m|\mathbf{\phi})\big[\mathbf{\phi}(m)-\mathbf{\phi}\big]\big[\mathbf{\phi}(m)-\mathbf{\phi}\big]^{\intercal}\), where \(p(m|\mathbf{\phi})=\mathrm{Tr}[\rho(\mathbf{\phi})E_{m}]\) is the probability of obtaining an outcome \(m\) when measuring the final state \(\rho(\mathbf{\phi})\) with an element \(E_{m}\) of a positive operator-valued measure (POVM). The MSEM obeys the Cramer-Rao bounds (CRBs) [17; 19]
\[\mathrm{Tr}\big[WV\big]\geq\mathsf{C}_{\mathsf{F}}\geq\mathsf{C}_{\mathsf{H}}\geq\mathsf{C}_{\mathsf{Q}}, \tag{1}\]
where \(W\) is a scalar weight matrix and \(\mathsf{C}_{\mathsf{F}}=\mathrm{Tr}[WF^{-1}]\) is the classical bound, with \(F\) the classical Fisher information matrix (CFIM) with elements \(F_{ij}=\sum_{m}\frac{1}{p(m|\mathbf{\phi})}\big[\partial_{\phi_{i}}p(m|\mathbf{\phi})\big]\big[\partial_{\phi_{j}}p(m|\mathbf{\phi})\big]\) [37]. The Holevo bound \(\mathsf{C}_{\mathsf{H}}\) is given via semidefinite programming, i.e., \(\mathsf{C}_{\mathsf{H}}=\min_{\{X_{i}\}}\bigl(\mathrm{Tr}[W\,\mathrm{Re}Z]+\|\sqrt{W}\,\mathrm{Im}Z\,\sqrt{W}\|_{1}\bigr)\) [16; 19], where \(Z\) is a positive semidefinite matrix with elements \(Z_{ij}=\mathrm{Tr}[X_{i}X_{j}\rho(\mathbf{\phi})]\), and the set of matrices \(\{X_{i}\}\) satisfies \(\mathrm{Tr}[X_{i}\partial_{\phi_{j}}\rho(\mathbf{\phi})]=\delta_{ij}\). Finally, \(\mathsf{C_{Q}}=\mathrm{Tr}[WQ^{-1}]\) is the quantum bound, where \(Q_{ij}=\mathrm{Re}\big[\mathrm{Tr}[\rho(\mathbf{\phi})L_{i}L_{j}]\big]\) is the real symmetric quantum Fisher information matrix (QFIM), defined through the symmetric logarithmic derivative (SLD) \(2\partial_{\phi_{j}}\rho(\mathbf{\phi})=\{L_{j},\rho(\mathbf{\phi})\}\) [37]. Although optimal estimators can achieve \(\mathsf{C_{F}}\) [23] and asymptotic achievement of \(\mathsf{C_{H}}\) is possible [1; 12; 42; 49; 52], it is in general impossible to attain the quantum bound in multiparameter estimation [16]. In some instances, \(\mathsf{C_{H}}=\mathsf{C_{Q}}\) if the weak commutativity condition \(\mathrm{Im}(\mathrm{Tr}[L_{j}L_{i}\rho(\mathbf{\phi})])=0\) is met [10; 16]. The same condition also applies to attain \(\mathsf{C_{F}}=\mathsf{C_{Q}}\) [31]. However, this condition alone is insufficient to achieve the quantum bound in practice; a proper POVM is also required. This paper presents a variational quantum metrology (VQM) scheme following the toolbox of Meyer et al. [32], as sketched in Fig. 1, to optimize both the preparation state and the POVM. A quantum circuit \(\mathbf{U}(\mathbf{\theta})\) is used to generate a variational preparation state with trainable variables \(\mathbf{\theta}\). A similar quantum circuit with variables \(\mathbf{\mu}\) is used to generate a variational POVM \(\mathbf{E}(\mathbf{\mu})=\{E_{m}(\mathbf{\mu})=\mathbf{U}^{\dagger}(\mathbf{\mu})E_{m}\mathbf{U}(\mathbf{\mu})\geq 0\,|\,\sum_{m}E_{m}(\mathbf{\mu})=\mathbf{I}\}\).
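For illustration, the CFIM defined above can be assembled from the measured outcome probabilities with central finite differences (the same finite-difference strategy is described in the Methods section). The following numpy sketch does this for a hypothetical probability function `prob`, which stands in for the circuit output \(p(m|\mathbf{\phi})\); it is not part of the authors' code.

```python
import numpy as np

def cfim(prob, phi, shift=1e-4):
    """Classical Fisher information matrix F_ij = sum_m (d_i p_m)(d_j p_m) / p_m.

    prob(phi) -> 1D array of outcome probabilities p(m | phi)
    phi       -> 1D array of the M parameters
    """
    phi = np.asarray(phi, dtype=float)
    p = prob(phi)
    M = len(phi)
    dp = np.zeros((M, len(p)))
    for i in range(M):
        e = np.zeros(M)
        e[i] = shift
        dp[i] = (prob(phi + e) - prob(phi - e)) / (2 * shift)  # d p(m|phi) / d phi_i
    mask = p > 1e-12                                           # avoid division by zero
    return np.array([[np.sum(dp[i, mask] * dp[j, mask] / p[mask])
                      for j in range(M)] for i in range(M)])

# Toy example (hypothetical single-qubit probe): p = [cos^2(phi/2), sin^2(phi/2)]
toy = lambda phi: np.array([np.cos(phi[0] / 2) ** 2, np.sin(phi[0] / 2) ** 2])
print(cfim(toy, [0.7]))   # approaches the single-parameter Fisher information F = 1
```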
Using classical computers, a cost function \(\mathcal{C}(\mathbf{\theta},\mathbf{\mu})\) can be optimized to generate new variables for the quantum circuits, resulting in enhanced information extraction. The scheme is repeated until it converges. To investigate the ultimate quantum bound, we define the cost function by a relative difference [1]
\[\mathcal{C}(\mathbf{\theta},\mathbf{\mu})=1-\frac{\mathsf{C_{Q}}}{\mathsf{C_{F}}}, \tag{2}\]
which is positive semidefinite according to Eq. (1). The variables are trained by solving the optimization task \(\mathop{\arg\min}\limits_{\{\mathbf{\theta},\mathbf{\mu}\}}\mathcal{C}(\mathbf{\theta},\mathbf{\mu})\). As the value of \(\mathcal{C}(\mathbf{\theta},\mathbf{\mu})\) approaches zero, we reach the ultimate quantum bound, where \(\mathrm{Tr}[WV]=\mathsf{C_{F}}=\mathsf{C_{Q}}\). A vital feature of the VQM is the use of variational quantum circuits, which allows the circuits to be optimized so as to extract the maximum information about the estimated parameters.

### Ansatzes

We propose three variational circuits: a star topology ansatz, a ring topology ansatz, and a squeezing ansatz. The first two ansatzes are inspired by quantum graph states, which are useful resources for quantum metrology [41; 48]. A conventional graph state is formed by a collection of vertices \(V\) and edges \(D\) as \(G(V,D)=\prod_{i,j\in D}\mathrm{CZ}^{ij}|+\rangle^{V}\), where \(\mathrm{CZ}^{ij}\) represents the controlled-Z gate connecting the \(i\) and \(j\) qubits, and \(|+\rangle\) is an element of the Pauli \(\sigma_{x}\) basis. The proposed ansatzes here incorporate \(y\)-rotation gates (\(R_{y}(\theta)=e^{-i\theta\sigma_{y}/2}\)) at every vertex prior to the CZ gates (see Fig. 2a,b). The squeezing ansatz in Fig. 2c is inspired by squeezing states, which are another useful resource for quantum metrology [7; 14; 30]. It has \(x\)- and \(y\)-rotation gates and global Mølmer–Sørensen gates [34], where \(U_{x(z)}=\exp(-i\sum_{j=1}^{N}\sum_{k=j+1}^{N}\sigma_{x(z)}\otimes\sigma_{x(z)}\frac{\chi_{jk}}{2})\) for an \(N\)-qubit circuit [34]. The numbers of trainable variables for one layer are \(2N-2\), \(2N\), and \(N(N+1)\) for the star, ring, and squeezing ansatz, respectively. Hereafter, we use these ansatzes for generating the variational preparation states and the variational POVM in the VQM scheme.

### Multiparameter estimation under dephasing noise

After preparing a variational state \(\rho(\mathbf{\theta})=\mathbf{U}(\mathbf{\theta})\rho_{0}\mathbf{U}^{\dagger}(\mathbf{\theta})\), we use it to estimate a 3D magnetic field under dephasing noise. The field is imprinted onto every single qubit via the Hamiltonian \(\mathbf{H}=\sum_{i\in\{x,y,z\}}\phi_{i}\sigma_{i}\), where \(\mathbf{\phi}=(\phi_{x},\phi_{y},\phi_{z})\) and \(\sigma_{i}\) is a Pauli matrix. Under dephasing noise, the variational state \(\rho(\mathbf{\theta})\) evolves to [28]
\[\mathcal{E}_{t}(\rho)=\Bigg[\prod_{k=1}^{N}e^{\gamma t\mathcal{L}^{(k)}}\Bigg]e^{-it\mathcal{H}}\rho, \tag{3}\]
where we omit \(\mathbf{\theta}\) in \(\rho(\mathbf{\theta})\) for brevity. The superoperator \(\mathcal{H}\) generates the unitary dynamics \(\mathcal{H}\rho=[\mathbf{H},\rho]\), and \(\mathcal{L}^{(k)}\) is a non-unitary dephasing superoperator, with \(\gamma\) the decay rate.
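Before turning to the Kraus form of the dephasing channel given next, here is a minimal, noise-free Qiskit sketch of the preparation and encoding steps: a single star-ansatz layer (simplified here to \(N\) rotations rather than the \(2N-2\) trainable variables used in the paper) followed by the per-qubit imprinting unitary \(\exp(-it\sum_{i}\phi_{i}\sigma_{i})\). All numerical values, the choice of qubit 0 as the central vertex, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm
from qiskit import QuantumCircuit

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def star_ansatz(n, thetas):
    """One layer of R_y rotations followed by CZ gates from the central qubit 0."""
    qc = QuantumCircuit(n)
    for q in range(n):
        qc.ry(thetas[q], q)          # trainable rotation at every vertex
    for q in range(1, n):
        qc.cz(0, q)                  # central vertex connected to all others
    return qc

def encode_field(n, phi, t=1.0):
    """Imprint the 3D field: exp(-i t (phi_x X + phi_y Y + phi_z Z)) on each qubit."""
    u = expm(-1j * t * (phi[0] * X + phi[1] * Y + phi[2] * Z))
    qc = QuantumCircuit(n)
    for q in range(n):
        qc.unitary(u, [q], label="U(phi)")
    return qc

n = 3
thetas = np.random.default_rng(2).uniform(0, 2 * np.pi, n)
probe = star_ansatz(n, thetas).compose(encode_field(n, phi=[0.2, 0.1, 0.3]))
print(probe.draw())
```

The dephasing part of Eq. (3) is omitted in this sketch; its Kraus-operator form is given next.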
In terms of Kraus operators, the dephasing superoperator gives \[e^{\gamma t\mathcal{L}^{(k)}}\rho=K_{1}^{(k)}\rho[K_{1}^{(k)}]^{\dagger}+K_{2}^ {(k)}\rho[K_{2}^{(k)}]^{\dagger}, \tag{4}\] where \(K_{1}=\begin{pmatrix}\sqrt{1-\lambda}&0\\ 0&1\end{pmatrix}\) and \(K_{2}=\begin{pmatrix}\sqrt{\lambda}&0\\ 0&0\end{pmatrix}\) are Kraus operators, and \(\lambda=1-e^{-\gamma t}\) is the dephasing probability. Finally, the state is measured in the variational POVM \(\mathbf{E}(\mathbf{\mu})\) and yields the probability \(p(m)=\mathrm{Tr}[\mathcal{E}_{t}(\rho)E_{m}(\mathbf{\mu})]\). Note that \(p\) also depends on \(\mathbf{\theta}\), \(\phi\), and \(\mathbf{\mu}\). It is important to attain the ultimate quantum bound, i.e., \(\mathsf{C_{F}}=\mathsf{C_{Q}}\). We thus compare numerical results for the cost function, \(\mathsf{C_{F}}\), and \(\mathsf{C_{Q}}\) as shown in the top panels of Fig. 3. The cost function is plotted after stopping the training by Figure 1: **Variational quantum metrology**. (1) use quantum circuit \(\mathbf{U}(\mathbf{\theta})\) to prepare a variational state; (2) encode multiparameter \(\phi\) and noise using \(\mathbf{U}(\mathbf{\phi})\) and noise channels; (3) use circuit \(\mathbf{U}(\mathbf{\mu})\) to create a variational POVM for measurement; (4) send measurement results to a classical computer to optimize cost function \(\mathcal{C}(\mathbf{\theta},\mathbf{\mu})\) using a gradient-based optimizer. Update new training variables and repeat the scheme until it converges. Figure 2: **Ansatzes for preparation state and POVM**. (a) star topology entangled ansatz. (b) ring topology entangled ansatz. (c) squeezing ansatz. In the circuits, \(R_{x(y)}\): \(x(y)\)-rotation gate, \(U_{x(z)}\): global Molmer–Sørensen gate, \(\bullet\bullet\): controlled-Z gate. EarlyStopping callback [24]. The numerical results are presented at \(N=3\), and the number of layers is chosen from their optimal values as shown in the Method and Fig. 7. We find that for small noises, \(\mathsf{C}_{F}\) reaches the ultimate quantum bound, which is consistent with earlier numerical findings [12]. We also compare the performance of the star ansatz to that of the ring and squeezing ansatzes. It saturates the ultimate quantum limit for dephasing probabilities \(\lambda<0.5\), whereas the ring and squeezing ansatzes only reach the limit for \(\lambda<0.2\). The star graph exhibits a central vertex connected to the remaining \(N-1\) surrounding vertices, which facilitates robust quantum metrology, as discussed in [41]. Furthermore, we evaluate the tradeoff between the CFIM and QFIM by introducing a function \(\mathcal{T}=\mathrm{Tr}[FQ^{-1}]\). For unknown \(M\) parameters, the naive bound is \(\max(\mathcal{T})=M\), leading to simultaneous optimization of all parameters. The results are shown in the bottom panels of Fig. 3 and agree well with the CRBs presented in the top panels, wherein \(\mathcal{T}\to 3\) whenever the quantum bound is reached. So far, we observe that \(\mathcal{T}>M/2\) for all cases, which is better than the theoretical prediction previously [46]. This observation exhibits an advantage of the VQM approach. ### Barren Plateaus Variational quantum circuits under the influence of noises will exhibit a barren plateau (BP), where the gradient along any direction of the variables' space vanishes exponentially with respect to the noise level [47]. 
The BP prevents reaching the global optimum of the training space, thereby reducing the efficiency and trainability of the variational quantum circuit. The deviation of the CRBs shown in Fig. 3 may be subject to the BP raised by noise. We examine such a dependence and show the results in Fig. 4. We plot \(\mathrm{Var}[\partial_{\theta_{1}}\mathcal{C}]\), where \(\mathcal{C}\) is defined in Eq. (2), after 200 runs with random initialization of \(\mathbf{\theta}\) and \(\mathbf{\mu}\) for each value of \(\lambda\). As predicted, \(\mathrm{Var}[\partial_{\theta_{1}}\mathcal{C}]\) vanishes exponentially, with slopes of -3.158, -4.103, and -3.965 for the star, ring, and squeezing ansatz, respectively. The star ansatz exhibits slower gradient decay as \(\lambda\) approaches 1 due to its smaller trainable-variable space compared to the ring and squeezing ansatzes. This indicates better training and less susceptibility to vanishing gradients, leading to better achievement of the ultimate quantum bound.

Figure 3: **Variational quantum metrology under dephasing noise**. (Top): plot of the optimal cost function \(\mathcal{C}(\mathbf{\theta},\mathbf{\mu})\), classical bound \(\mathsf{C}_{\mathsf{F}}\), and quantum bound \(\mathsf{C}_{\mathsf{Q}}\) as functions of dephasing probability. From left to right: star, ring, and squeezing ansatz. (Bottom): plot of corresponding tradeoff \(\mathcal{T}\). Numerical results are calculated at \(N=3\), the optimal number of layers in Fig. 7, and the results are averaged over 10 samples.

Figure 4: **Barren plateau**. The variance of the gradient \(\mathrm{Var}[\partial_{\theta_{1}}\mathcal{C}]\) is plotted as a function of the dephasing probability \(\lambda\). The slope of each fit line indicates the exponential decay of the gradient, which is a sign of the barren plateau effect.

### Multiparameter estimation under the Ornstein-Uhlenbeck model

We consider the Ornstein-Uhlenbeck model, where the noise is induced by the stochastic fluctuation of the external (magnetic) field [45]. The Kraus operators are [53]
\[K_{1}(t)=\begin{pmatrix}\sqrt{1-q(t)}&0\\ 0&1\end{pmatrix},\ K_{2}(t)=\begin{pmatrix}\sqrt{q(t)}&0\\ 0&0\end{pmatrix}, \tag{5}\]
where \(q(t)=1-e^{-f(t)}\) with \(f(t)=\gamma[t+\tau_{c}(e^{-t/\tau_{c}}-1)]\), and \(\tau_{c}\) represents the memory time of the environment. In the Markovian limit (\(\tau_{c}\to 0\)), \(f(t)=\gamma t\), which corresponds to the previous dephasing case. In the non-Markovian limit with large \(\tau_{c}\), such that \(t/\tau_{c}\ll 1\), we have \(f(t)=\frac{\gamma t^{2}}{2\tau_{c}}\). In the numerical simulations, we fix \(\gamma=0.1\) and \(\tau_{c}=20\) (for the non-Markovian case), so that
\[q(t)=\begin{cases}1-\exp(-0.1t)&\text{Markovian},\\ 1-\exp(-\frac{t^{2}}{400})&\text{non-Markovian}.\end{cases} \tag{6}\]
We use this model to study the relationship between the sensing time, Markovianity, and the attainability of the ultimate quantum bound. Figure 5a displays the optimal CRBs for Markovian and non-Markovian noises as functions of the sensing time \(t\). As previously reported in [18], there exists an optimal sensing time that minimizes the CRBs for each case examined here. Moreover, the non-Markovian dephasing (nMar) provides lower metrological bounds as compared to the Markovian case (Mar). The minimum CRBs for different \(N\) are presented in Figure 5b. The results demonstrate that with an increase in \(N\), the non-Markovian noise attains a better bound than the standard quantum limit (SQL) for both the classical and quantum bounds.
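For concreteness, the following numpy sketch evaluates the time-dependent dephasing channel of Eqs. (5)-(6) with the same \(\gamma=0.1\) and \(\tau_{c}=20\) as in the text; it only illustrates the noise model itself, not the full VQM simulation, and the test state is an arbitrary example.

```python
import numpy as np

GAMMA, TAU_C = 0.1, 20.0   # decay rate and environment memory time used in the text

def q_of_t(t, markovian=True):
    """Dephasing probability q(t) = 1 - exp(-f(t)) for the Ornstein-Uhlenbeck model."""
    if markovian:                       # tau_c -> 0 limit: f(t) = gamma * t
        f = GAMMA * t
    else:                               # t << tau_c limit: f(t) = gamma * t^2 / (2 tau_c)
        f = GAMMA * t**2 / (2.0 * TAU_C)
    return 1.0 - np.exp(-f)

def kraus(t, markovian=True):
    """Single-qubit Kraus operators K1(t), K2(t) of Eq. (5)."""
    q = q_of_t(t, markovian)
    K1 = np.array([[np.sqrt(1.0 - q), 0.0], [0.0, 1.0]])
    K2 = np.array([[np.sqrt(q), 0.0], [0.0, 0.0]])
    return K1, K2

def apply_channel(rho, t, markovian=True):
    """Apply the dephasing channel to a single-qubit density matrix."""
    K1, K2 = kraus(t, markovian)
    return K1 @ rho @ K1.conj().T + K2 @ rho @ K2.conj().T

rho_plus = 0.5 * np.ones((2, 2))        # |+><+| is maximally sensitive to dephasing
print(apply_channel(rho_plus, t=5.0, markovian=False))
```

Returning to the results above, the key point is that the advantage of the non-Markovian bounds over the SQL grows with the number of probe qubits \(N\).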
This observation is in qualitative agreement with results reported using semidefinite programming [2], indicating the potential of variational optimization for designing optimal non-Markovian metrology experiments. Finally, we note that in the Ornstein-Uhlenbeck model, the quantum bound is unachievable, as indicated by \(\mathsf{C}_{\mathsf{F}}>\mathsf{C}_{\mathsf{Q}}\). It remains a question for future research on whether one can attain the quantum bound \(\mathsf{C}_{\mathsf{Q}}\) with probe designs, and the possibility for tight bounds in the non-Markovian scenario. ## III Discussion We discuss how the three ansatzes create entangled states and the role of entangled resources in achieving the quantum bound in VQM. We analyze entanglement using the concentrable entanglement (CE) defined by [4] \[\xi(\psi)=1-\frac{1}{2^{|s|}}\sum_{\alpha\in\mathcal{P}(s)}\mathrm{Tr}[\rho_{ \alpha}^{2}], \tag{7}\] where \(\mathcal{P}(s)\) is the power set of \(s,\forall s\in\{1,2,\cdots,N\}\), and \(\rho_{\alpha}\) is the reduced state of \(|\psi\rangle\) in the subsystem \(\alpha\) with \(\rho_{\emptyset}:=\mathbf{I}\). Practically, \(\xi(\psi)\) can be computed using the SWAP test circuit as stated in Ref. [4], where \(\xi(\psi)=1-p(\mathbf{0})\), with \(p(\mathbf{0})\) is the probability of obtaining \(|00\cdots 0\rangle\). The ability of the SWAP test to compute CE is due to the equivalence between conditional probability distribution and the definition of CE. We first train the three ansatzes to evaluate their ability of entangled-state generation. Particularly, the training process aims to generate quantum states with \(\xi(\psi)=\{\xi_{\mathrm{sep}},\xi_{\mathrm{GHZ}},\xi_{\mathrm{AME}}\}\), where \(\xi_{\mathrm{sep}}=0\) for a separable state, \(\xi_{\mathrm{GHZ}}=\frac{1}{2^{N}}-\frac{1}{2^{N}}\) for a GHZ state, and \(\xi_{\mathrm{AME}}=1-\frac{1}{2^{N}}\sum_{\alpha=0}^{N}\left(\frac{N}{2}\right) \frac{1}{2^{\alpha(N-N)}}\) for an absolutely maximally entangled (AME) state [8; 11]. The top panels in Fig. 6 display the results for star, ring, and squeezing ansatz, from left to right, at \(N=4\) and (2-2) layers of each ansatz as an example. All the ansatzes examined can reach the separable and GHZ state, but hard to achieve the AME state. This observation is consistent with the CEs for conventional graph states [8]. We next discuss the role of entanglement in achieving the ultimate quantum bound. In the bottom panels of Fig. 6, we graph the corresponding CEs at the optimal CRBs shown in Fig. 3, which apparently do not require the maximum entanglement (e.g., GHZ) to achieve the ultimate quantum bound. This phenomenon can be explained by the fact that maximum entan Figure 5: **Variational quantum metrology under time-dephasing noise**. (a): we present the CRBs as functions of the sensing time, demonstrating an optimal sensing time for achieving each minimum CRB. The non-Markovian dephasing (nMar) produces lower metrological bounds in comparison to the Markovian one (Mar). (b): plot of the minimal bounds for cases in (a), comparing them with the standard quantum limit (SQL) and the Heisenberg limit (HL). For non-Markovian metrology, the bounds surpass the SQL, as predicted. Figure 6: **Entanglement generation**. (Top): from left to right: the distribution of training CEs corresponds to the star, ring, and squeezing ansatzes, respectively. All the ansatzes can produce separable and GHZ states, but generating an AME state is challenging. 
The results are shown at \(N=4\) and (2-2) layers for each ansatz. (Bottom): the CEs are ploted at the optimal CRBs in Fig. 3, using the same circuit setup that in the figure. Again, \(\lambda\) is the dephasing probability. glement is not required for high-precision quantum metrology, as previously noted in Refs. [5; 35; 43]. Therefore, emphasizing the robustness of easily preparable entangled probe states and non-local POVM schemes would be advantageous for quantum metrological applications exposed to Markovian and non-Markovian noises. ## IV Methods ### Quantum circuit training In numerical simulations, we employ the ADAM optimizer to train the VQM variables [26], where the variables at step \(k+1\) are given by \[\mathbf{\theta}^{k+1}=\mathbf{\theta}^{k}-\alpha\frac{\hat{m}_{k}}{\sqrt{\hat{v}_{k}} +\epsilon}, \tag{8}\] where \(m_{k}=\beta_{1}m_{k-1}+\left(1-\beta_{1}\right)\nabla_{\mathbf{\theta}}\mathcal{C }(\mathbf{\theta}),v_{k}=\beta_{2}v_{k-1}+\left(1-\beta_{2}\right)\nabla_{\mathbf{ \theta}}^{2}\mathcal{C}(\mathbf{\theta}),\hat{m}_{k}=m_{k}/\left(1-\beta_{k}^{k} \right),\hat{v}_{k}=v_{k}/\left(1-\beta_{k}^{k}\right),\) with the hyper-parameters are chosen as \(\alpha=0.2,\beta_{1}=0.8,\beta_{2}=0.999\) and \(\epsilon=10^{-8}\). The gradient \(\partial_{\theta_{i}}\mathcal{C}(\mathbf{\theta})\) is given through the parameter-shift rule [33; 40]. The simulations are performed in Qiskit Aer simulator [38]. The number of iterations is chosen using the EarlyStopping callback [24]. To determine the appropriate number of layers for the preparation state and POVM ansatzes, we analyze the cost function (2) with different number of layers. We use \((\star,\uparrow\text{-}\hat{\star})\) to denote the minimum cost function, the number of layers for variational state preparation, and the number of layers for variational POVM. The results are shown in Fig. 7 with \((\star,\uparrow\text{-}\hat{\star})=\left(0.057,\,2\text{-}2\right)\), \((0.04,\,3\text{-}2)\), and \((0.054,\,2\text{-}2)\) for the star, ring, and squeezing ansatz, respectively. Obviously, the metrological performances of these ansatzes demonstrate that deep ansatzes are unnecessary. For the numerical simulations presented in this paper, we keep the number of layers fixed at these values. ### Computing Fisher information Classical and quantum Fisher information matrices can be computed in quantum circuits using the finite difference approximation. For the CFIM, we first derive an output probability as \(\partial_{\phi_{i}}p=\frac{p(\phi_{i}+s)-p(\phi_{i}-s)}{2s}\), for a small shift \(s\). We then compute the CFIM from \(F_{ij}=\sum_{m}\frac{1}{p(m|\phi_{i})}\left[\partial_{\phi_{i}}p(m|\phi)\right] \left[\partial_{\phi_{j}}p(m|\phi)\right]\). For the QFIM, we explicitly derive \(Q_{ij}=2\mathrm{vec}[\partial_{\phi_{i}}\rho(\phi)]^{\dagger}\left[\rho(\phi)^ {*}\otimes\mathbf{I}+\mathbf{I}\otimes\rho(\phi)\right]^{*}\mathrm{vec}[\partial_{\phi _{j}}\rho(\phi)]\), where \(\mathrm{vec}[\cdot]\) is the vectorization of a matrix, and the superscript '+' denotes the pseudo-inversion [20]. Again, we apply the finite difference to compute \(\partial_{\phi_{i}}\rho=\frac{\rho(\phi_{i}+s)-p(\phi_{i}-s)}{2s}\), and substitute into the above equations to compute the QFIM. ###### Acknowledgements. We thank C.Q. Nguyen for assisting with the initial code. This work is supported by JSPS KAKENHI Grant Number 23K13025.
2310.14416
ConViViT -- A Deep Neural Network Combining Convolutions and Factorized Self-Attention for Human Activity Recognition
The Transformer architecture has gained significant popularity in computer vision tasks due to its capacity to generalize and capture long-range dependencies. This characteristic makes it well-suited for generating spatiotemporal tokens from videos. On the other hand, convolutions serve as the fundamental backbone for processing images and videos, as they efficiently aggregate information within small local neighborhoods to create spatial tokens that describe the spatial dimension of a video. While both CNN-based architectures and pure transformer architectures are extensively studied and utilized by researchers, the effective combination of these two backbones has not received comparable attention in the field of activity recognition. In this research, we propose a novel approach that leverages the strengths of both CNNs and Transformers in an hybrid architecture for performing activity recognition using RGB videos. Specifically, we suggest employing a CNN network to enhance the video representation by generating a 128-channel video that effectively separates the human performing the activity from the background. Subsequently, the output of the CNN module is fed into a transformer to extract spatiotemporal tokens, which are then used for classification purposes. Our architecture has achieved new SOTA results with 90.05 \%, 99.6\%, and 95.09\% on HMDB51, UCF101, and ETRI-Activity3D respectively.
Rachid Reda Dokkar, Faten Chaieb, Hassen Drira, Arezki Aberkane
2023-10-22T21:13:43Z
http://arxiv.org/abs/2310.14416v1
ConViViT - A Deep Neural Network Combining Convolutions and Factorized Self-Attention for Human Activity Recognition ###### Abstract The Transformer architecture has gained significant popularity in computer vision tasks due to its capacity to generalize and capture long-range dependencies. This characteristic makes it well-suited for generating spatiotemporal tokens from videos. On the other hand, convolutions serve as the fundamental backbone for processing images and videos, as they efficiently aggregate information within small local neighborhoods to create spatial tokens that describe the spatial dimension of a video. While both CNN-based architectures and pure transformer architectures are extensively studied and utilized by researchers, the effective combination of these two backbones has not received comparable attention in the field of activity recognition. In this research, we propose a novel approach that leverages the strengths of both CNNs and Transformers in an hybrid architecture for performing activity recognition using RGB videos. Specifically, we suggest employing a CNN network to enhance the video representation by generating a 128-channel video that effectively separates the human performing the activity from the background. Subsequently, the output of the CNN module is fed into a transformer to extract spatiotemporal tokens, which are then used for classification purposes. Our architecture has achieved new SOTA results with 90.05 %, 99.6%, and 95.09% on HMDB51, UCF101, and ETRI-Activity3D respectively. Activity Recognition, Transformer, CNN ## I Introduction Activity recognition can be defined as allowing the machine to recognize/detect the activity based on information received from different sensors. These sensors can be cameras, wearable sensors, and sensors attached to objects of daily use or deployed in the environment. In this work, we are interested in activity Recognition using RGB videos. RGB videos are a complex type of data, that contain complex spatiotemporal dependency. To extract spatial features, we utilized a CNN module inspired by [1]. The primary objective of this CNN module is to enhance the representation of the video by extracting spatial tokens. The advantages of applying CNNs to video data have been extensively discussed in [1]. It has been affirmed that by applying small filters to localized neighborhoods of pixels, CNNs are capable of extracting fine-grained spatial tokens. Furthermore, it has been demonstrated that CNNs outperform self-attention mechanisms in terms of spatial token extraction. Moreover, by employing CNNs as a first step, the subsequent transformer module is able to perform self-attention on a reduced spatial set of tokens and extract temporal dependencies. To extract temporal features, we have used a video transformer architecture inspired by [2]. Within this framework, we have investigated two distinct architectures, each utilizing a different type of self-attention: Factorised Dot-Product and Factorised Self-attention. Both of these attention mechanisms apply self-attention to both spatial and temporal axes. Factorized Self-attention first applies spatial attention, followed by temporal attention on the output of the spatial attention. In contrast, the Dot-Product attention applies spatial and temporal attention to the input and subsequently fuses the results. We have proved through our experiments that factorized self-attention yields better results compared to other approaches. 
The proposed architecture has achieved state-of-the-art performance on three benchmark datasets. We conducted tests on HMDB51 [3], UCF101 [4] and ETRI-Activity3D [5] and obtained state-of-the-art results on HMDB51, UCF101, and ETRI-Activity3D with 90.05 %, 99.6%, and 95.09% respectively. ## II Related Works The existing literature on activity recognition and computer vision can be broadly categorized into three main groups: (1) CNN-based approaches, (2) Transformer-based approaches and (3) Hybrid architectures combining CNNs and Transformers. ### _CNN-based approaches_ 3D Convolutional Neural Networks (CNNs) have traditionally been the primary choice for visual data processing, encompassing various types of visual data such as images and videos. Consequently, they have held a dominant position in the field of computer vision for a considerable time. However, with the adaptation of attention mechanisms and the transformer architecture from Natural Language Processing (NLP) to Computer Vision (CV), the landscape has witnessed significant changes [6]. Previous studies have attempted to address the challenges of activity recognition using purely 3D and 2D Convolutional Neural Networks [7, 8]. However, optimizing the results and achieving satisfactory performance with a pure CNN architecture for activity recognition has proven to be challenging, primarily due to the high computational demands associated with these architectures. To overcome these challenges, the I3D approach [9] introduced the concept of inflating pre-trained 2D convolution kernels, which allowed for better optimization of the network. In addition, other prior works focused on factorizing 3D convolution kernels in various dimensions to reduce computational complexity [10, 11, 12]. More recent studies have proposed techniques to enhance the temporal modeling ability of 2D CNNs [13, 14]. However, due to the inherent nature of CNNs, which aggregate information within a small window of the neighborhood, these approaches did not achieve significant improvements in performance. Taken together, these prior works have explored different strategies to address the challenges of activity recognition using CNN architectures. While attempts have been made to optimize and enhance the performance of pure CNNs, limitations related to computational requirements and the inherent nature of CNNs' spatial aggregation persist. ### _Transformer-based approaches_ Since the introduction of the vision transformer [6], numerous studies have embraced the transformer architecture for computer vision tasks. These works have consistently surpassed the results achieved by CNNs. This is due to the transformers' ability to capture long-range dependencies and effectively attend to important regions of the input through self-attention mechanisms. Several notable works have contributed to the adoption and advancement of the transformer architecture in computer vision. These include works such as [15, 16, 17, 18], which propose various variants for spatiotemporal learning in video analysis. These variants aim to harness the power of transformers in capturing both spatial and temporal information for more comprehensive video understanding. Video Vision Transformers (Timesformer [19] and ViViT [2]) are among the early Transformers approaches for action recognition. They introduce innovative embedding schemes and adaptations to ViT [17] and other related Transformers for modeling video clips. 
In [19], the authors propose a tokenization scheme called uniform frame sampling based on a randomly selected frames from a video. Along similar lines, ViViT [2] introduced Tubelets Embedding to effectively preserve contextual time data within videos and handles 3D volumes instead of frames. Four different variants were proposed based on the attention technique: Spatiotemporal attention, Factorized Encoder, Factorized self-attention, and Factorized dot-product attention. Simultaneously, other research efforts have focused on mitigating the computational cost associated with transformer architectures while still achieving impressive results. An example of such work is the Swin Transformer [20], which presents an innovative architecture designed to strike a balance between computational efficiency and powerful performance. Collectively, these works have significantly propelled the adoption of transformer architectures in computer vision. By capitalizing on their ability to capture long-range dependencies and leverage self-attention mechanisms, these architectures have demonstrated remarkable capabilities in various visual tasks. The self-attention mechanism is considered inefficient when it comes to encoding low-level features. To address this limitation, the Swin Transformer approach introduces a solution by applying attention within a local 3D window. This localized attention mechanism allows for more efficient encoding of low-level features. ### _Hybrid approaches_ In recent research, efforts have been made to incorporate convolutional neural networks into the transformer architecture for image recognition tasks. However, these approaches have not adequately addressed the spatiotemporal aspect of videos. Recognizing this limitation, Uniformer [1] presented an architecture specifically tailored for video understanding, utilizing a concise transformer format based on 3D convolutions that unifies convolutions and transformers. By integrating convolutional operations into the transformer architecture, Uniformer aims to combine the strengths of both convolutional neural networks and transformers, resulting in an improved framework for feature encoding. In this work, we adopt 3D convolutions to address the inefficiency of self-attention in encoding low-level features. In fact, 3D convolutions enhance the capability of our model to capture both spatial and temporal information effectively. ## III Proposed Method ### _Architecture overview_ The proposed architecture consists of two main modules, as depicted in Figure 1: a CNN module to extract spatial features followed by a transformer module. The CNN module plays a crucial role in capturing spatial information from the input data. It applies convolutional operations to extract relevant features that describe the spatial characteristics of the input images. The primary objective of this CNN module is to enhance the representation of the video by extracting spatial tokens. Its output is later fed to the transformer module. The transformer module is the factorized self-attention transformer proposed in [2]. It takes advantage of its self-attention mechanism to capture long-range dependencies and model the interactions between spatial features. This module leverages the encoded spatial information from the CNN Module to extract spatiotemporal features that are vital for accurate action classification. 
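To make the overall data flow concrete, here is a much-simplified PyTorch sketch of a hybrid pipeline of this kind: a small 3D-CNN stem that lifts the 3-channel clip to a 128-channel representation, followed by a transformer encoder over the flattened spatiotemporal tokens and a classification head. All layer counts and sizes are illustrative assumptions and do not reproduce the exact ConViViT configuration.

```python
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    def __init__(self, num_classes, embed_dim=128, depth=4, heads=8):
        super().__init__()
        # CNN module: enhance the RGB clip into a 128-channel representation.
        self.cnn = nn.Sequential(
            nn.Conv3d(3, embed_dim, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.BatchNorm3d(embed_dim),
            nn.GELU(),
        )
        # Transformer module: self-attention over the flattened spatiotemporal tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                           # x: (B, 3, T, H, W)
        feats = self.cnn(x)                         # (B, C, T, H', W')
        tokens = feats.flatten(2).transpose(1, 2)   # (B, T*H'*W', C)
        tokens = self.transformer(tokens)
        return self.head(tokens.mean(dim=1))        # average pooling over all tokens

model = HybridCNNTransformer(num_classes=51)
logits = model(torch.randn(2, 3, 4, 32, 32))        # tiny random clip just to check shapes
print(logits.shape)                                 # torch.Size([2, 51])
```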
Inspired by the contributions of these two works, we propose an hybrid approach that combines the strengths of CNNs for extracting spatial cues and the transformer architecture for extracting spatiotemporal tokens. Our CNN module aims to enhance the video representation by transforming it from a three-channel video to a 128-channel video. Subsequently, the transformer takes the output of the CNN module and applies a Patch Embedding similarly to [1]. Then a factorized self-attention is applied generating a rich spatiotemporal representation that will be used by the classification head. ### _Patch Embedding_ The main purpose of Patch Embedding is to provide the order information to the transformer by slicing the input into \(16\times 16\) patches. Our proposed patch embedding block is inspired by the design and implementation of Uniformer's Dynamic Position Embedding (DPE) architecture [1] since the use of DPE improves the state-of-the-art results by \(0.5\%\) and \(1.7\%\) on ImageNet and Kinetics-400, respectively. This shows that by encoding the position information, DPE can maintain the spatiotemporal order, thus contributing to better learning of the spatiotemporal representation [1]. The main contribution here is the use of a CNN layer with \(0\) padding to create a more adequate representation. The fact that the transformer does not take video as input makes the application of a CNN layer more advantageous because this layer allows to manipulate and adjust how we introduce the order information to obtain better results. ### _CNN Block_ The proposed CNN block aims to extract spatial information and offers a compact spatial representation to be fed to the transformer block. It is based on 3D-CNNs and Depth-Wise CNNs architectures (see figure 1) as follows: * **DW 3D-CNN \(3\times 3\times 3\)**: the depth-wise 3D-CNN aims to extract the spatial features of each 3D neighborhood (\(3\times 3\times 3\)). It consists in applying a single convolutional filter for each input channel. This allows to better extract spatial features. * **3D-CNN \(1\times 1\times 1\)**: the 3D-CNN (\(1\times 1\times 1\)) aims to reduce the dimension of the input before applying the (\(5\times 5\times 5\)) filter to save computation time. * **3D-CNN \(5\times 5\times 5\)**: The application of a 3D filter of size 5 (larger than 3) allows to have a more global representation of the spatial neighborhood. * **3D-CNN \(1\times 1\times 1\)**: the 3D-CNN (\(1\times 1\times 1\)) is intended to increase the size of the input to give more information to the following steps. ### _Spatiotemporal Transformer Module_ The transformer block is the most important part of the proposed architecture. It takes the output of the CNN Module (spatial representation) in order to create a spatiotemporal representation. The application of attention allows any transformer to focus on the most important parts of the input as well as to understand the long sequences which allows extracting the temporal dependencies from a spatial sequence. To extract temporal features, two types of self-attention could be chosen, the factorized dot product and the factorized self-attention proposed in [2]. Both apply attention to the spatial and temporal axes. Factorised self-attention _;_ It consists in applying attention on the spatial axis of the input followed by temporal attention. The application of attention on the spatial axis first allows us to take into consideration the dependencies between Fig. 
1: Overall proposed architecture for Human Activity Recognition the spatial tokens and to deduce the most important parts that characterize this axis, thus preparing the input for the next operation of self-attention. The result of the self-attention on the spatial axis will be the input for the self-attention on the time axis that has the goal of the extraction of the spatiotemporal features. Factorized Dot-Product attentionFactorized self-attention is a way of applying self-attention. The idea is to apply attention on the spatial axis of the input X and then have an output Y that we will apply attention on its temporal axis. The application of attention on the spatial axis first allows to take into consideration the dependencies between the spatial tokens and to deduce the most important parts that characterize this axis, thus preparing the input for the next operation of self-attention. The result of the self-attention on the spatial axis will be the input for the self-attention on the time axis that has for the goal of the extraction of the spatiotemporal features. Although both variants are interesting, the factorized attention seems to be more suitable for our architecture (non-video inputs). ## IV Experiments ### _Datasets_ * **ETRI-3D**[5]: ETRI-3D is the first large-scale RGB-D dataset of the daily activity of the elderly (\(112\ 620\) samples). ETRI-3D is collected by Kinect v2 sensors and consists of three synchronized data modalities: RGB video, depth maps, and skeleton sequences. To film the visual data, 50 elderly subjects are recruited. The elderly subjects are in a wide age range from 64 to 88 years old, which leads to a realistic intra-class variation of actions. In addition, they acquired a dataset for 50 young people in their twenties in the same manner as the elderly. * **UCF101**[4]: UCF101 is an action recognition dataset of realistic action videos collected from Youtube with 101 action categories. With 13320 videos of 101 action categories, UCF 101 offers a wide variety of actions with the presence of large variations in camera movement, object appearance and pose, object scale, viewpoint, cluttered background, lighting conditions, etc. * **HMDB51**[3]: The HMDB51 dataset is a large collection of realistic videos from a variety of sources, including movies and web videos. The dataset consists of 6,849 video clips from 51 action categories (such as "jump" and "laugh"), with each category containing at least 101 clips. The original evaluation scheme uses three different training/test divisions. Within each division, each action class has 70 clips for training and 30 clips for testing. ### _Visualization of ConViViT shallow and Deep Layers outputs_ Figure 2 is a visualization of the effect of our spatial and spatiotemporal block in the shallow and deep layers. We observe that the output of the first CNN block allows to localize the person (see figure 2(a)). Then the output of the next 3D-CNN block which is the output of the spatial module gives more importance to the person who makes the action (see figure 2(b)). Figure 2(c) shows five attention maps of ConViViT computed for a sequence of five images showing an action from HMDB51 dataset. Each attention map refers to the visualizations of the attention weights computed between each patch in the image. The last attention map which refers to the last frame of the action sequence shows that our model has succeeded to capture the entire trajectory of the action (red zones). 
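The following PyTorch sketch illustrates the factorized self-attention idea described in the previous section: attention is first computed among the spatial tokens within each frame, and the result is then attended along the temporal axis for each spatial location. The tensor layout and dimensions are illustrative assumptions rather than the exact ConViViT implementation.

```python
import torch
import torch.nn as nn

class FactorizedSelfAttention(nn.Module):
    """Spatial attention within each frame, followed by temporal attention per location."""
    def __init__(self, dim=128, heads=8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, T, S, C) spatiotemporal tokens
        B, T, S, C = x.shape
        # 1) Spatial attention: each frame attends over its own S spatial tokens.
        xs = x.reshape(B * T, S, C)
        xs, _ = self.spatial_attn(xs, xs, xs)
        xs = xs.reshape(B, T, S, C)
        # 2) Temporal attention: each spatial location attends over the T frames.
        xt = xs.permute(0, 2, 1, 3).reshape(B * S, T, C)
        xt, _ = self.temporal_attn(xt, xt, xt)
        return xt.reshape(B, S, T, C).permute(0, 2, 1, 3)   # back to (B, T, S, C)

block = FactorizedSelfAttention()
out = block(torch.randn(2, 8, 49, 128))        # 8 frames of 7x7 spatial tokens
print(out.shape)                               # torch.Size([2, 8, 49, 128])
```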
### _Ablation study_ In this section, we will investigate the influence of different modules of the proposed architecture on the overall performance. All experiments in this study are conducted on the HMDB51 dataset. Mainly we focus on the usefulness of using a CNN block to extract spatial features as well as the usefulness of factorized attention. As a reminder, the factorized attention block is inspired by the Vivit [2] architecture and the CNN block by Uniformer [1]. In order to study the impact of CNN and factorized attention, we compare the variants of the proposed architecture with Uniformer and Vivit. So, the Vivit architecture is based only on the transformer which applies the factorized attention directly to the input. The Uniformer consists of a hybrid architecture based on the general formula of attention introduced in the first vision transformer [6]. #### Iv-C1 Factorized Attention vs Factorized Dot Product Attention We compare the two variants of self-attention: The factorized self-attention used in our architecture and the factorized dot product. As illustrated in Table I (first two rows), the factorized attention outperforms the factorized dot-product architecture. #### Iv-C2 Impact of the spatial block To illustrate the importance of the proposed spatial block, we compare our results with those obtained by Vivit [2]. The table I (first and third rows) shows that the proposed architecture exceeds Vivit by \(9.87\%\). We opted for a hybrid architecture of CNN and transform it to prepare a spatial representation before applying any type of attention. The result of our test with dot-product attention supports our hypothesis regarding the use of CNN before transformers in computer vision. In order to study the impact of the number of the required CNN blocks in the CNN Module, we compare the results obtained by our architecture based on a single CNN Block with two CNN Blocks. We notice that using two CNN Blocks yields to \(90.05\%\) accuracy versus \(64.47\%\) using one CNN block (see Table I - first and fourth rows). #### Iv-C3 Spatio-temporal block To prove the value of our spatio-temporal block we can compare our results with the results of Uniformer [1] since they opted for a hybrid architecture too but they used the general formula of attention introduced in the first vision transformer [6]. This experiment reveals an improvement of accuracy by \(4.69\%\) when comparing our architecture (\(90.05\%\)) to first vision transformer [6] (\(85.36\%\)) (see Table I - First and fifth rows). Actually, applying normal attention to the output of CNN gives good results but our choice to apply factorized attention is better because applying attention on an axis allows the model to see the dependencies between the elements on that axis. Thus, by applying attention to the spatial and then temporal axis we obtained results that exceeded that of Uniformer. ### _Comparison with state-of-the-art_ The table II represents a comparison between our architecture and the state-of-the-art architectures on HMDB51, we added the results of Uniformer and Vivit since they were not tested on HMDB51. Our architecture improved the previous state of the art by \(2.49\%\) and achieved a new SOTA result of \(90.05\%\). The table III represents a comparison between our architecture and seven other state-of-the-art architectures tested on UCF101. The results of Uniformer and Vivit have not been included because they were not tested on UCF101. 
Our architecture improves the previous state of the art by 0.22% and achieved a new SOTA result of \(99.6\%\). The table IV represents a comparison between our architecture and the FSA-CNN [5] architecture. FSA-CNN is a CNN architecture that takes as input videos (RGB) and/or skeleton data. It is based on a deep CNN network and an innovative approach to replace the activation functions. Our model outperforms the FSA-CNN [5] RGB model by more than \(4.9\%\) and FSA-CNN RGB+S by \(1.39\%\). Although the authors in [5] claim to include spatiotemporal variation in Fig. 2: Visualization of spatial and spatiotemporal Transformer modules outputs on a clip from HMDB51 dataset action data, a CNN network that takes RGB video and skeleton data as input is not able to extract a complete spatiotemporal representation due to the nature of CNN and the complexity of the data (2 modalities of very different types). Our architecture outperforms FSA-CNN thanks to the proposed transformer that supports the extraction of spatiotemporal dependencies. ## Conclusion and perspectives In this paper, we propose a novel sequential combination of 3D CNN convolution and a spatiotemporal transformer. Experiments show that the proposed architecture achieves new SOTA results with \(90.05\%\), \(99.6\%\), and \(95.09\%\) on HMDB51, UCF101, and ETRI-Activity3D datasets respectively. In future work, we will investigate further schemes for combining CNN architectures with transformers.
2310.15708
Tuning the topological character of half-Heusler systems: A comparative study on Y$T$Bi ($T$ = Pd, Pt)
Half-Heusler systems host a plethora of different ground states, especially with non-trivial topology. However, there is still a lack of spectroscopic insight into the corresponding band inversion in this family. In this work, we locally explore the half-Heuslers Y$T$Bi ($T =$ Pt and Pd) by means of scanning tunneling microscopy/spectroscopy. From our analysis of the (120) surface plane, we infer that the increase of the spin--orbit coupling upon going from Pd to Pt is the main player in tuning the surface states from trivial to topologically non-trivial. Our measurements unveil a ($2 \times 1$) reconstruction of the (120) surface of both systems. Using density functional theory calculations, we show that the observed different behavior of the local density of states near the Fermi level in these two materials is directly related to the presence of metallic surface states. Our work sheds new light on a well known tunable family of materials and opens new routes to explore the presence of topological states of matter in half-Heusler systems and its microscopic observation.
J. C. Souza, M. V. Ale Crivillero, H. Dawczak-Dębicki, Andrzej Ptok, P. G. Pagliuso, S. Wirth
2023-10-24T10:35:33Z
http://arxiv.org/abs/2310.15708v1
Tuning the topological character of half-Heusler systems: A comparative study on Y\(T\)Bi (\(T\) = Pd, Pt) ###### Abstract Half-Heusler systems host a plethora of different ground states, especially with non-trivial topology. However, there is still a lack of spectroscopic insight into the corresponding band inversion in this family. In this work, we locally explore the half-Heuslers Y\(T\)Bi (\(T\) = Pt and Pd) by means of scanning tunneling microscopy/spectroscopy. From our analysis of the (120) surface plane, we infer that the increase of the spin-orbit coupling upon going from Pd to Pt is the main player in tuning the surface states from trivial to topologically non-trivial. Our measurements unveil a \((2\times 1)\) reconstruction of the (120) surface of both systems. Using density functional theory calculations, we show that the observed different behavior of the local density of states near the Fermi level in these two materials is directly related to the presence of metallic surface states. Our work sheds new light on a well known tunable family of materials and opens new routes to explore the presence of topological states of matter in half-Heusler systems and its microscopic observation. half-Heusler, spin-orbit coupling, topological materials, scanning tunneling microscopy ## I Introduction The seminal works on the quantum (spin) Hall effect [1; 2; 3] were crucial to definitely incorporate topology into the analysis of electronic band structure of solids [4; 5; 6]. The net result was the prediction and observation of a plethora of quantum topological states of matter, such as topological insulators (TIs) [6], Dirac and Weyl semimetals [5; 7] and even more exotic excitations [8; 9]. Due to its unique physical properties, in which surface states often play a decisive role [5; 6], the application of such systems can reach from spintronics to quantum computing [10]. Despite such potential applications, two key ingredients have been limiting factors for a broader use of TIs [11; 10]. The first one is that many materials have their Fermi energy \(E_{\rm F}\) located in one of the bands derived from the bulk. In other words, experimentally the bulk is not fully insulating, as, e.g. typically observed in layered chalcogenides [11; 12; 13; 14]. Secondly, the Dirac point is often located sizeably away from \(E_{\rm F}\), preventing these materials from potential usage in, e.g., transport applications. A solution for both problems may reside in correlated systems, where the many body interactions pin the Dirac point close to \(E_{\rm F}\), within the bulk gap [15; 16]. However, their correlated phases normally appear only at low temperatures, which implies that those topological phases may not be suitable for applications [17; 18; 19; 20; 21]. As such, it is imperative to find appropriate materials with an insulating-like bulk and Dirac points near \(E_{\rm F}\), whose properties can also be tuned to specific requirements and even show a good match to important semiconducting substrates [22]. One of the most versatile systems that host numerous topological states of matter is the half-Heusler compounds [4; 23]. This family, with a simple MgAgAs-type cubic structure (space group \(F\overline{4}3m\)), that can be seen as a ZnS-type structure with filled octahedral lattice sites [Fig. 1 (a)] [23], has been extensively explored due to the fact that its semiconducting, magnetic, thermoelectric and strongly correlated properties can often be tailored [4; 24; 25; 26; 27; 28]. 
Earlier theoretical calculations have suggested that the mechanism behind the appearance of topological features in this family depends on the band inversion, which is very similar to the one observed in the prototypical CdTe and HgTe systems [29]. In both compounds, at the \(\Gamma\) point near \(E_{\rm F}\), the energy bands are split into \(\Gamma_{6}\), \(\Gamma_{7}\) (both twofold degenerate) and \(\Gamma_{8}\) (fourfold degenerate) states. This splitting originates from the zinc blende crystal symmetry and strong spin-orbit coupling [29; 30; 31; 32]. In the context of band topology, CdTe possesses a normal band order (the \(s\)-like \(\Gamma_{6}\) state sits above the \(p\)-like \(\Gamma_{8}\) state) while in HgTe band inversion occurs such that \(\Gamma_{6}\) resides below \(\Gamma_{8}\)[33]. Both situations are reproduced in Y\(T\)Bi, where for \(T\) = Pd the normal (trivial) state is realized, while for \(T\) = Pt a band inversion occurs and non-trivial states emerge (see Fig. 9 in the Appendix) [29; 30; 31; 32]. In fact, previous angle resolved photoemission spectroscopy (ARPES) measurements on YPtBi have shown the presence of unusual topological surface states [34; 35]. Here, the topological surface states are observed throughout the Brillouin zone. This situation is different from the one typically observed, e.g., in the chalcogenides TIs, where the topological surface states emerge as a Dirac cone [33; 34; 36; 37]. Moreover, topological features were also observed in form of Weyl fermions in related half-Heusler semimetals \(R\)PtBi (\(R\) = Nd, Gd, Yb) [38; 17; 39]. In particular, topological features can affect a possible superconducting state, even resulting in triplet superconductivity for some members of the \(RT\)Bi family (\(R\) = rare earth, \(T\) = Pd or Pt) [40; 41; 42; 43; 44; 45; 46]. In an even more fundamental aspect, comparing YPtBi and YPdBi can be an excellent platform to experimentally tune the topological properties through the spin-orbit coupling. Both systems possess very similar lattice parameters [6.652(1) A and 6.639(1) A for YPtBi [41] and YPdBi [42], respectively], which makes the spin-orbit coupling the key parameter to distinguish between trivial (YPdBi) and non-trivial (YPtBi) topological states [30; 31; 32; 34; 35]. These compounds are particularly attractive due to the possibility of obtaining high quality thin films, increasing their potential applicability [47]. Previous nuclear magnetic resonance (NMR) [48; 49] and electron spin resonance (ESR) experiments [50] pointed toward a strong impact of spin-orbit coupling on the detailed band structure of Y\(T\)Bi. A direct experimental visualization of the surface states resulting from band inversion in half-Heuslers has not been demonstrated, yet. Additionally, although this family of materials supports so many different physical properties, which are often related to surface states, little is known about the surface properties of half-Heuslers. This is, at least in part, certainly related to the fact that half-Heusler compounds with cubic structure are notoriously difficult to cleave, rendering an _in situ_ preparation of atomically flat surfaces from bulk samples a challenge. In consequence, reports employing scanning tunneling microscopy/spectroscopy (STM/STS) are scarce and focused on disordered surfaces in single crystals [51] or on the study of surface reconstructions in thin films [52]. 
In this work, we report on atomically flat surfaces investigated by STM/STS, combined with first-principles density functional theory (DFT) slab calculations, to explore the local properties of the half-Heuslers YPdBi and YPtBi. We cleaved our samples _in situ_, most likely along the (120) planes exposing a \((2\times 1)\) reconstructed YBi-terminated surface. From our STS data we infer a _finite_ local density of states (local DOS or LDOS) \(\eta(E)\) at \(E_{\rm F}\) for YPtBi, while the trivial YPdBi compound exhibits a well defined gap around \(E_{\rm F}\). We argue that the difference in the LDOS is likely due to the formation of metallic surface states in YPtBi, a finding corroborated by our slab calculations. Our work establishes the possibility of using STM as a local probe to investigate half-Heusler systems and suggests that a tuning of the LDOS can be achieved through the increase of the spin-orbit coupling upon going from Pd to Pt. ## II Methods Single crystalline samples of Y(Pd,Pt)Bi were synthesized by the Bi self flux growth technique with starting elements Y (99.99%):(Pd,Pt) (99.99%):Bi (99.999+%) in the proportion of 1:1:10 [53]. While YPtBi samples naturally expose (001), (110) and (111) planes in a pyramid-like shape, YPdBi samples only expose (001) planes (all samples had a cube-like shape). The investigated samples had an approximate size of \(1\times 1\times 1\) mm\({}^{3}\). STM/STS measurements were performed in a ultrahigh vacuum system at pressures \(p\leq 2.5\times 10^{-9}\) Pa and at temperatures \(T\) = 4.6 K. A total of seven (four YPdBi and three YPtBi) samples were cleaved _in situ_ at temperatures \(T\approx 20\) K. The tunneling current \(I\) was measured using electrochemical prepared tungsten tips and a bias voltage \(V_{b}\) was applied to the samples. The topographies were obtained in a constant current mode with a pre-defined current set point \(I_{sp}\). Most topographies were obtained in a dual-bias mode, i.e., forward and backward scans along the fast scan direction were obtained with a \(V_{b}\) of the same magnitude, but with opposite signs. We did not see any differences in dual-bias mode (i.e. for the different values of \(V_{b}\)) when scanning the samples along the (120) planes. The d\(I\)/d\(V\)-spectra were acquired by a lock-in technique applying a modulation voltage of typically \(V_{mod}=0.3\) mV at 117 Hz. The first-principles density functional theory (DFT) calculations were performed using the projector Figure 1: (a) Crystal structure of the half-Heusler systems. The blue and green planes indicate the (120) planes of YBi and Pd/Pt termination, respectively. The black lines outline the unit cell. (b) Brillouin zone and its projection for the (111) and (120) surface planes. augmented-wave (PAW) potentials [54] implemented in the Vienna Ab initio Simulation Package (vasp) code [55; 56; 57]. The calculations containing the spin-orbit coupling (SOC) were performed with the generalized gradient approximation (GGA) under the modified Becke-Johnson (mBJ) exchange potential [58; 59; 60]. The energy cutoff for the plane-wave expansion is set to \(350\) eV. The density of states was calculated using \(12\times 12\times 12\) k-point \(\Gamma\)-centered grids in the Monkhorst-Pack scheme [61]. The lattice constants were assumed to be equal to the experimental values, i.e. \(\approx 6.64\) A for both compounds [62]. 
The band structures from the DFT calculations were used to find tight binding models by Wannier90 [63; 64], which allowed us to calculate surface state spectra by WannierTools[65]. The theoretical simulation of STM topographies for (120) YBi-terminated surfaces (without and with reconstruction) were computed using the Tersoff-Hamann approach [66]. Due to technical limitations of the mBJ potential for slab-type calculations, these specific DFT calculations were performed using a GGA with Perdew-Burke-Ernzerhof (PBE) parametrization [67]. More details on the calculations are provided in the Appendix D, _Details of Bulk and surface band structure calculations_. Figure 2: (a) \(20\times 20\) nm\({}^{2}\) topography along the (120) plane for YPtBi (bias voltage \(V_{b}=-300\) mV, \(I_{sp}=0.7\) nA, scale bar of \(5\) nm). The height scans along two (almost) perpendicular directions are shown in (b) by curves of corresponding color. The step edge height is, approximately, \(115\) pm. (c) \(100\times 100\) nm\({}^{2}\) field of view (scale bar of \(10\) nm). The height scan along the violet line over several step edges is presented in (d). (e) \(30\times 30\) nm\({}^{2}\) STM topography along the (120) plane for YPdBi (\(V_{b}=-200\) mV, \(I_{sp}=0.6\) nA, scale bar of \(5\) nm). (f) High resolution \(6\times 6\) nm\({}^{2}\) field of view (scale bar of \(1\) nm). (f) Two perpendicular height scans from (f) are presented by corresponding colors. View onto the topmost layer of (h) Pd/Pt- and (i) YBi-terminated surfaces for the (120) plane. In (i), the proposed (\(2\times 1\)) surface reconstruction is indicated where hollow circles mark empty atom positions. Theoretically predicted STM topography for (120) surfaces: (j) unreconstructed and (k) (\(2\times 1\)) reconstructed YBi-terminated surface. The topography was simulated for a tip \(\sim\)1 Å above the surface of area \(5.3\times 5.9\) nm\({}^{2}\). The blue (red) color corresponds to high (low) charge density. ## III Results and Discussion ### Topography of YPtBi single crystals _In situ_ preparation of clean surfaces (in case of bulk single crystals typically by cleaving) is of utmost importance for STM/STS studies but often exceedingly difficult for materials of cubic crystal structure [20]. Our single crystals of half-Heusler compounds YPdBi and YPtBi naturally expose (001) crystallographic planes, while the (111) plane was only found for YPtBi. Figure 1(b) shows the surface projection of the Brillouin zone along the latter direction. In principle, the pyramid-like shape of our YPtBi samples may open the possibility of exploring surfaces along the (001) and (111) planes. However, the YPdBi crystals had a more cube-like shape, suggesting a preferred cleave along the (001) plane. We emphasize that for a reasonable comparison between results obtained on both compounds it is vital to investigate _identical_ crystallographic planes. Therefore, we focus on samples mounted along the (001) direction in the following (further details of measurements for cleaving YPtBi along the (111) plane are provided in Appendix A, see Figs. 4, 5 and 6). Atomically flat surfaces on cleaved half-Heuslers are extremely scarce and required extensive search. One example is shown in Fig. 2(a) for YPtBi where, in principle, the cleaving was expected to occur along the (001) plane. 
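As a rough illustration of the Tersoff-Hamann simulations mentioned in the Methods, the sketch below extracts a constant-current topography as an iso-density surface of a model charge density: for every lateral position, the tip height is the point where the exponentially decaying density crosses a fixed setpoint. The model density, decay constant, and setpoint are invented for illustration only; the actual simulations in this work use the self-consistent charge densities from the slab DFT calculations.

```python
import numpy as np

# Toy local density of states rho(x, y, z) above a corrugated surface (arbitrary units).
# In the Tersoff-Hamann picture the tunneling current is proportional to the LDOS at the
# tip apex, so a constant-current topography is an iso-density surface z(x, y).
x = np.linspace(0.0, 2.0, 80)            # nm
y = np.linspace(0.0, 2.0, 80)            # nm
z = np.linspace(0.1, 1.0, 200)           # nm above the surface
X, Y = np.meshgrid(x, y, indexing="ij")
corrugation = 0.05 * (np.cos(2 * np.pi * X / 0.66) + np.cos(2 * np.pi * Y / 0.74))
kappa = 11.0                             # decay constant in 1/nm, order of magnitude for a ~5 eV barrier
rho = np.exp(-2 * kappa * (z[None, None, :] - corrugation[:, :, None]))

setpoint = 1e-4                          # iso-density value playing the role of the current set point
# For each (x, y), find the largest height where rho still exceeds the setpoint.
above = rho > setpoint
topo = z[np.clip(above.sum(axis=2) - 1, 0, len(z) - 1)]

print("apparent corrugation:", topo.max() - topo.min(), "nm")
```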
Before comparing the results from two different compounds, it is crucial to identify which planes and terminations are obtained since surface states depend decisively on those two parameters [34; 35]. Notably, all the obtained flat surfaces show a \(\sim 30^{\circ}\) tilt angle with respect to the sample mounting plane (001) in this cleaving configuration. This is a hint that the obtained surfaces are _not_ (001) planes. As will be argued below, the exposed planes are likely YBi-terminated (120) planes instead, which are highlighted in blue in Fig. 1(a). Three crucial pieces of information are helpful to identify the cleaving plane: (i) As mentioned, the surfaces are tilted by \(\sim 30^{\circ}\) with respect to the sample mounting plane. This renders the (120) plane a likely surface as it is expected \(26.6^{\circ}\) away from the (001) plane. (ii) The distance between corrugations and the (iii) the height of the step edges can be analyzed. Fig. 2(a) exemplifies the latter two in the same field of view. The apparent height profile \(\Delta z\) as a function of the lateral distance \(\Delta x\) (height scans) shown in Fig. 2(b) reveals a step edge height of \(\sim 115\) pm and a distance between corrugations \(d_{exp}^{(120)Pt}=0.66(3)\) and \(0.68(3)\) nm for the pink and turquoise directions, respectively. Those distances are in agreement with the ones extracted from the Fourier transform of larger areas, from which we obtain \(d_{exp}^{(120)PtFT}=0.67(3)\) and \(0.70(3)\) nm (see Fig. 7 in the Appendix). In order to gain better statistics on the average step edge height a \(100\times 100\) nm\({}^{2}\) area obtained on YPtBi is investigated, Fig. 2(c). We can clearly see the occurrence of several step edges and a lack of adatoms. We note that the latter observation distinguishes the here observed surfaces over measurements on the (111) plane (compare Fig. 4 in the Appendix). The average step edge height between different exposed surfaces is \(d_{exp}^{(120)}\approx 135\) pm [Fig. 2(d)] indicating that either exclusively YBi-terminated or Pt-terminated are observed. Note that the theoretically expected distance between these planes is 148 pm, see Fig. 1(a). A Pt-terminated surface can be ruled out since a distorted hexagonal lattice with distances between Pt atoms of 0.66 and 0.81 nm would be expected, Fig. 2(h), which is in clear contrast to the observation of Fig. 2(a). On the other hand, for a YBi-terminated surface a rectangular lattice with distances of 0.33 and 0.74 nm between atoms is expected. Therefore, we propose a \((2\times 1)\) reconstructed YBi surface where half of the atoms are missing, see Fig. 2(i). This scenario is consistent with the observed distances between corrugations and STM simulations obtained through slab DFT calculations. Reconstructed surfaces, including the \((2\times 1)\) type, are commonly observed on both, bulk samples [52] and thin films [68; 69; 70; 71] of half-Heusler compounds. The driving forces behind these reconstructions were argued to be charge neutrality and a minimization of the number of dangling bonds [34; 35; 52]. However, the (120) surface plane has not been investigated so far. In order to get further insight, we conducted first-principles slab calculations for this particular surface termination. Specifically, the total energies for slabs without and with \((2\times 1)\) reconstruction were calculated. The reconstructed slabs contained 84 sites, i.e. 28 atoms of each species (for further details see Appendix D). 
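The distances used in this identification follow directly from the cubic lattice constant. The short check below assumes, as sketched in Fig. 1(a), that YBi and Pd/Pt layers alternate along the [120] stacking direction, and reproduces the 148 pm and 74 pm layer separations as well as the 0.33 nm and 0.74 nm in-plane periods of the YBi net and the doubled period expected for the \((2\times 1)\) reconstruction.

```python
import numpy as np

a = 0.664  # nm, cubic lattice constant of Y(Pd,Pt)Bi used in the calculations

d_120 = a / np.sqrt(5)        # spacing of the (120) lattice planes
step_YBi = d_120 / 2          # distance between consecutive YBi (or Pd/Pt) layers
pt_below = d_120 / 4          # Pd/Pt layer sitting between two YBi layers

short_period = a / 2                  # in-plane period of the YBi net along [001]
long_period = a * np.sqrt(5) / 2      # in-plane period along the perpendicular in-plane direction
reconstructed = 2 * short_period      # (2 x 1) reconstruction doubles the short period

print(f"d(120)               = {d_120 * 1e3:4.0f} pm")
print(f"YBi-YBi step height   = {step_YBi * 1e3:4.0f} pm  (observed ~135 pm)")
print(f"Pd/Pt below YBi layer = {pt_below * 1e3:4.0f} pm")
print(f"YBi net periods       = {short_period:.2f} nm x {long_period:.2f} nm")
print(f"(2x1) period          = {reconstructed:.2f} nm  (observed 0.66-0.70 nm)")
```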
To allow comparison to the non-reconstructed surface, two Pd/Pt atoms were removed from one surface, but added as free atoms to the total energies. The calculations clearly favor a reconstructed surface by about 4.1 eV in case of YPdBi, and about 8.3 eV for YPtBi. From the valence situation in the half-Heusler compounds, one may expect an YBi-terminated surface to be charge-neutral. In line with the statement above [52] one may then speculate about a minimum number of dangling bonds for the reconstructed surface and hence, a limited impact of dangling bonds on the surface properties. ### Topography of YPdBi Naturally, also on YPdBi atomically flat surfaces needed to be extensively searched for. Areas of \(30\times 30\) nm\({}^{2}\) could be identified, as exhibited in Fig. 2(e). However, we were not able to find any step edges in all of our investigated YPdBi cleaves. Importantly, within these areas we observed the same rectangular pattern as for YPtBi. This pattern is confirmed by high resolution topographies, as presented in Fig. 2(f), where the obtained distances between corrugations are \(d_{exp}^{(120)Pd}=0.61(3)\) and \(0.72(3)\) nm [Fig. 2(g)]. These values are consistent with results from Fourier analyses obtained on bigger areas (see Fig. 7 in the Appendix) as well as with the YPtBi results. Apart from the missing step edges, all of the investigated, atomically flat areas on YPdBi appeared to be consistent with the plane orientation and termination as observed for YPtBi cleaves. In particular, the experimentally obtained surfaces are again tilted by \(\sim 30^{\circ}\) with respect to the sample mounting plane (001). Hence, our observation point again toward \((2\times 1)\) reconstructed surfaces along the (120) plane. Figures 2(j) and (k) represent STM simulations obtained through slab DFT calculations for YBi-terminated surfaces without and with \((2\times 1)\) reconstruction, respectively (for details, see Appendix D.3). These results indicate that without a surface reconstruction we would likely observe stripes along the (100) crystallographic direction [Fig. 2(j)]. Such stripes are absent on \((2\times 1)\) reconstructed surfaces [Fig. 2(k)], in line with our observations. The simulations also help explaining the subtle differences in the topographies upon going from Pd to Pt samples. For a (120) termination, the Pd/Pt atoms reside only 74 pm below the topmost YBi layer and hence, the Pd/Pt atoms may also contribute to the topography, as suggested by the yellow contributions in Figs. 2(j) and 2(k). The radial extent of the \(4d\) orbitals is smaller than the \(5d\) ones [72; 73]. Therefore, the second-to-topmost layer may have slightly different contributions to the topography depending on whether it is Pd or Pt. ### Bulk properties of Y\(T\)Bi (\(T=\) Pd, Pt) Before discussing our spectroscopic results of the _surface_ properties of YPdBi and YPtBi, we address possible differences in the _bulk_ DOS near \(E_{\rm F}\) for these two materials as this may easily influence the spectral weight measured at the surface. As already mentioned, half-Heusler systems have a three dimensional character and, as such, bulk states might be relevant [74; 75]. In fact, previous specific heat studies have obtained very similar Sommerfeld coefficients \(\gamma\) for both systems. While for YPdBi \(\gamma=0.3(1)\) mJ mol\({}^{-1}\)K\({}^{-2}\) was reported [53], results for YPtBi ranged from \(\sim\)0.1 to 0.4 mJ mol\({}^{-1}\)K\({}^{-2}\)[76; 43]. Figure 3: (a) Topography of YPtBi as in Fig. 
2(a) with areas marked within which d\(I\)/d\(V\)-spectra were averaged (in addition to the total area). (b) d\(I\)/d\(V\)-spectra within the purple and green rectangles as well as for the total area (black) presented in (a). Also, the calculated bulk DOS (dashed line) is included. The latter is normalized at negative bias to the experimental value to improve the visualization. (c) Slab calculated electronic band structure for a YBi-terminated (120) plane of YPtBi. (d) \(6\times 6\) nm\({}^{2}\) topography for YPdBi (\(V_{b}=-200\) mV, \(I_{sp}=0.6\) nA, scale bar of 1 nm). (e) d\(I\)/d\(V\)-spectra averaged over the magenta, orange and purple areas as well as over the whole area shown in (d). Again, the calculated bulk DOS for YPdBi is included for comparison (dashed line). (f) Slab calculated electronic band structure for the YBi-terminated (120) plane of YPdBi. Assuming a free conduction electron gas model with \(\gamma=(2/3)\pi^{2}k_{B}^{2}\eta_{s}(E_{\rm F})\), where \(k_{B}\) is the Boltzmann constant and \(\eta_{s}(E_{\rm F})\) denotes the spin-resolved DOS at \(E_{\rm F}\), one obtains \(\eta_{s}(E_{\rm F})^{\rm YPdBi}=0.06(4)\) eV\({}^{-1}\) mol\({}^{-1}\) spin\({}^{-1}\) for YPdBi, and \(\eta_{s}(E_{\rm F})^{\rm YPtBi}=0.04(4)\) eV\({}^{-1}\) mol\({}^{-1}\) spin\({}^{-1}\) for YPtBi. Such a negligible electronic contribution to the specific heat is consistent with previous transport measurements for both systems, which reported a semiconductor/semimetal-like behavior [41, 42, 43, 76, 77, 78]. It is worth noting that Pd/Pt and Bi-based compounds are known for hosting impurity phases, such as Bi and/or Pd/Pt-Bi binary phases [73]. Such impurity phases may affect the macroscopic properties, especially transport measurements. Consequently, it is highly desirable to have an experimental confirmation of the insulating bulk nature from a microscopic technique. Indeed, previous electron spin resonance measurements for rare-earth substituted YPdBi and YPtBi clearly indicate an insulating bulk behavior. This establishes the presence of a small gap in the bulk DOS at \(E_{\rm F}\) of both systems [53, 76], in agreement with our DFT results discussed below. ### Spectroscopic results on Y\(T\)Bi (\(T=\) Pd, Pt) Having identified identical surface terminations and established negligible bulk contributions to the DOS near \(E_{\rm F}\) for both materials YPdBi and YPtBi, we can now compare their surface electronic properties. In Fig. 3(b) and (e) the STS results, i.e. d\(I\)/d\(V\)-spectra, are presented. We note that, within simplifying assumptions, d\(I\)/d\(V\)\(\propto\)\(\eta(E)\). The topographic areas over which the spectroscopy curves were averaged are shown in Figs. 3(a) and (d), respectively, with the black curves in (b) and (e) obtained within the total areas of (a) and (d). Clearly, there are only minor differences between spectra obtained in different areas of a given compound. In particular, for YPdBi, spectra obtained at different defects are also included, see orange and pink rectangles/curves in Fig. 3(d) and (e), which do not significantly deviate from the spectra in a clean (violet) area or from the total area. Consequently, the spectra are not significantly influenced by these defects. The d\(I\)/d\(V\)-spectra of YPtBi are mostly featureless, with a \(V\)-like shape near \(E_{\rm F}\) and a minimum at around \(-10\) meV. Most importantly, we obtain a finite LDOS around \(E_{\rm F}\), which is a clear indication of a considerable amount of surface states closing the bulk gap.
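Returning briefly to the bulk estimate above, the conversion from the Sommerfeld coefficient to the spin-resolved DOS can be checked numerically. The sketch below assumes the free-electron relation \(\gamma=(2/3)\pi^{2}k_{B}^{2}\eta_{s}(E_{\rm F})\) with \(\eta_{s}\) counted per formula unit and per spin; the \(\gamma\) values are the literature ones quoted above.

```python
import numpy as np

k_B = 1.380649e-23      # J/K, Boltzmann constant
N_A = 6.02214076e23     # 1/mol
eV = 1.602176634e-19    # J

def dos_per_spin(gamma_mJ_per_molK2):
    """Spin-resolved DOS at E_F (states / eV / formula unit / spin) from gamma."""
    gamma = gamma_mJ_per_molK2 * 1e-3                 # J mol^-1 K^-2
    eta = 3.0 * gamma / (2.0 * np.pi**2 * k_B**2)     # states / J / mol / spin
    return eta / N_A * eV                             # states / eV / f.u. / spin

print(f"YPdBi (gamma = 0.3 mJ/mol K^2):     eta_s = {dos_per_spin(0.3):.3f} eV^-1 spin^-1")
print(f"YPtBi (gamma = 0.1-0.4 mJ/mol K^2): eta_s = "
      f"{dos_per_spin(0.1):.3f}-{dos_per_spin(0.4):.3f} eV^-1 spin^-1")
```

With these inputs one recovers \(\eta_{s}\approx 0.06\) eV\({}^{-1}\) spin\({}^{-1}\) per formula unit for YPdBi and roughly 0.02-0.09 eV\({}^{-1}\) spin\({}^{-1}\) for YPtBi, consistent with the values quoted above.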
This experimental result is to be contrasted with the bulk DOS as calculated by DFT [black dashed line in Fig. 3(b) and Appendix Fig. 9(b)], which predicts a gap-like behavior near \(E_{\rm F}\). The calculations also find a mostly featureless spectrum and, for negative bias away from \(E_{\rm F}\), qualitatively agree with our d\(I\)/d\(V\)-data. However, a proper analysis, and specifically insight into the details near \(E_{\rm F}\), requires a slab calculation, the result of which is put forward in Fig. 3(c). The calculations were conducted with the Green function technique for semi-infinite systems assuming a (120) surface representing our experiments in a more realistic way. As discussed in the Appendix D.3, with the modified Becke-Johnson (mBJ) pseudopotential [58, 59, 60], it is neither possible to perform slab calculations using reconstructed surfaces nor to obtain the LDOS properly. Nonetheless, we are able to obtain important pieces of information to understand our STS results. As shown in Fig. 3(c), a mixture of surface states (non-trivial and trivial ones) contribute significantly to the spectral weight within the bulk gap, which is consistent with previous angle-resolved photoemission spectroscopy results [34, 35]. In other words, surface states dominate the LDOS near \(E_{\rm F}\), which clearly indicates that the increase of the LDOS in YPtBi, compared to the bulk DOS, stems directly from those surface states. As shown in Fig. 3(c), near the \(\Gamma\) point at \(\sim\)300 meV we obtain a high surface spectral weight of dangling bonds. Those trivial surface states are very similar to the van Hove singularity at approximately \(-100\) meV that is found for LuPtBi [31, 79, 37]. The results of our d\(I\)/d\(V\)-measurements for YPtBi become even more intriguing when compared to those of YPdBi, Fig. 3(e). There are two striking distinctions in the d\(I\)/d\(V\)-data of YPdBi: (i) Qualitatively, the LDOS exhibits more features, with a prominent peak at approximately 115 meV. (ii) Importantly, there is a clearly observable gap of width \(\Delta\sim 100\) meV around \(E_{\rm F}\). We should note, however, that the d\(I\)/d\(V\)-data for YPdBi do not strictly go all the way to zero, but remain finite at a very small value, possibly caused by thermal effects. A comparison of the experimental data with the calculated bulk DOS is only partially possible, Fig. 3(e). On the one hand, the band gap \(\Delta^{theor}\sim 0.15\) eV [see Fig. 9(a) in the Appendix] in the projection to the (120) plane is comparable to the experimental value. This gap once again confirms the trivial nature of YPdBi, in which the conduction and valence bands are not inverted. Moreover, there appear to be no surface states near \(E_{\rm F}\) in this case. Most of the spectral weight coming from surface states is located above \(E_{\rm F}\), which is consistent with our d\(I\)/d\(V\) being almost featureless at negative bias. ### Comparison between YPtBi and YPdBi The differences observed in the d\(I\)/d\(V\)-spectra of YPtBi and YPdBi are intriguing given the facts that identical surface terminations were investigated (thereby ruling out the surface reconstruction as the main cause of the differences) and both compound have very small bulk contributions to the DOS near \(E_{\rm F}\). The slab calculated electronic band structures for the YBi-terminated (120) surface plane, Figs. 
3(c) and (f), suggest a considerable admixture of non-trivial surface states to the DOS near \(E_{\rm F}\) in case of the YPtBi surface, which is absent for YPdBi. In such a case it is a likely scenario that the topological surface states are the key component for the differences in the d\(I\)/d\(V\)-spectra near \(E_{\rm F}\). The minimum observed close to \(V_{b}\sim-10\) meV in case of YPtBi may then be linked to the Dirac point. It is also interesting to note that the peak-like feature at \(V_{b}=+115\) meV is only observed for YPdBi. Two different origins could be at play to cause this peak. In the first scenario, which is suggested by our slab calculations, this feature coincides with the bottom of the conduction band, as shown in Fig. 3(f) and discussed in Appendix D.3 and Fig. 12. Here, the lack of this peak for YPtBi could naturally be explained by the band inversion in this compound. As discussed in the introduction, non-trivial surface states emerge thereupon. An alternative scenario involves the presence of the surface reconstruction. Here, the peak would be a direct consequence of the enhancement of trivial surface states. However, in this scenario one should also expect such a peak for YPtBi, which is not observed experimentally. We emphasize that such a comparison is only possible since results obtained on _identical_ surface terminations (concerning the type and arrangement of the surface atoms as well as the orientation of the terminating plane) are compared. The clear difference between the LDOS of both systems near \(E_{\rm F}\) favours the increased spin-orbit coupling (upon going from Pd to Pt) as the source of the appearance of surface states with topological character. The predicted, strong modification of these surface states has, to the best of our knowledge, not been demonstrated by STM/S before and suggests a systematic tunability of topology in half-Heusler systems. Likely, by choosing the proper surface plane, these properties also may be accessed through macroscopic (e.g. transport) measurements. Finally, our results, even though obtained at \(T=4.6\) K, may also shed some light on superconductivity in half-Heusler systems. A well defined gap was found for the trivial insulator YPdBi. This compound has been reported to have one of the highest superconducting transition temperatures (\(T_{\rm c}\sim 1.6\) K) among the \(R\)PdBi family [44]. If the superconductivity is intrinsic, it would be interesting to understand how a gapped system can develop a superconducting phase. In this respect it is interesting to note that, at least for YPtBi, some reports discuss the possibility of superconductivity being a bulk or a surface property [47; 41]. Yet, a finite LDOS, possibly with Dirac point(s), for YPtBi is not inconsistent with a topological superconductivity scenario [45; 46; 47; 40; 41; 42; 43; 44; 43]. It would be interesting to conduct further experiments at mK temperatures to investigate the origin of superconductivity and its nature in half-Heusler systems. ## IV Conclusion In summary, we performed scanning tunneling microscopy/spectroscopy on the half-Heusler systems YPtBi and YPdBi. By _in situ_ cleaving the single crystals at low temperatures we were able to investigate atomically flat areas. Both materials very likely expose (120) YBi-terminated surfaces with (\(2\,\times\,1\)) reconstructions which induce additional surface states and hence, may complicate surface spectroscopy. 
Using STM, we can compare _identical_ surface terminations, thereby ruling out the reconstructions as the main cause for differences in the spectroscopic results between the two materials. However, we do observe a clear difference in the LDOS of these compounds: While YPdBi exhibits a gap of \(\sim\)100 meV around \(E_{\rm F}\), surface states are found for YPtBi without indication of a gap. Such distinct behavior was not seen by macroscopic measurements reported in previous studies. Our result provides evidence for the targeted realization of unusual surface states. DFT calculations are consistent with such change in the LDOS. In a more general way, our result can very likely be linked to a spin-orbit tuning of topology in half-Heusler systems. More importantly, our results emphasize the key role of surface states near \(E_{\rm F}\) in these systems. Exploring planes such as the (120) surface termination [or even the (001) plane in thin films] appears as an extremely promising route to obtain a versatile TI with an insulating bulk and increases the potential of half-Heusler systems for applications. ###### Acknowledgements. We thank E. H. da Silva Neto, T. J. Boyle and M. Walker for their help and discussions in the beginning of this project. This work was supported by FAPESP (SP-Brazil) Grants No 2020/12283-0, 2018/11364-7, 2017/10581-1, CNPq grant No 311783/2021-0 and CAPES. Work was also supported by the National Science Centre (NCN, Poland) under Projects No. 2021/43/B/ST3/02166 (A.P.). A.P. appreciates funding within the frame of scholarships of the Minister of Science and Higher Education of Poland for outstanding young scientists (2019 edition, No. 818/STYP/14/2019). Work at Los Alamos was supported by the Los Alamos Laboratory Directed Research and Development program through project 20210064DR. ## Appendix A (111) plane in YPtBi ### STM/STS results In YPtBi crystals with pyramid-like shape we are also able to cleave along the (111) plane; the results are summarized in Fig. 4. In this plane, the height difference between Y and Bi layers is twice as large as the Y-Pt or Pt-Bi layer spacing, see Fig. 4(a). Furthermore, fewer chemical bonds need to be broken between Y and Bi layers compared to cleaves involving Pt layers, which should result in cleaves exposing mostly Y or Bi layers [34]. We found more easily atomically flat surfaces along the (111) plane when compared to the cleaves along the (120) plane for YPtBi. Yet, atomically flat surfaces needed to be ex tensively searched for, which is not surprising in a cubic system. Importantly, however, despite great efforts atomically flat areas on a (111) plane could _not_ be found on YPdBi and therefore, the comparison of the LDOS for both compounds was focused on the (120) plane. The cleave along the (111) plane should expose either Y- or Bi-terminated triangular lattices, as shown in Fig. 4(a). Fig. 4(b) exhibits a \(200\times 200\) nm\({}^{2}\) topography. Albeit we were able to locate such large atomically flat areas, there was quite an amount of adatoms on top of such surfaces. The topography in Fig. 4(c) zooms into an area of \(20\times 20\) nm\({}^{2}\). In this case it is possible to observe in more detail the triangular lattice, which is confirmed by the Fourier transform presented in the lower right inset. Again, we obtain a moderate amount of adatoms, which are expected on an unreconstructed surface due to its polar nature. 
The establishment of an unreconstructed surface is further corroborated by the distance between corrugations, as highlighted in the height scans of Fig. 4(d). We obtain a distance between corrugations of \(d_{exp}^{(111)}=0.43(2)\) nm, 0.46(2) nm, and 0.44(2) nm for the magenta, blue and orange lines, respectively, which is in excellent agreement with the theoretical distance between Bi/Y atoms of \(d_{theor}^{(111)\text{Y/Bi}}=0.47\) nm along the (111) plane. As we will discuss in more detail below, the exposed surface is likely an unreconstructed Bi-terminated surface. In Fig. 4(e) we present d\(I\)/d\(V\)-spectra, which were obtained within in the total field of view of Fig. 4(c) (black line) as well as within the areas marked by col Figure 4: (a) The triangular lattice of the (111) plane of the half-Heusler systems and the height difference between distinct termination planes. (b) \(200\times 200\) nm\({}^{2}\) STM topography of the (111) plane of YPtBi (\(V_{b}=-300\) mV, \(I_{sp}=0.6\) nA, scale bar of 50 nm). (c) Atomically resolved \(20\times 20\) nm\({}^{2}\) topography on YPtBi along the (111) surface (scale bar 5 nm). The lower right inset shows the Fourier transform. The magenta, blue and orange solid lines represent the direction of the height scans presented in (d). (e) d\(I\)/d\(V\)-spectra averaged over the rectangular areas of corresponding colors as well as over the total field of view in (c) [black line in (e), obtained on a \(35\times 35\) grid]. Figure 5: (a) \(100\times 100\) nm\({}^{2}\) STM topography along the (111) plane of YPtBi (\(V_{b}=200\) mV, \(I_{sp}=0.3\) nA, scale bar of 20 nm). The purple line represents the line along which the height scan in (b) was obtained. (c) \(10\times 10\) nm\({}^{2}\) field of view (\(V_{b}=-300\) mV, \(I_{sp}=0.3\) nA, scale bar of 2 nm). Here, the three most dominant defects can be recognized, which likely are a triangular vacancy [defect (1)], a triangle which could be related to a substitution [defect (2)], and adatoms [defect (3)]. The latter is supported by the violet and blue height scans in (d) obtained for opposite bias voltages along the lines indicated in (c). ored rectangles (with the colors corresponding to those of the spectra). Earlier theoretical calculations indicated a Dirac point buried in the bulk DOS for either Y- or Bi-terminated surfaces, which was confirmed by angle resolved photoemission spectroscopy [34; 35]. Nonetheless, trivial Rashba-like surface states can be expected due to the presence of dangling bonds [52; 34]. The observed d\(I\)/d\(V\)-spectra are almost featureless, with a finite DOS at the Fermi level \(E_{\rm F}\). Interestingly, the LDOS obtained at two adatoms [yellow area in Fig. 4(c)], or at defects [blue area in Fig. 4(c)] does not change significantly when compared to the LDOS obtained on clean surfaces (green and purple areas and spectra) or even to the spectrum averaged over the total field of view. This is an indication that the surface states are not affected locally by small amounts of disorder. In order to provide further evidence to the (111) assignment of the terminating plane observed in Fig. 4, we experimentally explored the presence of step edges and adatoms. Fig. 5(a) shows a typical 100 x 100 nm\({}^{2}\) topography along the (111) plane with two step edges. As can be seen in Fig. 5(b), the height difference between each exposed surface is \(h_{exp}^{(111)}\approx 370\) pm. 
Such a distance is consistent to either Y-Y, Bi-Bi or Pd-Pd/Pt-Pt surface terminations for which \(h_{theor}^{(111)}=383\) pm is expected. It is worth to note that there is an accumulation of adatoms along the edges of the exposed surfaces (small peaks in the height scan), which reinforces our assumption above that such adatoms play a role in neutralization of the exposed polar surfaces. Such an assumption is also corroborated by different apparent heights of the adatom if measured with different bias \(V_{b}\). Fig. 5(c) shows a \(10\times 10\) nm\({}^{2}\) topography taken with \(V_{b}=-300\) mV. We can observe the three most numerous defects obtained on these surfaces: a triangular one likely related to a vacancy [defect (1)], a small triangle which could be related to a substitution in a sub-layer underneath the exposed surface [defect (2)] and the already mentioned adatoms [defect (3)]. Fig. 5(d) provides the height scan across the adatom position in Fig. 5(c) with different values of the applied \(V_{b}\) obtained in dual-bias mode (i.e. at exactly the same position). We systematically observe higher heights at the adatom sites for negative \(V_{b}\). For negative (positive) \(V_{b}\), the tip will have a positive (negative) potential with respect to the sample. In this scenario, the tip gets further away from Figure 6: (a) \(100\times 100\) nm\({}^{2}\) STM topography along the (111) plane for YPtBi (\(V_{b}=-200\) mV, \(I_{sp}=0.3\) nA, scale bar of 20 nm). (b) Height scan along the blue line shown in (a). The \(A\) and \(B\) labels denote the surfaces \(A\) (Bi-terminated) and \(B\) (likely Pt-terminated). (c) Tunneling current \(I\) as a function of the tip-sample distance \(\Delta z\) for the two different surfaces A and B. The red solid lines are exponential fits as described in the text. (d) Zoom of \(20\times 20\) nm\({}^{2}\) into the magenta box shown in (a) (scale bar of 5 nm). (e) Height scan along the magenta/black line shown in (d) and for different \(V_{b}\). The lines cross an adatom located in surface \(B\). (f) Averaged d\(I\)/d\(V\)-spectra of both surfaces A and B. The averages were taken over areas of \(4\times 4\) nm\({}^{2}\) at equally spaced positions on \(50\times 50\) grids. (closer to) the adatom if it has a more positive charge compared to the surrounding bulk. We expect a valence of 3+ for Y, while the other constituents in YPtBi should have a more negative valence. Consequently, an adatom is much more likely more positive as its surrounding on a Bi or Pt terminated surface. Since a cleave exposing Pt is unlikely from the chemical bonding situation discussed above, we speculate that we obtained a Bi terminated surface in Fig. 5. ### Coexistence of Pt and Bi-terminated surfaces As already pointed out, the majority of the YPtBi samples which successfully cleaved along the (111) plane exposed the same termination within the field of view (even when step edges were included, cf. Fig. 5). However, in one particular field of view we were able to observe a coexistence of two differently terminated surfaces. Fig. 6(a) shows this \(100\times 100\) nm\({}^{2}\) topography. In the height scan explored in Fig. 6(b) we obtain a step height between two consecutive surfaces (labeled \(A\) and \(B\)) of \(\approx 95\) pm. This value is much smaller than the expected step height \(h_{theor}^{(111)}=383\) pm for identical terminations. Indeed, it is close to the Pt-Bi (or Pt-Y) layer distance, which is 95.8 pm [Fig. 4(a)]. 
Such a step height necessarily implies that one of the surfaces _has_ to have a Pt termination. The difference between those two surface terminations is also manifested by two different heights of the tunneling barrier \(\Phi\), which is closely related to the work function \(\Phi_{s}\) of the sample (note that also the tip work function \(\Phi_{t}\) enters into \(\Phi\)). \(\Phi\) can be obtained from an analysis of \(I\) as a function of the tip-sample distance \(\Delta z\). For clean surfaces, \(I(\Delta z)\propto\exp(-2\kappa\,\Delta z)\) with \(\kappa^{2}=2m_{e}\Phi/\hbar^{2}\), where \(m_{e}\) is the bare electron mass and \(V_{b}\ll\Phi_{s,t}\). Fig. 6(c) represents \(I(\Delta z)\) curves for surfaces \(A\) and \(B\), which are identified in Figs. 6(b) and (d). By fitting the \(I(\Delta z)\) curves, red lines in Fig. 6(c), we obtain \(\Phi_{A}\approx 4.9\) eV and \(\Phi_{B}\approx 5.5\) eV. A fair comparison here is to look at the values of the elemental materials. For Y, Bi and Pt, \(\Phi_{s}=3.1\) eV, 4.22 eV, and 5.65 eV, respectively. Comparing to our obtained results, one may speculate that the highest obtained value, i.e. \(\Phi_{B}\), is unlikely from an Y-terminated surface and, conversely, the lower value \(\Phi_{A}\) does not stem from a Pt-terminated surface. As one of the two surfaces (\(A\) or \(B\)) has to be Pt-terminated, it is likely surface \(B\). A closer look at height scans across defects can also be informative with respect to the assignment of those two distinct surfaces. A zoom into the magenta box of Fig. 6(a) is given in Fig. 6(d). According to Fig. 6(b), we identify the differently terminated surfaces as \(A\) and \(B\) in the \(20\times 20\) nm\({}^{2}\) topography. The height scans across a defect at surface \(B\) for opposite \(V_{b}\) signs are shown in Fig. 6(e). They were taken at the positions highlighted by the magenta and black lines in Fig. 6(d). It is straightforward to note that the difference of the defect height for opposite \(V_{b}\)-values is much smaller on this surface when compared to the adatom height at surface \(A\) (which was assigned as Bi-terminated), cf. Fig. 5(d). In consequence, the defect investigated in Fig. 6(e) is very likely located in a sub-surface layer, i.e., in a layer underneath the exposed surface. Finally, Fig. 6(f) represents d\(I\)/d\(V\)-spectra as a function of \(V_{b}\) for both surfaces. Surprisingly, there is only a small difference at higher positive \(V_{b}\)-values between the Figure 8: Comparison of results obtained on different samples of (a) YPtBi and (b) YPdBi. Clearly, the d\(I\)/d\(V\)-curves of a given material are well reproduced. All spectra are averages over certain areas; for S1 and S3 they are described in Fig. 3. The averages were taken within a \(20\times 10\) nm\({}^{2}\) field of view (\(V_{b}=-300\) mV, \(I_{sp}=0.7\) nA) for spectra S2, and a \(4\times 4\) nm\({}^{2}\) field of view (\(V_{b}=-200\) mV, \(I_{sp}=0.6\) nA) for S4. Figure 7: (a) 25 x 39 nm\({}^{2}\) and (b) 30 x 30 nm\({}^{2}\) topographies along the (120) plane for YPtBi (\(V_{b}=-300\) mV, \(I_{sp}=0.7\) nA, scale bar of 10 nm) and YPdBi (\(V_{b}=-200\) mV, \(I_{sp}=0.6\) nA, scale bar of 5 nm), respectively. The right insets show the Fourier transform of the respective topography. spectra of the two differently terminated surfaces, suggesting that the LDOS is dominated by bulk and trivial surfaces state contributions in both cases. 
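The extraction of the apparent barrier height from the \(I(\Delta z)\) curves amounts to a one-parameter exponential fit. The sketch below uses synthetic data standing in for the measured curves of Fig. 6(c); the noise level and retraction range are illustrative assumptions.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J

def kappa(phi_eV):
    """Inverse decay length (1/m) of the tunneling current for a barrier of height phi."""
    return np.sqrt(2.0 * m_e * phi_eV * eV) / hbar

# Synthetic I(dz) data for an assumed barrier of 4.9 eV (surface A), plus a little noise.
dz = np.linspace(0.0, 0.3e-9, 40)                       # tip retraction in m
rng = np.random.default_rng(0)
I = np.exp(-2.0 * kappa(4.9) * dz) * (1.0 + 0.02 * rng.standard_normal(dz.size))

# Fit ln I vs dz: the slope is -2*kappa, from which the barrier height follows.
slope = np.polyfit(dz, np.log(I), 1)[0]
phi_fit = (hbar * slope / 2.0) ** 2 / (2.0 * m_e * eV)
print(f"fitted barrier height: {phi_fit:.2f} eV")        # ~4.9 eV
```

For barrier heights of order 5 eV the current decays by roughly one order of magnitude per ångström of tip retraction, which is why the fitted \(\Phi\) values discriminate so clearly between the two terminations.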
## Appendix B Fourier transform of the (120) planes A more accurate extraction of the distance between corrugations can be achieved by analyzing the Fourier transform of larger areas. In the case of YPtBi, one large flat area that we were able to obtain is shown in Fig. 7(a). From the Fourier transform (inset), we obtained \(d_{exp}^{(120)Pt}=0.67(3)\) and \(0.70(3)\) nm. For YPdBi, a large, atomically flat area is presented in Fig. 7(b). Here, the Fourier transform yielded \(d_{exp}^{(120)Pd}=0.61(3)\) and \(0.72(3)\) nm. It is worth to notice that the Fourier transforms even have a slightly rectangular shape [see also Fig. 2(i)], consistent with the asymmetry between the distance of corrugations. ## Appendix C Reproducibility of the spectra As mentioned in Sec. II, four YPdBi and three YPtBi samples were cleaved _in situ_ and subsequently investigated by STM/STS. In order to provide support for the reproducibility of our data, specifically the spectra, we here exemplify results obtained on different samples of YPtBi as well as YPdBi. For comparison, we also reproduced in Fig. 8 the spectra of Fig. 3, marked S1 and S3, respectively. Clearly, the spectra of different samples of a given material compare well, and all the main features are reproduced. Most important for the main conclusion of our investigation, the reduction of the LDOS at \(E_{\rm F}\) is clearly reproduced upon going from YPtBi to YPdBi. ## Appendix D Details of Bulk and surface band structure calculations ### Bulk band structure calculations We were able to reproduce the bands at \(\Gamma\) point using the modified Becke-Johnson (mBJ) pseudopotential, see Fig. 9, as given in [29, 30]. Here, the topological properties can be described by \(\Delta E=E_{\Gamma_{6}}-E_{\Gamma_{8}}\), which is negative for systems with band inversion [29] (as mentioned in the introduction). Indeed, for YPdBi we find \(\Delta E\simeq+0.35\) eV, while for YPtBi we obtain approximately \(-0.79\) eV. In other words, our calculations specify YPtBi as a zero-gap semiconductor. Both values are similar to those reported earlier [29, 30, 31]. It is worth to point out that even though different pseudopotentials might result in similar densities of states, some of those pseudopotentials do not reproduce correctly the band inver Figure 10: Surface spectral functions of (a) YPdBi and (b) YPtBi for the (111) surface planes. Figure 9: Electronic band structures and density of states (DOS) for bulk (a) YPdBi and (b) YPtBi. The \(\Gamma_{6}\) and \(\Gamma_{8}\) bands are marked in the figure. The light-blue stripe in (a) visualizes the gap of \(\sim\)0.15 eV width. sion [82]. Finally, these results depend strongly on the lattice constant [31]. ### Slab calculations for the (111) plane The surface spectral functions, calculated within the Green function technique for semi-infinite systems, for the (111) surface planes of YPdBi and YPtBi are presented in Figs. 10(a) and (b), respectively [cf. Figs. 3(c) and (f) for the (120) plane]. In the case of a Bi-terminated (111) surface of YPdBi, there is no Dirac point at the \(\bar{\Gamma}\) point, as expected [Fig. 10(a)]. In the YPtBi case, the Dirac point appears below the Fermi level \(E_{\rm F}\), as seen in Fig. 10(b). This result is consistent with previous DFT calculations and ARPES experiments [34; 35]. It is worth to note that our slab calculations improve our understanding of the lack of changes in the LDOS near defects in this plane, cf. Fig. 4(e). As discussed in Sec. 
III.4, the bulk LDOS has some impact in our data. Near \(E_{\rm F}\), we find trivial and topological bands, which may complicate the scattering process even further. Therefore, the change in the LDOS due to trivial surface states may be hard to detect within this energy window. ### Direct slab band structure calculations Due to the technical limitations of the mBJ potential implemented in vasp, it cannot be used directly to study the slab band structure. As such, to present the main impact of the surface reconstruction on the electronic band structure, we performed DFT calculations using GGA with Perdew-Burke-Enzerhof (PBE) parametrization [67]. Here, we should emphasize that this approach (DFT with GGA + PBE) does not correctly reproduce the bulk band structure of half-Heusler compounds, a problem well reported in the literature [58; 59; 60]. This is reflected in the absence of the band gap for YPdBi [cf. Fig. 9(a) and Fig. 11(a)] or incorrect band curvatures around the \(\Gamma\) point in the band structure of YPtBi [cf. Fig. 9(b) and Fig. 11(b)]. Nevertheless, this type of calculation can be used to present the main features of the band structure of (120) surfaces with reconstruction. For the simulation of the (120) surface band structure, we constructed slab models containing 3 layers of the discussed compounds (mostly 28 formula units). The reconstructions of the surface was introduced "by hand", i.e., by removing some atoms from the surfaces. From the self-consistently found charge distributions, the STM simulations were computed [see Figs. 2(j) and (k)]. Similarly, the electronic band structure for both compounds are presented in Fig. 12 where the results obtained for surfaces without and with surface reconstruction are presented on the left and right panels, respectively. For both YPdBi and YPtBi (120) surfaces without reconstruction (left panels in Fig. 12), we observed the realization of the surface states above \(E_{\rm F}\). The introduction of surface reconstructions on the (120) surface led to a multiplication of the surface states, an effect that is well visible at the M points in the right panels of Fig. 12. The additional, "extra" surface states come from the hanging (non-bonded) orbitals in the surface plane, due to the absence of some atoms on the surface (i.e. the surface reconstruction). This main feature of the band structure for the reconstructed surface, i.e. the surface states multiplication, is expected to also be present in case of a "correctly" obtained band structure, i.e. if calculated with mBJ potential.
2305.09197
Mixed-State Quantum Spin Liquids and Dynamical Anyon Condensations in Kitaev Lindbladians
Quantum spin liquids and anyons, used to be subjects of condensed matter physics, now are realized in various platforms of qubits, offering unprecedented opportunities to investigate fundamental physics of many-body quantum entangled states. Qubits are inevitably exposed to environment effects such as decoherence and dissipation, which are believed to be detrimental to many-body entanglement. Here, we argue that unlike the common belief decoherence and dissipation can give rise to novel topological phenomena in quantum spin liquids. We study open quantum systems of the Kitaev spin liquid and the toric code via the Lindblad master equation approach. By using exact solutions and numerical approaches, we show the dynamical occurrence of anyon condensation by decoherence and dissipation, which results in a topological transition from the initial state spin liquid to the steady state spin liquid. The mechanism of the anyon condensation transition by the Lindblad dynamics is elucidated. We also provide an insight into the relationship between the Kitaev spin liquid and the toric code in the picture of anyon condensation. Our work suggests open quantum systems to be a new venue for topological phenomena of quantum spin liquids and anyons.
Kyusung Hwang
2023-05-16T06:10:50Z
http://arxiv.org/abs/2305.09197v4
# Mixed-State Quantum Spin Liquid in Kitaev Lindbladian: ###### Abstract We propose open quantum spin liquids as a novel platform for studying anyon condensation topological transitions. As a concrete example, we consider the Kitaev spin liquid (KSL) coupled to a Markovian environment via the Lindblad master equation approach. By a combined study of exact solutions and numerical approaches, we demonstrate a dynamical anyon condensation transition between the initially prepared pure KSL and mixed-state KSL arising in the steady state limit, induced by the environment's decoherence and dissipation effects. General principles of generating anyon condensations in open quantum spin liquids are discussed. This work presents mixed-state quantum spin liquids as a new route for anyon condensation transitions. _Introduction._ Quantum spin liquids are exotic phases of matter featured with emergent anyon quasiparticles and long range entanglement [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. In addition to their importance in fundamental physics, anyons and quantum spin liquids have been a central topic of recent studies due to their potential applications in quantum memories and quantum computations [1; 2; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. In the aspect of applications, it is essential to understand quantum devices under the unavoidable influences of environments such as decoherence and dissipation effects [25]. Studies of such open quantum systems also provide a promising opportunity for exploring new phenomena fostered by many-body quantum entanglement and the environment effects [26; 27; 28]. Nonetheless, most studies of quantum spin liquids have focussed on closed systems so far. Open quantum spin liquids remain largely unexplored due to the complexities of the problems and the absence of solvable systems. In this work, we introduce open quantum spin liquids that are exactly solved in the steady state limit. As a primary example, we consider the Kitaev spin liquid (KSL) [2] coupled with a Markovian environment via the Lindblad master equation approach [29; 30; 31; 32; 33; 34]. Under the Lindblad dynamics, the pure state of the KSL evolves to a steady state that shows vanishing spin-spin correlation, yet still preserving the zero-flux quantum number. The exact form of the steady state is given by the maximally mixed state in the zero-flux sector with equal weight. We call this state "mixed-state Kitaev spin liquid". The Lindblad dynamics of the Kitaev spin liquid presents rich physics with a deep connection to anyon condensation, which is an important mechanism for understanding topological transitions between distinct quantum spin liquids [35; 36; 37; 38; 39]. By mapping the Lindblad system to a doubled non-Hermitian system [40; 41], we uncover that the time evolution from the pure KSL to the mixed-state KSL is actually a dynamical transition from a KSL bilayer product state to a toric code type \(\mathbb{Z}_{2}\) spin liquid state by anyon condensation. Our results illuminate a new aspect of open quantum systems, i.e., a useful platform for studying topological transitions in quantum spin liquids. In addition to the Kitaev spin liquid, we study open quantum systems of the toric code model [1]. Based on the concrete examples, we develop general principles about anyon condensation transitions in quantum spin liquids induced by the Lindblad dynamics. _Kitaev Lindbladian._ We consider an open quantum system of the Kitaev spin liquid coupled to a Markovian environment. 
The dynamics of the system is investigated by using the Lindblad master equation [29; 30; 31; 32; 33; 34], \[\frac{d\hat{\rho}}{dt}=\mathcal{L}(\hat{\rho})=-i[H,\hat{\rho}]+\sum_{\mu}(L_ {\mu}\hat{\rho}L_{\mu}^{\dagger}-\frac{1}{2}\{L_{\mu}^{\dagger}L_{\mu},\hat {\rho}\}), \tag{1}\] where the system's density matrix \(\hat{\rho}(t)\) evolves in time by the Lindbladian \(\mathcal{L}\) composed of the Hamiltonian \(H\) and Figure 1: Open quantum system of the Kitaev spin liquid in two equivalent descriptions. (a) Original density matrix description. The Kitaev honeycomb model is embedded in the Lindblad master equation with the jump operator, \(L_{\mu}=\sqrt{\gamma}\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\). (b) Doubled state vector description. Vectorization of the Lindblad master equation leads to a closed system defined on the bilayer honeycomb lattice with the non-Hermitian inter-layer interaction, \(i\gamma(\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\tau_{j}^{\alpha}\tau_{k}^{ \alpha}-1)\). The two descriptions are compared in the table. the Lindblad operators \(\{L_{\mu}\}\) describing the environment effects. The Lindblad equation has the completely positive and trace-preserving property, i.e., \(\mathrm{Tr}\hat{\rho}(t)=\mathrm{const}\). For the Hamiltonian, we consider the Kitaev honeycomb model [2], \[H=K\sum_{\langle jk\rangle_{x}}\sigma_{j}^{x}\sigma_{k}^{x}+K\sum_{\langle jk \rangle_{y}}\sigma_{j}^{y}\sigma_{k}^{y}+K\sum_{\langle jk\rangle_{z}}\sigma_{ j}^{z}\sigma_{k}^{z}, \tag{2}\] where \(\sigma^{x,y,z}\) mean the Pauli matrices and \(\langle jk\rangle_{x,y,z}\) denote \(x,y,z\)-bonds of the honeycomb lattice [Fig. 1(a)]. For the Lindblad operators generating non-unitary dynamics, we assume the Kitaev bond interactions, \[L_{\mu}=\sqrt{\gamma}\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}, \tag{3}\] with the dissipation strength \(\gamma\) (\(\geq 0\)). The Lindblad operator generates decoherence and dissipation effects on the Kitaev spin liquid state by creating a pair of fermion excitations at both sites of the bond \(\langle jk\rangle_{\alpha}\). The Hamiltonian \(H\) and the Lindblad operators \(\{L_{\mu}\}\) both commute with the \(\mathbb{Z}_{2}\) flux operator, \[\hat{W}_{p}=\sigma_{1}^{z}\sigma_{2}^{y}\sigma_{3}^{z}\sigma_{4}^{z}\sigma_{5 }^{y}\sigma_{6}^{x}, \tag{4}\] i.e., \([H,\hat{W}_{p}]=[L_{\mu},\hat{W}_{p}]=0\). Hence, the Kitaev Lindbladian \(\mathcal{L}\) conserves the \(\mathbb{Z}_{2}\) flux quantum number. _Lindblad dynamics of the Kitaev spin liquid._ To understand the dynamics generated by many-body quantum entanglement and the environment effects in an unbiased way, we numerically solve the Lindblad equation by putting the system on a 24-site cluster with periodic boundary condition [Fig. 1(a)]. First, we prepare the system in the pure state of the Kitaev spin liquid, \[\hat{\rho}(t=0)=|\Psi_{\mathrm{KSL}}\rangle\langle\Psi_{\mathrm{KSL}}|, \tag{5}\] where \(|\Psi_{\mathrm{KSL}}\rangle\) is the ground state of \(H\) which we obtain by exact diagonalization on the 24-site cluster (torus geometry). The model \(H\) has threefold ground state degeneracy, among which we choose the ground state \(|\Psi_{\mathrm{KSL}}\rangle\) in the sector of the Wilson loop flux \(\mathcal{W}_{x,y}=-1\) in our calculations [42]. The time evolution of the density matrix, \(\hat{\rho}(t)=e^{t\mathcal{L}}\hat{\rho}(0)\), is computed by using the Krylov subspace methods [43; 44]. 
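To make the structure of Eq. (1) concrete, the sketch below integrates a Lindblad equation for a deliberately tiny toy system: a single two-site bond with an Ising coupling and a bond-type jump operator, not the 24-site Kitaev cluster studied in the paper, and with a crude first-order integrator instead of the Krylov-subspace propagation used in the text. All operator and parameter choices here are illustrative assumptions; only the form of the right-hand side follows Eq. (1).

```
import numpy as np

# Single-site Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

K, gamma = -1.0, 1.0

# Toy two-site system (NOT the honeycomb model): one Ising bond plus a bond-type jump operator
H = K * np.kron(sz, sz)                  # Hamiltonian term of Eq. (1)
L = np.sqrt(gamma) * np.kron(sx, sx)     # jump operator of bond form, cf. Eq. (3)

def lindblad_rhs(rho):
    """Right-hand side of Eq. (1): -i[H, rho] + L rho L^+ - (1/2){L^+ L, rho}."""
    LdL = L.conj().T @ L
    drho = -1j * (H @ rho - rho @ H)
    drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

# Start from a pure ground state of H, in the spirit of Eq. (5)
rho = np.zeros((4, 4), dtype=complex)
rho[0, 0] = 1.0                          # |00><00| is a ground state of -sz.sz

# Crude forward-Euler time stepping (the paper uses Krylov-subspace methods instead)
dt = 0.01
for _ in range(200):
    rho = rho + dt * lindblad_rhs(rho)

print("Tr rho   =", np.trace(rho).real)         # stays 1: trace-preserving dynamics
print("Tr rho^2 =", np.trace(rho @ rho).real)   # decays below 1: the state becomes mixed
```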
In the construction of the time evolution operator \(e^{t\mathcal{L}}\), we utilize the \(\mathbb{Z}_{2}\) flux quantum numbers of the Kitaev Lindbladian to reduce the size of the Hilbert space. Figure 2 shows the calculation results of the Lindblad dynamics. First, we check the properties \(\mathrm{Tr}\hat{\rho}(t)=1\) and \(\langle\hat{W}_{p}\rangle=\mathrm{Tr}[\hat{\rho}(t)\hat{W}_{p}]=1\) in Fig. 2(a). Next, we consider the Renyi entropy, \[S_{\mathrm{R\acute{e}nyi}}=-\mathrm{log}[\mathrm{Tr}\hat{\rho}^{2}(t)], \tag{6}\] which tells us how the system's purity changes by the environment effects. As shown in Fig. 2(b), the Renyi entropy gradually increases converging to the constant value: \(S_{\mathrm{R\acute{e}nyi}}=\mathrm{log}(2^{N/2-1})\) when \(t\rightarrow\infty\) (\(N=24\) is the number of sites). The larger the dissipation strength \(\gamma\) is, the faster the convergence is reached. The constant value reveals the size of the subspace (\(2^{N/2-1}=2,048\)) that the system belongs to, i.e., the zero flux sector with Figure 3: Time evolution of the spin structure factor \(S(\mathbf{q})\). The three plots at \(t=0,0.1,0.2\) are obtained for the case of \(K=-1\) and \(\gamma=1\). In each plot, the two hexagons represent the first and second Brillouin zones in momentum space. Figure 2: Lindblad dynamics of the Kitaev spin liquid. (a) The trace of the density matrix \(\mathrm{Tr}\rho(t)\) and the expectation value of the flux operator \(\langle\hat{W}_{p}\rangle\). (b) The Rényi entropy \(S_{\mathrm{R\acute{e}nyi}}\). The dashed line marks the steady state value, \(\mathrm{log}(2^{N/2-1})\), where \(N=24\) is the number of sites. (c) The spin-spin correlator \(\langle\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\rangle\) for the nearest-neighbor bond \(\langle jk\rangle_{\alpha}\). Different colors denote the results of different dissipation strengths (\(\gamma=0,\ 0.2,\ 0.4,\ 0.6,\ 0.8,\ 1\)). The unitary evolution of the closed system (\(\gamma=0\)) is shown together for comparison (black). In all the calculations, the coupling constant is fixed by \(K=-1\). the Wilson loop flux \(\mathcal{W}_{x,y}=-1\). An interesting behavior appears in the spin-spin correlation \(\langle\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\rangle=\mathrm{Tr}[\hat{\rho}(t) \sigma_{j}^{\alpha}\sigma_{k}^{\alpha}]\). It monotonically decreases and vanishes in the long time limit: \(\langle\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\rangle=0\) when \(t\rightarrow\infty\) [Fig. 2(c)]. The system is still in a quantum spin liquid phase since it stays in the zero flux sector. Nonetheless, the nearest neighbor bond spin-spin correlation disappears in the long time limit. This peculiar property is also reflected in the spin structure factor, \[S(\mathbf{q})=\frac{1}{N}\sum_{j,k}e^{i\mathbf{q}\cdot(\mathbf{r}_{j}-\mathbf{ r}_{k})}\langle\mathbf{\sigma}_{j}\cdot\mathbf{\sigma}_{k}\rangle, \tag{7}\] which becomes flat in momentum space in the long time limit: \(S(\mathbf{q})=3\) when \(t\rightarrow\infty\) (see Fig. 3). _Exact solution of the steady state._ The numerical approach establishes the existence of the steady state characterized by the properties, \(\langle\hat{W}_{p}\rangle=1\), \(S_{\mathrm{R\acute{e}nyi}}=\log(2^{N/2-1})\), and \(\langle\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\rangle=0\). 
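The two diagnostics used above are easy to reproduce numerically. The helpers below are a sketch, not the authors' code: they evaluate the second Rényi entropy of Eq. (6) and the structure factor of Eq. (7), and check two statements from the text, namely the steady-state entropy log(2^{N/2-1}) for N = 24 and the fact that purely on-site correlations, ⟨σ_j·σ_k⟩ = 3δ_jk, give a flat S(q) = 3. The random site coordinates are placeholders for the honeycomb geometry.

```
import numpy as np

def renyi_entropy(rho):
    """Second Renyi entropy of Eq. (6): S = -log Tr[rho^2]."""
    return -np.log(np.trace(rho @ rho).real)

def structure_factor(q, positions, corr):
    """Spin structure factor of Eq. (7).

    q         : wave vector, shape (2,)
    positions : site coordinates r_j, shape (N, 2)
    corr      : correlation matrix <sigma_j . sigma_k>, shape (N, N)
    """
    phases = np.exp(1j * positions @ q)                       # e^{i q . r_j}
    return (phases[:, None] * phases.conj()[None, :] * corr).sum().real / len(positions)

N = 24
print("steady-state S_Renyi     =", np.log(2.0 ** (N / 2 - 1)))      # = log(2048)
print("S_Renyi of mixed qubit   =", renyi_entropy(np.eye(2) / 2))    # = log(2)

# On-site-only correlations (the steady-state result quoted above) give a flat S(q) = 3
rng = np.random.default_rng(0)
positions = rng.normal(size=(N, 2))                                  # placeholder geometry
q = np.array([0.7, -1.3])
print("S(q) for <s_j.s_k>=3d_jk =", structure_factor(q, positions, 3 * np.eye(N)))
```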
We find the exact solution for the steady state: \[\hat{\rho}_{\mathrm{MSKSL}} = \frac{1}{2^{N+1}}\prod_{l=x,y}(1-\hat{\mathcal{W}}_{l})\prod_{p} (1+\hat{W}_{p}) \tag{8}\] \[= \sum_{n}\rho_{n}|\Psi_{n}\rangle\langle\Psi_{n}|.\] The steady state density matrix is simply given by the projection operator into the zero flux sector with the Wilson loop flux \(\mathcal{W}_{x,y}=-1\). Hence, it is diagonal in the eigenbasis \(\{|\Psi_{n}\rangle\}\) of \(H\) and \(\hat{W}_{p}\) with the weight \[\rho_{n}=\left\{\begin{array}{cc}\frac{1}{2^{N/2-1}}&(W_{p}=+1\ \&\ \mathcal{W}_{x,y}=-1)\\ 0&(\mathrm{otherwise})\end{array}\right.. \tag{9}\] This equal weight property is indeed identified in our numerical calculations shown in Fig. 4. One can check that \(\mathcal{L}(\hat{\rho}_{\mathrm{MSKSL}})=0\) along with the three properties mentioned earlier. This is the maximally mixed state in the zero-flux sector with equal weight, which we call "mixed-state Kitaev spin liquid (MSKL)". _Mapping to the non-Hermitian bilayer model._ Lindblad system allows an exact mapping to a doubled system described by a non-Hermitian Hamiltonian [40; 41]. The mapping is conducted by the vectorization of the density matrix called the Choi-Jamiolkowski isomorphism [45; 46]: \[\hat{\rho}=\sum_{m,n}\rho_{mn}|m\rangle\langle n| \Rightarrow |\rho\rangle\!\rangle=\sum_{m,n}\rho_{mn}|m\rangle\otimes|n\rangle, \tag{10}\] where the bra of the density matrix \(\hat{\rho}\) is turned into a ket of the resulting state vector \(|\rho\rangle\!\rangle\). The system is now effectively doubled with an additional copy of the original Hilbert space. By the vectorization, the Lindblad master equation takes the form of the Schrodinger equation \[i\frac{d|\rho\rangle\!\rangle}{dt}=\mathcal{H}|\rho\rangle\!\rangle \tag{11}\] with the non-Hermitian Hamiltonian \[\mathcal{H} = H\otimes I-I\otimes H^{T}\] \[+ i\sum_{\mu}\left[L_{\mu}\otimes L_{\mu}^{*}-\frac{1}{2}(L_{\mu} ^{\dagger}L_{\mu})\otimes I-\frac{1}{2}I\otimes(L_{\mu}^{\dagger}L_{\mu})^{*} \right].\] In our case, the non-Hermitian Hamiltonian is given by \[\mathcal{H} = K\sum_{\langle jk\rangle_{\alpha}}^{\mathrm{upper\ layer}}\sigma _{j}^{\alpha}\sigma_{k}^{\alpha}-K\sum_{\langle jk\rangle_{\alpha}}^{\mathrm{ lower\ layer}}\tau_{j}^{\alpha}\tau_{k}^{\alpha} \tag{13}\] \[+ i\gamma\sum_{\langle jk\rangle_{\alpha}}^{\mathrm{inter\ layer}} (\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\tau_{j}^{\alpha}\tau_{k}^{\alpha}-1),\] where \(\sigma\)\(\&\ \tau\) are Pauli spin operators acting on the original kets and copied kets; \(\sigma_{j}^{\alpha}\tau_{k}^{\beta}(|m\rangle\otimes|n\rangle)=(\sigma_{j}^{ \alpha}|m\rangle)\otimes(\tau_{k}^{\beta}|n\rangle)\). We now have a bilayer spin model with the intralayer Kitaev interactions (\(K\ \&\ -K\)) and the interlayer interactions (\(i\gamma\)) [Fig. 1(b)]. Note that the interlayer interactions are non-Hermitian originating from the environment effects. This leads to non-unitary time evolution of the state vector, \(|\rho(t)\rangle\!\rangle=e^{-it\mathcal{H}}|\rho(0)\rangle\!\rangle\), which is identified from the relationship \(\langle\!\langle\rho|\rho\rangle\!\rangle=\exp(-S_{\mathrm{R\acute{e}nyi}})\) and Fig. 2(b). The non-Hermitian bilayer model has two types of \(\mathbb{Z}_{2}\) flux operators, \(\hat{W}_{p}\) and \(\hat{Z}_{p}=\tau_{1}^{z}\tau_{2}^{y}\tau_{3}^{x}\tau_{4}^{y}\tau_{5}^{y}\tau_{6}^ {x}\), commuting with the Hamiltonian (\([\mathcal{H},\hat{W}_{p}]=[\mathcal{H},\hat{Z}_{p}]=[\hat{W}_{p},\hat{Z}_{p^{ \prime}}]=0\)). 
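The vectorization convention above can be checked numerically. The snippet below is a sanity check on random matrices, not on the Kitaev model itself: it builds the general non-Hermitian generator \(\mathcal{H}\) given after Eq. (11) for a single jump operator and verifies that its action on the row-major vectorization \(|\rho\rangle\!\rangle=\sum_{mn}\rho_{mn}|m\rangle\otimes|n\rangle\) reproduces the original Lindbladian of Eq. (1).

```
import numpy as np

rng = np.random.default_rng(0)
d = 4                                        # toy Hilbert-space dimension

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                     # random Hermitian "Hamiltonian"
L = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))   # one jump operator
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = B @ B.conj().T
rho /= np.trace(rho)                         # random density matrix
I = np.eye(d)
LdL = L.conj().T @ L

# Original description: Lindbladian acting on the density matrix, Eq. (1)
lind = -1j * (H @ rho - rho @ H) + L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

# Doubled description: non-Hermitian generator acting on |rho>> = vec(rho), row-major
calH = (np.kron(H, I) - np.kron(I, H.T)
        + 1j * (np.kron(L, L.conj())
                - 0.5 * np.kron(LdL, I)
                - 0.5 * np.kron(I, LdL.conj())))

# d|rho>>/dt = -i calH |rho>> must equal vec(L(rho))
print(np.allclose(lind.reshape(-1), -1j * calH @ rho.reshape(-1)))   # True
```

The same identity is what makes the doubled (state-vector) description convenient: every statement about the Lindblad dynamics of \(\hat{\rho}\) can be translated into a statement about non-unitary Schrödinger dynamics generated by \(\mathcal{H}\).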
This model can be viewed as a non-Hermitian analog of the bilayer spin model in Ref. [39]. _Steady state degeneracy._ The bilayer model approach enables us to discover exact steady state solutions and their Figure 4: Decomposition of the density matrix. Numerically obtained density matrices at different times (\(t=0.1,0.3,0.5,0.7\)) are decomposed by \(\rho_{n}=\langle\Psi_{n}|\hat{\rho}(t)|\Psi_{n}\rangle\). At \(t=0.7\), the components become almost constant, \(\rho_{n}=2^{-N/2+1}\simeq 4.9\times 10^{-4}\) (\(N=24\)). The results are obtained with the coupling constants, \(K=-1\) and \(\gamma=1\). extensive degeneracy. We find the steady state condition, \[\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\tau_{j}^{\alpha}\tau_{k}^{ \alpha}|\rho\rangle\!\rangle=|\rho\rangle\!\rangle\quad\Rightarrow\quad\mathcal{H} |\rho\rangle\!\rangle=0. \tag{14}\] Any state that satisfies the above local constraint is basically a steady state. For instance, we may consider the singlet product state, \[|\lambda\rangle\!\rangle=\otimes_{j}|s\rangle_{j}, \tag{15}\] where \(|s\rangle=\frac{1}{\sqrt{2}}(|\uparrow\rangle\!\rangle\otimes|\downarrow\rangle -|\downarrow\rangle\otimes|\uparrow\rangle)\) is the spin-singlet of \(\sigma\) and \(\tau\) spins. One can check that \(\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\tau_{j}^{\alpha}\tau_{k}^{\alpha}| \lambda\rangle=|\lambda\rangle\!\rangle\). From the singlet product state, we may generate further steady states as follows. \[|\rho_{\rm ss}\{n\}\rangle\!\rangle=\prod_{l=x,y}(\mathcal{\hat{ W}}_{l})^{n_{l}}\prod_{p}(\hat{W}_{p})^{n_{p}}|\lambda\rangle\!\rangle, \tag{16}\] where \(n_{l}=0,1\) (\(l=x,y\)), and \(n_{p}=0,1\) at each plaquette \(p\). Instead of the states, we consider the \(\mathbb{Z}_{2}\) flux eigenstates, \[|\rho_{\rm ss}\{w\}\rangle\!\rangle=\mathcal{N}\prod_{l=x,y}(1+w _{l}\mathcal{\hat{W}}_{l})\prod_{p}(1+w_{p}\mathcal{\hat{W}}_{p})|\lambda \rangle\!\rangle, \tag{17}\] where \(\mathcal{N}\) is a normalization constant, and \(w_{p}(=\pm 1)\) & \(w_{l}(=\pm 1)\) specify the flux sector including the Wilson loop flux. By counting all these states, we find the degeneracy of the steady state manifold, or steady state degeneracy (SSD): \[\text{SSD}=2^{2}\times 2^{N/2-1}. \tag{18}\] The first factor accounts for the four Wilson loop flux sectors (\(w_{x}=\pm 1\) & \(w_{y}=\pm 1\)) while the second one counts the number of possible flux sectors (\(w_{p}=\pm 1\)) on torus geometry. By applying the inverse vectorization (\(|s\rangle\Rightarrow i\sigma^{y}/\sqrt{2}\)) to \(|\rho_{\rm ss}\{w\}\rangle\), we obtain the density matrices of the steady states: \[\hat{\rho}_{\rm ss}\{w\}=\mathcal{N}\prod_{l=x,y}(1+w_{l}\mathcal{ \hat{W}}_{l})\prod_{p}(1+w_{p}\mathcal{\hat{W}}_{p})\prod_{j}\frac{i\sigma_{j }^{y}}{\sqrt{2}}. \tag{19}\] On torus geometry, \(\prod_{j}\frac{i\sigma_{j}^{y}}{\sqrt{2}}\) is equivalent to a product of flux operators thus can be absorbed into the term \(\prod_{p}(1+w_{p}\mathcal{\hat{W}}_{p})\)[42]. If we focus on the flux sector (\(w_{p}=+1,w_{l}=-1\)), we obtain the steady state in Eq. (8). _Dynamical anyon condensation._ A fascinating physics is hidden in the Lindblad dynamics of Fig. 2, which we unveil by using the bilayer description. 
The initial state is the KSL bilayer product state, \(|\rho(t=0)\rangle\!\rangle=|\Psi_{\rm KSL}\rangle\otimes|\Psi_{\rm KSL}\rangle\), and the steady state is given by \(|\rho(t\rightarrow\infty)\rangle\!\rangle=|\rho_{\rm MSKSL}\rangle\!\rangle\), i.e., \[|\rho(t\rightarrow\infty)\rangle\!\rangle=\frac{1}{2^{N/2+1}} \prod_{l=x,y}(1-\mathcal{\hat{W}}_{l})\prod_{p}(1+\mathcal{\hat{W}}_{p})| \lambda\rangle\!\rangle.\] We find that this steady state is nothing but a \(\mathbb{Z}_{2}\) spin liquid. Therefore, the time evolution from \(|\rho(t=0)\rangle\!\rangle\) to \(|\rho(t\rightarrow\infty)\rangle\!\rangle\) is a dynamical transition from the KSL\(\times\)KSL state to a \(\mathbb{Z}_{2}\) spin liquid state. How does the state \(|\rho(t\rightarrow\infty)\rangle\!\rangle\) represent a \(\mathbb{Z}_{2}\) spin liquid? First, we note that \(|\lambda\rangle\!\rangle\) satisfies the condition \(\phi_{j}^{x}=\phi_{j}^{y}=\phi_{j}^{z}=-1\) at every site, where \(\phi_{j}^{\alpha}\equiv\sigma_{j}^{\alpha}\tau_{j}^{\alpha}\)[39]. If a flux operator \(\hat{W}_{p}\) acts on \(|\lambda\rangle\!\rangle\), it flips the sign of \(\phi_{j}^{\alpha}\) (\(-1\rightarrow+1\)) along the bonds of the plaquette \(p\). Using these properties, we assign a \(\mathbb{Z}_{2}\) variable \(z_{jk}(\equiv\phi_{j}^{\alpha}=\phi_{k}^{\alpha}=\pm 1)\) to each bond \(\langle jk\rangle_{\alpha}\). Then, we immediately recognize that the state \(\prod_{p}(1+\mathcal{\hat{W}}_{p})|\lambda\rangle\!\rangle\) is equivalent with the \(\mathbb{Z}_{2}\) spin liquid state of the toric code model [1]; see Fig. 5(a). As already shown in Eqs. (8) and (9), the steady state Figure 5: Two equivalent representations of the \(\mathbb{Z}_{2}\) spin liquid steady state \(|\rho(t\rightarrow\infty)\rangle\!\rangle\). (a) Toric code state representation. Black lines denote the \(\mathbb{Z}_{2}\) variables \(\{z_{jk}=-1\}\). The sign-flipped \(\mathbb{Z}_{2}\) variables \(\{z_{jk}=+1\}\) are depicted by red lines. (b) Anyon condensation representation. The condensed anyons are fermion pairs across the layers (blue balls connected by a dashed line). can be represented by \[|\rho(t\rightarrow\infty)\rangle\!\rangle=\sum_{n}\rho_{n}|\Psi_{n}\rangle\otimes| \Psi_{n}\rangle,\] where \(\{|\Psi_{n}\rangle\}\) are the states of the zero flux sector including the KSL ground state and excited states. Note that each state has an even number of fermion excitations (without any flux excitations). The state \(|\rho(t\rightarrow\infty)\rangle\!\rangle\) is an equal weight superposition of all the bilayer product states \(\{|\Psi_{n}\rangle\otimes|\Psi_{n}\rangle\}\). This implies that \(|\rho(t\rightarrow\infty)\rangle\!\rangle\) is the \(\mathbb{Z}_{2}\) spin liquid state that emerges from the KSL\(\times\)KSL state by condensing fermion pairs between the two layers [Fig. 5(b)]. To confirm the fermion pair condensation, we consider the loop operator \[\hat{\mathcal{A}}=(\tau_{1}^{y}\tau_{6}^{y})(\tau_{6}^{z}\tau_{5}^{z})(\tau_{5 }^{x}\tau_{4}^{x})(\sigma_{4}^{y}\sigma_{3}^{y})(\sigma_{3}^{z}\sigma_{2}^{z} )(\sigma_{2}^{x}\sigma_{1}^{x}), \tag{20}\] which measures whether fermion excitations of the KSL can move between the two layers, i.e., the fermion pair condensation between the layers [39]. 
We indeed find that the expectation value \(\langle\!\langle\hat{\mathcal{A}}\rangle\!\rangle\equiv\langle\!\langle\rho| \hat{\mathcal{A}}|\rho\rangle\!\rangle/\langle\!\langle\rho|\rho\rangle\!\rangle\) increases from almost zero (\(t=0\)) to one (\(t\rightarrow\infty\)); see Fig. 6. Furthermore, we explicitly check the steady state condition, \(\langle\!\langle\sigma_{j}^{\alpha}\sigma_{k}^{\alpha}\tau_{j}^{\alpha}\tau_{ k}^{\alpha}\rangle\!\rangle=1\), and entanglement entropy between the two layers in the steady state limit [42]. All these results confirm the dynamical anyon condensation from the KSL\(\times\)KSL state to the \(\mathbb{Z}_{2}\) spin liquid state. _Discussion._ This work establishes that the \(\mathbb{Z}_{2}\) toric code (TC) spin liquid is equivalent with an anyon condensed phase of the Kitaev spin liquid bilayer. Here, the condensed anyons are the fermion pairs \(\psi_{1}\boxtimes\psi_{2}\) (\(\psi_{1},\psi_{2}\): fermion excitations in the upper and lower layer KSLs, respectively). \[\mathcal{L}_{\text{KSL}}:\text{KSL}\times\text{KSL}\xrightarrow{\langle\psi_ {1}\boxtimes\psi_{2}\rangle\neq 0}\mathbb{Z}_{2}\text{TC} \tag{21}\] We showed this by using the Lindblad dynamics of the KSL and analyzing the exact solution of the mixed-state KSL via the Choi-Jamiolkowski isomorphism. The results are summarized by Eq. (8) and Fig. 5, elucidating the anyon condensation transition in a simple, exact, but nontrivial way. The open quantum system of the Kitaev spin liquid suggests general principles for inducing anyon condensations in quantum spin liquids dynamically. A quantum spin liquid is usually realized by preserving a certain type of local constraints (e.g., flux quantum number in the Kitaev spin liquid). The quantum spin liquid is embedded in a Lindbladian that preserves the local constraints. Then, the steady state should appear in the form of a maximally mixed state within the subspace defined by the constraints. In the bilayer description, the maximally mixed state corresponds to a new quantum spin liquid emerging from the initial-state spin liquid bilayer by anyon condensation. The key point is that the _maximal mixing_ in the original system corresponds to the _anyon condensation_ in the doubled system. This picture applies to general quantum spin liquids. We take one more example: an open quantum system of the toric code model [1]. The toric code system \(H_{\text{TC}}=-J_{A}\sum_{s}A_{s}-J_{B}\sum_{p}B_{p}\) (where \(J_{A,B}>0\), \(A_{s}=\prod_{j\in s}\sigma_{j}^{x}\) and \(B_{p}=\prod_{j\in p}\sigma_{j}^{x}\)) is embedded in the Lindbladian with the jump operator \(L_{\mu}=\sqrt{\gamma}\sigma_{j}^{x}\). Note that the jump operator creates a pair of \(m\)-particles (\(B_{p}=-1\)) without touching the \(e\)-particle sector. In this case, the Lindblad equation [Eq. (1)] is exactly solved in the steady state limit. The \(\mathbb{Z}_{2}\) toric code spin liquid state \(|A_{s}=1\ \&\ B_{p}=1\rangle\) evolves in time to the exact steady state \[\hat{\rho}_{\text{ss}}\propto\prod_{s}\frac{1+A_{s}}{2}, \tag{22}\] which is the maximally mixed state of all possible \(m\)-particle excitations (with no \(e\)-particles). In the doubled system description, the maximally mixedness corresponds to the condensation of \(m\)-particle pairs, \(m_{1}\boxtimes m_{2}\), yielding another \(\mathbb{Z}_{2}\) toric code spin liquid in the steady state \(|\rho_{\text{ss}}\rangle\!\rangle\). 
\[\mathcal{L}_{\text{TC}}:\mathbb{Z}_{2}\text{TC}\times\mathbb{Z}_{2}\text{TC} \xrightarrow{\langle m_{1}\boxtimes m_{2}\rangle\neq 0}\mathbb{Z}_{2}\text{TC} \tag{23}\] One can condense \(e\)-particles (\(A_{s}=-1\)) instead by considering the jump operator \(L_{\mu}=\sqrt{\gamma}\sigma_{j}^{z}\). Qualitatively same results are obtained in this case due to the duality between the \(e\)- and \(m\)-particles. More details can be found in Supplemental Material [42]. To summarize, we proposed a new platform for anyon condensation topological transitions, i.e., open quantum spin liquids. The main idea is to induce a mixed-state quantum spin liquid in the steady state limit by using the Lindblad dynamics, which gives rise to an anyon-condensed phase in the corresponding doubled system. This framework can be applied to more generic systems with topological orders. _Acknowledgements._ I thank Jong Yeon Lee for valuable discussions and also for introducing related studies [47; 48]. This work was supported by Individual Grants (No. PG071402 & PG071403) of Korea Institute for Advanced Study (KIAS). Computations were performed on clusters at the Center for Advanced Computation (CAC) of KIAS.
2302.02065
Sensing aided Channel Estimation in Wideband Millimeter-Wave MIMO Systems
In this work, the uplink channel estimation problem is considered for a millimeter wave (mmWave) multi-input multi-output (MIMO) system. It is well known that pilot overhead and computation complexity in estimating the channel increases with the number of antennas and the bandwidth. To overcome this, the proposed approach allows the channel estimation at the base station to be aided by the sensing information. The sensing information contains an estimate of scatterers locations in an environment. A simultaneous weighting orthogonal matching pursuit (SWOMP) - sparse Bayesian learning (SBL) algorithm is proposed that efficiently incorporates this sensing information in the communication channel estimation procedure. The proposed framework can cope with scenarios where a) scatterers present in the sensing information are not associated with the communication channel and b) imperfections in the scatterers' location. Simulation results show that the proposed sensing aided channel estimation algorithm can obtain good wideband performance only at the cost of fractional pilot overhead. Finally, the Cramer-Rao Bound (CRB) for the angle estimation and multipath channel gains in the SBL is derived, providing valuable insights into the local identifiability of the proposed algorithms.
Rakesh Mundlamuri, Rajeev Gangula, Christo Kurisummoottil Thomas, Florian Kaltenberger, Walid Saad
2023-02-04T02:26:22Z
http://arxiv.org/abs/2302.02065v1
# Sensing aided Channel Estimation in Wideband Millimeter-Wave MIMO Systems ###### Abstract In this work, the uplink channel estimation problem is considered for a millimeter wave (mmWave) multi-input multi-output (MIMO) system. It is well known that pilot overhead and computation complexity in estimating the channel increases with the number of antennas and the bandwidth. To overcome this, the proposed approach allows the channel estimation at the base station to be aided by the sensing information. The sensing information contains an estimate of scatterers locations in an environment. A simultaneous weighting orthogonal matching pursuit (SWOMP) - sparse Bayesian learning (SBL) algorithm is proposed that efficiently incorporates this sensing information in the communication channel estimation procedure. The proposed framework can cope with scenarios where a) scatterers present in the sensing information are not associated with the communication channel and b) imperfections in the scatterers' location. Simulation results show that the proposed sensing aided channel estimation algorithm can obtain good wideband performance only at the cost of fractional pilot overhead. Finally, the Cramer-Rao Bound (CRB) for the angle estimation and multipath channel gains in the SBL is derived, providing valuable insights into the local identifiability of the proposed algorithms. ## I Introduction Millimeter wave (mmWave) and terahertz (THz) frequencies are considered to be a key component of 5G and 6G cellular systems [1]. However, as the operating frequencies increase, path and absorption losses also increase. Despite these disadvantages, this approach will allow packing more antennas in a small area and, then, the network can leverage beamforming techniques to compensate for the losses operating in such frequencies. However, the gains stemming from these multiple antenna techniques hinge on the ability to accurately estimate the channel state information (CSI). Estimating channel coefficients over a wideband and across multiple antennas incurs significant resource overhead in terms of resources occupied for sending pilot symbols. However, it has been observed that the mmWave channel exhibits a sparse behavior with only a few resolvable multi-paths in angle and delay domain [2] and [3]. By leveraging such sparsity, several works have come with compressed sensing (CS) based approaches for channel estimation and precoder design in mmWave multi-input multi-output (MIMO) systems [4, 5, 6, 7, 8, 9]. However, while used in wideband massive MIMO systems, these approaches lead to higher complexity due to the requirement of inverting huge matrices (for every subcarrier) across such antenna arrays. Since the sparse wireless channel is described by a few geometric multi-path propagation parameters, one might ask: Can the information on the physical propagation environment, for example, scatter or reflector locations, be useful in channel estimation? Indeed, one of the earlier works in [10] has utilized this key observation. The authors extract physical multi-path parameters from the CSI measurements in one frequency band and then use them to construct the CSI in another frequency band. However, no extra pilots are used in aiding the channel estimation, and they assume that the extracted multi-path parameters are perfect. On the other hand, advances in radar and joint communication sensing made it possible to have real-time dynamic radio environment maps at the communicating devices. 
Prior works in [11, 12, 13, 14, 15, 16] use such radar environment side information for the beam prediction and beam alignment to reduce the initial synchronization time in vehicular systems. A recent work in [17] tried to address the problem of channel estimation in massive MIMO systems by leveraging radar sensing information. The authors retrieve the scatterers location and velocity in the surrounding environment from the measurements collected by a co-located radar at the base station (gNB). Then multi-path parameters such as delays and angles are extracted from the radar sensing information. These inferred multi-path parameters from the radar are then used to initialize the dictionary in an orthogonal matching pursuit (OMP) based channel estimation algorithm. However, the extracted multi-path parameters from the radar are assumed to be error-free. The limitations of existing literature on radar-based sensing for channel estimation in massive MIMO systems include the following. Firstly, the sensing based channel estimation Fig. 1: Uplink multi-path scenario along with the co-located radar. algorithms may have limited angle and delay resolution, resulting in imprecise channel estimation and affecting the system performance. Secondly, utilizing separate spectrum for sensing and communication leads to inefficient resource utilization. But this can be overcome by using full-duplex techniques [18], which requires efficient signal processing to mitigate the self-interference components. Thirdly, implementing sensing baded channel estimation in practice can be difficult due to the increasing radio frequency (RF) components and computational complexity, especially in massive MIMO systems with many antennas. These limitations highlight the need for further research to improve sensing based channel estimation's accuracy, efficiency, and robustness in massive MIMO systems. Specifically, this paper proposes novel signal processing methods to bridge the gap on the limited angular and delay resolution issue mentioned above. In this work, we consider the problem of channel estimation in a wideband mmWave MIMO system in which sensing information is obtained from a co-located radar at the gNB as shown in Fig. 1 is used to reduce the pilot overhead. The contributions of our paper are summarized as follows, 1. Unlike [17], we assume that the sensing information from the radar can be erroneous. We also consider cases in which scatterers detected from radar might not be associated with the communication channel. 2. To address these issues, a novel Simultaneous Weighting Orthogonal Matching Pursuit (SWOMP) - Sparse Bayesian Learning (SBL) based channel estimation is proposed that incorporates the imperfect sensing information from the radar. 3. We also provide local identifiability analysis for the parameter estimation using SBL by deriving Cramer-Rao bound (CRB) for the joint angle of arrival (AoA) and path gain estimation using SBL. ## II System Model We consider a scenario where a user (UE) communicates with base station (gNB) in an environment with the scatterers located between them. The scatterers are represented by \(\mathcal{S}_{r}\), and \(|\mathcal{S}_{r}|=L_{r}\). Only a subset of these scatterers, \(\mathcal{S}_{c}\subseteq\mathcal{S}_{r}\), \(|\mathcal{S}_{c}|=L_{c}\), are assumed to affect the UE-gNB communication channel. The set \(\mathcal{S}_{c}\) is unknown, however, we assume that location estimates of scatterers in \(\mathcal{S}_{r}\) are provided by a sensing system. 
This represents a scenario where the scatterers are present in the blind zone to UE but can be detected by a sensing system co-located at the gNB as shown in the Fig. 2. ### _Sensing Information_ We assume that the sensing is accomplished at the gNB either using a co-located radar operating in a seperate spectrum [19] or through a joint communication sensing framework [20]. The sensing information available at the gNB is given by \[\{(\tau_{\ell}^{rad}+e_{\tau},\theta_{l}+e_{\theta})\ |\ l=1,2,\ldots,L_{r}\},\] where \(\tau_{\ell}^{rad}\) and \(\theta_{\ell}\) represent the round trip delay and angle of the \(l\)-th scatterer from the gNB, respectively. The error in the estimated parameters is assumed to be Gaussian distributed as \(e_{\theta}\sim\mathcal{N}(0,\sigma_{\theta}^{2})\) and \(e_{\tau}\sim\mathcal{N}(0,\sigma_{\tau}^{2})\). The error in the radar spatial information can appear due to noise and the inability of the radar to resolve delay and/or angles sufficiently. ### _Communication Model_ We consider a mmWave orthogonal frequency division multiplexing (OFDM) system with a single antenna UE and \(M\) antenna gNB. The gNB is equipped with a uniform linear array (ULA) with half-wavelength spacing between consecutive antennas. The UE sends \(P\ll K\) (narrowband) pilots, where \(K\) is the total number of subcarriers used for communication. The received complex baseband signal at the \(k\)-th subcarrier after down-conversion, zero prefix removal, OFDM demodulation, and correlation with the pilots is given by \[\boldsymbol{y}[k]=\boldsymbol{h}[k]+\boldsymbol{n}[k], \tag{1}\] where \(k=0\) to \(K-1\), \(\boldsymbol{h}[k]\in\mathbb{C}^{M\times 1}\) represent the baseband channel, \(\boldsymbol{n}[k]\sim\mathcal{CN}(0,\sigma^{2}\boldsymbol{I}_{M})\) is a circularly symmetric complex Gaussian distributed additive noise vector. We define the received signal-to-noise-ratio (SNR) at subcarrier \(k\) as \(\left\|\boldsymbol{h}[k]\right\|^{2}/\sigma^{2}\). Next, we describe the mmWave channel model generation that is a parametric function of the multipath components. ### _Channel Model_ A frequency-selective geometric channel model with \(N_{c}\) delay taps and \(L_{c}+1\) paths [7] is considered. The channel consists of a line-of-sight (LoS) component, and \(L_{c}\) (yet unknown) reflections resulting from the scatterers as described earlier. The \(d\)-th delay tap is modeled as \[\boldsymbol{h}_{d}=\sqrt{\frac{M}{L_{c}+1}}\sum\limits_{\ell=0}^{L_{c}}{\alpha _{\ell}p(dT_{s}-\tau_{\ell})\boldsymbol{a}(\theta_{\ell})}, \tag{2}\] where \(p(.)\) is the pulse-shaping filter, \(T_{s}\) is the sampling interval, \(\alpha_{\ell}\), \(\tau_{\ell}\), \(\theta_{\ell}\) represent the path gain, delay and the angle-of-arrival (AoA) of the \(l\)-th path, respectively. The receiver array steering vector for the \(l\)-th path is denoted by \(\boldsymbol{a}(\theta_{\ell})\in\mathbb{C}^{M\times 1}\). The index \(\ell{=}0\) is always associated with the LoS path. We can compactly represent the channel as \(\boldsymbol{h}_{d}{=}\boldsymbol{A}\boldsymbol{\Delta}_{d}\), where \(\boldsymbol{A}{=}[\boldsymbol{a}(\theta_{0})\ \boldsymbol{a}(\theta_{1})\ \ldots\ \boldsymbol{a}(\theta_{L_{c}})]\in\mathbb{C}^{M\times(L_{c}+1)}\) contains the receiver side steering vectors and \[\boldsymbol{\Delta}_{d}=\big{[}\ \alpha_{0}p(dT_{s}-\tau_{0}),\cdots,\alpha_{L_{c}}p(dT_{s}- \tau_{L_{c}})\ \big{]}^{\mathrm{T}}. 
\tag{3}\] We obtain the frequency domain channel representation by taking a \(K\)-point DFT of the delay-domain channel, and the channel at subcarrier \(k\) can be written as \[\boldsymbol{h}[k]=\sum\limits_{d=0}^{N_{c}-1}\boldsymbol{h}_{d}\exp\left(- \frac{j2\pi kd}{K}\right)=\boldsymbol{A}\boldsymbol{\Delta}[k], \tag{4}\] and \(\boldsymbol{\Delta}[k]\) is given by \(\boldsymbol{\Delta}[k]=\sum\limits_{d=0}^{N_{c}-1}\boldsymbol{\Delta}_{d} \exp\left(-\frac{i2\pi kd}{K}\right).\) Further substituting for \(\boldsymbol{\Delta}_{d}\) from (3), we obtain \[\boldsymbol{\Delta}[k]=\left[\beta_{k,0}\alpha_{0},\beta_{k,1}\alpha_{1}, \ldots,\beta_{k,L_{c}}\alpha_{L_{c}}\right]^{\mathrm{T}}, \tag{5}\] where \(\beta_{k,\ell}{=}\sum_{d=0}^{N_{c}-1}p(dT_{s}-\tau_{\ell})\exp\left(-\frac{i2\pi kd }{K}\right)\). Substituting \(\mathbf{\Delta}[k]\) in (4), a compact form of the frequency domain channel \(\mathbf{h}[k]\) can be obtained as \[\mathbf{h}[k]=\mathbf{A}\mathbf{\beta}_{k}\mathbf{\alpha}, \tag{6}\] where \(\mathbf{\beta}_{k}{=}\text{diag}\big{(}\beta_{k,0},\beta_{k,1},\ldots,\beta_{k,L_{ c}-1}\big{)}\), and \(\mathbf{\alpha}{=}[\alpha_{0},\alpha_{1},\ldots,\alpha_{L_{c}}]^{\mathrm{T}}\). Further substituting \(\mathbf{h}[k]\) in (1), the received frequency domain signal \(\mathbf{y}[k]\) can be written as \[\mathbf{y}[k]=\mathbf{\Psi}_{k}\mathbf{\alpha}+\mathbf{n}[k], \tag{7}\] where \(\mathbf{\Psi}_{k}=\mathbf{A}\mathbf{\beta}_{k}\in\mathbb{C}^{M\times(L_{c}+1)}\). ## III Sensing Aided Channel Estimation In this section, we provide a channel estimation framework that incorporates the sensing information available at the gNB. From (7), the received \(P\) pilots in vectorized form is given by \[\mathbf{y} =\begin{bmatrix}\mathbf{y}^{\mathrm{T}}[0]\,\mathbf{y}^{\mathrm{T}}[1] \,\ldots\,\mathbf{y}^{\mathrm{T}}[P-1]\end{bmatrix}^{\mathrm{T}}, \tag{8}\] \[\mathbf{y} =\underbrace{\big{\{}\mathbf{\Psi}_{0}^{\mathrm{T}}\mathbf{\Psi}_{1}^{ \mathrm{T}}\cdots\mathbf{\Psi}_{P-1}^{\mathrm{T}}\big{\}}}_{\mathbf{\Omega}}\mathbf{ \alpha}+\mathbf{n}, \tag{9}\] where the matrix \(\mathbf{\Omega}\in\mathbb{C}^{MP\times(L_{c}+1)}\) carries the delay-angle information of the multipath components and \(\mathbf{n}\) is the vectorized noise \(\mathbf{n}=[\mathbf{n}^{\mathrm{T}}[0]\mathbf{n}^{\mathrm{T}}[1]\cdots\mathbf{n}^{\mathrm{T}} [P-1]]^{\mathrm{T}}\). Moreover, the sensing information can be used as an initial estimate of the multipath delays and angles. Let \(\tilde{\mathbf{\theta}}=[\theta_{0},\tilde{\theta}_{1},\tilde{\theta}_{2},\ldots, \tilde{\theta}_{L_{c}}]\), where \(\theta_{0}\) is the angle associated with the LoS path and \(\tilde{\theta}_{l}\), \(l\in[1,L_{r}]\) is the AoA of the \(l\)-th path obtained from the sensing information. The round-trip propagation delay between the gNB and the \(l\)-th, \(l\in[1,L_{r}]\), scatterer is denoted by \(\tau_{\ell}^{rad}\). Let us define \(\tilde{\mathbf{\tau}}=[\tau_{0},\tilde{\tau}_{1},\tilde{\tau}_{2},\ldots,\tilde{ \tau}_{L_{r}}]\), where \(\tau_{0}\) is the delay between the UE and the gNB, and the delay of the \(\ell\)-th communication path can be estimated using the radar delay \(\tau_{\ell}^{rad}\) as \[\tilde{\tau}_{\ell}=\tau_{\ell}^{rad}/2+\tau_{\ell}^{\prime}, \tag{10}\] where \(\tau_{\ell}^{\prime}\) is obtained using triangle laws of cosines as shown in Fig. 3, \[\tau_{\ell}^{\prime}=\sqrt{\tau_{0}^{2}+\left(\tau_{\ell}^{rad}/2\right)^{2}- \tau_{0}(\tau_{\ell}^{rad})cos(\tilde{\theta}_{\ell}-\theta_{0})}. 
\tag{11}\] Similar to the matrix \(\mathbf{\Omega}\) in (9), using the sensing information \((\tilde{\mathbf{\tau}},\tilde{\mathbf{\theta}})\), we can construct a matrix \(\tilde{\mathbf{\Omega}}\in\mathbb{C}^{MP\times(L_{r}+1)}\) that captures the delay-angle information of the \(L_{r}+1\) paths. As we described earlier, only a subset of \(L_{c}\) among the \(L_{r}\) scatterers are included in the communication channel, and \(L_{c}\) is unknown. This can be mathematically represented as, \[\mathbf{\Omega}=\tilde{\mathbf{\Omega}}\mathbf{B}+\mathbf{E}, \tag{12}\] where \(\mathbf{B}\in\mathbb{R}^{(L_{r}+1)\times(L_{c}+1)}\) is obtained by selecting \(L_{c}+1\) columns of the identity matrix \(\mathbf{I}_{L_{r}+1}\). The indices of the columns that are included in \(\mathbf{B}\), correspond to the paths that are present both in the communication channel and sensing information. The unknown error term is denoted by \(\mathbf{E}\). ### _Problem Formulation_ Utilizing the received pilot signal (9) and the sensing information in the form of (12), the maximum a posteriori (MAP) based channel estimation problem is formulated as: \[[\mathbf{\Omega}^{*},\mathbf{\alpha}^{*}]=\arg\max_{\mathbf{\Omega},\mathbf{\alpha}}p(\mathbf{ \Omega},\mathbf{\alpha}\mid\mathbf{y}), \tag{13}\] where \(p(.)\) represents the probability distribution and \(\mathbf{\alpha}\) is the channel gain vector. The optimization problem at hand is difficult to solve in general as a) it is hard to obtain the distribution \(p(\mathbf{\Omega},\mathbf{\alpha}\mid\mathbf{y})\) b) the combinatorial nature of the path association matrix \(\mathbf{B}\) and the unknown error. A conventional approach to relax this problem and solve it using compressed sensing schemes, such as SBL, by considering a joint dictionary matrix consisting of finely spaced angles and delays. However, such a solution results in cubic complexity with respect to the dictionary dimensions, which has to be finely spaced to alleviate the off-grid errors. Hence, we propose a two-stage SWOMP-SBL algorithm to overcome such high complexity. ## IV SWOMP-SBL Algorithm The proposed algorithm works in two stages. In the first stage, based on the sensing information, a SWOMP based algorithm is used to find the paths that are associated with the communication and their respective AoAs. Based on these selected paths, a SBL inference algorithm is used to obtain finer estimate of the delays and corresponding channel gains \(\tilde{\mathbf{\alpha}}\). A schematic describing this two-stage algorithm is shown in Fig 4. ### _SWOMP Stage_ The algorithm is initialized with assuming that all the \(L_{r}+1\) paths from the sensing information are present in the communication channel. The AoA's \(\tilde{\mathbf{\theta}}\) are used to form the Fig. 3: Communication delay estimation from radar delay. Fig. 2: Scatterer environment along with the sensing information. angle dictionary \(\mathbf{A}^{\prime}\) as described in steps 2 and 3 of the Algorithm 1. The SWOMP algorithm [21] outputs the maximum correlated paths \(\hat{\mathbf{\theta}}\) corresponding to the angle dictionary \(\mathbf{A}^{\prime}\) with the received signal \(\mathbf{y}\). The noise variance \(\sigma^{2}\) is utilized as a stopping condition in SWOMP, where all the refined angles associated with the channel and their corresponding path indices \(\mathbf{\chi}\) are estimated. However, a dictionary matrix \(\mathbf{\widehat{\Omega}}\) is needed to refine the delays further and estimate the channel gains. 
The path association matrix \(\mathbf{B}\) can be obtained from the estimated \(\mathbf{\chi}\), but it's avoided since the path indices are enough to create the dictionary matrix \(\mathbf{\widehat{\Omega}}\). \(\mathbf{\widehat{\Omega}}\) is constructed using the refined AoA \(\mathbf{\hat{\theta}}\) obtained using SWOMP and a finely space dictionary matrix of the associated delays. The association of the path is given by the path indices \(\mathbf{\chi}\) and maps the refined angles to their corresponding delays. The details of our algorithm are discussed in Algorithm 1. The refinement of the delays \(\mathbf{\tilde{\tau}}\) and their corresponding channel gains \(\mathbf{\alpha}\) are estimated using SBL with the obtained \(\mathbf{\widehat{\Omega}}\) in the next stage. The computational complexity of SWOMP in each iteration is \(MP(d_{\theta}L^{\prime})^{2}+MPd_{\theta}L_{r}+MPd_{\theta}L^{\prime}\). ### _SBL Stage_ Recalling the measurement equation with the obtained \(\mathbf{\widehat{\Omega}}\), we write \(\mathbf{y}=\mathbf{\widehat{\Omega}}\mathbf{\alpha}+\mathbf{n}\). We formulate the estimation method of \((\mathbf{\alpha},\mathbf{\tilde{\tau}})\) using SBL as follows. SBL is a type-II maximum likelihood (ML) estimation procedure to obtain the channel estimate [22, 23]. In this method, \(\mathbf{\alpha}\) is considered as a hidden variable, and posterior statistics are obtained given the observations. SBL assumes a complex Gaussian prior distribution for the entries of \(\mathbf{\alpha}\), which gets written as \(p(\alpha_{i}){=}\frac{\gamma_{i}}{\pi}\cdot\gamma_{i}(\alpha_{i})^{2}\). The hyperparameters \(\alpha_{i}\) also estimated using the inference procedure. \(\gamma_{i}\) is assumed to follow a Gamma distribution, \(\mathcal{G}(\gamma_{i};a,b){=}\Gamma((\gamma_{i}))^{-1}(\gamma_{i})^{a-1}e^{-b \gamma_{i}}b\gamma^{\ast_{i}}\). Defining, \(\mathbf{\Gamma}{=}\text{diag}(\mathbf{\gamma})\), where \(\mathbf{\gamma}\) is the vector of \(\gamma_{i}\). Noise is assumed to be complex Gaussian, \(\mathcal{CN}(\mathbf{0},\frac{1}{\sigma}\mathbf{\Gamma})\). \(\zeta\) is assumed to have Gamma as a prior distribution such that \(p_{\zeta}(\zeta){=}\mathcal{G}(\zeta;c,d)\), where \(c,d\) are known. Note that in the case of an uninformative prior, the values of \(a\) and \(b\) corresponds to 1 and 0 respectively. Now, the posterior distribution of \(\mathbf{\alpha}\) and the hyper-parameters \(\mathbf{\gamma},\zeta\) needs to be obtained. Since the prior and the noise are both Gaussian, obtaining the posterior statistics of \(\mathbf{\alpha}\) is straightforward. But, the computation of \(\mathbf{\gamma}\) requires the computation of the marginal probability distribution \(p(\mathbf{y};\mathbf{\gamma},\zeta)\) and maximizing it (alternatively) w.r.t. \(\mathbf{\gamma},\zeta\). This procedure is known as evidence maximization or type-II ML estimation. To solve this, expectation-maximization (EM) algorithm is used, which proceeds by lower bounding the logarithm of the evidence \(p(\mathbf{y};\mathbf{\gamma},\zeta)\), and maximizing it iteratively. Treating \(\mathbf{\alpha}\) as a hidden variable, In the expectation (E) step, expectation of the log likelihood of \((\mathbf{y},\mathbf{\alpha})\) w.r.t. \(p(\mathbf{\alpha}|\mathbf{y},\mathbf{\gamma},\zeta)\) is computed. In the maximization (M) step, the hyper-parameters \(\mathbf{\gamma},\zeta\) are computed by maximizing the function obtained in the E step. More details of SBL and type-II ML estimation can be found in [22]. 
Detailed steps for the channel estimation are provided in Algorithm 2. The SBL algorithm outputs the estimate of the channel gains \(\mathbf{\widehat{\alpha}}\). Using step 17 in Algorithm 1, the channel estimate \(\mathbf{\hat{h}}\) at the \(k\)-th subcarrier can be obtained by \(\hat{\mathbf{h}}_{k}{=}\mathbf{\hat{\Psi}}_{k}\mathbf{\widehat{\alpha}}\) for all the \(K\) subcarriers. The convergence properties of the SBL algorithm are well understood in the literature [22]. In short, using similar arguments in [22], we can show that the proposed SBL converges to the sparsest solution when the noise variance is zero and to a sparse local minimum, irrespective of the noise variance. The computational complexity of each iteration of SBL is \((MP)^{3}{+}2(MP)^{2}(d_{\tau}L^{\prime})+2(d_{\tau}L^{\prime})^{2}MP+(d_{\tau}L ^{\prime})^{2}+4MP(d_{\tau}L^{\prime})+(d_{\tau}L^{\prime})\). ``` 0:\(\mathbf{y},\mathbf{\hat{\theta}},\mathbf{\tilde{\tau}},d_{\theta},d_{\tau},\sigma_{\theta}, \sigma_{\tau},\sigma^{2}\) 1:Initialize: \(\mathbf{\chi}=\{\}\) 2:\(\mathbf{\theta}^{\prime}_{l}=\tilde{\theta}_{l}-2\sigma_{\theta}:\frac{4\sigma_{l} }{d_{\theta}}:\tilde{\theta}_{l}+2\sigma_{\theta}\in\mathbb{R}^{1\times d_{ \theta}}\) 3:\(\mathbf{A}^{\prime}\)=\([a(\mathbf{\theta}^{\prime}_{0})\ a(\mathbf{\theta}^{\prime}_{1})\ \cdots\ a(\mathbf{\theta}^{\prime}_{L_{\tau}})]\in\mathbb{C}^{M\times d_{\theta}(L_ {\tau}+1)}\) 4:\(\mathbf{\hat{\theta}}=\text{SWOMP}(\mathbf{y},\mathbf{A}^{\prime},\sigma^{2})\) 5:\(\mathbf{\hat{\theta}}\in\mathbb{R}^{1\times L^{\prime}}\) is \(\{\tilde{\theta}_{\ell}\ |\ \ell=0,1,\cdots,L^{\prime}-1\}\) 6:Path association: 7:for\(\ell=0:L^{\prime}-1\)do 8:\(p=\arg\min\{|\theta_{\ell}\mathbf{1}-\mathbf{\theta}|\}\)\(\triangleright\ \mathbf{1}\in 1^{1\times(L_{\tau}+1)}\) 9:\(\mathbf{\chi}=\mathbf{\chi}\cup p\)\(\triangleright\ \mathbf{p}=\text{path index}\) 10:endfor 11:\(\mathbf{\tilde{\tau}}(\mathbf{\chi})\in\mathbb{R}^{1\times L^{\prime}}\) is \(\{\tilde{\tau}_{\ell}(\mathbf{\chi})\ |\ \ell=0,1,\cdots,L^{\prime}-1\}\)\(\triangleright\) delays of the corresponding path index obtained in step 9 12:\(\mathbf{\hat{\tau}}_{\ell}=\tilde{\tau}_{\ell}(\mathbf{\chi})-2\sigma_{\tau}:\frac{d_{\sigma _{\tau}}}{d_{\tau}}:\tilde{\tau}_{\ell}(\mathbf{\chi})+2\sigma_{\tau}\in\mathbb{R }^{1\times d_{\tau}}\) 13:\(\mathbf{\hat{\tau}}=[\mathbf{\hat{\tau}}_{0}\ \mathbf{\hat{\tau}}_{1}\ \cdots\ \mathbf{\hat{\tau}}_{L^{\prime}-1}]\in \mathbb{R}^{1\times d_{\tau}L^{\prime}}\) 14:The resulting \(\mathbf{\beta}_{k}\) obtained using \(\mathbf{\hat{\tau}}\) is denoted as \(\mathbf{\hat{\beta}}_{k}\in\mathbb{C}^{d_{\tau}L^{\prime}\times d_{\tau}}\) 15:\(\mathbf{\hat{A}}_{\ell}=[a(\hat{\theta}_{\ell})a(\hat{\theta}_{\ell})\cdots a(\hat{ \theta}_{\ell})]\in\mathbb{C}^{M\times d_{\tau}}\triangleright\) Repeat \(d_{\tau}\) times 16:\(\mathbf{\hat{A}}=[\mathbf{\hat{A}}_{0}\ \mathbf{\hat{A}}_{1}\ \cdots\ \mathbf{\hat{A}}_{L^{\prime}-1}]\in \mathbb{C}^{M\times d_{\tau}L^{\prime}}\) 17:\(\mathbf{\hat{\Psi}}_{k}=\mathbf{\hat{A}}\mathbf{\hat{\beta}}_{k}\in\mathbb{C}^{M\times d _{\tau}L^{\prime}}\) 18:\(\mathbf{\widehat{\Omega}}=[\mathbf{\hat{\Psi}}_{0}^{\mathbf{\widehat{\Omega}}}\mathbf{\hat{\Psi}}_{ 1}^{\mathrm{T}}\cdots\mathbf{\hat{\Psi}}_{D-1}^{\mathrm{T}}]^{\mathrm{T}}\!\in \!\mathbb{C}^{MP\times d_{\tau}L^{\prime}}\) ``` **Algorithm 1** SWOMP Stage The application of the proposed SBL algorithm to the proposed SBL is shown in Fig. 4. 
### _Identifiability of the Proposed SBL: Minimum Narrowband Pilots Required?_ This subsection provides conditions under which the sensing-aided channel estimation using SBL becomes locally identifiable. Furthermore, the analysis herein provides the minimum number of pilots required for the proposed channel estimation algorithm. Under the SBL prior, the marginal distribution of the received pilots at subcarrier \(k\) is \[p_{\mathbf{y}}(\mathbf{y}[k])=\mathcal{CN}(\mathbf{0},\mathbf{\Psi}_{k}\mathbf{\Gamma}^{-1}(\mathbf{\Psi}_{k})^{H}+\zeta^{-1}\mathbf{I}). \tag{14}\] The signal model in (7) is non-identifiable if \(\mathbf{\Psi}_{k}\mathbf{\Gamma}_{1}^{-1}(\mathbf{\Psi}_{k})^{H}=\mathbf{\Psi}_{k}\mathbf{\Gamma}_{2}^{-1}(\mathbf{\Psi}_{k})^{H}\) for some \(\mathbf{\Gamma}_{1}^{-1}\neq\mathbf{\Gamma}_{2}^{-1}\). The rank of \(\mathbf{\Psi}_{k}^{T}\otimes\mathbf{\Psi}_{k}\) is denoted as \(R\leq MP\), where \(\otimes\) represents the Khatri-Rao product. Following an analysis similar to [24], we can show that the SBL algorithm is identifiable as long as \(L\) (the number of nonzero elements in \(\mathbf{\alpha}\)) is \(\mathcal{O}(R^{2})\) (\(=\mathcal{O}((MP)^{2})\) for suitable \(\mathbf{\Psi}_{k}^{T}\)). For a mmWave system, this implies that far fewer pilots suffice than the \(\mathcal{O}(K)\) pilots used in existing 5G-NR algorithms. Next, we look at the CRB of the estimation model. The local identifiability (up to permutation ambiguity) of the SBL based parameter estimation is ensured if the Fisher information matrix (FIM) is non-singular [25]. First, the estimated parameters are collected in the vector \(\mathbf{\Theta}=[\mathbf{\theta},\mathbf{\alpha},\mathbf{\gamma},\zeta,\mathbf{\tau}]\). 
The FIM can be partitioned as \[\mathbf{J}_{\mathbf{\Theta}\mathbf{\Theta}}=\begin{bmatrix}\mathbf{J}_{\mathbf{\theta}\mathbf{\alpha} }&\mathbf{J}_{\mathbf{\theta}\mathbf{\alpha}}&\mathbf{J}_{\mathbf{\theta}\mathbf{\gamma}}&\mathbf{J}_{ \mathbf{\theta}\mathbf{\gamma}}\\ \mathbf{J}_{\mathbf{\alpha}\mathbf{\theta}}&\mathbf{J}_{\mathbf{\alpha}\mathbf{\alpha}}&\mathbf{J}_{\mathbf{ \alpha}\mathbf{\gamma}}&\mathbf{J}_{\mathbf{\alpha}\mathbf{\zeta}}&\mathbf{J}_{\mathbf{\alpha}\mathbf{ \gamma}}\\ \mathbf{J}_{\mathbf{\gamma}\mathbf{\theta}}&\mathbf{J}_{\mathbf{\gamma}\mathbf{\gamma}}&\mathbf{J}_{\mathbf{ \gamma}\mathbf{\gamma}}&\mathbf{J}_{\mathbf{\gamma}\mathbf{\gamma}}\\ \mathbf{J}_{\mathbf{\gamma}\mathbf{\theta}}&\mathbf{J}_{\mathbf{\gamma}\mathbf{\alpha}}&\mathbf{J}_{\mathbf{ \gamma}\mathbf{\gamma}}&\mathbf{J}_{\mathbf{\gamma}\mathbf{\zeta}}&\mathbf{J}_{\mathbf{\gamma}\mathbf{ \tau}}\\ \end{bmatrix}, \tag{15}\] where \(\mathbf{J}_{\mathbf{x}\mathbf{y}}\)=\(\mathbb{E}\left(\frac{\partial\ln p(\mathbf{y},\mathbf{x})}{\partial\mathbf{x}}\frac{ \partial\ln p(\mathbf{y},\mathbf{x})}{\partial\mathbf{y}}\right)\). Each of the FIM blocks can be derived as (detailed derivations are skipped since those follows classical results in estimation theory) \[\mathbf{J}_{\mathbf{\theta}\mathbf{\theta}} =\mathbb{E}(\zeta)(\mathbf{\beta}_{k})^{H}\mathbb{E}(\frac{\partial \mathbf{A}(\mathbf{\theta})}{\partial\mathbf{\theta}}^{H}\frac{\partial\mathbf{A}(\mathbf{\theta} )}{\partial\mathbf{\theta}})\mathbf{\beta}_{k}\mathbb{E}(\mathbf{\Gamma}^{-1}), \tag{16}\] \[\mathbf{J}_{\mathbf{\theta}\zeta} =\text{diag}\left(\Re\{(\mathbf{\beta}_{k})^{H}\frac{\partial\mathbf{A} (\mathbf{\theta})}{\partial\mathbf{\theta}}^{H}\mathbf{A}\mathbf{\beta}_{k}\}\mathbb{E}(\mathbf{ \Gamma}^{-1})\right),\] (17) \[\mathbf{J}_{\mathbf{\gamma}\mathbf{\gamma}} =-\mathbb{E}(\mathbf{\Gamma}^{-1})+(a-1)\mathbb{E}(\mathbf{\Gamma}^{-1}),\] (18) \[\mathbf{J}_{\mathbf{\zeta}\zeta} =-MP\mathbb{E}(\zeta^{-2})+(c-1)\mathbb{E}(\zeta^{-1}),\] (19) \[\mathbf{J}_{\mathbf{\alpha}\mathbf{\alpha}} =-\mathbb{E}(\mathbf{\Gamma})-(\mathbf{\beta}_{k})^{H}(\mathbf{A})^{H}\mathbf{A} \mathbf{\beta}_{k}\mathbb{E}(\zeta),\] (20) \[\mathbf{J}_{\zeta\mathbf{\tau}} =\frac{\partial(\mathbf{\beta}_{k})^{H}}{\partial\mathbf{\tau}}\mathbb{E} \left(\mathbf{A}(\mathbf{\theta})^{H}\mathbf{A}(\mathbf{\theta})\right)\mathbf{\beta}_{k}\mathbb{E} (\mathbf{\Gamma}^{-1}),\] (21) \[\mathbf{J}_{\mathbf{\tau}\mathbf{\tau}} =\mathbb{E}(\zeta)\frac{\partial(\mathbf{\beta}_{k})^{H}}{\partial \mathbf{\tau}}\mathbb{E}\left(\mathbf{A}(\mathbf{\theta})^{H}\mathbf{A}(\mathbf{\theta})\right) \frac{\partial\mathbf{\beta}_{k}}{\partial\mathbf{\tau}}\mathbb{E}(\mathbf{\Gamma}^{-1}),\] (22) \[\mathbf{J}_{\mathbf{\theta}\mathbf{\tau}} =\mathbb{E}(\zeta)\frac{\partial(\mathbf{\beta}_{k})^{H}}{\partial \mathbf{\tau}}\mathbb{E}\left(\mathbf{A}(\mathbf{\theta})^{H}\frac{\partial\mathbf{A}(\mathbf{ \theta})}{\partial\mathbf{\theta}}\right)\mathbf{\beta}_{k}\mathbb{E}(\mathbf{\Gamma}^{-1}), \tag{23}\] and rest of the terms result to be zero. 
\[\mathbf{J}_{\mathbf{\Theta}\mathbf{\Theta}}=\begin{bmatrix}\mathbf{J}_{\mathbf{\theta}\mathbf{\theta}}& \mathbf{0}&\mathbf{0}&\mathbf{J}_{\mathbf{\theta}\mathbf{\zeta}}&\mathbf{J}_{\mathbf{\theta}\mathbf{\tau}}\\ \mathbf{0}&\mathbf{J}_{\mathbf{\alpha}\mathbf{\alpha}}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{J}_{\mathbf{\gamma}\mathbf{\gamma}}&\mathbf{0}&\mathbf{0}\\ \mathbf{J}_{\mathbf{\zeta}\mathbf{\theta}}&\mathbf{0}&\mathbf{0}&\mathbf{J}_{\mathbf{\zeta}\mathbf{\zeta}}&\mathbf{J}_ {\mathbf{\zeta}\mathbf{\tau}}\\ \mathbf{J}_{\mathbf{\tau}\mathbf{\theta}}&\mathbf{0}&\mathbf{0}&\mathbf{J}_{\mathbf{\tau}\mathbf{\zeta}}&\mathbf{J}_ {\mathbf{\tau}\mathbf{\tau}}\\ \end{bmatrix}. \tag{24}\] The CRB for \(\mathbf{\Theta}\) can be expressed as \(CRB(\mathbf{\Theta})\)=\(\mathbf{J}_{\mathbf{\Theta}\mathbf{\Theta}}^{-1}\). The CRB for AoA estimates and \(\mathbf{\alpha}\) can be written using Schur complement for inverting a block matrix as derived in (27), which can be simplified as \[CRB(\mathbf{\theta},\mathbf{\alpha})= \tag{25}\] \[\left[\begin{matrix}\left(\mathbf{J}_{\mathbf{\theta}\mathbf{\theta}}-\begin{bmatrix} \mathbf{J}_{\mathbf{\theta}\zeta}&\mathbf{J}_{\mathbf{\theta}\mathbf{\tau}}\end{bmatrix}\mathbf{F}_{ \zeta\mathbf{\tau}}^{-1}\begin{bmatrix}\mathbf{J}_{\mathbf{\theta}\zeta}&\mathbf{J}_{\mathbf{\theta} \mathbf{\tau}}\end{bmatrix}^{\mathrm{T}}\right)^{-1}&\mathbf{0}\\ \mathbf{0}&\mathbf{J}_{\mathbf{\alpha}\mathbf{\alpha}}^{-1}\end{matrix}.\] Following similar derivations, the CRB can be computed for \(\mathbf{\alpha},\mathbf{\gamma},\zeta\). From (26), we can conclude that for local identifiability of \(\mathbf{\theta},\mathbf{\alpha}\), \(\mathbf{J}_{\mathbf{\theta}\mathbf{\theta}}-\begin{bmatrix}\mathbf{J}_{\mathbf{\theta}\zeta}&\mathbf{J}_ {\mathbf{\theta}\mathbf{\tau}}\end{bmatrix}\mathbf{F}_{\zeta\zeta}^{-1}\begin{bmatrix}\mathbf{J}_ {\mathbf{\theta}\zeta}&\mathbf{J}_{\mathbf{\theta}\mathbf{\tau}}\end{bmatrix}^{\mathrm{T}}\), and \(\mathbf{J}_{\mathbf{\alpha}\mathbf{\alpha}}\) should be invertible, respectively. ## V Simulation Results In this section, the performance of our novel SWOMP-SBL sensing aided channel estimation algorithm is evaluated through numerical simulations. The system parameters considered here are, subcarrier spacing\(=\)\(120\,\)KHz, center frequency\(=\)\(28\,\)GHz, sampling rate\(=\)\(30.72\,\)MHz, sampling period \(T_{s}\)\(=\)\(32.552\,\)ns, fft size \(K\)\(=\)\(256\), cyclic prefix length \(N_{cp}\)\(=\)\(34\) and the number of receive antennas \(M\)\(=\)\(32\). The pilots are generated similar to the sounding reference signals (SRS) in 5G standards [26]. The location of the pilots in the OFDM grid are arranged in a comb fashion as defined in the 3GPP standard [26] i.e., one pilot for every \(K_{c}\) subcarriers as shown in Fig. 5. The channel is generated using a ray-tracing tool using the locations of gNB and UE with the number of delay taps \(N_{c}\)\(=\)\(N_{cp}\). The erroneous sensing information is generated with \(\sigma_{\theta}\)=\(3^{\circ}\) and \(\sigma_{\tau}\)=\(\frac{Ts}{6}\). In our scenario, pilots of size \(P\)\(=\)\(16\) are transmitted with a comb size \(K_{c}\)\(=\)\(16\). The erroneous AoA from the sensing information are refined using SWOMP with a dictionary matrix considering \(d_{\theta the communication channel. The parameters of the scatterers chosen for the simulations are \(L_{r}{=}10\) and \(L_{c}{=}6\). From Fig. 
6, we can see that with sensing information, SWOMP-SBL based channel estimation algorithm has a significant gain in the NMSE compared to the wideband classical LS and greedy SWOMP algorithm with fewer pilots and robust to the errors in the sensing information. Hence, we reduce the pilot overhead from \(100\%\) to \(6.25\%\). ## VI Conclusion In this paper, the uplink channel estimation aided by sensing information for mmWave MIMO systems has been studied. The proposed SWOMP-SBL algorithm, along with the sensing information, uses fewer uplink pilots compared to conventional state-of-the-art systems. The proposed scheme is also robust to erroneous sensing information, including unassociated paths in the sensing information. Simulation results have validated the superior performance using reduced uplink pilots for the proposed SWOMP-SBL scheme compared to conventional state-of-the-art algorithms. Finally, the CRB for the unknown parameters is derived, and local identifiability analysis has been presented.
2306.10196
Structured Thoughts Automaton: First Formalized Execution Model for Auto-Regressive Language Models
In recent months, Language Models (LMs) have become a part of daily discourse, with focus on OpenAI and the potential of Artificial General Intelligence (AGI). Furthermore, the leaking of LLama's weights to the public has led to an influx of innovations demonstrating the impressive capabilities of generative LMs. While we believe that AGI is still a distant goal, we recognize the potential of LMs in solving tasks such as searching complex documents, compiling reports with basic analysis, and providing assistance in problem-solving. In this paper, we propose formalizing the execution model of language models. We investigate current execution models, to find that this formalism has received little attention, and present our contribution: the first formalized execution model for LMs. We introduce a new algorithm for sampling the predictions of LMs, which we use to build a reliable and inspectable execution model. We introduce a low-level language to write "cognitive program" for this execution model. We hope to shed light on the need for execution models for LMs and encourage further research in this area.
Tristan Vanderbruggen, Chunhua Liao, Peter Pirkelbauer, Pei-Hung Lin
2023-06-16T22:04:50Z
http://arxiv.org/abs/2306.10196v1
# Structured Thoughts Automaton: First Formalized Execution Model for Auto-Regressive Language Models ###### Abstract In recent months, Language Models (LMs) have become a part of daily discourse, with focus on OpenAI and the potential of Artificial General Intelligence (AGI). Furthermore, the leaking of LLama's weights to the public has led to an influx of innovations demonstrating the impressive capabilities of generative LMs. While we believe that AGI is still a distant goal, we recognize the potential of LMs in solving tasks such as searching complex documents, compiling reports with basic analysis, and providing assistance in problem-solving. In this paper, we propose formalizing the execution model of language models. We investigate current execution models, to find that this formalism has received little attention, and present our contribution: the first formalized execution model for LMs. We introduce a new algorithm for sampling the predictions of LMs, which we use to build a reliable and inspectable execution model. We introduce a low-level language to write "cognitive program" for this execution model. We hope to shed light on the need for execution models for LMs and encourage further research in this area. Language Models, Programming Languages, Execution Model, Generative AI, Inspectable AI, AI Algorithms ## Preprint Notes This paper has been submitted for peer review. All examples have a working implementation at the time of writing. We highlighted a few features that are being implemented. The framework AutoCog is released under Apache 2.0 license at [https://github.com/LLNL/AutoCog](https://github.com/LLNL/AutoCog). ## I Introduction Language Models (LMs) [1, 2] are commonly used to complete prompts, which are text documents that describe some tasks to be performed. As we make LMs perform increasingly complex tasks, the syntax of these manually crafted prompts have been growing more complicated. Well crafted prompts can accept a wide range of data (such as user's question and chat history) without deviating from the task. It is important as we rely on the LM to provide appropriately formatted text such that we can parse it (usually with regular expressions). The data parsed from the LM response is used to call tools or trigger other prompts. As the number of components (prompts and tools) in these system grow, it will rapidly become unmanageable. The introduction of a formalized Execution Model is the first step to establish a real programming environment for LMs. Our execution model, Structured Thoughts Automaton (STA), specifically targets auto-regressive language models (ARLM). STA is equipped with a matching low-level language to enable the creation of "cognitive programs". We introduce STA within AutoCog (Automaton & Cognition), a python framework to build Cognitive Architecture. AutoCog defines \(Cog\), a class of asynchronous callable objects managed by a cognitive architecture (\(CogArch\)). STA programs compile to \(STA\) a subclass of \(Cog\). AutoCog's \(Cogs\) are easily specialized to provide access to tools such as search engines through their APIs. With AutoCog, we aim at facilitating the design of execution models beyond ARLM. Many think [3, 4] that growing the number of parameters in LLM has reached the point of diminishing returns. Furthermore, next token prediction (NTP) seems inherently limited in its ability to capture semantics. 
However, competing ideas such as Joint Embedding Predictive Architecture (JEPA) [5] are more suited for sequence of images at this stage. We believe that the execution model is a concept that is missing from modern machine-learning. It might even be the concept needed to bridge the gap between the symbolic and connectionist views of AI. We are creating one place to implement: (1) execution models (specific to the machine-learning architecture), (2) programming models (compilable to some execution models), (3) symbolic AI algorithms for LM, and (4) training of new ML model by transcribing execution traces across execution models. In this paper, we present the first execution model with its own low-level language. It does, technically, also constitute the first programming model as we provide an initial library for writing programs. ## II State-of-the-art ### _Large Language Models_ The Large Language Models (LLMs) [6, 7] that have made the news lately are specifically Auto-Regressive Transformer-based Language Models. LLMs are a feat of engineering where hundreds, if not thousands, of "tweaks" enable widely over-parameterized models to converge. The Transformers model architecture was introduced in [8]. Generative Pre-trained Transformer (GPT) [9] introduced the combination of auto-regressive transformers for language modeling and large-scale pretraining using Next Token Prediction (NTP). However, auto-regressive language models (ARLM) predate artificial neural networks (ANNs). In essence, a language model assigns some probabilities to sequence of tokens from an alphabet. Given a sequence of tokens, a causal language model assigns probabilities to continuations of this sequence. Finally, ARLM predict the next token given a sequence. Auto-regressive means that to predict the following token, the previously predicted token is added to the end of the input sequence. The auto-regressive process applied to language models is often referred to as Next Token Prediction (NTP). The current technology relies on foundational LLMs that cost hundreds of thousands of dollars to train, though the cost is declining fast. While the models and software to perform this training are extremely complex, the training itself could not be simpler. It is NTP applied billions of times, evaluating the error and propagating that error to adjust the models' billions of parameters. One of the real breakthroughs of the past few months is the realization that LLMs can be fine-tuned for a few hundreds of dollars, and that we have the techniques to run them at the edge. LLaMa [10] is a foundational model that was released for research-purpose by MetaAI. Soon after its release, LLaMa's weights were leaked to the public, leading to a wave of innovation. Stanford Alpaca [11] was fine-tuned from LLaMa for less than $600. Alpaca-LoRA [12] enables fine-tuning on consumer hardware such as a gaming GPU. LLaMa.cpp [13] was "hacked in an evening" and was soon capable of running LLaMa-based models on Raspberry Pi and Pixel 6. ### _Intrinsic Execution Model_ When we mention Execution Model in the context of ARLM, we mean three things: (1) how do we assemble the input sequence, (2) how do we generate new tokens, and (3) what happens to the generated tokens. The most common execution model used for LMs is Next Token Prediction (NTP), which is the initial execution model for most Generative LMs. NTP involves predicting the next token in a sequence given the preceding tokens. 
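To make the auto-regressive NTP loop concrete, the following minimal greedy-completion sketch uses the HuggingFace transformers API with GPT-2. It is a generic illustration of next-token prediction, not code from AutoCog or STA, and the prompt string is arbitrary.

```python
# Minimal greedy NTP loop (illustrative): each step appends the most likely
# next token to the input sequence, which is the auto-regression described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def greedy_complete(prompt: str, max_new_tokens: int = 20) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits              # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()            # greedy: most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)   # auto-regressive append
        if next_id.item() == tok.eos_token_id:      # stop at end-of-text
            break
    return tok.decode(ids[0])

print(greedy_complete("An execution model for a language model describes"))
```

Beam search and sampling-based completion replace the `argmax` with richer selection rules, but the auto-regressive structure of the loop stays the same.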
In some cases, pretraining may have used Masked Language Modeling (MLM1) with an encoder architecture, such as BERT [14], but eventually, it is fine-tuned for NTP when used for generative tasks. Footnote 1: collloquially known as _fill-the-blank_s: sentence or paragraph with missing words, students must figure-out those missing words. While NTP is not very useful on its own, it is used to implement various completion algorithms. The straight application of NTP is colloquially referred to as _greedy_, but most generative systems use variations of the beam search algorithm which is often referred to as _completion_. _Truncation_ is a very simple execution model that builds on _completion_. It deals with preventing termination because the token window is full. It is used to build story-teller and chatbot systems, by truncating from the head or middle, respectively. ### _Special tokens_ Special tokens are another way some form of execution model is enforced. These tokens do not come from the source language but are added to control the LM. Classic examples are start/end of text/document and blank. Modern generative LLMs, often support a small set of special tokens used to organize the instruction. For example, the recently released StarCoder [15] has the following: <|system|>, <|user|>, <|assistant|>, and <|end|>. The first three start text sections while the last ends those sections. The <|system|> section comes first, it _adjusts_ the purpose of the model. It is followed by a <|user|> section to specify the input. Finally, StarCoder fills the <|assistant|> section until it produces the <|end|> token. Special tokens are also instrumental to implement training techniques such as MLM where mask tokens are used. In that case, the input sequence is masked at random using special _mask_ tokens. Each mask token appears only once in the masked sequence. The previous sentence could become: <|input|>Each <|M1|> token appears only <|M3|> in the masked <|M2|>,<|end|> <|answer|><|M1|>mask<|M2|>sequence <|M3|>once<|end|> The LM is then given the <|input|> section and trained to produce the <|answer|> section. We find it revealing that special tokens work. It shows a _willingness_ from the model to follow sequences and use variables. In fact, special tokens form a communication protocol above the natural language. Furthermore, even smaller LM are very good at "artificial" syntax, like python, CSS, HTML, JSON, and Markdown. It seems to us that LM are particularly good at syntax but have a shallow understanding of semantic. The focus on syntax over semantic could be inherent to NTP. Our execution model does not use special tokens to ensure compatibility with current ARLMs. However, it is our goal to eventually separate _data_ and _command_ tokens. ### _Emerging Execution Model_ LangChain [16] is a framework that allows the building of pipelines of prompts. Each stage can iterate between completion and python logic to complete its prompt. LangChain's Agents use completion and regular expression to control external tools, allowing the LM to extract information from data formatted by the agent. LangChain implements many state-of-the-art agents such as Reasoning/Acting (or ReAct) [17]. ReAct presents the LLM with a prompt that describes a task, a list of tools, and a prompting format. The format section explains how the LLM is suppose to (1) think, (2) pick a tool, (3) provide inputs for the tool, (4) observe the output of the tool, and (5) loop back. One of the tool options is to interrupt the loop. 
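The think/act/observe loop just described can be summarized in a short sketch. The prompt layout, the regular expression and the `Finish` action below are illustrative assumptions rather than LangChain's actual implementation, and `complete` stands for any LM completion call.

```python
# Sketch of a ReAct-style agent loop: think, pick a tool, pass it input,
# observe the result, and loop until the model chooses to stop.
import re

def react_loop(question, tools, complete, max_steps=5):
    """tools: dict name -> callable; complete: LM completion function (assumed)."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        out = complete(transcript + "Thought:")
        transcript += "Thought:" + out + "\n"
        m = re.search(r"Action:\s*(\w+)\[(.*?)\]", out)
        if m is None or m.group(1) == "Finish":     # the model chose to stop
            return out
        tool, arg = m.group(1), m.group(2)
        observation = tools[tool](arg)              # call the external tool
        transcript += f"Observation: {observation}\n"
    return transcript
```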
LangChain Agent provides very little control and must be used with heavily fine-tuned models with a low temperature setting (a measure of the model's "creativity"). By chaining multiple agents within LangChain, complex behavior can be elicited from the LM. We hypothesize that the transfer of information across contexts is the source of the sparks of Artificial General Intelligence observed with GPT-4 [6]. LLMs have mastered the syntax of both human and artificial languages, enabling them to read and write JSON, facilitating the communication of structured data between symbolic and connectionist processes (python programs and LLMs). It is possible to achieve similar results with smaller models if we have a better execution model. Already, fine-tuning LLMs for a specific set of LangChain prompts can provide impressive results. Furthermore, using Low Rank Adaptation (LoRA) [18], it is possible to perform this fine-tuning for a couple hundred of dollars. Over the past few months, most big players in the AI industry have been launching their own line of products or tools to leverage LLM chaining. There is HugginFace's Transformers Agent that defines a natural language API on top of transformer models. The agents can interpret natural language requests from users and use a set of curated tools through HuggingFace APIs in various ML-based workflows [19]. Both Google2 and OpenAI [20] have "plugins" which are a variety of tools that the agent can use to complete a task. For example, OpenAI plugins connect ChatGPT to third party applications to retrieve real-time information or assist users with actions. An ai-plugin.json file is used to define a plugin's name, description, endpoints, authentication schema and so on. In all case, they use very similar techniques to LangChain's Agents. Footnote 2: Google I/O event a few days before submission Recently, Language Model Query Language (LMQL) [21] introduced the idea of Language Model Programming (LMP). They created a small language to describe prompts and provide some degree of reliability. It seems the underlying system uses Deterministic Finite Automaton to parse the token stream. LMQL provides more freedom to the user within the prompt than our work where a precise syntax facilitate the creation of programs that span multiple prompts. Aside from LMQL, none of these systems consider controlling the tokens that are produced by the LM. We must assume that the big players can rely on heavily fine-tuned the models as they have the compute and data. Without that fine-tuning, LLMs will not follow directions in a reliable manner. However, it is desirable to be able to run LM applications with foundation models quantitatively to 4 bits. That can be used to probe these models, or to enable iterative training without labels. Our execution model can execute programs on any ARLM, we have used it with OpenAI GPT-3 (API), GPT-2 (HuggingFace's transformers), and LLaMa 7B (using LLaMa.cpp and 4 bits quantization - model has 4GB footprint in RAM). ## III Structured Thoughts Automaton In this section, we introduce Structured Thought Automaton, or STA, which is a formalized Execution Model for LMs. STA's main concepts are: (1) structured prompts, (2) communication channels and (3) data formats. These concepts are captured in a low-level language. While, we refer to STA as an Execution Model given its very low-level of abstractions. STA is made of an execution model, a language, and a (tiny) library of programs. 
Hence, technically, it is a programming model, albeit a burgeoning one. We simply wish to convey the fact that proper programming languages must be built. We will present the three main concepts, followed by the language design, details of the execution model, desription of the execution traces, and finally the choice algorithm. While the choice algorithm might seem out of place, it is the one trick that make STA possible. Indeed, it let the LM decide which branch of the automaton should be taken when a choice arise. ### _Main Concepts_ In Figure 1, we illustrate a program implemented with STA. It is representation of our main example from Figure 6. STA overse the execution of _prompts_ which produce structured documents. The leaves of these documents have prescribed _formats_. Each prompt declares communication _channels_ which are executed before the questionnaire. The questionnaire compiles to a push-down automaton. For each state, the LM provides text that follows the prescribed format (using either _completion_ or _choice_). When there is more than one possible branch in the PDA, the _choice_ algorithm is used. #### Iii-A1 Prompts Prompts are executed in sequence and each prompt can have any number of successors. If more than one successor, the last question is to decide which one is next. A prompt's header (Fig 2 L1-L23) has a set of instructions (Fig 2 L7-L13) and a description of the text formats (Fig 2 L16-L20) used to answer each question. After the start prompt the prompt's PDA is used to generate the structure, presenting the LM with a choice when needed. The content of each line can either be provided through a channel (Fig 2 L24-L26) or generated by the LM. STA uses the format associated with each question to properly configure the completion algorithm (effectively selecting the proper LM wrapper in the cognitive architecture). In this run, the choice algorithm is used at: line 30 and 31 to keep adding considerations, line 32 to not add another problem, and line 33 to only have one answer. On this last case, it did write two sentences in that line while it usually keep to one sentence per line when using the sentence format. It is possible that it decided that it already had two sentences3. There is one final use of _choice_ when selecting the next prompt. It had to choose "edit" or "submit" and picked the latter. It agrees with its statement that there are no issues with his answer. Footnote 3: Anecdotally, we created a program that let GPT 3.5 think ten time -> think[10](thought): think as much as you’d like” but lack the instructions to say five instead. In our dozen or so tries, it never went further than five thoughts following the instruction even when given the choice not to. That is similar to most prompting of LLM, for example LangChain's Agent or LMQL. One difference is that they let input data be formatted in the header while in STA the header is static. The main difference is that STA introduces nesting in the questionnaire and the declaration of lists. The results of the execution of one prompt is a structure document: nested list and dictionary with text at the leaves. The questionnaire is parsed using a push-down automaton to produce that document. Initially, we tried to introduce this structure in prompts for OpenAI GPT-3 using LangChain. GPT-3 had no problem reading the input data (list of ten search results with title, url, and description). The problems came when asked to answer with nested questions. 
It would follow the format for a few lines but soon start to add random blank lines or Fig. 1: Illustration of our example program (Figure 6). The three main boxes represent prompts with the hierarchical questionnaire. Edges represent the control-flow of the program that is decided by the LM. Empty line of the same color as the prompts represent inputs (w.r.t. the prompt). Green and orange empty lines are filled by the LM using sentences or thoughts respectively. We configure the LM to make thoughts shorter and more creative than sentences. Fig. 2: Transcript of OpenAI’s GPT-3.5 running the edit prompt from the program of Fig 6. In this example, we asked GPT to “explain the different phases of a compiler”. Its original answer (shown as draft) was pretty decent but it decided it was too technical. As usual with LLM, tiny changes in any wording can completely change the results, this is conducting to some fun tuning the program. For example, in a similar case, GPT-3 decided it had to use metaphors to make the answer more accessible. The resulting story about a chef cutting vegetable had little to do with a compiler... Amusingly, GPT-3 always thought that compilers were either too technical or complicated. even comments. After those blanks it often hallucinated4 new prompts... Given the results, we realized that we had to read the LM output line by line, properly configuring the LM for each completion. The next issue was how to decide branches in the PDA. For example, we want to let the LM write up to ten sentences, how do we decide when it is done? We started with a greedy algorithm to decide token by token what was the best branch. Eventually, we devised a proper _choice_ algorithm for that task. Footnote 4: Colloquially, “out-of-distribution” answers from LLM are referred to as “hallucination”. It is not to be confounded with the LM providing non-factual information. Hallucinations are when the model switch to a completely different subject. #### Iii-C2 Formats With the introduction of the choice algorithm, we were able to better formalize the idea of _format_. Formats follow a hierarchy with an abstract root that have three children: _text_, _enum_, and _regex_. The default format is _text_ and causes a call to the completion algorithm of the LM. _Thought_ is a child of _text_ meant to use a LM configured for short and "creative" completion. This is achieve by setting the number of desired tokens and the "temperature"5 of the LM. The _enum_ format uses the _choice_ algorithm to decide between a list of tokens. So far, it is only used for the control-flow between prompts and there is no possibility to declare an _enum_ in the language. Static _enum_ are going to be first with a list of choice declared in the program. The more interesting concept is dynamic _enum_ which can take any values from a list (input or previous prompt). Finally, formats defined by regular-expressions. It requires the development of an appropriate sampling algorithm. As regular-expressions compile to deterministic finite automaton, we envisage to adapt beam-search to only explore paths that agree with the DFA. The goal of _regex_ formats is to represent integers, floating point numbers, phone number, path, url,... Access to dynamic _enum_ and _regex_ will be essential to truly probe the capabilities of LM. Footnote 5: Temperature is a scalar (usually between 0. and 2.) which is used to configure the creativity of the completion algorithm. 
Depending on the algorithm, other parameters might be used, such as \(top_{k}\) and \(top_{p}\). #### Iii-C3 Channels Each prompt can have many _channels_ which are used to move data from the inputs, or previously executed prompts. Channels can also trigger external calls to any callable component in the architecture. Data-parallelism can be achieved through the use of mapped channels which create one instance of the prompt for each element of the source list. A prompt with multiple mapped channels will have has many instances as the cross-product of the sources. ### _Language_ STA's language is primarily a procedural and structured languages, with some declarative features. It is equipped with a call interface but there is no context sharing. Calls are issued to callable objects (\(Cogs\)) in the cognitive architecture: other _programs_, vector-stores, or external tools. STA _programs_ have a collections of _prompts_, a set of _formats_, one _entry_, and a few declarative statements (task description using natural language). In Figure 3, we provide the BNF representation of STA's grammar. A program consists of an entry, zero or more formats, and prompts. It also have a few declarative statements (Figure 4) that are not shown in the grammar. These permit users to override any part of the construction of the prompts' headers. This header contains instructions in natural language and a technical description of the prompts mechanics (Fig 2 L1-L23). ### _Execution Model_ A basic block is usually defined as a sequence of statements that has a single entry point and a single exit point, and repre Fig. 3: BNF Grammar of STA’s language. We omitted a few trivial rules for brevity (sentence, literal, digit, identifier, and newline). sents a contiguous sequence of instructions that are executed without interruption. Similarly, _prompts_ have single entry point (upon which data communication and calls occur), and a single exit point (either selecting the next _prompts_ or exiting with some outputs). Each prompt is a statically-bound _questionnaire_ which produces structured documents. Documents are nested lists and dictionaries with native or user-defined (text) _formats_ at the leaves. The _questionnaires_ of STA's _prompts_ compile to push-down automaton (PDA) shown in Figure 5. Upon reaching a _prompt_, _channels_ are executed first. There are a few types of communication channels: (1) copy from inputs (a) or prompt (b), (2) append, and (3) calls. They can retrieve data from the inputs, the latest _content_ of another prompt, or previous _content_ of the current prompt. Channels can be _mapped_ causing multiple instantiation of the _questionnaire_. The resulting _questionnaires_ are completed independently by the Language Model. There is one "soft" constraint which does not influence the current implementation but seems necessary for future efficiency. The organization of the questionnaire should be such that there are no gaps in the communicated data. In this way, after communication, the questionnaire can be loaded with the communicated data and _unparsed_ into one contiguous string. Coupled with a standard format to express PDAs (including concepts of _completion_ and _choice_), it would enable model vendors to serve API not only for STA but many other execution models compiling to the same PDAs. Channels of type (1) and (2) are executed first (in declaration order) while type (3) are executed second. 
This is to simplify the data-flow around calls, particularly for _mapped_ channels (only copy and call can be mapped) as they instantiate multiple instances of the questionnaire. At this time, STA programs are single procedure. We are planning to add alternate entry-points to enable _call_ channels to call _functions_ that share (some) context with the caller. We are unsure how to deal with sharing context across functions and the parallelism introduced by mapped channels. In Figure 6, we show a sample STA program. The first line declares that the initial prompt is the entry of this program, it also describe the purpose of this program. Then, we declare the user-defined sentence format (adding to the native text and thought). The program uses three prompts to answer a user's question in a few sentences. * The entry prompt, initial, starting at line 6 let the LM produce \(T0\) thoughts to "ponder" about an initial answer. The answer is made of up to \(N\) sentences. * The second prompt, edit, starting at line 13 takes the current answer and make the LM consider up to \(R\) problems in sequence. Then the LM produces a new version of the answer before deciding whether or not the answer is ready for submission. * After up to \(L\) iteration of the second prompt, the final prompt at line 25 is reached. In this example, it is a "ghost" prompt that only serve to join the control-flow before exiting. We originally tried to avoid "ghost" prompts but eventually realized that it was causing undue complexity in STA. Now that the need for high level programming languages is evident, we embrace "ghost" prompts in STA's low level language. ### _Execution Trace_ With an execution model, we can define the notion of execution traces. In traditional computing, execution traces are obtained thought instrumentation which includes monitoring special hardware registers, introducing special counter in the executable, analyzing snapshots of running processes, and more. In STA's current implementation, a full trace of the execution is captured. For a given program, we maintain one stack per prompt. Each time a prompt is reached a list of StructuredThought objects is stacked (list because of mapped channels). These objects capture the _content_ of the prompt, meaning the document (nested list and dictionary) with the inputs and the LM productions. Given a program, an input, and the resulting stacks, we can fully reconstruct the execution of the program. ### _Choice Algorithm_ The _choice_ algorithm is a simple concept: "given a prompt (sequence of token) and a list of candidate completions (list of sequence of tokens) _choose_ the most likely completions". We have found that perplexity is often used to implement such function. However, the algorithm shown below is simple, deterministic, and greedily explores all possibilities. It is consequently expensive to run when dealing with many long candidates. It could explain the common use of perplexity to compare natural language sentences. In STA, _choice_ is Fig. 4: Shows how to configure the generation of the prompt header in STA’s language. primarily used for branches in the PDA. These only have a few candidates with shared prefixes. We use the implementation below with the _LLaMa.cpp_ and _HuggingFace_ wrappers for LM. In the case of OpenAI, we do not get access the base _greedy_ algorithm (single step prediction with full probability vector) needed to implement _choice_. 
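For reference, a condensed sketch of the tree-based choice computation is given below; the actual code is the Fig. 7 implementation described in the following paragraphs. The sketch assumes an LM wrapper exposing `tokenize(text)` (token ids) and `greedy(tokens)` (the log-probability vector over the next token), which is exactly the single-step access that the OpenAI API does not expose.

```python
# Condensed, illustrative sketch of the choice algorithm: build a tree of the
# candidate continuations, accumulate path probabilities with single-step
# greedy calls, and pick the candidate with the best length-normalized score.
import math

class TokenChoiceTree:
    def __init__(self, token=None, depth=0):
        self.token, self.depth = token, depth
        self.children = {}                          # token id -> child tree
        self.proba = None                           # P(token | path from root)
        self.cumul = 1.0 if token is None else None

    def add(self, tokens):
        node = self
        for t in tokens:
            node = node.children.setdefault(t, TokenChoiceTree(t, node.depth + 1))
        return node                                 # leaf representing one candidate

    def eval(self, llm, prompt_tokens):
        if self.token is not None:
            prompt_tokens = prompt_tokens + [self.token]
        logprobs = llm.greedy(prompt_tokens)        # full next-token distribution
        for child in self.children.values():
            child.proba = math.exp(logprobs[child.token])
            child.cumul = self.cumul * child.proba
            child.eval(llm, prompt_tokens)

    def probability(self):
        if self.cumul is None or self.depth == 0:
            return None
        return self.cumul ** (1.0 / self.depth)     # length-normalized path score

def choose(llm, prompt, candidates):
    tree = TokenChoiceTree()
    leaves = [tree.add(llm.tokenize(text)) for text in candidates]
    tree.eval(llm, llm.tokenize(prompt))
    scores = [leaf.probability() for leaf in leaves]
    return scores.index(max(scores))
```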
It is also worth noting that these implementations are atrociously inefficient, evaluating the full model from scratch for each prediction. The rest of this section was generated by ChatGPT (3.5) given the python implementation (Fig. 7). We lightly edited to get the Latex formatting right: The TokenChoiceTree class is a python implementation of the choice algorithm, which is a way to compute the probability of different possible continuations of a given text prompt. The class is initialized with a token (which is None for the root), a depth (which is 0 for the root), and an empty children dictionary. The proba attribute is initialized as None for the root and represents the probability of the current token given its parent. The cumul attribute represents the cumulative probability of the path from the root to the current token, and is initialized as 1.0 for the root. The __add method takes a list of integers (representing tokens) as input and adds the tokens to the tree as children of the current node, returning the last node added. The add method takes a language model (1lm) and a string (text) as input, tokenizes the string using the language model, and calls the __add method with the resulting list of tokens (excluding the first token, which is the root). The eval method takes a language model (1lm) and a string (prompt) as input. It starts by adding the current token (if it is not the root) to the prompt. It then calls the greedy method of the language model with the updated prompt, which returns the log probabilities of the next tokens. These log probabilities are exponentiated to get the probabilities of the next tokens, which are stored in the proba attribute of each child node. The cumul attribute of each child node is then updated as the product of its parent's cumul and its own proba. Finally, the eval method is recursively called on each child node. The probability method returns the probability of the path from the root to the current node. This is computed as the cumul attribute raised to the power of 1/depth, where depth is the depth of the current node in the tree. The probability method returns None for the root and for nodes whose cumul attribute is None. The _choose_ method creates a TokenChoiceTree object with a language model (1lm) as input. It then adds each text in a list of _choices_ to the tree using the add method, and stores the resulting leaves in the leaves list. Finally, it calls the eval method on the root of the tree with a Fig. 5: Graphical representation of the state-machine that implement the Push-Down Automaton of each prompt. The Hierarchical structure of each prompt is shown with the dotted edges. The root of each prompt also correspond to the initial state of the PDA. State transitions are shown with solid edges, with green and red edges corresponding to a push or pop of the stack, respectively. prompt (prompt) and computes the probability of each leaf node using the probability method. The index of the leaf node with the highest probability is returned as the choice. ## IV Future Work Our immediate goal is to facilitate the implementation of symbolic AI algorithms to be executed by connectionist language models. Symbolic and connectionist visions of AI have been at odd for a few decades. To bridge the gap between these visions, we will need expressive programming languages. Particularly, STA provides _formats_ which are organized in documents inside a _prompt_. But it is lacking a notion to represent sub-trees of that document. 
These sub-trees, maybe _struct_, would simplify dataflow manipulation (eliminating some ghost prompts) while permitting some code reuse across prompts. The resulting STA+ will also refine the syntax of the language which was cobbled together from the prompting syntax. Leveraging the type system of STA+, we will introduce State Full Typed Language Model (SFTLM). Training SFTLM to execute STA+ programs enable us to devise new training paradigms. Particularly, we can use a combination of NTP for state-transitions, JEPA for type-system embeddings, and MLM to generate data tokens for each state. With formalized execution model, we can leverage LLM to produce training data for SFTLM. For example, we can run a STA+ program with a LLM and save the execution trace. The resulting traces can then be systematically transcribed to train a SFTLM. That process is an advanced version of distillation [22] where results of running a tuned model is used to train or finetune another model. In fact, we will investigate grammar-based fine-tuning and self-distillation as ways to train ARLMs. Fine-tuning is usually done by training for NTP on the whole prompt, with either ground-truth or distilled outputs. It seems that instead we should focus on the tokens that are expected to be produced by the model. Furthermore, we will investigate crafting loss functions6 that leverage syntax and type information. Footnote 6: mathematical expression of a model’s training objectives With this grammar-based fine-tuning will investigate self-distillation, an iterative process to train/tune foundation model with little data. By design, STA cannot "crash" if the LM does not understand, it simply get random output. The idea of self-distillation is to use increasingly complex curriculum to train the model. Each curriculum is made of one program, some examples, many exercises, and a grading tool. For each "epoch", the LM run all the exercises which get graded. Best exercises and examples are used for grammar fine tuning. We will start with elementary school tasks: spelling, arithmetic, Fig. 6: This example demonstrates all implemented features of STA (mapped and call channels coming soon behind). One can note the presence of python f-expression (\(x\)), these are _macro_ which get substituted before parsing. In this example, they configure limits on lists’ sizes and trip-count of loops. Fig. 7: Python Implementation of Choice Algorithm conjugation,... Eventually, the curriculum might include complex planning or inductive logic programs. ## V Conclusion We have presented Structured Thoughts Automaton (STA), the first execution model for auto-regressive language models (ARLM). STA is designed to leverage current Large Language Models by providing a fine level of control over the algorithms used to sample tokens from the language models. We have shown how STA can be used to build "cognitive programs" using a proto-language. Cognitive programs are made of prompts organized in a control-flow graph. Each prompt compiles to a push-down automaton (PDA) and declare communication channels. The LM uses the _choice_ algorithm to "traverse" the PDA. The resulting token stream can be parsed into a structured document. The concept of execution model could bridge the gap between symbolic and connectionist views of AI. First, we can now implement algorithms such as planning (forward, backward,...) on top of the LM. Second, investigating the models understanding of formal logic within a completely formalized framework becomes possible. 
Third, syntax and types within the token stream can be used to create loss functions coupling the symbolic constructs and the connectionist optimization objective. Finally, one can imagine a "cognitive compiler" which, given a "problem", writes the "cognitive program" to solve it. This raises intriguing questions about Turing completeness.
2306.00385
HySpecNet-11k: A Large-Scale Hyperspectral Dataset for Benchmarking Learning-Based Hyperspectral Image Compression Methods
The development of learning-based hyperspectral image compression methods has recently attracted great attention in remote sensing. Such methods require a high number of hyperspectral images to be used during training to optimize all parameters and reach a high compression performance. However, existing hyperspectral datasets are not sufficient to train and evaluate learning-based compression methods, which hinders the research in this field. To address this problem, in this paper we present HySpecNet-11k that is a large-scale hyperspectral benchmark dataset made up of 11,483 nonoverlapping image patches. Each patch is a portion of 128 $\times$ 128 pixels with 224 spectral bands and a ground sample distance of 30 m. We exploit HySpecNet-11k to benchmark the current state of the art in learning-based hyperspectral image compression by focussing our attention on various 1D, 2D and 3D convolutional autoencoder architectures. Nevertheless, HySpecNet-11k can be used for any unsupervised learning task in the framework of hyperspectral image analysis. The dataset, our code and the pre-trained weights are publicly available at https://hyspecnet.rsim.berlin
Martin Hermann Paul Fuchs, Begüm Demir
2023-06-01T06:34:14Z
http://arxiv.org/abs/2306.00385v2
HySpecNet-11k: A Large-Scale Hyperspectral Dataset for Benchmarking Learning-Based Hyperspectral Image Compression Methods ###### Abstract The development of learning-based hyperspectral image compression methods has recently attracted great attention in remote sensing. Such methods require a high number of hyperspectral images to be used during training to optimize all parameters and reach a high compression performance. However, existing hyperspectral datasets are not sufficient to train and evaluate learning-based compression methods, which hinders the research in this field. To address this problem, in this paper we present HySpecNet-11k that is a large-scale hyperspectral benchmark dataset made up of \(11{,}483\) nonoverlapping image patches. Each patch is a portion of \(128\times 128\)\(\mathrm{pixels}\) with \(224\,\mathrm{spectral}\)\(\mathrm{bands}\) and a ground sample distance of \(30\,\mathrm{m}\). We exploit HySpecNet-11k to benchmark the current state of the art in learning-based hyperspectral image compression by focussing our attention on various 1D, 2D and 3D convolutional autoencoder architectures. Nevertheless, HySpecNet-11k can be used for any unsupervised learning task in the framework of hyperspectral image analysis. The dataset, our code and the pre-trained weights are publicly available at [https://hyspecnet.rsim.berlin](https://hyspecnet.rsim.berlin). Martin Hermann Paul Fuchs \({}^{1}\), Begum Demir \({}^{1,2}\)\({}^{1}\)Faculty of Electrical Engineering and Computer Science, Technische Universitat Berlin, Germany \({}^{2}\)BIFOLD - Berlin Institute for the Foundations of Learning and Data, Germany EnMAP, hyperspectral dataset, image compression, deep learning, remote sensing. ## 1 Introduction Advancements in hyperspectral imaging technologies have led to a significant increase in the volume of hyperspectral data archives [1]. Dense spectral information provided by hyperspectral images leads to a very high capability for the identification and discrimination of the materials in a given scene. However, to reduce the storage required for the huge amounts of hyperspectral data, it is needed to compress the images before storing them. Accordingly, one emerging research topic is associated to the efficient and effective compression of hyperspectral images (HSIs) [2]. Many HSI compression algorithms are presented in the literature. Generally, they can be divided into two categories: i) traditional methods, e.g. [3, 4, 5]; and ii) learning-based methods, e.g. [6, 7, 8, 9, 10]. The most popular traditional algorithms are defined based on transform coding in combination with a quantization step and entropy coding. In contrast, learning-based methods mostly rely on convolutional autoencoders (CAEs) to reduce the dimensionality of the latent space. Recent studies show that learning-based HSI compression methods can preserve the reconstruction quality at lower rates compared to traditional compression approaches. Learning-based compression methods generally require a large number of unlabeled images to optimize their model parameters during training. There are only few hyperspectral benchmark datasets publicly available in remote sensing (see Table 1). To the best of our knowledge, most of the existing datasets only contain a single HSI, which is divided into patches for the training and evaluation processes. 
Thus, they are not sufficient to train learning-based compression methods to reach a high generalization ability as the models may overfit dramatically, when using such training data from spatially joint areas. The lack of a large hyperspectral dataset is an important bottleneck, affecting the research and development in the field of learning-based HSI compression. Hence, a large-scale benchmark archive consisting of a high number of HSIs acquired in spatially disjoint geographical areas is needed. To address this problem, in this paper we introduce a new hyperspectral benchmark dataset (denoted as HySpecNet-11k) and exploit it to benchmark the current state of the art in learning-based HSI compression. ## 2 The HySpecNet-11k Dataset HySpecNet-11k is made up of \(11{,}483\) image patches acquired by the Environmental Mapping and Analysis Program (EnMAP) satellite [11]. Each image patch in HySpecNet-11k consists of \(128\times 128\)\(\mathrm{pixels}\) and \(224\,\mathrm{bands}\) with a ground sample distance (GSD) of \(30\,\mathrm{m}\) (see Table 1). To construct HySpecNet-11k, a total of \(250\) EnMAP tiles acquired during the routine operation phase between 2 November 2022 and 9 November 2022 were considered. It is worth nothing that the considered tiles are associated with less than \(10\,\mathrm{\char 37}\) cloud and snow cover. The tiles were radiometrically, geometrically and atmospherically corrected (L2A water & land product). Then, the tiles were divided into nonoverlapping image patches. Therefore, cropped patches at the borders of the tiles were eliminated. Thus, we were able to generate more than \(45\) patches per tile resulting in an overall number of \(11,\!483\) patches for the full dataset. Due to the L2A-processed data, the number of bands is reduced from \(224\) to \(202\) by removing bands [\(127\!-\!141\)] and [\(161\!-\!167\)] that are affected by strong water vapor absorption. We provide predefined splits to make the results of the considered methods reproducible. Therefore, we randomly divided the dataset into: i) a training set that includes \(70\,\%\) of the patches, ii) a validation set that includes \(20\,\%\) of the patches, and iii) a test set that includes \(10\,\%\) of the patches. Depending on the way that we used for splitting the dataset, we define two different splits: i) an easy split, where patches from the same tile can be present in different sets (patchwise splitting); and ii) a hard split, where all patches from one tile must belong to the same set (tilewise splitting). To get an overview of the dataset, Figure 1 illustrates representative HySpecNet-11k images patches. It is worth noting that compression methods generally do not require labeled training images and therefore our dataset does not contain image annotations. ## 3 Learning-based Hyperspectral Image Compression In order to provide an initial benchmarking of the proposed HySpecNet-11k dataset in the framework of learning-based HSI compression, we train and evaluate the following state of the art baseline methods: i) 1D-Convolutional Autoencoder (1D-CAE) [6]; ii) Advanced 1D-Convolutional Autoencoder (1D-CAE-Adv) [7]; iii) Extended 1D-Convolutional Autoencoder (1D-CAE-Ext) [8]; iv) Spectral Signals Compressor Network (SSCNet) [9]; and v) 3D Convolutional Auto-Encoder (3D-CAE) [10]. All methods are based on convolutional operations paired with down- and upsamplings in the encoder and decoder, respectively. 
They differ from each other with respect to the approaches considered for spatial and spectral compression. An overview of the considered methods is given in the following. For a detailed description we refer the reader to the respective papers. The 1D-Convolutional Autoencoder (1D-CAE) [6] applies pixelwise compression without utilizing spatial content. The encoder consists of multiple 1D convolutions in combination with two 1D max pooling layers that fix the compression ratio (CR) to a factor of \(4\). Leaky rectified linear units (LeakyReLUs) are present as activation functions after each convolution. The decoder mirrors the encoder, however it uses two upsampling layers to reverse the respective downsamplings from the encoder. A sigmoid activation function scales the reconstructed pixel values to the range from \(0-1\) in the last decoder layer. Furthermore, padding and unpadding are needed in encoder and decoder, respectively. The Advanced 1D-Convolutional Autoencoder (1D-CAE-Adv) [7] adapts the decoder by substituting the combinations of 1D convolutional layer and upsampling layer with 1D transposed convolutions to make the upsampling operation trainable and more adaptive. On the encoder side, the 1D max poolings are dropped for 1D average poolings. Moreover, the Extended 1D-Convolutional Autoencoder (1D-CAE-Ext) [8] adds a 1D batch normalization [12] between each (transposed) convolutional and activation layer. The sigmoid activation in the last decoder layer is replaced by a hard hyperbolic tangent (HardTanh) and 1D average poolings are substituted by 1D max poolings again. The repetition of down- and upsampling blocks enables the construction of models with different CRs. The Spectral Signals Compressor Network (SSCNet) [9] encoder uses 2D convolutions with parametric rectified linear units (PReLUs) as activation after each convolutional layer. Three 2D max pooling layers are added for a fixed spatial compression by a factor of \(64\). The final CR is set via the number of latent channels in the bottleneck layer. Simultaneously, the ratio between input and latent channels decides the spectral compression factor. 2D transposed convolutions are used to reconstruct the HSIs on the decoder side. After each transposed convolution there is a PReLU activation except for the last layer where a sigmoid activation is used for scaling the outputs into the correct range from \(0\!-\!1\). For all convolutional layers a kernel size of \(3\times 3\) pixels ensures the integration of nearby spatial content into the compression of the currently considered pixel. 
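An SSCNet-style autoencoder of the kind just described can be sketched in PyTorch as follows. The channel widths and exact layer counts are assumptions and may differ from the published architecture, but the structure (3x3 convolutions with PReLU, three max-pooling stages giving a spatial compression factor of 64, transposed convolutions, and a final sigmoid) mirrors the description above, with the number of latent channels setting the overall CR.

```python
# Illustrative sketch, not the authors' code: 2D convolutional autoencoder with
# spatial downsampling in the encoder and transposed convolutions in the decoder.
import torch.nn as nn

class SSCNetLike(nn.Module):
    def __init__(self, in_bands=202, latent_channels=16):
        super().__init__()
        def down(cin, cout):      # 3x3 conv + PReLU + 2x2 max pooling
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.PReLU(),
                                 nn.MaxPool2d(2))
        def up(cin, cout):        # transposed conv doubling each spatial dimension
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 3, stride=2, padding=1, output_padding=1),
                nn.PReLU())
        # Three pooling stages halve each spatial dimension three times,
        # i.e. a fixed spatial compression by a factor of 64 in pixels.
        self.encoder = nn.Sequential(down(in_bands, 128), down(128, 64),
                                     down(64, latent_channels))
        self.decoder = nn.Sequential(up(latent_channels, 64), up(64, 128),
                                     nn.ConvTranspose2d(128, in_bands, 3, stride=2,
                                                        padding=1, output_padding=1),
                                     nn.Sigmoid())   # outputs scaled to [0, 1]

    def forward(self, x):          # x: (batch, bands, 128, 128), values in [0, 1]
        return self.decoder(self.encoder(x))
```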
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Dataset & Acquisition & Sensor & GSD & Spectral Range & \#Bands & Dataset Size \\ \hline Indian Pines & 1992 & AVIRIS & \(20.0\,\mathrm{m}\) & \(400\!-\!2500\,\mathrm{nm}\) & \(224\) & \(0.02\,\mathrm{MP}\) \\ Kennedy Space Center (KSC) & 1996 & AVIRIS & \(18.0\,\mathrm{m}\) & \(400\!-\!2500\,\mathrm{nm}\) & \(224\) & \(0.31\,\mathrm{MP}\) \\ Salinas Scene & 1998 & AVIRIS & \(3.7\,\mathrm{m}\) & \(420\!-\!2450\,\mathrm{nm}\) & \(224\) & \(0.11\,\mathrm{MP}\) \\ Pavia Center & 2001 & ROSIS & \(1.3\,\mathrm{m}\) & \(430\!-\!860\,\mathrm{nm}\) & \(102\) & \(1.20\,\mathrm{MP}\) \\ Pavia University & 2001 & ROSIS & \(1.3\,\mathrm{m}\) & \(430\!-\!860\,\mathrm{nm}\) & \(103\) & \(0.21\,\mathrm{MP}\) \\ Botswana & 2001 & Hyperion & \(30.0\,\mathrm{m}\) & \(400\!-\!2500\,\mathrm{nm}\) & \(242\) & \(0.38\,\mathrm{MP}\) \\ Cooke City & 2008 & HyMap & \(3.0\,\mathrm{m}\) & \(450\!-\!2480\,\mathrm{nm}\) & \(126\) & \(0.22\,\mathrm{MP}\) \\ ShanDongFeiCheng (SDFC) & 2021 & HAHS & \(0.5\,\mathrm{m}\) & \(400\!-\!1000\,\mathrm{nm}\) & \(63\) & \(0.72\,\mathrm{MP}\) \\ \hline HySpecNet-11k (ours) & \(2022\) & EnMAP & \(30.0\,\mathrm{m}\) & \(420\!-\!2450\,\mathrm{nm}\) & \(224\) & \(188.14\,\mathrm{MP}\) \\ \hline \end{tabular} \end{table} Table 1: A summary of publicly available hyperspectral benchmark datasets and their characteristics. The 3D-CAE [10] is built by three strided 3D convolutional layers, each of which is followed by a 3D batch normalization [12] and a LeakyReLU. Residual blocks are added to increase the depth of the network. Upsampling layers are furthermore present on the decoder side for reconstruction. Padding operations are required as in the 1D-CAE. The 3D kernels in the convolutional layers are able to integrate the local spatial and spectral neighborhood jointly during the compression. Spatial and spectral CRs are fixed to a factor of \(64\) and \(4\), respectively and the overall CR can be set via the number of latent channels in the bottleneck of the network. ## 4 Experimental Results Our code is implemented in PyTorch based on the CompressAI [13] framework. We applied gradient clipping and a global min-max normalization to scale the input data in the range between \(0\!-\!1\) that is usually a requirement for learning-based compression methods. As optimizer, we used Adam [14] with a learning rate of \(10^{-4}\) for the 1D-CAE [6], 1D-CAE-Adv [7] and 1D-CAE-Ext [8]. For SSCNet [9] and 3D-CAE [10] the learning rate was set to \(10^{-5}\). We trained the networks using mean squared error (MSE) as loss function until convergence on the validation set that took \(500\,\mathrm{epochs}\), \(2000\,\mathrm{epochs}\) and \(1000\,\mathrm{epochs}\) for the 1D-CAEs, SSCNet and 3D-CAE, respectively. Training runs were carried out on a single NVIDIA A100 SXM4 80 GB GPU and took between \(1\!-\!10\,\mathrm{days}\) each, depending on the method. In general, SSCNet requires fewer GPU hours than 1D-CAE and 3D-CAE. For the 1D-CAE another factor is the CR. The higher the CR, the deeper the network and thus the longer the training time. Therefore, the modified versions of the 1D-CAE with increased CRs were trained for only \(250\,\mathrm{epochs}\) due to runtime limitations. Rate-distortion performance of the baseline methods on the HySpecNet-11k test set is shown in Figure 2. The results have been obtained by using the HySpecNet-11k easy split as introduced in Section 2. 
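A minimal training loop matching this setup (MSE loss, Adam, inputs globally min-max normalized to [0, 1], gradient clipping) could look like the sketch below. The data loading, clipping threshold and device handling are placeholders rather than the released code; the `psnr` helper only illustrates how the reported distortion metric is computed for normalized reconstructions.

```python
# Illustrative training sketch for the autoencoder baselines (not the released code).
import torch
import torch.nn.functional as F

def train(model, loader, epochs=500, lr=1e-4, device="cuda"):
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:                            # x: (batch, bands, H, W) in [0, 1]
            x = x.to(device)
            loss = F.mse_loss(model(x), x)          # MSE reconstruction loss
            opt.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # placeholder threshold
            opt.step()

def psnr(x, x_hat, max_val=1.0):
    """Peak signal-to-noise ratio in dB for [0, 1]-normalized images."""
    mse = torch.mean((x - x_hat) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```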
Overall, the 1D-CAE produces the highest quality reconstructions with a peak signal-to-noise ratio (PSNR) of \(54.85\,\mathrm{dB}\) at a relatively high fixed rate of \(8.08\,\mathrm{bpppc}\). Despite the added trainable upsampling operations, the 1D-CAE-Adv reaches a \(0.66\,\mathrm{dB}\) lower PSNR than the 1D-CAE at the same rate and the 1D-CAE-Ext only achieves a PSNR of \(43.08\,\mathrm{dB}\), while increasing training time significantly due to batch normalization. As a result, further experiments were only performed for the 1D-CAE. SSCNet is able to operate at a lower rate of \(2.53\,\mathrm{bpppc}\) due to spatial dimensionality reduction, which on the other hand introduces blurry reconstructions that reduce the PSNR to \(43.64\,\mathrm{dB}\). The 3D-CAE only achieves \(39.54\,\mathrm{dB}\) PSNR at \(2.02\,\mathrm{bpppc}\) while being particularly unstable on the validation set during training. We would like to note that we have observed similar behaviour on the HySpecNet-11k hard split, which shows the high generalization ability of the considered methods due to our proposed large-scale dataset. To achieve a better comparison along several CRs, we modified the baseline models by repeating the downsam Figure 1: True color representations of example images from our proposed HySpecNet-11k dataset. Red, green and blue channels are extracted from EnMAP bands \(43\), \(28\) and \(10\) at wavelengths \(634.919\,\mathrm{nm}\), \(550.525\,\mathrm{nm}\) and \(463.584\,\mathrm{nm}\), respectively. Figure 2: Rate-distortion performance of learning-based hyperspectral image compression methods on the test set of our proposed HySpecNet-11k dataset (easy split). Rate is visualized in bits per pixel per channel (\(\mathrm{bpppc}\)) and distortion is given as peak signal-to-noise ratio (PSNR) in decibels (\(\mathrm{dB}\)). pling blocks in the 1D-CAE and varying the number of latent channels for SSCNet and 3D-CAE. As seen in Figure 2, the 1D-CAE, which only compresses the spectral content, is superior in all cases compared to both other methods that apply spatial downsampling. Even when increasing the number of latent channels in the bottleneck, SSCNet and 3D-CAE are not able to compensate the loss of information introduced by the spatial compression. Thus, in contrast to RGB imagery, HSI compression methods should focus implicitly on the spectral domain while maintaining the spatial dimensions when applied on hyperspectral data with a low spatial resolution. ## 5 Conclusion In this paper, we have introduced HySpecNet-11k that is a large-scale hyperspectral benchmark dataset (which consist of \(11{,}483\) unlabeled image patches) for learning-based hyperspectral image compression problems. To the best of our knowledge, HySpecNet-11k is the first publicly available benchmark dataset that includes images acquired by the EnMAP satellite. It is worth nothing that the use of HySpecNet-11k is not limited to image compression problems and can be exploited for any unsupervised learning task. We believe that HySpecNet-11k will make a significant advancement in the field of unsupervised learning from hyperspectral data by overcoming the current limitations of existing hyperspectral image datasets. With the continuous release of EnMAP tiles we plan to enrich the dataset and develop further extended versions of HySpecNet as a future work. ## 6 Acknowledgements This work is supported by the European Research Council (ERC) through the ERC-2017-STG BigEarth Project under Grant 759764.
2305.19384
User Driven Functionality Deletion for Mobile Apps
Evolving software with an increasing number of features is harder to understand and thus harder to use. Software release planning has been concerned with planning these additions. Moreover, software of increasing size takes more effort to be maintained. In the domain of mobile apps, too much functionality can easily impact usability, maintainability, and resource consumption. Hence, it is important to understand the extent to which the law of continuous growth applies to mobile apps. Previous work showed that the deletion of functionality is common and sometimes driven by user reviews. However, it is not known if these deletions are visible or important to the app users. In this study, we performed a survey study with 297 mobile app users to understand the significance of functionality deletion for them. Our results showed that for the majority of users, the deletion of features corresponds with negative sentiments and change in usage and even churn. Motivated by these preliminary results, we propose RADIATION to input user reviews and recommend if any functionality should be deleted from an app's User Interface (UI). We evaluate RADIATION using historical data and surveying developers' opinions. From the analysis of 190,062 reviews from 115 randomly selected apps, we show that RADIATION can recommend functionality deletion with an average F-Score of 74% and if sufficiently many negative user reviews suggest so.
Maleknaz Nayebi, Konstantin Kuznetsov, Andreas Zeller, Guenther Ruhe
2023-05-30T19:56:54Z
http://arxiv.org/abs/2305.19384v1
# User Driven Functionality Deletion for Mobile Apps ###### Abstract Evolving software with an increasing number of features is harder to understand and thus harder to use. Software release planning has been concerned with planning these additions. Moreover, software of increasing size takes more effort to be maintained. In the domain of mobile apps, too much functionality can easily impact usability, maintainability, and resource consumption. Hence, it is important to understand the extent to which the law of continuous growth applies to mobile apps. Previous work showed that the deletion of functionality is common and sometimes driven by user reviews. However, it is not known if these deletions are visible or important to the app users. In this study, we performed a survey study with 297 mobile app users to understand the significance of functionality deletion for them. Our results showed that for the majority of users, the deletion of features corresponds with negative sentiments and change in usage and even churn. Motivated by these preliminary results, we propose Radiation to input user reviews and recommend if any functionality should be deleted from an app's User Interface (UI). We evaluate Radiation using historical data and surveying developers' opinions. From the analysis of 190,062 reviews from 115 randomly selected apps, we show that Radiation can recommend functionality deletion with an average F-Score of 74% and if sufficiently many negative user reviews suggest so. Mobile apps, Survey, App store mining, Software Release planning ## I Introduction It is often assumed that the evolution of a product implies constant addition to it which results in a larger and more complex codebase. This addition has been discussed in terms of evolving software code, enhanced quality, added features and functionalities, etc. over different releases of a product. The tendency to add more and more features to an evolving software is a form of excessive software development [39] and does not automatically make the software better. In particular, release planning as an iterative and evolutionary process has been always concerned with further adding features into the next releases [11]. Lehman's [17] sixth law of software evolution emphasizes growth and states that "the functional content of a program must be continually increased to maintain user satisfaction over its lifetime." However, viewing through the lens of user-computer interaction, when a program is mainly invoked by users, the increasing set of features is in sharp conflict with usability [42]. Mobile apps in particular can seriously suffer from this type of problem [43]. On mobile devices, any functionality comes at a cost: First, the small screen severely limits the number of features that can be offered by an application in each UI [10]. Second, computational demands and memory usage may impact battery life. Hence, developers should have an interest in _removing_ functionality that negatively impacts the user experience. While this removal can be the result of different development activities (for example, removing the code, commenting out the code, or disabling respective UI elements), from the user's perspective, a functionality is removed when it is no longer accessible through the user interface [25]. There is an established body of knowledge on the release engineering of mobile apps. Several techniques [20] have been proposed for the release planning of mobile apps. 
Generally, these existing methods are focused on feedback development planning, based on user reviews. They first categorize reviews into general categories of uninformative comments, feature requests, bug reports, or praise. Then, they aim to satisfy that user feedback in the upcoming release. Palomba et al. [29, 30] proved empirically that mobile app developers are changing their code based on the crowdsourced app reviews. Among these studies, multiple provided a variety of taxonomies for mobile app reviews [6, 32]. When analyzing user reviews, a few studies reported a reason for negative reviews [15, 19]. The study of Nayebi et al. [25] showed that 11.23% of commits and 44.79% of the developers indicated better user experience as the reason for deletion. The author's analysis of commit messages showed that 14.63% of deletions are driven by negative user feedback. Yet, users' perceptions of feature removal have never been evaluated empirically or ever surveyed with the users. Our research focuses on studying feature deletions in the evolution and release planning of mobile applications and their visibility to the end users. To understand the significance of this issue to users, we conducted a survey of 297 individuals. Since functionality is typically accessed through graphical user interface (GUI) elements [2], we specifically investigated deletions that are visible to end users. By surveying 297 individuals, we examined the extent to which users notice functionality deletions over different releases, their perception and emotional response to such changes, and any resulting alterations to their usage patterns. Driven by the results of this survey, we introduce Radiation1, a system that analyses user reviews and recommends UI elements and features that can be considered for deletion. We evaluated Radiation internally (via cross-validation) and externally (with 37 developers and 42 users). Results show Radiation recommends feature deletions with high precision (0.83 in retrospect and 0.95 when compared to developers). End-user study confirmed recommendations' validity. ## II Importance of Feature Deletions to Users To the best of our knowledge, the significance of feature deletion for end users is unexplored. Nayebi et al. [26] examined the problem from the developer's perspective and developed a taxonomy of deletion commits in mobile apps using source code and commit messages. However, this taxonomy covers a broad range of artifacts and reasons, and it is unclear whether and to what extent these deletions are visible or important to end users; _"How is the deletion of software functionality perceived by mobile app end users?"_ To answer this question and understand the relevance of app feature deletion, we surveyed real app users. We followed the established guidelines for performing the survey research [34]. Our survey consists of four main parts: * Gather the demographics, * Assess how aware mobile app users are of missing features or functionalities, * Evaluate if the deletion of features impacts users' satisfaction, and * Understand the extent and impact of functionality deletion or limitation on app usage. The survey included 12 questions overall, and they were all close-ended questions (see Table I). Five questions were designed to capture demographics. The rest of the questions sought participants' opinions using a five-point Likert scale. The survey was focused on individuals' experiences and decisions. 
The survey was anonymous, and we did not gather any identifying information from the participants. We used Qualtrics as the survey instrument. For acquiring participants, we used convenience sampling [16]. We posted the survey through our personal connections on social media. The link to the survey has been clicked 638 times. 388 individuals started the survey, whereas 297 individuals completed the survey and responded to all the survey questions (46.5% of all the people we could reach). Among the 297 participants, 44.1% were aged between 28-40 years old. 27.3% were 18-28 years, 15.5% were 40-64 years, and 13.1% were above 64 years old. The majority of the participants (51.9%) have personally installed 5-10 apps on their devices2. 26.9% (80 participants) have installed more than 10 apps, while 21.2% of the participants have installed less than five apps personally. Also, 53.9% of all the participants (160 individuals) used more than 10 apps on a daily basis. Only 1.3% of participants (only four individuals) used less than five apps daily, while 44.8% used 5-10 apps daily. Out of the 297 participants, 238 individuals (80.1%) have uninstalled some apps but only 39% sometimes or more frequently have left any reviews for a mobile app. The demographics are presented in Figure 1. Our questions followed three main objectives: Footnote 2: Mobile devices come with a number of pre-installed apps. **First,**: the extent to which a user realizes and notices the change and deletion in mobile app features (Q6 and Q7 in Table I). The majority of the users (55.2%) _sometimes_ noticed changes in the app features that they were using. While 2.7% of them (8 out of 297 participants) and 20.5% reported they have _never_ or _rarely_ noticed a change. When it comes to the deletion of features, 34.4% _never_ or _rarely_ noticed a deletion. Fig. 1: Demographics of 297 participants in the survey. Regional census categories are used for age information (the region is masked due to double-blind). This compares to the 65.7% who reported sometimes or more frequently noticing a feature deletion in an app. **Second,**: the perception and sentiment of users toward a feature deletion in an app and its impact on their app usage (Q8 and Q9 in Table I), 51.9% of participants perceived somewhat of _negative_ feeling associated with feature deletions. 41.1% of the participants stated negative and 7.75% stated _very negative_ sentiments. This is while 13.5% was _positive_, and 1.0% stated _very positive_ feelings about feature deletions. 33.7% of the participants were _neutral_ about the feature deletion. Almost the same proportion of users (48.8%) reported almost _no change_ in their app usage following a feature deletion. Yet, 51.2% reported _somewhat_ or _extensive_ change in their app usage following a feature deletion. **Third,**: the extent that deletions impact users' decisions and provoke a reaction (Q10 - Q12 in Table I), Only 51 out of 297 participants (17.17%) have _often_ or _sometimes_ left a review for a mobile app following the deletion of a feature (Q10). This compares to the 39.1% of the participants who _sometimes_ or _often_ left a review for an app (see Figure 1-(e)). As a result of losing access to app functionality, 63.7% of the participants _sometimes_ or _more frequently_ looked to use alternative apps. 36.4% of the participants _never_ or _rarely_ looked up alternatives when their access to a certain feature is omitted (Q11). 
31% of the participants reported that they _at least once_ uninstalled an app because of a feature deletion. 41.4% _never_ or _rarely_ deleted an app for this reason, while 27.6% _sometimes_ did so (Q12). Figure 2 shows the summary of our survey results. Fig. 2: Results of the survey with app users (Q6 to Q12). _Deletion of app functionality provokes negative feelings for the majority of the participants (51.9% of the participants) and somewhat changes their usage behavior (51.2% of the participants). Functionality deletion caused 31.0% of the users to often migrate to another app. 27.6% of the users uninstalled the app following the deletion of a feature._ Hence, we consider the issue of functionality deletion visible to the end user. Further, Nayebi et al. [26] stated that while the planning for feature deletion is less frequent compared to feature additions and bug fixes, still, 77.3% of developers **plan** for these deletions. The desire to retain users and avoid the spread of negative sentiments about an app motivates us to evaluate if and to what extent these deletions are predictable. ## III Research Questions and Empirical Design Functionality deletions are important to users, and eliminating access to particular features can cause customer churn, negative reviews, and app uninstallations. Hence, these deletions should be planned with care and precision by a software product team. To assist the product team with such decisions, we introduce Radiation to recommend deletions based on user reviews. We further evaluate Radiation's performance retrospectively and by performing cross-validation. To externally validate Radiation, we survey 37 software developers and 42 users to understand their perception of the value of deletions recommended by Radiation. We evaluated Radiation in three ways by answering the following research questions: **RQ1:**: To what extent does Radiation accurately predict functionality deletions in comparison to actual deletions, retrospectively? _Radiation predicts the elimination of the functionality which is visible to the end user. It connects reviews to the UI elements that represent the functionality as seen by the end user. For the internal validation, we randomly sampled 115 apps and cross-validated the results of Radiation with the actual changes that happened retrospectively. We gathered deletion commits and manually checked the code base following former studies [26] and compared actual deletions with the Radiation suggestions._ **RQ2:**: To what extent do app developers consider analogical reasoning useful for predicting functionality deletions? _We performed a survey with 37 developers to evaluate if the predictions of Radiation would make sense to professionals. These developers are active in open-source mobile app development but are not the actual developers of the app. We provide each developer with the reviews, the link to the code repository, the app, and the recommendations of Radiation and ask if they consider these suggestions reasonable or not._ **RQ3:**: What was users' experience with the functionalities that _Radiation_ offers for deletion? _We conducted a survey with 42 users to assess their sentiment towards the functionalities recommended for deletion by Radiation. After familiarizing themselves with the app, we asked each participant to evaluate 30 UI functionalities based on their level of liking and the importance of deletion.
We performed a controlled experiment by presenting the question for both the features recommended for deletion by Radiation and those not recommended for deletion. Finally, we analyzed the relationship between user sentiment and the recommendations provided by the tool._ Radiation can recommend feature deletions sufficiently well. Our evaluation of Radiation on 115 apps across 3,364 releases and for 190,062 reviews shows a recall of 0.48 and precision of 0.83 using 10-fold cross-validation (**RQ1**). Our evaluation of 25 apps involving 36,039 reviews with 37 developers shows an F-score of 0.90 for Radiation (**RQ2**). Also, our survey shows users' negative experience with the features that Radiation recommends for deletion (**RQ3**). ## IV Radiation for Predicting Functionality Deletions Multiple factors may trigger functionality deletion. We designed Radiation to recommend deleting functionalities suggested by user reviews. However, since apps may receive a large number of reviews, manually tracking user feedback may not be feasible. The current literature on apps' user needs and planning is primarily focused on adding features or fixing bugs in each release, based on user requests [12, 32, 44]. Radiation differs from this approach by targeting deletions inputting user reviews. Radiation is a recommendation tool that helps developers identify deletion candidates. While deleting features is sometimes necessary [26], developers must be cautious about the features they remove, as it can result in a negative user experience and potentially losing customers, as shown by our survey study (see Section II). Radiation is the first step to assist developers with this task. Figure 3 illustrates the six steps of Radiation. In what follows, we explain each step of our proposed method and provide a walk through examples referring to Figure 4. **Step 1**: **Reviews pre-processing.** We eliminated emojis, special characters, and stop words and expanded contractions ("can't" was expanded to "can not"). Then, we applied lemmatization to map the words into their dictionary format ("deciding" and "decided" turned into "decide"). We used Python library NLTK for this step. We customized the list of stop words as suggested by Maalej and Nabil [18] and Palomba et al. [29]. **Step 2**: **Separating informative and non-informative reviews.** Not all reviews were useful. We followed the definition of what is informative and non-informative as described by Maalej and Nabil [18]. In short, informative reviews communicate content that can be used in the process of the app evolution, while an advertisement, a short statement of praise (i.e., "The app is nice"), or a statement of an emotion (i.e, "I hate this app!") is not informative for enhancing an app in future releases. To identify informative reviews, we manually classified a fraction of reviews (see Section V) and used them to train a Naive Bayes classifier (following [18]). This setup resulted in the F1 score (the harmonic mean of precision and recall [35]) of 0.82, calculated as the average of ten 10-fold cross-validation runs. **Step 3**: **Finding UI elements for each release.** For _each release_ we extracted UI elements used in an application. We leveraged the UI elements to connect the reviews with the apps' functionality following the method of Palomba et al. [29]. They showed that users write reviews related to the app components visible to them, which are the elements of the user interface. 
To mine UI elements, we implemented the lightweight analysis of Android layout files. These files include most of the GUI elements, also known as view widgets, and control as it is visible to the app user [1, 21]. Additionally, we parsed the Strings.xml file which contains text strings for an app. By mining these files, for each identified UI element we got its _description_ consisting of an element type, a variable name used in the code, a label associated with the element, and an icon name if applicable (e.g., <Button, btn mic, 'Start Listening', >). **Step 4**: **Connecting reviews to the UI elements.** We used the description of elements connecting reviews to app functionalities. To connect a review to a UI element in a release \(V_{h}\) we calculated the cosine similarity between the text of a UI description and a review's content. We established connection when the similarity score exceeded a threshold of 0.65. Palomba et al. [29] used the threshold of 0.6 for this purpose, however when analyzed manually, we slightly increased the threshold to achieve a more accurate matching. **Step 5**: **Clustering reviews based on their topic.** Several app reviews are pointing to the same functionality, while they may contain different opinions about that functionality. Fig. 3: The process of Radiation to support decisions on user-driven UI functionality deletions. We used _Hierarchical Dirichlet Process_ (HDP) [41] with its default setup to group reviews related to each functionality (UI element) as suggested by Palomba et al. [31]. HDP is a topic mining technique which automatically infers number of topics. Using HDP as described in [31], we performed topic modeling and formed clusters with reviews about a particular topic. One review might also discuss multiple UI elements hence the clusters are non exclusive. We manually analysed the results for 1,500 reviews across eight apps: The topics were intuitive and understandable. **Step 6. Identifying candidate functionality deletion.** Following the existing literature on prioritizing app reviews (Table II) and our survey (Section II) we selected attributes for identifying and recommending possible functionality deletion. To determine candidates, we used Random Forest, as it was suggested by related studies [44] and showed good time performance. A list of attributes for training is presented in Table II. The "polarity" and "objectivity" of the reviews in a cluster were extracted by sentiment analysis performed by Pattern [23, 24, 37, 40] technique. We evaluated the classifier based on 190,062 reviews across 115 randomly chosen apps. Figure 4 illustrates the execution of Radiation on the Wikipedia Android app. ## V Evaluation and Case Study Design As of June 2022, F-Droid (the open-source repository for Android mobile apps) included 3,810 mobile apps. Among them, we identified 1,704 apps with a valid link to their GitHub repositories. These apps involve an overall of 14,493 releases. As deletions are identified by comparing sequential releases, we excluded 554 apps which had only one or two releases from our analysis to evaluate Radiation over multiple releases. For the remaining apps, we gathered the reviews from the Google Play store while accessing their code and development artifacts through GitHub. We randomly selected 8,300 reviews (\(\cong\) 5% of the total number of reviews) across different apps and manually labeled each review as "informative" or "non-informative" as described in Step 2 of Radiation. 
Two of the authors classified these reviews with an average Cohen's Kappa agreement's degree [38] of 86%. We labeled 2,917 of these reviews as "non-informative" and used them along with the same number of "informative" reviews randomly sampled from the rest of reviews to train a classifier. Finally, we identified 8.1% of the total number of reviews as uninformative. We applied Radiation and analyzed 115 randomly selected apps in detail as well as evaluating Radiation recommendation against developers judgment (**RQ2**) and users experience (**RQ3**) for 25 apps. In what follows, we explain the methodology for answering each research question and then provide the results. ### _Internal Validation of Radiation (RQ1)_ To internally validate the usefulness of Radiation, we retrospectively compared the recommendations of Radiation with the actual changes in the source code. We performed this cross-validation across multiple releases of the same app and for a total of 115 apps, involving 3,364 releases. As a result of Step 5 of the Radiation process, we clustered the reviews for each UI element. Next, we manually labeled each review cluster as either "deleted" or "not deleted". This labeling was conducted by two independent researchers who manually checked for the deletion of the code in the source code repository and identified the deletion commit messages, as discussed in the literature [26]. The agreement between the annotators was close to perfect, with a 96% agreement rate, as the decision was based on factual evidence of changes in the Git repository. Any differences were resolved with a short code look-up and recheck. Hence, if an element \(E_{i}\) was deleted in release \(V_{k}\) we tagged the clustered reviews in \(V_{i-1}\) as "deleted". We used these manually labeled clusters as our _truth set_. To internally validate our results, we compared the output of Radiation with this truth set. Radiation takes the information of the app (as detailed in Table II) in release \(V_{i-1}\) and predicts whether an element \(E_{i}\) in release \(V_{i}\) should be deleted or not. Retrospectively comparing this prediction with our truth set can result in one of the following cases: **TP:**: Radiation recommends deletion of \(E_{i}\) in \(V_{k}\) and historical data of our truth set shows the element was deleted. **TN:**: Radiation does not recommend deletion of \(E_{i}\) in \(V_{k}\) and historical data of our truth set shows the element was not deleted. **FP:**: Radiation recommends deleting \(E_{i}\) in \(V_{k}\) but our truth set's historical data shows that the element was not deleted. **FN:**: Radiation does not recommend deletion of \(E_{i}\) in \(V_{k}\) but historical data of our truth set shows its deletion. Using these outcomes, we formed a confusion matrix and calculated the precision, recall, and F-Score of Radiation. For this evaluation (**RQ1**), we excluded apps with less than two releases (554 apps). Among the remaining 1,150 apps, we picked 10% (115 apps) randomly and analyzed them in depth. These 115 apps included 190,062 reviews. ### _External Validation of Radiation with Developers (RQ2)_ We aimed to evaluate the perception of software developers regarding the correctness of the Radiation recommendations. Initially, we invited software developers who actively commit to the repositories of our studied open-source Android apps to participate in the study. 
However, due to their unavailability and unresponsiveness, we decided to recruit developers through advertising on our social media and professional network. We specifically targeted developers to participate in a survey on app functionality deletion. Using convenience sampling, we were able to hire 37 developers for the study. These developers had an average of 8.3 years (ranging from two to 15 years) of experience in software development and 4.4 years of mobile app development (ranging from one to 12 years). Each of the developers had participated in the development of at least two apps. In evaluating Radiation, the developers went through two steps: topic modeling in Step 5, and reviewing Radiation recommendations for 25 apps and 36,039 reviews (20% of our chosen apps for validation). #### V-B1 Evaluation of cluster topics about each UI element The quality of topics and modeling in Step 5 is crucial to the success of Radiation. To assess the effectiveness of clustering by HDP in Step 5 of Radiation, we utilized a human judgment method called _topic intrusion_[4]. This involved selecting the top two topics with the highest similarity for a review and presenting them along with a random topic of lower probability (the intruder topic) to a developer, who was then asked to identify all relevant topics. To evaluate the results of Step 5, we calculated _Topic Log Odds (TLO)_[4]. TLO is a quantitative measure of agreement between a model and a human. TLO is defined as the difference between the log probability assigned to the intruder topic and the log probability assigned to the topic chosen by a developer. This number is averaged across developers to get a TLO score for a single document \(d\)[3]: \[\textit{TLO}(d)=\frac{1}{|S|}\sum_{s\in S}\left(\log\hat{\theta}_{d,j_{d}^{*}}-\log\hat{\theta}_{d,j_{d}^{s}}\right),\] where \(\hat{\theta}_{d,j}\) is the probability the topic model assigns to topic \(j\) for document \(d\), \(j_{d}^{*}\) is the intruder topic, \(j_{d}^{s}\) is the topic chosen by developer \(s\), and \(S\) is the set of developers who rated document \(d\). ### _External Validation of Radiation with Users (RQ3)_ We aim to assess the degree to which recommendations generated by Radiation align or conflict with user experience toward specific app functionalities. As part of our evaluation, we randomly selected 30 UI elements and functionalities from each app. We made a deliberate effort to include a mix of correct (TP and TN) and incorrect (FP and FN) deletion recommendations (as explained in RQ2), whenever possible. In total, we evaluated 650 UI functionalities, with 325 recommended for deletion by Radiation and 325 that were not recommended for deletion. Our survey included 42 participants selected via convenience sampling from our social and professional network. For each functionality of the app, three users provided evaluations. Figure 5 displays a sample survey question and the response of one participant specifically for the org.isoron.uhabits app. After familiarizing themselves with their assigned apps for at least 20 minutes, we presented a specific feature of the app they had studied and requested that they rate their liking of the feature on a five-point Likert scale. Furthermore, we also asked the participants to express their emotions if the feature were to be removed. We used conventional sentiment scores [13] for evaluation, with \(-2\) indicating strong dislike, 0 indicating neutrality, and \(+2\) indicating strong liking. ## VI Case study Results Table III presents the results of **RQ1** and **RQ2** for 25 apps that were cross-validated and evaluated by developers. Figure 6 demonstrates the goodness of the topic modeling of app reviews (Step 5) as part of **RQ2**.
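The TLO score defined above is straightforward to recompute once the topic model's per-document probabilities and the developers' choices are available. The following Python sketch is only an illustration of that calculation and is not part of Radiation; the function name, the toy topic labels, and the probability values are assumptions made for the example.

```python
import math

def topic_log_odds(doc_topic_probs, intruder_topic, chosen_topics):
    """Mean over developers of log p(intruder) - log p(topic chosen by that developer)."""
    log_p = {topic: math.log(p) for topic, p in doc_topic_probs.items()}
    diffs = [log_p[intruder_topic] - log_p[chosen] for chosen in chosen_topics]
    return sum(diffs) / len(diffs)

# Toy example: three developers judge one review document.
probs = {"sync": 0.55, "dark_mode": 0.40, "payments": 0.05}   # assumed topic-model output
print(topic_log_odds(probs, intruder_topic="payments",
                     chosen_topics=["payments", "sync", "dark_mode"]))
```

A value of 0 would mean every developer picked the intruder topic; the more negative the score, the weaker the agreement between the developers and the topic model.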
### _Internal Validation of Radiation (RQ1)_ We conducted cross-validation on 115 apps, 3,364 releases, and a total of 190,062 reviews. The results indicate high precision (_0.83_) and recall of _0.48_ using 10-fold cross-validation. The precision is considerably higher than recall because in Radiation the number of false positives (FP) is much lower than false negatives (FN). In other words, in mobile apps, there have been features that were deleted, but Radiation is unable to recommend them for deletion (FN) This results in a low recall. Radiation cannot (and is not designed to) capture all deletions that happen within a mobile app. However, as the first study looking into functionality deletion, we could predict with 83% precision. For several of these "false negatives", we did not find reviews related to an element that has been deleted. Hence, we concluded that the feature would not be deleted, and there were other reasons than user reviews for deleting the UI element. Table III details the confusion matrix for the 25 apps that were also externally evaluated in **RQ2**. ### _External Evaluation of Radiation with Developers (RQ2)_ The 37 developers evaluated Radiation in two steps. #### Vi-B1 Evaluation of cluster topics about each UI element We followed the approach of Palomba et al. [31] to cluster user reviews by their connection to UI elements. Hence, in Radiation we first connected reviews to the UI elements (Step 4) and then clustered the reviews around each UI element using HDP topic modeling (Step 5) [31]. We presented the number of UI elements along with the number of clusters and number of user reviews in Table III. To evaluate the usefulness of our topic model, we relied on the judgment of app developers. After asking them to evaluate the topics using topic intrusion, we calculated TLO as suggested by Chang et al. [4]. We present the distribution of TLO in the boxplot chart of Figure 6. \(\textit{TLO}=0\) shows the highest conformance between developers and the topic modeling technique. Comparison of the distribution of our HDP clustering showed a slight disagreement between developers and machine learning results as the median is around \(-3\). However, this is still considered as a relatively low disagreement compared to former benchmarks [3, 4]. #### Vi-B2 Evaluating Radiation recommendations We asked developers to evaluate whether a cluster of reviews for a UI element were "motivating a functionality deletion" or "not motivating a functionality deletion" (e.g., implying a bug fix). We compared Radiation results to developer perceptions for 25 randomly selected apps, resulting in an average F-Score of 90% for Radiation. See Table III for the number of true and false recommendations for these apps. Upon examining the results presented in Table III, it is apparent that there are fewer false positives (FP) and false negatives (FN) when comparing our recommendations with developers' perceptions as opposed to retrospective evaluation. This difference can be attributed to the fact that recommending deletions involves multiple factors beyond user reviews, which Radiation does not take into account. Therefore, when asking developers to make a decision based on user reviews, Radiation demonstrates better performance. 
_Radiation achieves an average F-score of 0.9 when its recommendations are compared with the developers' decisions based on the respective clustered reviews._ ### _External evaluation of Radiation with Users (RQ3)_ Our objective was to evaluate user sentiment towards the functionalities recommended for deletion by Radiation. To achieve this, we conducted a survey of 42 users to evaluate their perception of specific mobile app functionalities and to understand their sentiments if those functionalities were to be removed (refer to Figure 5). We asked each participant two questions regarding the features they were evaluating. Figure 7 displays a violin plot of the results. Table IV presents an overview of our results for the first survey question we asked in **RQ3** and for each of the 25 apps we evaluated. Each column in this table displays the average of responses provided by three users who participated in our survey. Note that the number of samples was not evenly distributed across TP, TN, and other categories. For example, (Al) app.opennect had only one UI functionality that was correctly recommended for deletion (TP) in **RQ2** (see Table III). We also asked users how they would feel if the functionality were to be removed (Q2). We observed a high correlation of -0.86 between the responses to Q1 and Q2 in our survey. That is, we found that the more negative the feelings users had towards the feature, the more positive they were about its removal. When we surveyed users about the functionalities, we observed that the average sentiment of the participants towards the features that were correctly recommended for deletion by Radiation (TP recommendations) was consistently negative. In other words, the negative experiences of the users were aligned with the recommendations. However, for deletions that were not actually performed (FP), we observed mixed sentiments. Nevertheless, the majority of the apps (13 out of 16) received an overall average of negative sentiments for wrong predictions as well. Thus, it is essential to note that a negative experience might not necessarily imply feature deletion but could call for a bug fix or a change in the software. This finding aligns with our analysis of **RQ2**, where external developers favored Radiation recommendations, while historical data showed that the decisions of the actual app developers (**RQ1**) were different. This difference could be due to the exclusion of particular ecosystem or business factors in Radiation modeling. _The users consistently disliked the functionalities that Radiation correctly recommended for deletion and in general are not against removing them._ Fig. 6: Topic Log Odds (TLO) shows the performance of Radiation's clustering against developers' perception. Fig. 7: Evaluation results of 650 features with users through survey. ## VII Discussion In this section, we briefly discuss the further interpretation of the achieved results and some design decisions. ### _Scope of Radiation_ Motivated by the number of studies on release planning of mobile applications and in consideration of the limited resources for mobile devices [26], we studied the possibility of predicting feature deletions for mobile applications. Radiation uses user reviews to recommend UI functionality deletions based on various factors. We analyzed user reviews and clustered them according to relevant UI elements, which enables Radiation to focus solely on user feedback and visible app functionality.
Upon retrospective analysis, we found that Radiation has a low recall due to a considerable proportion of false negatives. These false negatives indicate deletions that were not motivated by user reviews and therefore fell outside the scope of Radiation recommendations. To further evaluate the effectiveness of our approach, we provided software developers with reviews for each UI element and asked them to decide whether they motivated functionality deletion or not. This resulted in better recall compared to our previous cross-validation results. We also evaluated user sentiment toward these functionalities and found that they consistently experienced negative emotions when using the Radiation recommended for deletion. We further discovered that the more negative the user's experience, the more likely they were to be neutral or positive about removing that feature from the app. ### _Benchmarking and performance of Radiation_ We relied on the highly performed methods discussed in the literature and did not re-evaluate the performance of the learners. We do not argue these techniques are the most optimal and highest-performing methods possible. Rather, as the first study on recommending feature deletion in app releases, we focused on exploring the possibility of deletion recommendations, their usefulness, and the ease of explanation to the users and the developers. As the first study on predicting deletions based on user reviews, our target was to examine if the deletion prediction is possible rather than to highly optimize the performance of the approach. This is essential step before taking further steps for planning these deletion. Based on the current state-of-the-art results, we do not expect that a benchmark of different classifiers would significantly improve the performance of our approach. One key motivation for the paper comes from the observation that current release planning in general [36] and in particular for mobile apps [20, 44] is exclusively focused on feature addition. Planning in consideration of both addition and deletion of functionality requires revisiting the planning objective(s). Clearly, deletion consumes development effort as well. While we took the first step toward understanding functionality deletion, future work involves contextualizing the results for specific projects and development teams. Besides a more comprehensive empirical evaluation in general, we also target trade-off analysis between measuring the evolving maintenance effort and functionality deletions. Overall, the main goal of future research will be to better understand the deletion of functionality as part of software evolution, also beyond mobile apps. In addition, we will work on improving the performance of our recommendations by updating the machine learning techniques and features and tuning the model (for instance, by more in-depth analysis of similarity). ## VIII Threats to Validity Throughout the different steps of the process, there are various threats to the validity of our achieved results. **Are we measuring the right things?** We pre-processed all review texts and used machine learning classification to ensure that the analysis is only considering informative user reviews. The Naive Bayes classification resulted in an F1 score of 0.82. While this is a very good result, there is still a possibility that a review has been classified incorrectly. There is a risk related to linking reviews to the proper UI elements. 
Two of the authors looked into the results of this linking (Step 4 of Radiation) for 600 reviews across six apps and found 71 mismatched or unrelated reviews. **Are we drawing the right conclusions about treatment and outcome relation?** In comparison to studies in the context of mobile apps (Table V), our surveys can be considered high participated. However we used convenience sampling to attract participant which might bias the conclusions that are drawn [16]. It is essential to note this type of evaluation is subjective. However, the results of **RQ1** based on the retrospective analysis of the data are aligned with our survey results presented in **RQ2** and **RQ3**. In total, we think that the evaluation gained with 37 developers and 42 users is sufficient to confirm our findings. When connecting a review to a UI element in Radiation, there is a chance that we relate a review to an element incorrectly (false positives). This may happen because * We may miss some UI elements, as they can be instantiated in the program code or hard coded, * Some UI elements are not visible to the end user, or * Text of some UI elements are common English words or can have similar labels in different app views. To address the first two items above, we used Backstage[1] on a few of the apps and we found that while the risk exists, it is relatively small. Since Backstage works on compiled application binaries we were limited to using it in Radiation. For the third item above, we applied preprocessing as suggested in CRISTAL [29] and adopted their list of stop words. Further, RADIATION is not intended to exhaustively find all the deleted feature (recall). The impact of potentially missed elements is insignificant. **Can we be sure that the treatment indeed caused the outcome?** The selection of attributes used in Radiation to decide _if a UI functionality should be deleted_ is another threat to validity. Our survey with users was aligned with the findings in the literature [26] and showed that users and their feedback is important information in the deletion process. However, it is not the only decisive factor for excluding a functionality from apps. We selected attributes based on related studies (Table II). There are other attributes related to competitors, performance, or maintenance considerations that are relevant for the decision-making but could not be taken into account for our study. Following the results of former studies on mobile apps [29], we assumed that users are reviewing just the functionality that is visible to them (and not the background code). This might not be true for all the users, reviews, and sentiments. However, we expect a low number of such cases. **Can the results be generalized beyond the scope of this study?** Our retrospective analysis was performed on open-source mobile apps. The number of apps, reviews, and commits analyzed is considered high, indicating that results are significant at least for open-source mobile apps. While selecting the apps for this study, we did not consider their status (for example, the number of downloads) which may pose a risk of bias in the findings. The results may vary between apps with regards to their status on the app store3. Footnote 3: Authors will provide data and scripts in case of acceptance. ## IX Related Work In this study, we challenged Lehman's law of growth by investigating functionality deletion as a specific activity in the development process. 
We focused on the mobile apps because the device resources are limited and the size of the release has been introduced as a decisive factor for release decisions [22, 27, 28]. Feature and functionality deletion for software products in general have been discussed mostly on the model level which triggered us to widely investigate on the nature and reasons of functionality deletion in **RQ1** and **RQ2**. Analyzing user reviews to support app evolution and maintenance was studied by several researchers [20]. These studies are mainly focused on different user needs to be articulated at the level of being a "feature request" or "bug report" [18]. The study by Palomba et al. [29] found that 49% of informative reviews were considered for app evolution. In this direction, current studies take user reviews as the source of change requests, apply a variety of NLP techniques, and provide a prioritization or classification scheme. The objective is to help developers decide on the next best changes either by adding new functionality or fixing a bug. We provided an overview of the most related methods in Table V. _Current literature discuss different types of user requests on app evolution. We focused on a functionality deletions which was not studied._ CLAP [44] used a mixed method by combining the retrospective analysis of changes for 463 reviews in conjunction with interviewing three app developers. PAID [9] had the most comprehensive retrospective evaluation of data by investigating 18 apps for issue (bug) prioritization. Compared to the former studies in analyzing app reviews, we have a more rigorous evaluation by asking 37 developers to evaluate 36,039 reviews for a total of 25 apps. We compared these evaluations with the results gained from Radiation. While some studies compared different methods for evaluating their results, this was not possible for Radiation in general as none of the existing techniques is focused on functionality deletion. However, to select classifier and topic modeling techniques, we made the comparisons as discussed in Section IV. ## X Conclusions _Lehman's law on continuous growth of functionality does not universally apply._ In the domain of mobile apps, developers frequently delete functionality--be it to fix bugs, maintain compatibility, or improve the user experience. We performed a study with _app users_ to confirm the potential value of deletions also from their perspective. We suggested that the process of selecting the functionality to be deleted can be automated, as demonstrated by our Radiation recommendation system. Radiation analyses the UI elements of the app and the reviews and recommends if the UI element and its functionality shall be deleted or not. This is the first study to investigate the prediction of functionality deletion in software evolution. It opens the door towards a better understanding of software evolution, in particular in an important domain such as mobile app development. In the days of Lehman's studies, features such as user experience, screen space, or energy consumption were not as crucial as they are today; it may be time to revisit and refine Lehman's findings.
2304.10937
A convection-diffusion problem with a large shift on Duran meshes
A convection-diffusion problem with a large shift in space is considered. Numerical analysis of high order finite element methods on layer-adapted Duran type meshes, as well as on coarser Duran type meshes in places where weak layers appear, is provided. The theoretical results are confirmed by numerical experiments.
Mirjana Brdar, Sebastian Franz, Hans-Goerg Roos
2023-04-21T13:21:30Z
http://arxiv.org/abs/2304.10937v2
# A convection-diffusion problem with a large shift on Duran meshes ###### Abstract A convection-diffusion problem with a large shift in space is considered. Numerical analysis of high order finite element methods on layer-adapted Duran type meshes, as well as on coarser Duran type meshes in places where weak layers appear, is provided. The theoretical results are confirmed by numerical experiments. keywords: spatial large shift, singularly perturbed, Galerkin method, Duran mesh _AMS Mathematics Subject Classification (2010)_: 65M12, 65M15, 65M60 ## 1 Introduction Singularly perturbed problems with delay are differential equations that allow past actions to be included in mathematical models, bringing the model closer to real-world phenomena. Their solution depends not only on the solution at the current stage, but also on the solution at some past stages. This type of problem occurs in control theory and the biosciences [3; 4; 12], and in the study of chemostat models, circadian rhythms, epidemiology, the respiratory system, tumour growth and neural networks. Although singularly perturbed problems have been studied intensively in recent years, there are not many papers on these problems with a large shift. Here we consider the following convection-diffusion problem with a large shift: Find \(u\) such that \[-\varepsilon u^{\prime\prime}(x)-b(x)u^{\prime}(x)+c(x)u(x)+d(x)u(x-1) =f(x),\quad x\in\Omega=(0,2), \tag{1a}\] \[u(2) =0, \tag{1b}\] \[u(x) =\Phi(x),\quad x\in(-1,0], \tag{1c}\] where \(b\geq\beta>0\), \(d\geq 0\) and \(0<\varepsilon\ll 1\) is a small perturbation parameter. We assume \(c-\dfrac{b^{\prime}}{2}-\dfrac{\|d\|_{L^{\infty}(1,2)}}{2}\geq\gamma>0\), as well as \(\Phi(0)=0\), which is not a restriction for the function \(\Phi\), since this condition can always be guaranteed by a simple transformation. Thus, it holds \(u\in H^{1}_{0}(\Omega)\). For convection-dominated singularly perturbed problems with an additional shift only a few papers can be found, see [8; 10; 11], where the authors consider a negative coefficient \(d\), which supports a maximum principle. There, finite-difference methods on layer-adapted meshes are used. Here we consider finite element methods of arbitrary order for positive coefficients \(d\). We proved the solution decomposition for this kind of problem in [1], together with an error analysis on a standard S-type mesh. Our problem has an exponential layer near \(x=0\) and one weak layer near \(x=1\) whose appearance is caused by the delay. Numerical methods for singularly perturbed problems typically use some known behavior of the exact solution to design a priori adapted meshes in order to approximate boundary layers well. Probably the most well-known approximations of this kind are those based on the so-called Shishkin meshes (see [9] and references therein). Duran and Lombardi in [2] introduced a new graded mesh in order to solve a convection-diffusion model problem by standard bilinear finite elements. They proved an (almost) optimal error estimate. Up to the logarithmic factor, those estimates are valid uniformly in the perturbation parameter. Hence, the graded meshes could be an excellent replacement for the well-known piecewise uniform Shishkin mesh which has transition point(s) dividing the domain into coarse and fine mesh regions.
Indeed, according to some numerical experiments, the graded mesh procedure seems to be more robust in the sense that the numerical results are not strongly affected by variations of the parameters which define the mesh. This is the reason why we used the Duran mesh in the paper. Since the problem has a weak layer, we constructed a coarser Duran mesh based on [1; 7]. The outline of the paper is as follows. In Section 2, we give the solution decomposition and introduce the recursively defined graded mesh. The following section contains a description of the finite element method and the error analysis for the stationary singularly perturbed shift problem and the main convergence result. In Section 4 we present a coarser mesh of Duran-type and the numerical analysis on this mesh for problem (1). Finally, Section 5 provides some numerical results supporting our theoretical analysis. **Notation.**_We use the standard notation of Sobolev spaces, where, for a set \(D\), \(\|\cdot\|_{L^{2}(D)}\) is the \(L^{2}-\)norm and \(\langle\cdot,\cdot\rangle_{D}\) is the standard scalar product in \(L^{2}(D).\) Also, we write \(A\lesssim B\) if there exists a generic positive constant \(C\) independent of the perturbation parameter \(\varepsilon\) and the mesh, such that \(A\leq CB\)._ ## 2 Solution decomposition and mesh The solution decomposition of this problem, proved in [1], is given in the following theorem. **Theorem 2.1**.: _Let \(k\geq 0\) be a given integer and the data of (1) smooth enough. Then it holds_ \[u=S+E+W,\] _where for any \(\ell\in\{0,1,\ldots,k\}\) it holds_ \[\|S^{(\ell)}\|_{L^{2}(0,1)}+\|S^{(\ell)}\|_{L^{2}(1,2)}\lesssim 1,\qquad|E^{(\ell)}(x)|\lesssim\varepsilon^{-\ell}\mathrm{e}^{-\beta\frac{x}{\varepsilon}},\quad x\in[0,2],\] \[|W^{(\ell)}(x)|\lesssim\begin{cases}0,&x\in(0,1),\\ \varepsilon^{1-\ell}\mathrm{e}^{-\beta\frac{(x-1)}{\varepsilon}},&x\in(1,2).\end{cases}\] Thus, \(E\) is the layer function corresponding to the left boundary, while \(W\) is an interior layer function, and \(S\) represents the smooth part. Knowing how the layers are structured allows us to create a layer-adapted mesh that resolves the layers. We adapt the recursively defined graded mesh of Duran type to our problem. Its advantages are the simple construction and generation of mesh points (without transition point(s)) and a certain robustness property. More precisely, if we are approximating a singularly perturbed problem with a mesh that has been adapted a priori, we can expect that a mesh designed for one value of the small parameter will perform well for larger values of the small parameter as well. In this respect, the recursively graded meshes have better behavior in numerical experiments. To construct this mesh, we first define the points on the interval \([0,1]\) and then on the rest of the domain. Let \(H\in(0,1)\) be arbitrary. Define the number \(M\) by \[M=\left\lceil\left\lceil\frac{1}{H}\right\rceil-\frac{\ln\left(H\varepsilon\left\lceil\frac{1}{H}\right\rceil\right)}{\ln(1+H)}\right\rceil. \tag{2}\] We define the mesh points recursively in the following way: \[x_{i}=\begin{cases}0,&i=0,\\ iH\varepsilon,&1\leq i\leq\left\lceil\frac{1}{H}\right\rceil,\\ (1+H)x_{i-1},&\left\lceil\frac{1}{H}\right\rceil<i\leq M-1,\\ 1,&i=M,\\ 1+x_{i-M},&M<i\leq 2M.\end{cases} \tag{3}\] If the interval \((x_{M-1},1)\) is too small in relation to \((x_{M-2},x_{M-1})\), we simply omit the mesh point \(x_{M-1}\). The total number of mesh subintervals is \(N=2M.\) It depends on the parameter \(H\) and on (2), see [2].
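The recursion (3) can be implemented in a few lines. The sketch below is an illustration only and not the code used for the experiments in this paper; the function name, the chosen values \(H=0.5\) and \(\varepsilon=10^{-6}\), and the omission of the safeguard for a too small interval \((x_{M-1},1)\) are assumptions made for brevity.

```python
import math

def duran_mesh(H, eps):
    """Illustrative construction of the graded mesh points (3) on [0, 2]."""
    J = math.ceil(1.0 / H)
    M = math.ceil(J - math.log(H * eps * J) / math.log(1.0 + H))   # formula (2)
    x = [0.0] + [i * H * eps for i in range(1, J + 1)]             # uniform fine part near x = 0
    for i in range(J + 1, M):                                      # graded part: x_i = (1 + H) * x_{i-1}
        x.append((1.0 + H) * x[-1])
    x.append(1.0)                                                  # x_M = 1
    x += [1.0 + xi for xi in x[1:M + 1]]                           # shifted copy gives the mesh on [1, 2]
    return x

mesh = duran_mesh(H=0.5, eps=1e-6)
print(len(mesh) - 1)   # N = 2M subintervals (74 for these values)
```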
Furthermore, the inequality \[H\lesssim N^{-1}\ln(1/\varepsilon) \tag{4}\] holds. For the mesh step sizes \(h_{i}=x_{i}-x_{i-1}\), \(1\leq i\leq 2M\), it holds \(CH\varepsilon\leq h_{i}\leq H\), where \(C\) is a constant independent of \(\varepsilon\) and the number of mesh points. Moreover, we have \[\begin{split} h_{i}&=H\varepsilon,\ \ i\in\{1,\ldots,\left\lceil\frac{1}{H}\right\rceil\}\cup\{M+1,\ldots,M+1+\left\lceil\frac{1}{H}\right\rceil\},\\ h_{i}&\leq Hx,\ \ i\in\{\left\lceil\frac{1}{H}\right\rceil+1,\ldots,M\}\cup\{M+2+\left\lceil\frac{1}{H}\right\rceil,\ldots,2M\},\end{split} \tag{5}\] where \(x\in[x_{i-1},x_{i}]\). **Remark 2.2**.: _For an arbitrarily chosen parameter \(H\) in the Duran mesh, \(2M\) mesh subintervals are obtained, which is generally not comparable to the number of subintervals in other papers that deal with the same or similar issues. An approach that eliminates this shortcoming is given in the paper [5]._ ## 3 Finite element method and error estimates The bilinear form and the linear functional for problem (1) are given by \[B(u,v):=\varepsilon\left\langle u^{\prime},v^{\prime}\right\rangle_{\Omega}+\left\langle cu-bu^{\prime},v\right\rangle_{\Omega}+\left\langle du(\cdot-1),v\right\rangle_{(1,2)},\qquad F(v):=\left\langle f,v\right\rangle_{\Omega}-\left\langle d\Phi(\cdot-1),v\right\rangle_{(0,1)} \tag{6}\] for \(u,v\in\mathcal{U}:=H^{1}_{0}(\Omega)\). We use \(\mathcal{U}_{N}:=\{v\in H^{1}_{0}(\Omega):v|_{\tau}\in\mathcal{P}_{k}(\tau)\}\) as the discrete space, where \(\mathcal{P}_{k}(\tau)\) is the space of polynomials of degree \(k\) at most on a cell \(\tau\) of the mesh. Let \(I\) be the standard Lagrange-interpolation operator into \(\mathcal{U}_{N}\) using equidistant points or any other suitable distribution of points. The finite element method is given by: Find \(u_{N}\in\mathcal{U}_{N}\) such that for all \(v\in\mathcal{U}_{N}\) it holds \[B(u_{N},v)=F(v). \tag{7}\] The bilinear form is coercive with respect to the energy norm \(\|\|u\|\|^{2}:=\varepsilon\|u^{\prime}\|_{L^{2}(\Omega)}^{2}+\gamma\|u\|_{L^{2}(\Omega)}^{2}\) and satisfies the Galerkin orthogonality \(B(u-u_{N},v)=0\) for all \(v\in\mathcal{U}_{N}\). **Lemma 3.1**.: _For the standard piecewise Lagrange interpolation operator on the graded mesh (3) we have_ \[\|u-Iu\|_{L^{2}(\Omega)} \lesssim H^{k+1}, \tag{8}\] \[\|(u-Iu)^{\prime}\|_{L^{2}(\Omega)} \lesssim\varepsilon^{-1/2}H^{k}. \tag{9}\] Proof.: In the proof we use the norm definitions, the solution decomposition and standard interpolation error estimates, given on any cell \(\tau_{i}\) with width \(h_{i}\), for \(1\leq s\leq k+1\) and \(1\leq t\leq k\), by \[\|v-Iv\|_{L^{2}(\tau_{i})} \lesssim h_{i}^{s}\|v^{(s)}\|_{L^{2}(\tau_{i})}, \tag{10a}\] \[\|(v-Iv)^{\prime}\|_{L^{2}(\tau_{i})} \lesssim h_{i}^{t}\|v^{(t+1)}\|_{L^{2}(\tau_{i})}, \tag{10b}\] for \(v\) smooth enough.
We obtain \[\|S-IS\|_{L^{2}(0,1)}^{2}=\sum_{i=1}^{M}\|S-IS\|_{L^{2}(I_{i})}^{2}\] \[\lesssim\sum_{i=1}^{\left\lceil\frac{1}{H}\right\rceil}h_{i}^{2( k+1)}\|S^{(k+1)}\|_{L^{2}(I_{i})}^{2}+\sum_{i=\left\lceil\frac{1}{H}\right\rceil +1}^{M}h_{i}^{2(k+1)}\|S^{(k+1)}\|_{L^{2}(I_{i})}^{2}\] \[\lesssim\sum_{i=1}^{\left\lceil\frac{1}{H}\right\rceil}(H\varepsilon )^{2(k+1)}\|S^{(k+1)}\|_{L^{2}(I_{i})}^{2}+\sum_{i=\left\lceil\frac{1}{H} \right\rceil+1}^{M}H^{2(k+1)}\|x^{k+1}S^{(k+1)}\|_{L^{2}(I_{i})}^{2}\lesssim H ^{2(k+1)}\] where \(I_{i}=(x_{i-1},x_{i}).\) The same estimate we get on \([1,2].\) In the same way we get \[\|(S-IS)^{\prime}\|_{L^{2}(\Omega)}^{2}\lesssim H^{2k}.\] For the boundary layer part we obtain \[\|E-IE\|_{L^{2}(0,1)}^{2}\lesssim\sum_{i=1}^{\left\lceil\frac{1}{H} \right\rceil}(H\varepsilon)^{2(k+1)}\|E^{(k+1)}\|_{L^{2}(I_{i})}^{2}+\sum_{i= \left\lceil\frac{1}{H}\right\rceil+1}^{M}H^{2(k+1)}\|x^{k+1}E^{(k+1)}\|_{L^{2}( I_{i})}^{2}\] \[\lesssim(H\varepsilon)^{2(k+1)}\int\limits_{0}^{\varepsilon} \varepsilon^{-2(k+1)}e^{-\frac{2\beta x}{\varepsilon}}dx+H^{2(k+1)}\int\limits _{\varepsilon}^{1}x^{2(k+1)}\varepsilon^{-2(k+1)}e^{-\frac{2\beta x}{ \varepsilon}}dx\lesssim\varepsilon H^{2(k+1)}.\] and the same estimate on \([1,2]\). Also, we get \[\|(E-IE)^{\prime}\|_{L^{2}(0,\varepsilon)}^{2}\lesssim\sum_{i=1}^ {\left\lceil\frac{1}{H}\right\rceil}(H\varepsilon)^{2k}\|E^{(k+1)}\|_{L^{2}( I_{i})}^{2}\lesssim\varepsilon^{-1}H^{2k},\] \[\|(E-IE)^{\prime}\|_{L^{2}(\varepsilon,1)}^{2}\lesssim\sum_{i= \left\lceil\frac{1}{H}\right\rceil+1}^{M}H^{2k}\|x^{k+1}E^{(k+1)}\|_{L^{2}(I_ {i})}^{2}\lesssim\varepsilon H^{2k}.\] For interior layer function we have following estimates \[\|W-IW\|_{L^{2}(1,2)}^{2}\] \[\lesssim\sum_{i=M}^{M+\left\lceil\frac{1}{H}\right\rceil}(H \varepsilon)^{2(k+1)}\|W^{(k+1)}\|_{L^{2}(I_{i})}^{2}+\sum_{i=M+\left\lceil \frac{1}{H}\right\rceil+1}^{2M}H^{2(k+1)}\|x^{k+1}W^{(k+1)}\|_{L^{2}(I_{i})}^{2}\] \[\lesssim(H\varepsilon)^{2(k+1)}\int\limits_{1}^{1+\varepsilon}( \varepsilon^{1-(k+1)})^{2}e^{-\frac{2\beta(x-1)}{\varepsilon}}dx+H^{2(k+1)} \int\limits_{1+\varepsilon}^{2}x^{2(k+1)}(\varepsilon^{1-(k+1)})^{2}e^{-\frac {2\beta(x-1)}{\varepsilon}}dx\] \[\lesssim\varepsilon^{3}H^{2(k+1)},\] \[\|(W-IW)^{\prime}\|_{L^{2}(1,2)}^{2}\lesssim(H\varepsilon)^{2k} \int\limits_{1}^{1+\varepsilon}\varepsilon^{-2k}e^{-\frac{2\beta(x-1)}{ \varepsilon}}dx+H^{2k}\int\limits_{1+\varepsilon}^{2}\varepsilon^{-2k}x^{2k}e^ {-\frac{2\beta(x-1)}{\varepsilon}}dx\lesssim\varepsilon H^{2k}.\] Using Theorem 2.1 and above estimates the statement of the lemma follows. **Theorem 3.2**.: _For the solution \(u\) of problem (1) and the numerical solution \(u_{N}\) of (7) on a Duran mesh (3) it holds_ \[\|\|u-u_{N}\|\|\lesssim H^{k}.\] Proof.: Based on the triangle inequality \(\|\|u-u_{N}\|\|\leq\|\|u-Iu\|\|+\|\|Iu-u_{N}\|\|,\) and using the evaluation of Lemma 3.1 for the first term, it is sufficient to evaluate the second term. 
Let \(\eta:=u-Iu\) and \(\chi:=Iu-u_{N}\in\mathcal{U}_{N}.\) Coercivity and Galerkin orthogonality together with parts of the proof of Lemma 3.1 yield \[\begin{split}\|\|\chi\|\|^{2}&\leq B(\chi,\chi)=B(\eta,\chi)=\varepsilon\left\langle\eta^{\prime},\chi^{\prime}\right\rangle_{\Omega}+\left\langle c\eta-b\eta^{\prime},\chi\right\rangle_{\Omega}+\left\langle d\eta(\cdot-1),\chi\right\rangle_{(1,2)}\\ &\lesssim H^{k}\|\|\chi\|\|+\left\langle b(E-IE),\chi^{\prime}\right\rangle_{\Omega}\lesssim H^{k}\|\|\chi\|\|,\end{split}\] and hence \(\|\|\chi\|\|\lesssim H^{k}\). ## 4 On a coarser mesh Following the idea from [7] for the weak layer, we use a mesh that is sparser than the one defined in (3). This is called a coarser mesh. Numerical results show that for small \(k\) and reasonably small \(\varepsilon\) it is possible to use coarser meshes. For higher polynomial degrees, the weak layer should be resolved by a classical layer-adapted mesh. Here, we construct the mesh in the following way. Let \(M\) be defined as in (2) and analogously \[M_{2}=\left\lceil\left\lceil\frac{1}{H}\right\rceil-\frac{\ln\left(H\varepsilon^{\frac{k-1}{k}}\left\lceil\frac{1}{H}\right\rceil\right)}{\ln(1+H)}\right\rceil. \tag{11}\] Then the mesh nodes are given by \[x_{i}=\begin{cases}0,&i=0,\\ iH\varepsilon,&1\leq i\leq\left\lceil\frac{1}{H}\right\rceil,\\ (1+H)x_{i-1},&\left\lceil\frac{1}{H}\right\rceil<i\leq M-1,\\ 1,&i=M,\\ 1+(i-M)H\varepsilon^{\frac{k-1}{k}},&M+1\leq i\leq M+\left\lceil\frac{1}{H}\right\rceil,\\ 1+(1+H)(x_{i-1}-1),&M+\left\lceil\frac{1}{H}\right\rceil<i\leq M+M_{2}-1,\\ 2,&i=M+M_{2}.\end{cases} \tag{12}\] The mesh step sizes \(h_{i}=x_{i}-x_{i-1}\) satisfy \[\begin{split} h_{i}&=H\varepsilon,\qquad i\in\{1,\ldots,\left\lceil\frac{1}{H}\right\rceil\},\\ h_{i}&\leq Hx,\qquad i\in\{\left\lceil\frac{1}{H}\right\rceil+1,\ldots,M\},\\ h_{i}&=H\varepsilon^{\frac{k-1}{k}},\qquad i\in\{M+1,\ldots,M+\left\lceil\frac{1}{H}\right\rceil\},\\ h_{i}&\leq Hx,\qquad i\in\{M+\left\lceil\frac{1}{H}\right\rceil+1,\ldots,M+M_{2}\},\end{split} \tag{13}\] where \(x\in[x_{i-1},x_{i}]\). **Lemma 4.1**.: _Let us assume \(e^{-\varepsilon^{-1/k}}\leq H^{k-1}.\) Then for the standard piecewise Lagrange interpolation operator on the graded mesh (12) we have_ \[\|u-Iu\|_{L^{2}(\Omega)} \lesssim H^{k+\frac{1}{2}}, \tag{14}\] \[\|\|u-Iu\|\| \lesssim H^{k}. \tag{15}\] Proof.: Similarly to the proof of the previous lemma, using the solution decomposition and the estimates (10a) and (10b), we obtain the same estimates as before for \(\|S-IS\|_{L^{2}(0,2)}\) and \(\|E-IE\|_{L^{2}(0,2)}\). Using the assumption \(e^{-\varepsilon^{-1/k}}\leq H^{k-1}\) we get \[\|(E-IE)^{\prime}\|_{L^{2}(1,1+\varepsilon^{\frac{k-1}{k}})}\lesssim\varepsilon^{-\frac{1}{2}}H^{k}.\] For the estimation of \(W\) we follow the idea given in [6].
From \[\|W-IW\|_{L^{2}(1+\varepsilon^{\frac{k-1}{k}},2)}\lesssim H\|xW^{\prime}\|_{L^{2}(1+\varepsilon^{\frac{k-1}{k}},2)}\lesssim\varepsilon^{\frac{1}{2}}H^{k},\] where we use the assumption \(e^{-\varepsilon^{-1/k}}\leq H^{k-1}\), and from \[\|W-IW\|_{L^{2}(1+\varepsilon^{\frac{k-1}{k}},2)}\lesssim H^{2}\|x^{2}W^{\prime\prime}\|_{L^{2}(1+\varepsilon^{\frac{k-1}{k}},2)}\lesssim\varepsilon^{-\frac{1}{2}}H^{k+1},\] we obtain \[\|W-IW\|_{L^{2}(1+\varepsilon^{\frac{k-1}{k}},2)}\lesssim H^{k+\frac{1}{2}}.\] For the derivative we obtain \[\|(W-IW)^{\prime}\|_{L^{2}(1,1+\varepsilon^{\frac{k-1}{k}})}\lesssim h_{i}^{k}\|W^{(k+1)}\|_{L^{2}(1,1+\varepsilon^{\frac{k-1}{k}})}\lesssim\varepsilon^{-\frac{1}{2}}H^{k},\] and \[\|(W-IW)^{\prime}\|_{L^{2}(1+\varepsilon^{\frac{k-1}{k}},2)}\lesssim h_{i}\|W^{\prime\prime}\|_{L^{2}(1+\varepsilon^{\frac{k-1}{k}},2)}\lesssim\varepsilon^{-\frac{1}{2}}H^{k}.\] Collecting all these estimates, we obtain the statement of the lemma.

**Theorem 4.2**.: _For the solution \(u\) of problem (1) and the numerical solution \(u_{N}\) of (7) on a coarser Durán mesh it holds_ \[\|\|u-u_{N}\|\|\lesssim H^{k}.\] Proof.: The proof is similar to that of Theorem 3.2, using the result of Lemma 4.1.

## 5 Numerical results

As an example, let us consider the following problem taken from [1] \[-\varepsilon u^{\prime\prime}(x)-(2+x)u^{\prime}(x)+(3+x)u(x)-d(x)u(x-1) =3,\,x\in(0,2),\] \[u(2) =0,\] \[u(x) =x^{2},\,x\in(-1,0],\] where \[d(x)=\begin{cases}1-x,&x<1,\\ 2+\sin(4\pi x),&x\geq 1.\end{cases}\] In this case the exact solution is not known. Our numerical simulations are performed using the finite-element framework \(\mathbb{SOFE}\) developed by L. Ludwig, see github.com/SOFE-Developers/SOFE. We compare the results both on a Durán mesh and on a coarse Durán mesh. Since the exact solution to this example is unknown, we use a reference solution as a proxy, computed on a standard or coarsened Durán mesh with \(k=4\) and \(H=0.05\). Table 1 shows the results for different polynomial degrees \(k\) on the standard Durán mesh for \(\varepsilon=10^{-6}\). For different values of \(H\), we list the corresponding number of cells \(\tilde{M}=2M\), the error measured in the energy norm, and the numerical rate of convergence. We clearly observe convergence of order \(k\). Table 2 shows the results on the coarse version of the Durán mesh with \(\tilde{M}=M+M_{2}\) cells and \(\varepsilon=10^{-6}\). Again we observe convergence of order \(k\). Note that in the case of \(k=1\) we essentially have an equidistant mesh in \([1,2]\) with \(M_{2}=\left\lceil\frac{1}{H}\right\rceil\) cells. Comparing the results of the two tables, there is little difference in the calculated errors, but a large reduction in the number of cells. Thus, using the coarsened mesh gives equally good results at a lower computational cost.
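For readers who want to reproduce the mesh outside of \(\mathbb{SOFE}\), the following minimal Python sketch generates the nodes of the coarsened mesh (12); it is an illustration rather than the implementation used for the experiments, and the counts \(M\) and \(M_{2}\) are obtained implicitly by grading with factor \(1+H\) until the endpoint of each subinterval is reached.

```python
import numpy as np

def coarse_duran_mesh(H, eps, k):
    """Sketch of the coarsened Duran-type mesh (12) on [0, 2]: uniform fine
    steps inside each layer region, then geometric grading by (1+H) up to the
    subinterval endpoint.  The cell counts M and M_2 of (2) and (11) are
    realized implicitly by grading until the endpoint is reached."""
    n_fine = int(np.ceil(1.0 / H))

    def graded_block(start, end, fine_step):
        pts = [start + i * fine_step for i in range(n_fine + 1)]
        while start + (pts[-1] - start) * (1.0 + H) < end:
            pts.append(start + (pts[-1] - start) * (1.0 + H))
        if pts[-1] < end:
            pts.append(end)
        return pts

    left = graded_block(0.0, 1.0, H * eps)                     # strong layer at x = 0
    right = graded_block(1.0, 2.0, H * eps ** ((k - 1) / k))   # weak layer at x = 1
    return np.array(left + right[1:])                          # drop duplicated node x = 1

# example: for k = 1 the step in [1, 2] is simply H, i.e. an equidistant mesh there
mesh = coarse_duran_mesh(H=0.1, eps=1e-6, k=2)
print(len(mesh) - 1)   # number of cells
```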
\begin{table} \begin{tabular}{l|c c c|c c c|c c c} & \multicolumn{3}{c|}{\(k=1\)} & \multicolumn{3}{c|}{\(k=2\)} & \multicolumn{3}{c}{\(k=3\)} \\ \(H\) & \(\tilde{M}\) & \(\|\|u-u_{N}\|\|\) & rate & \(\tilde{M}\) & \(\|\|u-u_{N}\|\|\) & rate & \(\tilde{M}\) & \(\|\|u-u_{N}\|\|\) & rate \\ \hline 0.90 & 25 & 3.14e-01 & 1.31 & 35 & 6.85e-02 & 2.49 & 39 & 1.02e-02 & 3.43 \\ 0.80 & 27 & 2.84e-01 & 1.11 & 38 & 5.58e-02 & 1.93 & 42 & 7.91e-03 & 3.64 \\ 0.70 & 30 & 2.53e-01 & 1.11 & 43 & 4.40e-02 & 2.19 & 47 & 5.25e-03 & 2.06 \\ 0.60 & 34 & 2.20e-01 & 1.23 & 49 & 3.31e-02 & 2.23 & 54 & 3.94e-03 & 4.89 \\ 0.50 & 39 & 1.86e-01 & 1.07 & 57 & 2.36e-02 & 2.64 & 62 & 2.01e-03 & 3.31 \\ 0.40 & 47 & 1.52e-01 & 1.20 & 67 & 1.54e-02 & 2.26 & 74 & 1.12e-03 & 3.42 \\ 0.30 & 60 & 1.13e-01 & 1.03 & 86 & 8.75e-03 & 2.18 & 95 & 4.76e-04 & 3.41 \\ 0.20 & 86 & 7.81e-02 & 1.10 & 124 & 3.94e-03 & 2.11 & 137 & 1.37e-04 & 3.16 \\ 0.10 & 165 & 3.81e-02 & & 238 & 9.96e-04 & & 262 & 1.76e-05 & \\ \end{tabular} \end{table} Table 1: Errors and convergence rates for various polynomial degrees on standard Durán meshes and \(\varepsilon=10^{-6}\)

\begin{table} \begin{tabular}{l|c c c|c c c|c c c} & \multicolumn{3}{c|}{\(k=1\)} & \multicolumn{3}{c|}{\(k=2\)} & \multicolumn{3}{c}{\(k=3\)} \\ \(H\) & \(\tilde{M}\) & \(\|\|u-u_{N}\|\|\) & rate & \(\tilde{M}\) & \(\|\|u-u_{N}\|\|\) & rate & \(\tilde{M}\) & \(\|\|u-u_{N}\|\|\) & rate \\ \hline 0.90 & 25 & 3.14e-01 & 1.31 & 35 & 6.85e-02 & 2.49 & 39 & 1.02e-02 & 3.43 \\ 0.80 & 27 & 2.84e-01 & 1.11 & 38 & 5.58e-02 & 1.93 & 42 & 7.91e-03 & 3.64 \\ 0.70 & 30 & 2.53e-01 & 1.11 & 43 & 4.40e-02 & 2.19 & 47 & 5.25e-03 & 2.06 \\ 0.60 & 34 & 2.20e-01 & 1.23 & 49 & 3.31e-02 & 2.23 & 54 & 3.94e-03 & 4.89 \\ 0.50 & 39 & 1.86e-01 & 1.07 & 57 & 2.36e-02 & 2.64 & 62 & 2.01e-03 & 3.31 \\ 0.40 & 47 & 1.52e-01 & 1.20 & 67 & 1.54e-02 & 2.26 & 74 & 1.12e-03 & 3.42 \\ 0.30 & 60 & 1.13e-01 & 1.03 & 86 & 8.75e-03 & 2.18 & 95 & 4.76e-04 & 3.41 \\ 0.20 & 86 & 7.81e-02 & 1.10 & 124 & 3.94e-03 & 2.11 & 137 & 1.37e-04 & 3.16 \\ 0.10 & 165 & 3.81e-02 & & 238 & 9.96e-04 & & 262 & 1.76e-05 & \\ \end{tabular} \end{table} Table 2: Errors and convergence rates for various polynomial degrees on coarse Durán meshes and \(\varepsilon=10^{-6}\)

**Acknowledgment**. The first author is supported by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia under grant no. 451-03-47/2023-01/200134.
2307.09936
AGAR: Attention Graph-RNN for Adaptative Motion Prediction of Point Clouds of Deformable Objects
This paper focuses on motion prediction for point cloud sequences in the challenging case of deformable 3D objects, such as human body motion. First, we investigate the challenges caused by deformable shapes and complex motions present in this type of representation, with the ultimate goal of understanding the technical limitations of state-of-the-art models. From this understanding, we propose an improved architecture for point cloud prediction of deformable 3D objects. Specifically, to handle deformable shapes, we propose a graph-based approach that learns and exploits the spatial structure of point clouds to extract more representative features. Then we propose a module able to combine the learned features in an adaptative manner according to the point cloud movements. The proposed adaptative module controls the composition of local and global motions for each point, enabling the network to model complex motions in deformable 3D objects more effectively. We tested the proposed method on the following datasets: MNIST moving digits, the Mixamo human bodies motions, JPEG and CWIPC-SXR real-world dynamic bodies. Simulation results demonstrate that our method outperforms the current baseline methods given its improved ability to model complex movements as well as preserve point cloud shape. Furthermore, we demonstrate the generalizability of the proposed framework for dynamic feature learning, by testing the framework for action recognition on the MSRAction3D dataset and achieving results on-par with state-of-the-art methods
Pedro Gomes, Silvia Rossi, Laura Toni
2023-07-19T12:21:39Z
http://arxiv.org/abs/2307.09936v1
# AGAR: Attention Graph-RNN for Adaptative Motion Prediction of Point Clouds of Deformable Objects

###### Abstract

This paper focuses on motion prediction for point cloud sequences in the challenging case of deformable 3D objects, such as human body motion. First, we investigate the challenges caused by deformable shapes and complex motions present in this type of representation, with the ultimate goal of understanding the technical limitations of state-of-the-art models. From this understanding, we propose an improved architecture for point cloud prediction of deformable 3D objects. Specifically, to handle deformable shapes, we propose a graph-based approach that learns and exploits the spatial structure of point clouds to extract more representative features. Then we propose a module able to combine the learned features in an _adaptative_ manner according to the point cloud movements. The proposed adaptative module controls the composition of local and global motions for each point, enabling the network to model complex motions in deformable 3D objects more effectively. We tested the proposed method on the following datasets: MNIST moving digits, the _Mixamo_ human bodies motions [14], JPEG [5] and CWIPC-SXR [30] real-world dynamic bodies. Simulation results demonstrate that our method outperforms the current baseline methods given its improved ability to model complex movements as well as preserve point cloud shape. Furthermore, we demonstrate the generalizability of the proposed framework for dynamic feature learning, by testing the framework for action recognition on the MSRAction3D dataset [17] and achieving results on-par with state-of-the-art methods.

## 1 Introduction

Point cloud sequences are a flexible and rich geometric representation of volumetric content used in a wide range of applications, from autonomous driving (Steintein et al., 2017; Steintein and Steintein, 2018) and robotics (Steintein et al., 2018; Steintein and Steintein, 2018) to virtual/mixed-reality services (Steintein et al., 2018; Steintein and Steintein, 2018). Such sequences consist of consecutive point clouds, each composed of an unordered collection of 3D points representing 3D scenes or 3D objects. Although the point cloud is a highly appealing representation impacting multiple sectors, how to properly process it is still an open challenge. One of the most successful methodologies has been the development of neural networks able to learn directly from unstructured point cloud data. This approach was pioneered by the PointNet (Pont, 2017) architecture, which learns features by processing each point independently. However, in such an architecture, the local structures within the point cloud are neglected. This is a strong limitation since local structures contain key semantic information about the 3D geometry.
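As a toy illustration of this per-point processing style, the sketch below applies a shared MLP to every point independently and aggregates the result with an order-invariant max-pooling. The layer widths and random weights are illustrative assumptions, not the configuration of the cited architecture; the point is that no neighbourhood information enters the computation, which is exactly the limitation addressed next.

```python
import numpy as np

def pointnet_like_global_feature(points, rng=np.random.default_rng(0)):
    """Illustrative PointNet-style processing: a shared MLP applied to each
    point independently, followed by a permutation-invariant max-pooling.
    Widths (64, 128) are illustrative, not the original configuration."""
    w1 = rng.normal(size=(3, 64));   b1 = np.zeros(64)
    w2 = rng.normal(size=(64, 128)); b2 = np.zeros(128)
    h = np.maximum(points @ w1 + b1, 0.0)   # per-point feature, no neighbours used
    h = np.maximum(h @ w2 + b2, 0.0)
    return h.max(axis=0)                    # order-invariant global feature

cloud = np.random.rand(1000, 3)             # N x 3 point cloud
print(pointnet_like_global_feature(cloud).shape)   # (128,)
```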
To address this issue, PointNet++ (Steintein and Steintein, 2018) introduced a point-based hierarchical architecture that considers hierarchical neighbourhoods of points rather than acting on each of them independently. The network processes point neighbourhoods at increasingly larger scales along a multi-resolution hierarchy, as shown on the left side of Fig. 1. This approach groups the local features learned from small neighbourhoods into larger units and processes them to learn higher-level features, allowing the network to abstract the multiple structures within the data. Although the PointNet++ hierarchical architecture was initially designed for static point clouds, it has since been extended to the case of dynamic point clouds (Gomes et al., 2017; Gomes et al., 2018). In these cases, instead of extracting features from neighbourhoods in a single point cloud, the network extracts dynamic features from a hierarchy of spatio-temporal neighbourhoods across time. The learned dynamic features can be applied to a wide range of downstream tasks, such as action classification, motion prediction and segmentation. In this paper, we focus on the point cloud prediction task. Specifically, given a point cloud sequence \(\mathcal{P}=\{P_{1},\ldots,P_{T}\}\), composed of \(T\) frames with \(p_{i,t}\in\mathbf{R}^{3}\) being the Euclidean coordinates of point \(i\) in point cloud \(P_{t}\in\mathbf{R}^{N\times 3}\), our goal is to predict the coordinates of future point clouds (\(\hat{P}_{T+1},\ldots,\hat{P}_{T+Q}\)), where \(Q\) is the prediction horizon. At the moment, point-based hierarchical methods can be considered the de-facto state-of-the-art approach for point cloud prediction. However, while these methodologies have shown good performance when predicting simple and rigid movements such as translations in automobile scenes (Bogman et al., 2017), they are often limited when predicting the motion of 3D deformable objects. Addressing this limitation is the main goal of this paper. Predicting deformable objects is challenging since the point cloud shape changes over time and the object performs highly _complex_ motions. For example, in a 3D representation of a football player running or a dancer performing during a music event, their point cloud representations change over time following different postures. Moreover, the performed movements are not rigid transformations but rather a combination of multiple and diverse local motions. For instance, if we imagine the player raising their hand while running, their arm and hand will be characterised by a combination of movements (i.e., the local raising movement and the global forward translation). Given their characteristics, processing 3D deformable objects presents two major challenges: (i) establishing point correspondence across time and preserving the shape of the predicted point cloud; (ii) generating accurate motion predictions that are a composition of multiple movements at different levels of resolution. To address the above challenges, we must first understand whether current state-of-the-art models are able to cope with them. Within this context, we first demonstrate these models' inability to establish precise temporal correlations and preserve the predicted point cloud shape. This is because they fail to consider the structural relationships between the points during the learning process. Then, to investigate the challenge of predicting complex motions, we employ the explainability techniques introduced in our previous work [12].
These techniques demonstrated that the hierarchy of dynamic features corresponds to learning from local to global motions (in the centre of Fig. 1). In this paper, we build upon this interpretation to identify the technical limitations of the current framework approach. Specifically, we show that, in most methodologies [6, 11, 18, 25], predictions of future motions are generated by combining the hierarchical features via learnable weights. Most critically, to preserve permutation invariance, the same learned weights are applied to all points across frames when combining the hierarchical features. However, in deformable objects, not all points benefit from the same combination of hierarchical features. For example, some points can be described entirely by global motions, while other points are better described by a combination of global and local motions. We show that this _fixed_ combination of hierarchical features is a key limitation to the network's ability to predict complex motions. Based on the limitations identified above, we propose AGAR: an attention-based hierarchical graph-recurrent neural network (RNN) for point cloud prediction of deformable objects. Our proposed architecture includes an initial graph-based module that extracts the underlying geometric structure of the input point cloud as spatial features. From the learned spatial features, we construct a _spatio-temporal graph_ that forms more representative neighbourhoods than those of current methods, which neglect the point cloud structure. The graph is then processed by sequential graph-RNN cells that take structural relations between points into account to learn dynamic features. To address the limitation of the fixed combination of hierarchical features, we propose a novel module denoted as _Adaptative feature combination_. The proposed module employs an attention mechanism to dynamically assign different degrees of importance to each level of hierarchical features. As such, for each point, the network can control the composition of the local and global motions that best describe the point behaviour. This concept is illustrated in the right part of Fig. 1, where the network selects the regions that benefit from particular motions (i.e. local, semi-local, global) instead of blindly combining all the motions learned in the multiple hierarchical levels. Besides improving the prediction of complex motions, the _Adaptative feature combination_ module is also an explainability tool. The module allows us to visualize the influence of each learned feature on the predicted motion, providing a deeper understanding of the network's internal workings. The proposed method is trained in a self-supervised fashion and tested on several datasets, such as the _Mixamo_ synthetic human bodies activities dataset [14] and the JPEG [5] and CWIPC-SXR [30] real-world human bodies datasets, and compared against state-of-the-art methods. To extend this comparison, we also tested on a dataset of rigid objects (moving MNIST point cloud dataset [6, 31]) and a dataset of automobile scenes (Argoverse dataset [3]). A key strength of our framework is the ability to extract the general dynamic behaviour of the point cloud as dynamic features. Since such features are useful for downstream tasks, we also tested the proposed architecture for the action recognition task on human bodies (MSRAction3D dataset [17]).
The proposed method outperforms state-of-the-art methods in human body prediction and achieves on-par results for rigid objects and automobile scene prediction as well as for the action recognition task. The results demonstrate that our proposed method can leverage the structural relations between points to learn more accurate representations and preserve the point cloud shape during prediction. The results further show that the proposed _Adaptative feature combination_ module predicts complex motions in human bodies more accurately than current state-of-the-art approaches. Lastly, the code and datasets required to reproduce the work are made publicly available at [https://github.com/pedro-dm-gomes/AGAR](https://github.com/pedro-dm-gomes/AGAR). In summary, the key contributions of our work are:

* An understanding of the key limitation of current state-of-the-art frameworks for generating motion flow predictions. We show how the current approach is equivalent to combining learned local and global motions without regard to the point position in space and time, and how this strategy fails to model the complex motions present in deformable objects.
* A novel module that combines hierarchical features in an adaptive manner according to the scene context. The proposed module dynamically controls the composition of local and global motions for each point, allowing the network to predict complex motions with higher accuracy and flexibility. This also offers an explainability tool.
* A graph-based module that exploits the point cloud geometric structure to form spatio-temporal neighbourhoods from which meaningful dynamic features can be extracted. The structural information is further included in the learned dynamic features, reducing the deformation of the predicted point cloud shape.

The remainder of this article is organized as follows. In Section 2, we review the state of the art in point cloud prediction. In Section 3, we study the hierarchical component and identify the limitations of the state-of-the-art prediction framework. Based on the limitations identified, in Section 4, we propose AGAR, an improved architecture with graph-RNN cells and a novel _Adaptative feature combination_ module. Section 5 describes implementation details. Finally, the experimental results and conclusion are presented in Section 6 and Section 7, respectively.

## 2. Background

This section provides an overview of the research in dynamic point cloud processing (Section 2.1), followed by a detailed description of the current state-of-the-art point cloud prediction framework and the notation used throughout this paper (Section 2.2).

### Related Works

In the current literature, dynamic point cloud processing has been approached from multiple overlapping directions related to motion prediction (e.g., segmentation and action recognition). These high-level tasks share a common challenge: the extraction of temporal correlations between sequential point cloud frames, which is hindered by the irregular structure and by the lack of explicit point-to-point correspondence across time. In the following, we summarize the approaches proposed in the literature to overcome such challenges and how they lead to the development of the current state-of-the-art framework for point cloud prediction.
An initial approach to learning from irregularly structured data such as point clouds was to convert them into a regular representation such as 2D multi-view (Dong et al., 2018; Wang et al., 2019; Wang et al., 2020) or 3D voxels (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020) and then process the converted data with traditional neural networks. This approach, however, suffered from high memory consumption and quantization errors. Within this context, the hierarchical architecture proposed in PointNet++ (Wang et al., 2020), able to process raw point cloud data directly, has become a pillar of learning-based point cloud processing. The PointNet++ hierarchical architecture has been extended to dynamic point clouds by introducing spatio-temporal neighbourhoods to extract dynamic features. The spatio-temporal neighbourhoods still lack explicit point-to-point correspondence over time. However, by processing the neighbourhoods at multiple scales, the network can capture temporal correlations that would otherwise be hidden. This hierarchical learning strategy has proved to be highly successful at learning from point cloud sequences and has been widely adopted throughout the literature (Dong et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). In PSTNet (Dong et al., 2018) a hierarchical architecture is used for the action classification of point cloud sequences. In PointPWC-Net (Wang et al., 2020) a hierarchical architecture learns motion in a coarse-to-fine fashion by learning a motion flow and a cost function between two adjacent frames at each hierarchical level. More recently, attention-based mechanisms have been incorporated into hierarchical architectures (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). The use of attention allows the network to selectively focus on the most important parts of the point cloud. Although attention mechanisms do not fully address point-to-point correspondence, they allow for a more flexible construction of hierarchical neighbourhoods by enabling selective aggregation within the network. For example, in (Wang et al., 2018) an attention mechanism is used to sample the most critical points, enabling the network to better match points over time. In (Han et al., 2017), attention is incorporated into the spatio-temporal point aggregation, assigning greater weight to points that are more similar to the target point during the feature aggregation. It is worth noting that these attention-based works learn the attention of a point relative to the features of other points, with the goal of improving the extracted features. We, on the other hand, propose to learn the attention of a point relative to the features of each hierarchical level, with the goal of refining the predicted motion. Although the methods presented above have demonstrated their ability to extract features from point cloud sequences, they suffer from several drawbacks when specifically applied to the point cloud prediction task, which is the focus of this paper. For instance, methods such as PointPWC-Net (Wang et al., 2019) learn a motion flow between two adjacent frames instead of learning a future motion to predict the next frames, preventing the model from capturing long-term movements. Other methods such as PSTNet (Chen et al., 2019) are able to capture long-term correlations by processing all the sequence frames simultaneously.
While this is an effective approach for classification or segmentation tasks, the memory required to process all the frames simultaneously prevents this approach from being scaled to long sequences or applied to iterative prediction tasks. These drawbacks led to the integration of point-based hierarchical architectures with RNNs or their variants, e.g., Long Short-term Memory (LSTM) and Gated Recurrent Unit (GRU). These types of models are designed to model sequential data, taking only one frame as input at each iteration. The key characteristic of RNNs is their hidden states, which can act as a _memory_. The states store information from prior inputs and are continuously updated. As a result, the output of RNNs depends not only on the current input but also on the prior elements within the sequence. A pioneer of this framework, PointRNN (Chen et al., 2019) learns dynamic features from spatio-temporal neighbourhoods between two adjacent frames. The learned dynamic features are then used as states storing the point's history of movements, and are used both to learn the features at the next iteration and to predict future movements. This methodology inherits from RNN models the ability to capture the long-term dynamic behaviour of sequential data while having low memory requirements. Following this approach, several works (Chen et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) combined RNN cells or their variants into hierarchical architectures to model point cloud sequences. These point-based hierarchical RNN architectures are currently the state-of-the-art approach for iterative point cloud prediction. However, the majority of current methods proposed in the literature are focused on predicting the motion of point clouds from rigid objects, leaving the unique challenges associated with predicting point clouds from deformable objects overlooked. Our work aims to address this gap, by first identifying the current challenges caused by such objects and by developing models specifically designed to handle such challenges.

### Hierarchical Point-based RNN Architecture for Point Cloud Prediction

In this section, we present an architecture that characterizes the state-of-the-art hierarchical RNN framework used for point cloud prediction. We will use this model to identify the key challenges of the current state of the art (Section 3) and to highlight the novelty of the solutions proposed in this paper (Section 4). Table 1 summarizes the main notation used throughout the paper. Without loss of generality, we describe the iterative prediction framework depicted in Fig. 2. Given a point cloud sequence \(\mathcal{P}\), at each iteration, the network processes one input point cloud and outputs the prediction of the point cloud at the next time step \(\hat{P}_{t+1}\). The framework can be described by three main phases:

1. Dynamic Extraction (DE) phase: the network processes the input point cloud \(P_{t}\) and extracts the point cloud dynamics as \(L\) levels of hierarchical features \((D_{t}^{1},...,D_{t}^{L})\);
2. Feature Propagation (FP) phase: combines the learned features from multiple levels into a single final dynamic feature \(D_{t}^{\text{Final}}\);
3. Prediction phase: the final features are converted via a fully-connected layer into motion vectors \(M_{t}\) and added to the input point cloud \(P_{t}\) to predict the point cloud \(\hat{P}_{t+1}\) at the next time step.

We now describe the DE and FP phases in more detail.
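A minimal sketch of one iteration of this three-phase loop is given below; the DE, FP and prediction modules are placeholders used only to illustrate the data flow and tensor shapes, not the actual implementations discussed in the following subsections.

```python
import numpy as np

def predict_sequence(frames, de_phase, fp_phase, fc_layer):
    """Sketch of the iterative prediction loop: at each iteration one frame
    P_t (N x 3) is processed into hierarchical dynamic features, combined
    into a final feature, converted to motion vectors and added to P_t."""
    predictions = []
    for P_t in frames:                       # one point cloud per iteration
        hierarchical_feats = de_phase(P_t)   # [D_t^1, ..., D_t^L], one per level
        final_feat = fp_phase(hierarchical_feats)   # N x C final dynamic feature
        M_t = fc_layer(final_feat)           # N x 3 motion vectors
        predictions.append(P_t + M_t)        # P_hat_{t+1} = P_t + M_t
    return predictions

# placeholder modules, for shape illustration only (recurrent states omitted)
de = lambda P: [np.zeros((P.shape[0], 128)) for _ in range(3)]
fp = lambda feats: np.concatenate(feats, axis=1)
fc = lambda f: np.zeros((f.shape[0], 3))
out = predict_sequence([np.random.rand(1000, 3) for _ in range(4)], de, fp, fc)
print(len(out), out[0].shape)   # 4 (1000, 3)
```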
Being straightforward, we omit a detailed description of the prediction phase.

#### 2.2.1. Dynamic extraction (DE) phase

Depicted on the left part of Fig. 2, the DE phase consists of multiple sequential RNN cells, for a total of \(L\) levels (in the figure \(L=3\)). Before being processed by each RNN cell, the point cloud is downsampled by a _Sampling and Grouping_ (SG) module, as described in [(29)]. At each RNN cell, for each point, a dynamic feature is extracted by aggregating information from the point's spatio-temporal neighbourhood. In the majority of methods [(18; 21; 25)], the neighbourhood of each point is defined as the \(k\) nearest neighbour (\(k\)-\(nn\)) points in the previous frame, where the proximity is measured using the Euclidean distance between point 3D coordinates. The RNN cells are sequentially stacked so that the dynamic features learned at one RNN cell are the input of the next RNN cell. It is worth noting that the subsequent sampling, which results in a sparser point cloud at later levels/RNN cells, is responsible for the creation of hierarchical neighbourhoods with a progressively larger geometric distance between points. Thus, the first level (\(l=1\)) learns local dynamic features \(D_{t}^{1}\) from small-scale neighbourhoods, whereas the last level \(l=L\) learns global dynamic features \(D_{t}^{L}\) observing large-scale neighbourhoods.

\begin{table} \begin{tabular}{c l} \hline \hline **Terminology** & **Description** \\ level & network layer extracting dynamic features at a specific resolution. \\ spatial features & vectors describing the point’s local geometric structure. \\ dynamic features & vectors describing the point’s dynamic behaviour. \\ \hline **Parameter** & **Description** \\ \(\mathcal{P}\) & sequence of point clouds. \\ \(T\) & number of point clouds (frames) in the sequence. \\ \(l,L\) & level and the total number of levels. \\ \(N,N^{l}\) & original number of points and number of points at a level \(l\). \\ \(P_{t}\), \(p_{i,t}\in P_{t}\) & point cloud and cartesian coordinates of point \(i\). \\ \(\hat{P}_{t}\), \(\hat{p}_{i,t}\in\hat{P}_{t}\) & predicted point cloud and cartesian coordinates of predicted point \(i\). \\ \(k\) & number of point neighbours. \\ \(S_{t}^{l}\), \(s_{i,t}^{l}\in S_{t}^{l}\) & point cloud spatial features and spatial feature of point \(i\). \\ \(D_{t}^{l}\), \(d_{i,t}^{l}\in D_{t}^{l}\) & point cloud dynamic features and dynamic feature of point \(i\). \\ \(M_{t}\), \(m_{i,t}\in M_{t}\) & point cloud motion vectors and motion vector of point \(i\). \\ \(D_{t}^{l\prime}\), \(d_{i,t}^{l\prime}\in D_{t}^{l\prime}\) & dynamic features propagated from level \(l\) to \(l-1\). \\ \(D_{t}^{\text{Final}}\), \(d_{i,t}^{\text{Final}}\in D_{t}^{\text{Final}}\) & point cloud final dynamic features and final feature of point \(i\). \\ \(\Theta_{\text{FP}},\Theta_{\text{S}},\Theta_{\text{D}},\Theta_{\text{R}},\Theta_{\alpha}\) & learnable network weights. \\ \(G_{t}^{\text{C}}\) & coordinate graph. \\ \(G_{t}^{\text{ST},l}\) & spatio-temporal graph. \\ \(m_{ij}^{l}\) & message vector from node \(j\) to node \(i\). \\ \(\alpha_{i}^{l}\) & attention value of point \(i\) to the feature of level \(l\). \\ \hline \hline \end{tabular} \end{table} Table 1. Terminology & Notation

Figure 2. **Generic state-of-art framework for point cloud prediction** for the iteration at time \(t\). The architecture is composed of a Dynamic Extraction (DE) phase, a Feature Propagation (FP) phase and a prediction phase.
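A minimal sketch of the Euclidean \(k\)-\(nn\) grouping used to form these spatio-temporal neighbourhoods is given below (brute-force NumPy, for illustration only; practical implementations typically rely on optimized neighbour queries).

```python
import numpy as np

def knn_previous_frame(P_t, P_tm1, k=8):
    """For each point in frame P_t (N x 3), return the indices of its k
    nearest neighbours in the previous frame P_tm1 (M x 3), using plain
    Euclidean distance between 3D coordinates (brute force, for clarity)."""
    d2 = ((P_t[:, None, :] - P_tm1[None, :, :]) ** 2).sum(-1)   # (N, M) squared distances
    return np.argsort(d2, axis=1)[:, :k]                        # (N, k) neighbour indices

P_prev = np.random.rand(1000, 3)
P_curr = P_prev + 0.01 * np.random.randn(1000, 3)   # small motion between frames
idx = knn_previous_frame(P_curr, P_prev, k=8)
print(idx.shape)   # (1000, 8)
```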
#### 2.2.2. Feature Propagation (FP) phase

Once the DE phase has learned the features from all the levels (\(D_{t}^{1},...,D_{t}^{L}\)), the FP phase combines them into a single final feature (\(D_{t}^{\text{Final}}\)). Currently, the most popular architecture for feature combination is the original architecture proposed in PointNet++ (Wang et al., 2019), which is also found in most state-of-the-art methods without significant differences. We will refer to this architecture as the state-of-art _Classic-FP_ (depicted in the green side of Fig. 2). In the _Classic-FP_, the feature combination is done by hierarchically propagating the features from the higher levels to the lower levels using several FP modules (Wang et al., 2019). At each module, the sub-sampled features from the higher level are first interpolated to the same number of points as the lower level. The interpolation is done by a weighted aggregation of the features of the three closest points \(j\) in the sub-sampled point cloud as: \[\tilde{d}_{i,t}^{l}=\frac{\sum_{j=1}^{3}dist_{ij,t}\times d_{j,t}^{l+1}}{\sum_{j=1}^{3}dist_{ij,t}},\ \ \ \ dist_{ij,t}=\frac{1}{||p_{i,t}^{l}-p_{j,t}^{l+1}||^{2}} \tag{1}\] where \(\tilde{d}_{i,t}^{l}\in\tilde{D}_{t}^{l}\) are the features interpolated from the number of points at level \(l+1\) to the number of points at level \(l\). The interpolated high-level features are then concatenated with a skip-linked connection to the lower-level features at the same number of points. The concatenation is processed by a point-based network that processes each point independently via shared weights \(\Theta_{FP}^{l}\) as follows: \[D_{t}^{l\prime}=\text{ReLU}\,(\Theta_{FP}^{l}\,\{D_{t}^{l},\tilde{D}_{t}^{l}\}). \tag{2}\] The process is repeated in a hierarchical manner until the features from all the levels have been combined into the final features (\(D_{t}^{\text{Final}}\)).

## 3. Challenges and Limitations

The hierarchical point-based RNN framework presented in the previous section suffers several limitations when facing the challenge of processing deformable objects such as human-body-like sequences. In this paper, we explain why those challenges arise and how to overcome them. In the following, we disentangle the challenges of current models as \(i)\) challenges in processing/predicting objects with deformable shapes (Section 3.1); \(ii)\) challenges in predicting complex motions (Section 3.2). Taking advantage of the understanding built in this section, in Section 4 we introduce our proposed method, built to overcome the main limitations identified here.

### Challenges in Processing Deformable Shapes

The main challenges encountered in processing and predicting objects with deformable shapes, such as clothing, food, or human bodies, are \(i)\) having a semantically-meaningful point-to-point correspondence (used to learn dynamic features); \(ii)\) avoiding shape distortion (which is highly noticeable in 3D objects and therefore has a high negative impact on point cloud prediction quality). The challenge of establishing point-to-point correspondence is present in any point cloud processing, but it is clearly exacerbated in the case of deformable 3D objects. The majority of current works follow the same strategy as PointRNN (Chen et al., 2019) and assume that the points in the current frame are matched with points in close proximity in the previous frame. This proximity is built in the 3D Euclidean space.
However, in 3D deformable objects, points that are geometrically close in space are not necessarily semantically correlated and do not necessarily belong to the same segment of the object. Fig. 3 shows three examples of how matching based on geometric proximity can lead to the creation of misleading neighbourhoods. This means that point correspondence across time is challenged by the mismatch between Euclidean proximity and semantically-meaningful proximity.

Figure 3. Example of matching points across time using geometric coordinates for three sequences: _Running, Diving, Jumping_ from [14] (these sequences are examples of particularly high motion, chosen for visualization purposes). The dashed circles show a zoom-in of the regions where grouping using coordinates would create incorrect neighbourhoods. For example, in the _Running_ sequence, the points in the foot at time \(t\) are incorrectly matched with the points in the lower leg at \(t-1\).

On the other hand, current methods often struggle to preserve the predicted point cloud shape. This is mainly due to the fact that a separate motion vector is learned for every point with no clear semantic constraints. If these motion vectors vary significantly among neighbouring points, the result is a prediction with a deformed shape. This issue can be tackled by imposing _hard_ shape constraints, such as learning a single motion vector for all the points in a region. However, this strategy can only be applied to rigid objects. In deformable objects, the object shape changes according to different postures, meaning points must be allowed to have separate motions. Thus, it is important to strike a balance between preserving the shape and having enough per-point motion flexibility to predict possible shape variations. The key to achieving this balance is to capture the underlying semantic structure and take it into account as a soft shape constraint during the learning process. Both challenges of point correspondence and shape deformation can be summarized in the following limitation: **Lack of structural relationship between points in point cloud prediction (Limitation 1)**. Learning and exploiting this prior in the learning process is one of the novelties of our proposed model, and it will be specifically addressed by learning a semantically-meaningful graph and exploiting this graph when extracting features (via the graph-RNN cells).

### Challenges in Processing Complex Motions

A second key challenge present in processing 3D dynamic objects such as the human body is that the movement of such objects is usually a _complex motion_. Complex motions refer to movements that involve a combination of multiple degrees of freedom such as translation, rotation, and deformation, which are applied to different parts of the object independently. This is typical of deformable objects or any 3D objects with disjoint components, each of them with its own movement. As an example, consider a point cloud representing a human body running forward (Fig. 4 (a)-_Man-Running_). While the full body moves forward (translation), the person swings their arms (rotation), and their hand bends from an open to a closed position (shape change). The complex nature of such movements makes them challenging to accurately capture and predict. Based on a novel visualization technique that we introduced in our previous work [12] on explainability, we now highlight key limitations of the current architectures.
Specifically, we show how complex motions can be seen as a sum of low-, medium- and high-level motions, leading to an understanding that the current model suffers from the following main limitation: **the fixed combination of hierarchical features in the prediction phase (Limitation 2)**. We now explain this limitation in more detail. In our explainability work [12], we have demonstrated that motion vectors inferred by hierarchical architectures (Fig. 2) can be disentangled into individual motion vectors produced at each hierarchical level, as follows: \[M_{t}^{l}=Classic_{FP}^{l}(D_{t}^{l}) \tag{3}\] \[M_{t}=\sum_{l=1}^{L}M_{t}^{l} \tag{4}\] where \(Classic_{FP}^{l}\) is the function that replicates the operation of the _Classic-FP_ in a disentangled manner, converting the learned feature at each level \(l\) into an individual motion vector \(M_{t}^{l}\), and \(M_{t}\) is the final predicted motion vector outputted by the network. This leads to the interpretation that current approaches in the literature **model complex motions as a combination of local and global motions**, which are learned as hierarchical dynamic features. This is illustrated in Fig. 4, which depicts the dynamic features as motion vectors and the hierarchical neighbourhoods given two point cloud sequences as input to a state-of-art prediction architecture (presented in Fig. 2) with three levels (\(L=3\)) [12]. In both sequences, it can be seen that the lower level learns features only by looking at points in a small area (top gold squares in the figure). In contrast, the higher level learns features by considering a sparser set of points in a large area (bottom blue squares in the figure). In the example in Fig. 4 (a), in which the runner's foot performs a complex motion, it can be observed that the lowest level captures small and diverse motions (e.g., rotation of the heel) \(M_{t}^{1}\), while the highest level learns the forward motion of the entire body \(M_{t}^{3}\).

Figure 4: Hierarchy of dynamic features as motion vectors for two input sequences (_Man-Running_ and _Woman-Running_). For each sequence, the figure shows the input dynamic point cloud, the multi-scale neighbourhoods at different levels, and the motion vectors learned at each level of the network.

This interpretation of features as motion vectors can be generalized to the majority of current methods because, while they differ in the feature extraction process, they all share the _Classic-FP_ strategy to perform the motion reconstruction process. As such, we elaborate on this explainability technique to identify the limitations of the current state-of-art framework in predicting complex motions. Namely, the motion vector prediction is obtained by combining the dynamic features from the different levels via a learned weighted combination. However, each point motion is obtained using the same set of combination weights \([\Theta^{1}_{\text{FP}},\ldots,\Theta^{L}_{\text{FP}}]\) for _all_ points, frames, and sequences. As a result, for every point, regardless of its position in space and time, the predicted motion is obtained by the same fixed combination of local, medium and global motions. Based on this technique we can understand that _i_) different features can be associated with the different levels of motion forming the complex resultant motion, and _ii_) knowing that different parts of the objects might be subject to different types of movements highlights the strong limitation in having the same combination of motion levels.
Specifically, while a set of weights might lead to the appropriate combination of the motion vectors in Fig. 4 (a), in which a local movement is analysed (foot), it does not hold in the case of the _Woman-Running_ sequence in Fig. 4 (b), in which a more global movement is highlighted (torso). The points in the lower torso perform a rigid forward movement corresponding to the global motion of the body, while the lower part of the body performs a rather dynamic rotation of the foot. This means that the global motion vector (pointing forward) alone would be sufficient to describe the movement of the torso. However, local features (hence local motions) cannot be neglected, since this would lead to neglecting the local motions in parts with strong local movement such as the foot. As a result, in Fig. 4 (b) the local motion vectors (\(M^{1}_{t}\)) clearly lose any motion interpretation and become instead random vectors mainly used to compensate for the erroneous addition of multiple motion vectors in this part of the body. It is worth mentioning that, while this understanding might appear straightforward, to the best of our knowledge this is the first work explaining PointRNN and similar hierarchical architectures when processing 3D deformable objects, showing the limitation in adopting a fixed combination of hierarchical features in the prediction phase. In the next section, we propose an architecture that overcomes this limitation by introducing an attention-based mechanism in the prediction phase.

## 4. Proposed AGAR Method

To address the limitations identified in the previous section, we now propose an improved architecture for point cloud prediction, depicted in Fig. 5. The proposed architecture preserves the state-of-art global framework composed of a DE, an FP and a prediction phase. However, we propose to replace the current state-of-art modules with improved versions that leverage the point cloud semantic structure during the DE phase and perform an adaptive combination of dynamic features in the FP phase.

### Addressing Limitation 1: Inclusion of structural relationships between points

To overcome the lack of a geometrical prior with meaningful spatial/semantic information, we propose an initial graph neural network, denoted _Spatial-Structure GNN_ (SS-GNN), that processes each frame to extract, for each point, spatial features that carry local topological information. From the learned spatial features, we then construct a _spatio-temporal_ graph that incorporates the point structural/semantic information and uses that information to build representative neighbourhoods of points. The spatio-temporal graph is processed by the proposed _graph-RNN_ cells, which extract the point cloud behaviour as dynamic features. Below we present each of the proposed modules in detail.

#### 4.1.1. Spatial-Structure GNN (SS-GNN)

Given an input point cloud \(P_{t}\), for each point \(i\) the SS-GNN learns a spatial feature \(s_{i,t}\) describing the point's local geometric structure. To learn these features, the SS-GNN starts by constructing a _coordinate graph_ \(\mathcal{G}^{C}_{t}=(P_{t},\mathcal{E}^{C}_{t})\) by taking the points \(P_{t}\) as vertices and by building directed edges \(\mathcal{E}^{C}_{t}\in\mathbb{R}^{N\times k}\) between each point and its \(k\) nearest neighbours based on the Euclidean distance. The SS-GNN is composed of three layers; each layer performs a graph message-passing convolution (Wang et al., 2017).
At the \(h\)-th layer, for a target point \(i\), all its neighbouring points \(j\in\mathcal{E}^{C}_{i}\) exchange a message along the edge connecting the two points. The message between points is obtained by processing the concatenation of the target point spatial feature at the previous layer \(s_{i,t}^{h-1}\); the target point coordinates \(p_{i,t}\); and the geometry displacement between the target point \(i\) and its neighbours \(j\) (\(\Delta p_{ij}\)). A symmetric function is then applied to aggregate all the messages into an updated feature for the target node. More formally, the message between two nodes (\(m_{ij,t}^{h}\)) and the output spatial features (\(s_{i,t}^{h}\)) are obtained as follows: \[m_{ij,t}^{h} =\Theta_{S}^{h}(s_{i,t}^{h-1}\ ;p_{i,t}\ ;\Delta p_{ij}) \tag{5}\] \[s_{i,t}^{h} =\bigoplus_{j\in\mathcal{E}_{i}^{C}}\left\{m_{ij,t}^{h}\right\} \tag{6}\] where \(\Theta_{S}^{h}\) is a set of learnable parameters at layer \(h\) and ';' denotes the concatenation operation. The \(\bigoplus\) represents an element-wise max-pooling function that acts as an activation function by introducing non-linearity. It is important to note that the above operation does not involve spatio-temporal aggregation. Instead, the spatial features are learned from a single point cloud at a single timestep.

#### 4.1.2. Graph-RNN

Each graph-RNN cell, at level \(l\), takes as input the point coordinates, spatial and dynamic features (\(P_{t}^{l}\), \(S_{t}^{l}\), \(D_{t}^{l}\)) and learns updated dynamic features \(D_{t}^{l+1}\) describing the point's dynamic behaviour. To this end, the graph-RNN cell builds a spatio-temporal graph \(\mathcal{G}_{t}^{\text{ST},l}=(P_{t^{\prime}}^{l},\mathcal{E}_{t}^{\text{ST}})\) between the points \(P_{t}^{l}\) and \(P_{t-1}^{l}\). Unlike the coordinate graph, which is built on geometric distances, the spatio-temporal graph is built based on the spatial feature distance. Specifically, for each point \(i\) at time \(t\), we calculate the distance between the point spatial feature \(s_{i,t}\) and the spatial features of the other points in the present frame \(s_{j,t}\) and in the past frame \(s_{j,t-1}\). Each point \(i\) is connected to its \(k\) closest points in the present time \(t\) and its \(k\) closest points in the past time \(t-1\). By connecting points that share a common local structure, we are able to establish correspondence between points that, despite not being close in Euclidean space, share semantic similarities and therefore will most likely share motion vectors. Fig. 6 depicts an example of a spatio-temporal graph constructed between two frames in a fast-moving sequence of a person running (some edges are hidden for image clarity). The dashed boxes in Fig. 6 show the edges built for the points in the foot when using the spatial feature distance (our approach; upper box, in red) and the edges built if we had used the coordinate distance (state-of-the-art approach; lower box, in blue). The edges built on spatial feature similarity (in red) can correctly match points across time, while edges based on geometric proximity would lead to incorrect grouping. As a result, the network learns dynamic features from neighbourhoods of points that share similar semantic/structural properties.
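The following minimal sketch illustrates this construction: each point of the current frame is connected to its \(k\) closest points, in spatial-feature space, within the current and the previous frame. It is an illustration of the grouping rule only (brute-force NumPy); in contrast with the Euclidean grouping of Section 2.2.1, proximity is measured between spatial features rather than 3D coordinates.

```python
import numpy as np

def spatio_temporal_edges(S_t, S_tm1, k=8):
    """For each point i of frame t, select its k closest points in the
    current frame (excluding itself) and its k closest points in the past
    frame, where proximity is measured between spatial features S rather
    than between 3D coordinates."""
    def knn(queries, keys, k, exclude_self=False):
        d2 = ((queries[:, None, :] - keys[None, :, :]) ** 2).sum(-1)
        if exclude_self:
            np.fill_diagonal(d2, np.inf)
        return np.argsort(d2, axis=1)[:, :k]
    edges_t   = knn(S_t, S_t,   k, exclude_self=True)   # intra-frame edges at time t
    edges_tm1 = knn(S_t, S_tm1, k)                       # temporal edges towards t-1
    return edges_t, edges_tm1

S_prev = np.random.rand(1000, 128)   # spatial features of frame t-1
S_curr = np.random.rand(1000, 128)   # spatial features of frame t
e_t, e_tm1 = spatio_temporal_edges(S_curr, S_prev, k=8)
print(e_t.shape, e_tm1.shape)        # (1000, 8) (1000, 8)
```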
Figure 5. **Proposed AGAR prediction architecture** composed of the DE, FP and prediction phases. In the DE phase, the architecture consists of an SS-GNN module followed by graph-RNN cells. The SS-GNN module extracts spatial features from the point cloud, which are then utilized by the graph-RNN cells to learn dynamic features. In the FP phase, the state-of-art FP modules are replaced by a novel _Adaptative feature combination_ module able to dynamically combine hierarchical features according to the scene.

Similarly to the SS-GNN, the graph-RNN extracts dynamic features by performing a message-passing convolution between a point and its neighbours in the spatio-temporal graph. For each target point, we learn a message for each edge by processing the concatenation of the target point dynamic feature (\(d_{i,t}^{l}\)); the neighbour point dynamic feature (\(d_{j,t^{\prime}}^{l}\)), where \(t^{\prime}\) can be either \(t\) or \(t-1\); and the coordinate difference (\(\Delta p_{ij}\)), spatial feature difference (\(\Delta s_{ij}\)) and temporal difference (\(\Delta t_{ij}\)) between the target and neighbour point. All the messages are aggregated into a single representation to update the target point dynamic features \(d_{i,t}^{l+1}\). The operation can be formalized as: \[m_{ij,t}^{l} =\Theta_{D}^{l}(d_{i,t}^{l};\ d_{j,t^{\prime}}^{l};\ \Delta p_{ij};\ \Delta s_{ij};\ \Delta t_{ij}) \tag{7}\] \[d_{i,t}^{l+1} =\bigoplus_{j\in\mathcal{E}_{i}^{\text{ST}}}\left\{m_{ij,t}^{l}\right\} \tag{8}\] The learned spatial features are used not only to connect points with similar spatial characteristics in _both_ the present and past frame but are also directly incorporated in the graph-RNN convolution. As a result, the graph-RNN learns a point's dynamic behaviour taking into account the structural relations to neighbouring points. This inclusion of point spatial features in the graph-RNN cell convolution allows the network to learn more representative dynamic features and helps to preserve the predicted point cloud shape.

Figure 6. Spatio-temporal graph \(G_{st}\), with some temporal edges coloured in red. The dashed box depicts the difference between building \(G_{st}\) using spatial features or using point coordinates.

### Addressing Limitation 2: Adaptative Feature Combination

We now address the current framework's limitation in generating complex motions, caused by the fixed combination of dynamic features in the FP phase. To overcome the issue, we propose to replace the FP modules with an attention-based module denoted _Adaptative feature combination_, represented in detail in Fig. 7. Instead of using a fixed combination, the proposed module dynamically assigns an attention value to each level based on the learned features. This attention value determines the amount of influence each level will have on the predicted motion of the point. In detail, given an architecture with \(L\) hierarchical levels (\(L=3\) in the example in Fig. 7), the proposed _Adaptative feature combination_ module takes as input the dynamic features (\(D_{t}^{1}\), \(D_{t}^{2}\), \(...\), \(D_{t}^{L}\)) learned in the DE phase and combines them into a single final dynamic feature (\(D_{t}^{\text{Final}}\)). However, we recall that each RNN cell is preceded by a downsampling module, hence each feature needs to be up-sampled before being combined. To do this, the proposed module first interpolates the dynamic features to the same number of points as the first level and processes each independently through a refinement layer \(\Theta_{R}^{l}\), to ensure the features are on a similar scale, as follows:
\[\psi(d_{i,t}^{l})=\sigma\left(\Theta_{R}^{l}\left\{d_{i,t}^{l}\right\}\right) \tag{9}\] where \(d_{i,t}^{l}\) are the features interpolated to the original number of points, \(\psi(d_{i,t}^{l})\) are the output refined features and \(\sigma\) is the activation function. To learn the scalar attention values \(\alpha_{i,t}^{l}\), the network concatenates the refined features from all levels and processes them through learnable parameters \(\Theta_{\alpha}^{l}\) as follows: \[\alpha_{i,t}^{l}=\sigma\left(\Theta_{\alpha}^{l}\{\psi(d_{i,t}^{1});\psi(d_{i,t}^{2});\psi(d_{i,t}^{3})\}\right) \tag{10}\] The refined dynamic features \(\psi(d_{i,t}^{l})\) are then multiplied by their respective attention value. Hence, the \(\alpha\) value reflects the _influence_ that the learned feature has on the predicted motion, allowing the network to adjust the contribution of each level to the predicted motion. Namely, \[\Psi(d_{i,t}^{l})=\psi(d_{i,t}^{l})\times\alpha_{i,t}^{l} \tag{11}\] Lastly, the dynamic features after the attention module \(\Psi(d_{i,t}^{l})\) are combined by a single learnable layer (\(\Theta_{FC}\)) into the final dynamic features \(d_{i,t}^{\text{Final}}\in D_{t}^{\text{Final}}\): \[d_{i,t}^{\text{Final}}=\sigma\left(\Theta_{FC}\{\Psi(d_{i,t}^{1});\Psi(d_{i,t}^{2});\Psi(d_{i,t}^{3})\}\right) \tag{12}\]

Figure 7. **Adaptative Feature Combination Module.** Given a point cloud prediction framework with three hierarchical levels, the module takes as input the dynamic features \(D_{t}^{1},D_{t}^{2},D_{t}^{3}\) and outputs a single final dynamic feature \(D_{t}^{\text{Final}}\).

#### 4.2.1. Explainability of the Adaptative feature combination module

A key benefit of the _Adaptative feature combination_ module is that its underlying mechanism can be visualized and explained. This can be seen in Fig. 8, which illustrates how the proposed module combines dynamic features to produce motion vectors given two point cloud sequences (_Man-Running_ and _Woman-Dancing_). For each sequence, Fig. 8 depicts: the PCA of the dynamic features learned at the DE phase; the learned attention values per point; and the individual motion vectors1 produced at each level in the proposed _Adaptative_ architecture and in the _Classic-FP_ architecture (previously presented in Section 2.2 and Fig. 2). Footnote 1: For the sake of image clarity, the motion vectors were uniformly sampled. In the _Man-Running_ sequence depicted in Fig. 8 (a), at the first level the network assigns high attention values (\(\alpha_{t}^{1}\)) to the arms and low attention values to the points in the rest of the body. As a result, the predicted motion of the points in the arms is heavily influenced by local motions, while in the rest of the body the local motions have a very small influence on the prediction. The network exhibits similar selective behaviour at the second level, assigning higher attention to the points in the left foot, increasing the influence that the dynamic features \(D_{t}^{2}\) have on the motion of the foot. In the third and final level, the network learned non-zero attention values \(\alpha_{t}^{3}\) for the majority of the body. As a result, in the _Man-Running_ sequence, the global motion is the primary contributor to the predicted motion of the points, with the exception of the arm and the foot regions, where the prediction is given by a combination of motions from multiple levels.
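As a functional illustration of Eqs. (9)-(12), the sketch below refines the interpolated features of each level, derives a per-point scalar attention for every level from their concatenation, and fuses the rescaled features into the final dynamic feature. ReLU and sigmoid are used as illustrative choices for the generic activation \(\sigma\), and the weight shapes are assumptions rather than the released implementation.

```python
import numpy as np

def relu(x): return np.maximum(x, 0.0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

def adaptative_combination(D_interp, Th_R, Th_a, Th_FC):
    """Sketch of Eqs. (9)-(12): refine each level's (already interpolated)
    features, compute a per-point scalar attention for each level from the
    concatenation of refined features, rescale, and fuse into D_final."""
    refined = [relu(D_l @ W) for D_l, W in zip(D_interp, Th_R)]     # Eq. (9)
    concat = np.concatenate(refined, axis=1)                        # N x (L*C)
    attention = [sigmoid(concat @ w) for w in Th_a]                 # Eq. (10), each N x 1
    weighted = [r * a for r, a in zip(refined, attention)]          # Eq. (11)
    return relu(np.concatenate(weighted, axis=1) @ Th_FC)           # Eq. (12)

N, C, L = 1000, 128, 3
rng = np.random.default_rng(0)
D_interp = [rng.normal(size=(N, C)) for _ in range(L)]   # interpolated features per level
Th_R = [0.01 * rng.normal(size=(C, C)) for _ in range(L)]
Th_a = [0.01 * rng.normal(size=(L * C, 1)) for _ in range(L)]
Th_FC = 0.01 * rng.normal(size=(L * C, C))
print(adaptative_combination(D_interp, Th_R, Th_a, Th_FC).shape)    # (1000, 128)
```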
Similar considerations can be derived from the second example, _Woman-Dancing_, in which the learned global motions are an accurate descriptor for the majority of the points, except for certain regions with more local movements. The _Adaptative feature combination_ module is able to distinguish between regions and to properly combine the different levels of motion based on this distinction. It is worth noting that different attention values are learned for the _Man-Running_ and _Woman-Dancing_ sequences, demonstrating the network's ability to adapt the attention according to the characteristics of the input data. In summary, the proposed _Adaptative feature combination_ module combines features in an adaptive manner, allowing it to **control the composition of global and local motions** that best describes the motion of each point. This adaptive operation can be understood and explained through visualization, which may be beneficial for future research on developing more expressive architectures.

Figure 8. _Adaptative Features Combination_ operation. Example of how the proposed module adaptively combines local and global motion for different points, and comparison with the motion obtained with _Classic-FP_.

## 5. Implementation

In this section, we describe the datasets and implementation details of our proposed method. To ensure reproducibility, we release the dataset and the source code of our architecture as well as the benchmarking methods2.

### Datasets

In our experiments, we considered the following datasets:

**Mixamo Human Bodies Activities**: Synthetic human motions generated following (Nagolov et al., 2017), using the online service Mixamo (Mikamo et al., 2018) and the Blender software (Blender et al., 2018). Despite being synthetic, the dataset provides an accurate representation of real-world movements. We create \(152\) test sequences and \(9,375\) training sequences (further augmented by randomly changing movement direction, speed, and the body starting position during training). Each training sequence consists of approximately \(50\) frames, and for each sequence we sample \(T=12\) consecutive frames as inputs to the model during training. Similarly, the testing sequences are composed of \(12\) frames. Each frame in the dataset contains a point cloud consisting of \(1,000\) points, which we found to be sufficient for capturing a rich and detailed representation of the human body.

**CWIPC-SXR Human motions** (Kumar et al., 2019): Real-world human motions in social settings. The dataset consists of \(21\) dynamic sequences. For each sequence, we sampled the first \(60\) frames, downsampling the capture rate from \(30\) fps to \(10\) fps, resulting in \(21\) sequences of \(15\) frames each (\(T=15\)). Given its reduced size, this dataset is not used for training but only for testing. To ensure consistency with the training data (_Mixamo_), we downsampled the dataset to \(1,000\) points per frame before feeding it into the model.

**JPEG Pleno Voxelized Full Bodies** (Kumar et al., 2019): Real-world human bodies. The dataset is composed of four sequences known as longdress, loot, redandblack, and soldier. This dataset is not used for training, only for testing. Each sequence is downsampled to \(12\) frames (\(T=12\)) and \(1,000\) points.

**Moving MNIST Point Cloud**: Created by converting the MNIST dataset of handwritten digits into moving point clouds, as in previous works (Kumar et al., 2019). The sequences are generated by applying rigid motion at random to each digit.
Each sequence contains \(20\) frames (\(T=20\)) with either \(128\) (\(1\) digit) or \(256\) points (\(2\) digits). **Argoverse**(Blender et al., 2018): Large scale automotive dataset. We use the same train and test data as in PointRNN (Kumar et al., 2019). The dataset contains \(910\) training sequences and \(209\) test sequences. Each sequence contains \(20\) frames (\(T=20\)) and each frame is downsampled to \(1024\) points. **MSRAction3D**(Kumar et al., 2019): Real-world human motion performing annotated actions. The dataset consists of \(567\) Kinect depth videos, with \(20\) action categories. We sampled each point cloud to \(1024\) points, and use the same training and test conditions as works (Kumar et al., 2019; Kumar et al., 2020). ### Benchmarking This subsection outlines the tasks, as well as the state-of-the-art benchmark methods used for comparison. #### 5.2.1. Prediction Task In the prediction task, we consider both short-term and long-term predictions. In short-term prediction, at each iteration, the network takes as input the ground truth frame \(P_{t}\) to predict the next frame \(\hat{P}_{t+1}\). At the following prediction step, the network will be predicting \(\hat{P}_{t+2}\) having as input the ground truth \(P_{t+1}\). This is repeated till the end of the sequence. In long-term prediction, the predicted frame from the previous interaction \(\hat{P}_{t}\) is used as input to predict the next frame \(\hat{P}_{t+1}\). In long-term prediction only the later half (\(T/2\)) of sequence is predicted using this strategy. For benchmarking, since point cloud prediction of human bodies is a mostly an unexplored topic, the range of possible choices of baseline methods to compare our work is limited. Moreover, many of the existing point-based RNN point cloud prediction methods designed for automobile scenes do not provide the necessary materials to be replicated. Therefore, besides selecting the most related works available, we adapted several methods that, while not originally designed for point cloud prediction, are well-recognized in the field of point cloud sequence processing. For the point cloud prediction task, we consider the following as baseline models: (1) _Copy-Last-input model which simply copies the past point cloud frame instead of predicting it; (2) PointPWC-Net [45] a hierarchical point-based architecture to extract the motion flow between two frames; (3) FlowStep3D [15] a hierarchical point-based architecture to extract learned motion flow between two frames via RNN cells; (4) PSTNet [8] a hierarchical point-based architecture for action classification of human body sequences; (5) PointRNN [6] (\(k\)-NN): point-based RNN architecture presented in Section 2.2. Both PointPWC-Net and FlowStep3D were originally designed to learn the motion flow between two frames. To extend these two models to the task of predicting future frames, we incorporate a prediction phase into their architectures. This prediction phase refines the extracted motion flow via fully connected layers and calculates a predicted point cloud at the next time step. Similarly, the PST-Net architecture, designed for action classification, is adapted for the prediction task by adding an FP phase (with _Classic-FP_) to propagate the learned features to the original number of points, followed by a prediction phase to generate a prediction of the point cloud at the next timestep given the propagated features. 
To differentiate the adapted models from their original counterparts, we denote the adapted models for the prediction task as PointPWC-Net-_pred_, FlowStep3D-_pred_ and PST-Net-_pred_, respectively. #### 5.2.2. Action Classification Task To study the generalizability of the proposed AGAR framework for dynamic feature learning, we extended its application to the classification task. In this task, AGAR takes a point cloud sequence as input and outputs a classification score. To adapt AGAR for the classification task, we discarded the FP phase and the prediction phase. Instead, the dynamic features from the last level are max-pooled to form a global feature, which is used to generate the classification score. We denote this architecture adapted for classification tasks as AGAR-_cls_. Since human action classification from point cloud sequences is a well-studied problem, we compare AGAR-_cls_ to well-established methods such as MeteorNet [19], PSTNet [8] and P4Transformer [7] without adaptations. ### AGAR Architecture details For the prediction and classification tasks, we implemented the AGAR and AGAR-_cls_ (adapted for classification) architectures, each with three hierarchical levels (\(L=3\)). In both cases, the SS-GNN in the first level consists of three layers with \(64,128\), and \(128\) dimensions, respectively. Each level contains a graph-RNN cell that learns dynamic features with \(128\) dimensions. The number of nearest neighbours (\(k\)) is \(8\) for all graph-RNN cells. Between each graph-RNN cell, the point cloud is downsampled by a factor of \(4\). All the models are trained using the Adam optimizer, with a learning rate of \(10^{-4}\) for \(500,000\) iterations. In the training phase, we utilize a batch size of \(16\) for the Mixamo Human Bodies dataset, \(32\) for the MNIST dataset, \(4\) for the Argoverse dataset, and \(32\) for MSRAction3D. For all models, the gradients are clipped to the range \([-5,5]\). ### Training and Metrics The AGAR architecture has multiple end-to-end parameters, trained in a self-supervised fashion by comparing the predicted point cloud \(\hat{P}_{t+1}\) with the target point cloud \(P_{t+1}\). Unlike supervised methods [15; 19; 40; 45], which require the ground-truth motion flow to train the network, in a self-supervised setting the ground-truth data can be obtained from the input data itself. This technique allows us to train on datasets of deformable dynamic point clouds, such as human bodies datasets [5; 14; 30], where annotated ground-truth motion vectors are not available. #### 5.4.1. Training Metrics To measure the difference between the predicted point cloud and the ground-truth point cloud during training, we employ the commonly used chamfer distance (CD) [13] and earth mover's distance (EMD) [2]. These metrics are defined as follows: _Chamfer distance (CD):_ The CD measures the distance between each point in the predicted point cloud and its closest target point in the reference point cloud, and vice-versa. \[d_{CD}(P,\hat{P})=\frac{1}{n}\sum_{p\in P}\min_{\hat{p}\in\hat{P}}||p-\hat{p}||^{2}+\frac{1}{n}\sum_{\hat{p}\in\hat{P}}\min_{p\in P}||\hat{p}-p||^{2} \tag{13}\] _Earth mover's distance (EMD)_: The EMD solves an optimization problem, finding the optimal point-wise bijection mapping between two point clouds \(\theta:P\rightarrow\hat{P}\).
The EMD distance is then given by the distance of the points at both ends of this mapping, as follows: \[d_{EMD}(P,\hat{P})=\min_{\theta:P\rightarrow\hat{P}}\sum_{p\in P}||p-\theta(p) ||^{2}. \tag{14}\] Although the EMD and CD metrics are commonly used in point cloud analysis, they may not always provide an accurate measure of similarity. The CD only considers the nearest neighbour of a point and does not take into account the global distribution of points. On the other hand, EMD tries to find a unique mapping between two point clouds. However, in most cases a unique mapping is realistically impossible, resulting in a measurement that is rarely correct for all points. Since CD and EMD measure different notions of similarity with different shortcomings, we use a combination of both metrics as the loss function in order to make the loss function more robust, as follows: \[\mathcal{L}(P,\hat{P})=d_{CD}(P,\hat{P})+d_{EMD}(P,\hat{P}) \tag{15}\] #### 5.4.2. Evaluation Metrics To evaluate our model we used the CD and EMD metrics also used for training. However since CD and EMD measure the similarity between two point clouds by averaging the distance across all points, they tend to flatten their distance scores towards zero values. This is because in a point cloud, the majority of points are perfectly predicted (either no motion or little motion), and most of the high prediction errors are concentrated in small areas of high or complex motion. Therefore to better evaluate the model's ability to predict complex motions, besides the CD and EMD we also consider the following additional evaluation metric, defined as: _Chamfer distance of the top %5 worst points (CD Top %5)_: This metric returns the average CD distance of the 5% of points with the worst predictions (i.e., points with the farthest distance to their closest point). We found that this CD Top %5 focuses on the regions where the body performs complex motions and provides the best correlation with the visual quality. To the best of our knowledge, we are the first to work to present results using CD top 5% metric. ## 6. Experimental Results In this section, we present and discuss the results of our proposed AGAR method, described in Section 4 for each task and dataset. We begin by presenting and discussing the results point cloud prediction of human body motions, which is the main goal of this paper. Next, we present the experimental results for the prediction of rigid point clouds (i.e., moving digits and automobile scenes). This is followed by the results for action classification on human body motions. Lastly, we present an ablation study on the prediction of human body motions. ### Prediction of Synthetic Human Bodies Motions - Mixamo human bodies The short-term prediction results from _Mixamo_ dataset of human body activities can be found in Table 2 and Fig. 9 depicts prediction examples for two sequences. In addition to evaluating the AGAR architecture with _Adaptive feature combination_ described in Section 4, we also evaluate a modified AGAR architecture where the _Adaptive feature combination_ is replaced by _Classic-FP_. The results in Table 2 show PointRNN and both variations of the AGAR architecture outperformed the remaining methods by a large margin, demonstrating the superiority of the RNN architecture for interactive prediction. Furthermore, both AGAR architectures consistently outperform PointRNN, achieving lower prediction error in all three metrics (CD, EMD, CD Top%5). 
Notably, the AGAR with _Adaptive feature combination_) achieves an EMD error of 58.2, surpassing PointRNN's EMD error of 68.0 with a 10.2 gain. This gain is especially significant for deformable objects since shape distortion has a high visual impact. This is particularly noticeable in the last frame (\(t=10\)) of the "Woman-Turning" sequence (in Fig. 9), where the AGAR prediction suffers less deformation compared to the PointRNN prediction. In the following, we analyze the improvement provided by each component of the proposed AGAR method to better understand the impact of each limitation on the prediction task. Figure 9: Example of prediction of human bodies activities on the Mixamo dataset. To understand the impact of combining features in an adaptative manner we compare the AGAR with _Adaptive feature combination_ and the AGAR with _Classic-FP_. Table 2 shows the AGAR with _Adaptive feature combination_ achieves a lower prediction error compared to the AGAR with _Classic-FP_. While the error improvement in terms of CD and EMD is relatively small, the CD Top 5% metric, which is more sensitive to local distortion, shows a clear improvement in the AGAR with _Adaptive feature combination_. The superior performance of adaptively combining dynamic features can also be seen by looking at the visual results in Fig.9. We can notice the AGAR with _Adaptive features combination_ predicts better specific regions such as the hands and the legs, which involve complex motions. This improvement is due to the module's ability to generate refined motion predictions required in these regions. **These results show the clear advantage of adaptively combining dynamic features to predict complex motions.** To understand the advantages of incorporating the structural relations between points when dynamic learning features, in Table 3 we compare: i) an AGAR architecture; ii) an AGAR model that does not learn spatial features (without the SS-GNN module). Hence does not take the structural relation between the point into account, when learning dynamic features; iii) an AGAR model that learns spatial features, but builds only a temporal graph i.e., a \(k\)-\(nn\) graph is built only connecting each point of the frame \(t\) with points in frame \(t-1\) (the total number of neighbours \(k=8\) remains the same for fairness). All three model variations have a _Classic-FP_ phase. The results show there is a relatively small gain in building a complete spatio-temporal graph but significant improvement by learning spatial features. It is worth noticing, that the _CD Top 5%_ (the most sensitive metric to point cloud local shape distortion) is significantly lower in the model that learns spatial features compared to the model that does not learn spatial features. 
This demonstrates that while both models are able to capture the overall motion, **the inclusion of spatial features in the DE phase significantly improves the accuracy and preservation of the predicted point cloud's shape.** \begin{table} \begin{tabular}{|c|c c c|} \hline \multicolumn{4}{|c|}{Mixamo} \\ \multicolumn{4}{|c|}{(Synthetic Human bodies dataset)} \\ \hline Model & CD & EMD & \multicolumn{2}{c|}{CD} \\ \hline Copy-Last-input & 0.1056 & 123.4 & 0.2691 \\ \hline PointPWC-Net-_pred_[45] & 0.09358 & 118.5 & 0.2601 \\ \hline FlowStep3D-_pred_[15] & 0.09153 & 115.6 & 0.2575 \\ \hline PSTNet-_pred_[8] & 0.08984 & 114.1 & 0.2556 \\ \hline PointRNN [6] & 0.00351 & 68.0 & 0.1593 \\ \hline \multirow{2}{*}{AGAR} & _Classic-FP_ & 0.00262 & 59.6 & 0.1412 \\ \cline{2-4} & _Adaptative_ & **0.00254** & **58.2** & **0.1346** \\ \hline \end{tabular} \end{table} Table 2: Point cloud prediction results on the Mixamo dataset \begin{table} \begin{tabular}{|c|c|c|c c c|} \hline \multicolumn{4}{|c|}{Mixamo} \\ \multicolumn{4}{|c|}{(Synthetic human bodies dataset)} \\ \hline Model & Type of graph & Spatial features & CD & EMD & \multicolumn{2}{c|}{CD} \\ \cline{2-5} & (i) spatio-temporal & ✓ & **0.00262** & **59.6** & **0.1410** \\ \cline{2-5} & (i) spatio-temporal & \(\times\) & 0.00341 & 67.0 & 0.1602 \\ \cline{2-5} & (ii) only temporal & ✓ & 0.00266 & 60.0 & 0.1417 \\ \hline \end{tabular} \end{table} Table 3: Comparison of three variations of the AGAR framework demonstrating gain from the including structural relations between points in the spatio-temporal graph. ### Prediction of Real Human Bodies Motions - JPEG and CWIPC-SXR dataset We now turn our focus to real-world human bodies datasets: the JPEG and CWIPC-SXR datasets. Since both the JPEG and CWIPC-SXR datasets are too small to train models, they are only used for the evaluation of the models trained on the Mixamo dataset. Table 4 depicts the short-term prediction results from real-world data from the JPEG dataset, and the CWIPC-SXR dataset. It can be noted, the _Copy-last-input_ has significantly lower prediction error in real-world datasets compared to the error on the Mixamo dataset. In the JPEG and CWIPC-SXR dataset, the point clouds were acquired from real test subjects only allowed to move in a small area, resulting in a lower magnitude of motion compared to the Mixamo dataset. Despite the lack of motion, the AGAR model is able to make accurate predictions and achieved the smallest prediction error across all metrics. The small improvement of the _Adaptive_ combination over the _Classic-FP_ can be attributed to the low magnitude of motion in the dataset. Importantly these results demonstrate that the AGAR model trained on synthetic human motions datasets can be effectively applied to real-world human motions datasets despite the large disparity in motion magnitudes between the two datasets. ### Prediction of Rigid Object - MNIST dataset Moving Digits The simplicity of representation and movements performed by the MNIST dataset makes it the ideal dataset to test the long-term prediction of the proposed AGAR method. Long-term prediction is when the network uses its output predictions at a time-step as input for the subsequent time-step. We present the prediction results for the MNIST dataset in Table 5, and prediction examples in Fig.10. Table 5 shows the AGAR model has superior prediction performance compared to the PointRNN model. This performance gap is particularly large for point clouds containing two digits. 
For point clouds with two digits, the PointRNN CD prediction error is 14.54, whereas the AGAR (_Classic-FP_) CD error is 1.67. This large gain is due to the AGAR's ability to learn spatial features, which allows it to understand the structure and discern the two distinct shapes. \begin{table} \begin{tabular}{|c|c c c|c c c|} \hline \multicolumn{7}{|c|}{JPEG and CWIPC-SXR} \\ \multicolumn{7}{|c|}{Real-world human bodies dataset} \\ \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{JPEG} & \multicolumn{3}{c|}{CWIPC-SXR} \\ \cline{2-7} & CD & EMD & \begin{tabular}{c} CD \\ Top 5\% \\ \end{tabular} & CD & EMD & \begin{tabular}{c} CD \\ Top 5\% \\ \end{tabular} \\ \hline Copy Last Input & 0.00118 & 42.0 & 0.09001 & 0.00295 & 43.2 & 0.12915 \\ \hline PointRNN & 0.00109 & 41.3 & 0.083461 & 0.00157 & 43.4 & 0.10973 \\ \hline AGAR & _Classic-FP_ & 0.00101 & 38.6 & 0.08172 & **0.00150** & 40.8 & **0.10655** \\ \hline \multicolumn{7}{|c|}{_Adaptative_} & **0.00095** & **37.4** & **0.07754** & 0.00155 & **39.8** & 0.10760 \\ \hline \end{tabular} \end{table} Table 4. Prediction error for the real-world human bodies datasets. \begin{table} \begin{tabular}{|c|c c c|c c|} \hline \multicolumn{7}{|c|}{MNIST} \\ \multicolumn{7}{|c|}{Dataset} \\ \hline \multirow{3}{*}{Method} & \multicolumn{3}{c|}{Long-Term prediction} \\ \cline{2-6} & \multicolumn{2}{c|}{1 digit} & \multicolumn{2}{c|}{2 digits} \\ \cline{2-6} & CD & EMD & CD & EMD \\ \hline Copy Last Input & 262.46 & 15.94 & 140.14 & 15.8 \\ \hline PointRNN & 2.25 & 2.52 & 14.54 & 6.42 \\ \hline \multirow{2}{*}{AGAR} & _Classic-FP_ & **0.88** & **1.52** & **1.67** & **2.60** \\ & _Adaptative_ & 0.96 & 1.60 & 1.75 & 2.62 \\ \hline \end{tabular} \end{table} Table 5. Prediction error on the MNIST dataset. This improvement can be seen in Figure 10, where all the evaluated models exhibit a progressive loss of shape; however, the AGAR model suffers from significantly less deformation compared to PointRNN. This visualization demonstrates that the AGAR is better at preserving the spatial structure over time, a direct effect of learning the point cloud spatial structure. Lastly, it can be noted that the AGAR model with _Adaptive feature combination_ and the one with _Classic-FP_ have similar prediction errors, as also seen in the example in Fig. 10. The reason is that in the moving digits dataset there are no complex motions (i.e., the digits perform simple rigid translations); as such, the control over the motion provided by the _Adaptative feature combination_ module is just unnecessary parameterization and does not translate into more accurate predictions. ### Prediction of Automobile Scenes - Argoverse dataset Table 6 shows the results of training and evaluating the AGAR model and the PointRNN baseline with the Argoverse automobile dataset. Not surprisingly, both methods achieved similar prediction errors. This was an expected result, as the characteristics of deformable bodies on which AGAR relies are not present in the automobile dataset. More specifically, the structural information in the data is not informative and reliable enough for the SS-GNN module to leverage when learning features. Similarly, the data does not exhibit complex motions that would require the _Adaptative feature combination_ module. Hence, the inclusion of both modules does not translate into a meaningful gain.
However, despite being designed for deformable objects, the results demonstrate that the proposed AGAR is still capable of processing and capturing the overall correct movement from point clouds of automobile scenes. \begin{table} \begin{tabular}{|c|c c|} \hline \multicolumn{3}{|c|}{Argoverse} \\ \multicolumn{3}{|c|}{(Automobile scenes dataset)} \\ \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Long-Term Prediction} \\ \cline{2-3} & CD & EMD \\ \hline Copy Last Input & 0.5812 & 1092.3 \\ \hline PointRNN & **0.2541** & 895.28 \\ \hline \multirow{2}{*}{AGAR} & _Classic-FP_ & 0.2680 & **875.22** \\ \cline{2-3} & _Adaptative_ & 0.2839 & 893.24 \\ \hline \end{tabular} \end{table} Table 6. Prediction error for the Argoverse dataset. Figure 10. Long-term prediction examples of MNIST sequences. ### Action Recognition of Human Motions - MSR3DAction Dataset Table 7 presents the results of the action recognition task on the MSRAction dataset. As described in Section 5.2, here we compare AGAR-_cls_ with multiple well-known methodologies optimized for action classification. In the table, we provide the accuracy of different methods given input point cloud sequences of \(4,8,12,16,24\) frames. When looking at shorter sequences (less than \(12\) frames), the proposed AGAR-_cls_ outperformed state-of-the-art methods. Notably, for sequences of \(8\) frames, AGAR-_cls_ achieved \(87.2\%\) accuracy, a \(5\%\) improvement over PSTNet and P4Transformer. However, this gain is lost for sequences longer than \(12\) frames, where both PSTNet and P4Transformer are slightly better than AGAR-_cls_. The reason for this decline in performance can be attributed to the RNN architecture of the AGAR-_cls_ framework. Accurate action recognition requires the model to retain information about early movements throughout the entire sequence. In AGAR-_cls_ this information is retained in the RNN hidden states. However, these states are continuously updated at each iteration; as a result, the older information is not retained as efficiently as in PSTNet, which processes all frames simultaneously. Despite this limitation, the results demonstrate the ability of AGAR-_cls_ to capture complex motions from human body point clouds, making it a promising model for action recognition tasks, especially for shorter sequences. Furthermore, the understanding of the dynamic features' role in capturing complex human motions presented in Section 3 can also provide valuable insight for action recognition. Understanding how the composition of local and global motions leads to a class prediction can help explain why certain actions are misclassified, leading to the design of more accurate architectures. \begin{table} \begin{tabular}{|c|c|c c c c|c c|c|} \hline \multicolumn{10}{|c|}{MSR Action} \\ \hline \multirow{3}{*}{Method} & \multirow{3}{*}{Input} & \multicolumn{6}{c|}{Accuracy} \\ \cline{3-8} & & \multicolumn{6}{c|}{\#Frames} \\ \cline{3-8} & & 1 & 4 & 8 & 12 & 16 & 18 & 20 & 24 \\ \hline Vieira _et al._[33] & depth & \multicolumn{6}{c|}{78.20} \\ \hline Klaser _et al._[16] & \multicolumn{6}{c|}{81.43} \\ \hline PointNet++ [29] & \multirow{3}{*}{point} & 61.61 & \multirow{3}{*}{88.53} & \multirow{3}{*}{88.21} & \multirow{3}{*}{88.50} \\ \cline{2-2} \cline{6-6} MeteorNet [19] & & & & & & & & \\ \cline{1-1} \cline{5-6} PSTNet [8] & & & & & & & & \\ \cline{1-1} \cline{5-6} P4Transformer [7] & & & & & & & & & \\ \cline{1-1} \cline{5-6} AGAR-_cls_ & & & & & & & & & \\ \hline \end{tabular} \end{table} Table 7.
Action recognition accuracy (%) on the MSR-Action3D dataset for \(4,8,12,16,24\) frames as input. \begin{table} \begin{tabular}{|c|c c c|} \hline \multicolumn{4}{|c|}{Mixamo} \\ \multicolumn{4}{|c|}{(Synthetic human bodies dataset)} \\ \hline Number of levels & CD & EMD & CD Top 5\% \\ \hline 1 & 0.00296 & 65.4 & 0.166 \\ \hline 2 & 0.00276 & 61.2 & 0.1461 \\ \hline 3 & **0.00262** & **59.6** & **0.1412** \\ \hline 4 & 0.00290 & 62.0 & 0.14745 \\ \hline \end{tabular} \end{table} Table 8. Effect of the number of hierarchical levels on the AGAR framework. ### Ablation Study To gain a deeper understanding of our proposed architecture, an ablation study is conducted on the Mixamo synthetic dataset for short-term prediction. Table 8, Table 9 and Table 10 show how each parameter influences the performance of the network. **The number of levels** (Table 8): The best results were achieved with an architecture with three hierarchical levels (\(L=3\)), showing that increasing the number of levels does not necessarily lead to superior performance. However, a minimum number of levels does positively impact the accuracy, confirming the importance of hierarchical learning. **Neighborhood size** (Table 9): The results show that an increasing number of neighbour points (\(k\)) improves the model performance. However, increasing the number of neighbours also significantly increases the memory required to train the model. This illustrates one of the main limitations of current deep learning frameworks, which is the high GPU memory requirement. This limitation was not addressed in this paper. **The downsampling factor** (Table 10): Given a point cloud with \(1,000\) points, downsampling by a factor of \(2\) at each level leads to the best results (i.e., \(500\), \(250\), and \(125\) points at levels \(1\), \(2\) and \(3\), respectively). Using a downsampling factor of \(1\) (i.e., no downsampling between levels) resulted in the worst performance, which was similar to the performance obtained using a single level (\(L=1\)). This demonstrates that the improvement gained from using hierarchical architecture levels is due to learning features from neighbourhoods at different scales. ## 7. Conclusion The goal of this paper is to improve current prediction frameworks for point clouds representing deformable 3D objects, with a focus on human body motions. To reach this goal, we investigated the current state-of-the-art point-based RNN prediction framework and identified its limitations when processing deformable shapes and complex motions present in deformable objects. To overcome these limitations, we propose an improved architecture for dynamic point cloud processing. This architecture includes an initial graph-based module that learns the structural relations of point clouds as spatial features. From the spatial features, we then construct spatio-temporal graphs. This module is followed by a hierarchy of graph-RNN cells that extract dynamic features from the spatio-temporal graphs, taking the learned structural relations between points into account.
Lastly, as a key novelty, we propose a module able to combine the dynamic features learned by the graph-RNN cells in an _adaptative_ manner. Our proposed module assigns a level of attention to each hierarchical feature in order to control the composition of local and global motion that best describes each point's motion. Notably, the inner workings of the adaptive combination module can be visualized and understood, opening the door for future research to gain insights and develop more expressive architectures. Our experimental results demonstrate the superiority of the proposed architecture in motion prediction and in action classification of deformable objects. We also showed that this improvement is due to the method's ability to exploit the spatial structure of the point cloud to extract more representative dynamic features, as well as the adaptive combination of the dynamic features to predict complex motions. ###### Acknowledgements. This work has been partially funded by CISCO under the Academic Donation Scheme and by the EPSRF-SFI grant EP/T03324X/1.
2308.08275
Generalized parton distributions of gluon in proton: a light-front quantization approach
We solve for the gluon generalized parton distributions (GPDs) inside the proton, focusing specifically on leading twist chiral-even GPDs. We obtain and employ the light-front wavefunctions (LFWFs) of the proton from a light-front quantized Hamiltonian with Quantum Chromodynamics input using basis light-front quantization (BLFQ). Our investigation incorporates the valence Fock sector with three constituent quarks and an additional Fock sector, encompassing three quarks and a dynamical gluon. We examine the GPDs within impact parameter space and evaluate the $x$-dependence of the transverse square radius. We find that the transverse size of the gluon at lower-$x$ is larger than that of the quark, while it exhibits opposite behavior at large-$x$. Using the proton spin sum rule, we also determine the relative contributions of quarks and the gluon to the total angular momentum of the proton.
Bolang Lin, Sreeraj Nair, Siqi Xu, Zhi Hu, Chandan Mondal, Xingbo Zhao, James P. Vary
2023-08-16T10:31:11Z
http://arxiv.org/abs/2308.08275v1
# Generalized parton distributions of gluon in proton: a light-front quantization approach ###### Abstract We solve for the gluon generalized parton distributions (GPDs) inside the proton, focusing specifically on leading twist chiral-even GPDs. We obtain and employ the light-front wavefunctions (LFWFs) of the proton from a light-front quantized Hamiltonian with Quantum Chromodynamics input using basis light-front quantization (BLFQ). Our investigation incorporates the valence Fock sector with three constituent quarks and an additional Fock sector, encompassing three quarks and a dynamical gluon. We examine the GPDs within impact parameter space and evaluate the \(x\)-dependence of the transverse square radius. We find that the transverse size of the gluon at lower-\(x\) is larger than that of the quark, while it exhibits opposite behavior at large-\(x\). Using the proton spin sum rule, we also determine the relative contributions of quarks and the gluon to the total angular momentum of the proton. keywords: Light-front quantization, Dynamical gluon, Gluon GPDs, Total angular momentum, Squared radius + Footnote †: journal: ## 1 Introduction One of the key challenges in hadron physics is to understand the precise mechanisms by which the nonperturbative structure of the nucleon arises from the theory of quantum chromodynamics (QCD). A quintessential tool in the investigation of hadron structure is the parton distribution functions (PDFs) which encode information about the longitudinal momentum fraction carried by the active parton. Although PDFs have been utilized widely, a more complete description of the nonperturbative structure of hadrons requires extension of the PDFs into higher dimensional distributions called the generalized parton distributions (GPDs) [1; 2; 3]. Alongside the transverse momentum dependent parton distributions (TMDs), the GPDs have been established as an integral component in nucleon tomography [4] by unveiling the 3-dimensional structure of the nucleon. In addition to the longitudinal momentum fraction (\(x\)), the GPDs also depend on the square of the total momentum transferred (\(t\)) and the longitudinal momentum transferred (\(\zeta\)), also called the skewness variable. In the forward limit \(t=0\) and \(\zeta=0\), the GPDs reduce to the one dimensional ordinary PDFs. The moments (integration over \(x\)) of GPDs correspond to nucleon form factors. When the skewness \(\zeta\) is zero, the Fourier transform of the GPDs with respect to the transverse momentum transfer \(\Delta_{\perp}\) yields the impact parameter dependent parton distributions (ipdpdfs) [5; 6]. These distributions show how partons of a particular longitudinal momentum are distributed in the transverse position, also known as the impact parameter \(b_{\perp}\) space. The ipdpdfs adhere to certain positivity restrictions and, unlike the GPDs, have a probabilistic interpretation [7]. Experimentally, GPDs are accessible via the hard exclusive reactions, such as the deeply virtual Compton scattering (DVCS) [2; 3; 8; 9], deeply virtual meson production (DVMP) [10; 11], wide-angle Compton scattering (WACS) [12; 13], and also single diffractive hard exclusive processes (SDHEPs) [14]. Jefferson Lab (JLab) has produced a significant amount of exclusive measurements through the use of extensive data sets [15; 16; 17; 18]. The forthcoming Electron-Ion Collider (EIC) [19; 20] at Brookhaven National Lab (BNL) and Electron-Ion Collider in China (EIcC) [21] are projected to generate extensive additional data. 
Besides experiments, theoretical studies have also made significant progress. Nevertheless, the nonperturbative properties of GPDs prohibit their direct computa tion from the first principles of QCD at the present time. While there have been some calculations of GPDs based on Euclidean-space lattice results, the methodology is still nascent in its development [22; 23; 24; 25]. Various nonperturbative methods have been utilized to examine the properties of GPDs from a more phenomenological perspective. These methods include the MIT bag model [26], the chiral quark-soliton model [27; 28], the light-front constituent quark model [29; 30; 31; 32], NJL model [33], the color glass condensate model [34], the Bethe-Salpeter approach [35; 36] and the meson cloud model [37; 38]. In comparison to quark distributions, the study of gluon GPDs is relatively limited. The gluon distributions have an impact on the calculated cross-section of a process dominated by the gluon-initiated channel. On the other hand, gluons have a considerable influence on the mass decomposition of the proton [39; 40; 41; 42; 43; 44; 45; 46]. In the study of deep inelastic scattering (DIS) processes, the gluon distributions and fragmentation functions also encode essential information about the proton [47]. The leading twist gluon GPDs in a quark model within perturbative QCD were shown in Ref. [48]. A parametrization of the chiral-even GPDs for gluons with non-zero skewness in a perturbative QCD framework were shown in Ref. [49]. With the light-cone spectator model, the leading twist gluon GPDs and the gluon angular momentum inside the proton were studied in Ref. [50]. The lowest two Mellin moments of the gluon GPDs and the gluonic contribution to the nucleon spin have been explored within a model based on the Rainbow-Ladder truncation of the Dyson-Schwinger equations (DSE) of QCD [51] as well as using lattice QCD at physical pion mass [52]. In this work, we calculate the leading twist chiral-even gluon GPDs within basis light-front quantization (BLFQ). BLFQ is a nonperturbative framework that is proving to be effective in solving problems related to relativistic many-body bound states in quantum field theories [53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64]. In the BLFQ framework we utilize an effective light-front Hamiltonian and solve for its mass eigenstates [53]. We take into account the baryon Fock sector, which includes one gluon (\(|qqqg\rangle\)), in addition to the valence Fock sector consisting of three quarks (\(|qqq\rangle\)). We consider the QCD light-front interaction [65] that applies to both of these Fock sectors, along with the model confining potentials that act in both the transverse and longitudinal directions [57]. We calculate the quark and gluon contribution to the total angular momentum (\(J_{q/g}\)) of the proton by using the spin sum rule that connects the moments of the GPDs to \(J_{q/g}\)[66]. We also investigate the gluon GPDs in the impact parameter space [5; 6] by Fourier transforming from the transverse \(\Delta_{\perp}\) to \(b_{\perp}\) space. ## 2 Proton LFWFs in the BLFQ framework In light-front quantum field theory, bound states are obtained by solving the mass eigenvalue equation \[\left(P^{+}P^{-}-P_{\perp}^{\,2}\right)|\Psi\rangle=M^{2}\left|\Psi\right\rangle, \tag{1}\] where \(P^{-}\), \(P^{+}\), \(P_{\perp}\) and \(M\) represent the light-front Hamiltonian, longitudinal momentum, transverse momentum and invariant mass, respectively. 
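In practice, BLFQ turns Eq. (1) into a finite Hermitian matrix eigenvalue problem by expanding the state in a truncated basis; the lowest eigenvalue then plays the role of \(M^{2}\) for the ground state. The following toy sketch is purely illustrative: the random symmetric matrix stands in for the actual BLFQ matrix elements, which require the basis, interactions and truncations described below.

```python
import numpy as np

# Toy stand-in for the matrix of P^+ P^- - P_perp^2 in a truncated basis of dimension n.
# The entries are placeholders, NOT BLFQ matrix elements.
rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                      # Hermitian (here real symmetric) by construction

eigvals, eigvecs = np.linalg.eigh(H)   # diagonalize the truncated problem
M2_ground = eigvals[0]                 # lowest eigenvalue ~ M^2 of the ground state
ground_state = eigvecs[:, 0]           # its eigenvector collects the LFWF amplitudes
```

The physical content, of course, lies entirely in how the basis is constructed and how the matrix elements of \(P^{-}\) are evaluated, which is what the remainder of this section specifies.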
At constant light-front time \(x^{+}\equiv x^{0}+x^{3}\), a baryon state is expressed using various Fock sectors: \[\left|\Psi\right\rangle=\psi_{qqq}\left|qqq\right\rangle+\psi_{qqqg}\left|qqqg \right\rangle+\cdots \tag{2}\] \(\psi_{\cdots}\) denotes the light-front amplitudes associated with the Fock component \(\left|\cdots\right\rangle\). Numerical calculations require truncating Fock sector expansions to a countable Fock space via Eq. (2), in this case including one dynamical gluon. Thus, at the model scale, the proton is described by light-front amplitudes of valence quarks \(\psi_{uud}\) and three quarks with one dynamical gluon \(\psi_{uudg}\). We use a light-front Hamiltonian, \(P^{-}=P^{-}_{0}+P^{-}_{I}\), in which \(P^{-}_{0}\) refers to the light-front QCD Hamiltonian associated with the \(|qqq\rangle\) and \(|qqqg\rangle\) Fock states of the proton, while \(P^{-}_{I}\) denotes a model Hamiltonian for the confining interaction potential. We truncate the light-front QCD Hamiltonian to a single dynamical gluon Fock sector by employing the gauge \(A^{+}=0\)[63; 65; 67]: \[\begin{split} P^{-}_{0}=&\int\mathrm{d}x^{-} \mathrm{d}^{2}x^{\perp}\Big{\{}\frac{1}{2}\bar{\psi}\gamma^{+}\frac{m_{0}^{2} +(i\partial^{\perp})^{2}}{i\partial^{+}}\psi\\ &+\frac{1}{2}A^{i}_{a}\left[m_{g}^{2}+(i\partial^{\perp})^{2} \right]A^{i}_{a}+g_{e}\bar{\psi}\gamma_{\mu}T^{a}A^{\mu}_{a}\psi\\ &+\frac{1}{2}g_{e}^{2}\bar{\psi}\gamma^{+}T^{a}\psi\frac{1}{(i \partial^{+})^{2}}\bar{\psi}\gamma^{+}T^{a}\psi\Big{\}}.\end{split} \tag{3}\] Here, \(\psi\) and \(A^{\mu}\) correspond to the quark and gluon fields, respectively, and \(T^{a}\) represents one half times the Gell-Mann matrix, expressed as \(T^{a}=\lambda^{a}/2\). We denote the bare quark mass by \(m_{0}\) and the model gluon mass by \(m_{g}\). In Eq. (3), the initial two terms represent the kinetic energies of the quark and gluon, respectively. The final two terms in Eq. (3) correspond to the vertex interaction and the instantaneous interaction, both of which involve the coupling constant \(g_{c}\). Although the gluon mass is identically zero in QCD, an effective gluon mass, motivated in part by the Renormalization Group Project for Effective Particles [68], is employed in our model to reproduce the nucleon mass and form factors (FFs) [67]. We incorporate a mass counter term, \(\delta m_{q}=m_{0}-m_{q}\), for quarks in the leading Fock sector to accommodate quark mass corrections arising from fluctuations into higher Fock sectors, where \(m_{q}\) represents the renormalized quark mass. In the case of the vertex interaction, we introduce an independent quark mass, \(m_{f}\), in accordance with Ref. [69; 70]. The confining interaction potential within the leading Fock sector, encompassing both transverse and longitudinal confining potentials, can be expressed as follows [57; 63]: \[P_{I}^{-}P^{+}=\frac{\kappa^{4}}{2}\sum_{i\neq j}^{3}\left\{{r_{ij\perp}}^{2}- \frac{\partial_{x_{i}}\left(x_{i}x_{j}\partial_{x_{j}}\right)}{\left(m_{i}+m_{ j}\right)^{2}}\right\}. 
\tag{4}\] Here, \(\kappa\) represents the confinement strength, and \(r_{ij\perp}=\sqrt{x_{i}x_{j}}(r_{i\perp}-r_{j\perp})\) is the relative transverse coordinate between partons \(i\) and \(j\). In the definitions of the gluon GPDs, \(G^{+\mu}(x)=\partial^{+}A^{\mu}(x)\) represents the gluon field tensor in the light-cone gauge, while the dual field strength is given by \(\widetilde{G}^{\alpha\beta}(x)=\frac{1}{2}e^{\alpha\beta\gamma\delta}G_{\gamma \delta}(x)\). The momenta of the proton initial and final states are denoted by \(P\) and \(P^{\prime}\), respectively, with their corresponding helicities represented as \(\Lambda\) and \(\Lambda^{\prime}\). The light-front spinor for the proton is denoted by \(u(P,\Lambda)\). The momentum transfer is expressed as \(\Delta^{\mu}={P^{\prime}}^{\mu}-P^{\mu}\), while the skewness is given by \(\zeta=-\frac{\Delta^{+}}{2P^{+}}\). We use a symmetric frame wherein \({P^{\prime}}^{\perp}=\Delta^{\perp}/2\) and \(P^{\perp}=-\Delta^{\perp}/2\). The average momentum is \(\tilde{P}^{\mu}=(P^{\mu}+{P^{\prime}}^{\mu})/2\). In this work, we consider the zero skewness case (\(\zeta=0\)) and thus only study the kinematic region corresponding to \(\zeta<x<1\), also called the DGLAP region. The invariant momentum transfer in the process is denoted by \(t=\Delta^{2}\), and for zero skewness we have \(t=-\Delta_{\perp}^{2}\). Note that one has to consider nonzero skewness to compute \(\widetilde{E}^{g}\).
The remaining three chiral-even gluon GPDs at leading twist can be represented in terms of diagonal (\(N\to N\)) overlap of LFWFs as follows: \[H^{g}\left(x,0,t\right) =\sum_{\{\lambda_{i}\}}\int\left[\mathrm{d}\mathcal{X}\,\mathrm{ d}\mathcal{P}_{\perp}\right]\delta(x-x_{1})\] \[\times\,\Psi_{4,\{\lambda_{i}\}}^{\uparrow*}(\{x_{i}^{\prime},{p^ {\prime}}_{i\perp}\})\Psi_{4,\{\lambda_{i}\}}^{\uparrow}(\{x_{i},{p}_{i\perp} \}), \tag{9}\] \[E^{g}\left(x,0,t\right) =\frac{2M}{\Delta_{1}-i\Delta_{2}}\sum_{\{\lambda_{i}\}}\int \left[\mathrm{d}\mathcal{X}\,\mathrm{d}\mathcal{P}_{\perp}\right]\delta(x-x_{1})\] \[\times\,\Psi_{4,\{\lambda_{i}\}}^{\uparrow*}(\{x_{i}^{\prime},{p^ {\prime}}_{i\perp}\})\Psi_{4,\{\lambda_{i}\}}^{\downarrow}(\{x_{i},{p}_{i\perp }\}), \tag{10}\] \[\widetilde{H}^{g}\left(x,0,t\right) =\sum_{\{\lambda_{i}\}}\int\left[\mathrm{d}\mathcal{X}\,\mathrm{ d}\mathcal{P}_{\perp}\right]\delta(x-x_{1})\] \[\times\,\lambda_{4}\Psi_{4,\{\lambda_{i}\}}^{\uparrow*}(\{x_{i}^ {\prime},{p^{\prime}}_{i\perp}\})\Psi_{4,\{\lambda_{i}\}}^{\uparrow}(\{x_{i}, {p}_{i\perp}\}), \tag{11}\] where the longitudinal and transverse moemmta of the struck parton are \(x_{1}^{\prime}=x_{1}\) and \({p^{\prime}}_{1\perp}={p}_{1\perp}+(1-x_{1})\Delta_{\perp}\) respectively. The \(M\) is the proton mass in Eq. (10). The spectator momenta are \(x_{i}^{\prime}=x_{i}\) and \({p^{\prime}}_{1\perp}={p}_{1\perp}+x_{1}\Delta_{\perp}\). The shorthand notation used for the integration measure is as follows: \[\left[\mathrm{d}\mathcal{X}\,\mathrm{d}\mathcal{P}_{\perp}\right] \equiv\prod_{i=1}^{N}\left[\frac{\mathrm{d}x_{i}\,\mathrm{d}^{2}k_{ i\perp}}{16\pi^{3}}\right]16\pi^{3}\delta\left(1-\sum_{i=1}^{N}x_{i}\right) \tag{12}\] \[\times\delta^{2}\left(\sum_{i=1}^{N}k_{i\perp}\right).\] We study the gluon GPDs in the impact parameter space by performing a two dimensional Fourier transform (FT) with respect to the transverse momentum transfer [6; 73] at zero skewness: \[\mathcal{F}\left(x,b_{\perp}\right)=\int\frac{\mathrm{d}^{2}\Delta_{\perp}}{ \left(2\pi\right)^{2}}e^{-\mathrm{i}\Delta_{\perp}b_{\perp}}F\left(x,\zeta=0,t =-\Delta_{\perp}^{2}\right), \tag{13}\] where \(F=(H^{g},E^{g},\widetilde{H}^{g})\). The impact parameter variable \(b_{\perp}\) in the transverse coordinate space is conjugate to the transverse momentum transfer \(\Delta_{\perp}\). \(b_{\perp}\) denotes the transverse separation between the struck parton and the nucleon's transverse center of momentum, expressed as \(\sum_{i}x_{i}b_{\perp i}\)[1] with summation over parton indices. The relative distance between the struck parton and the spectator partons' center of momentum is \(b_{\perp}/(1-x)\), enabling the estimation of the bound state's transverse size [74]. The Fourier transform of GPD \(H\), represented by the function \(\mathcal{H}\), is of particular interest as it characterizes the parton number density with longitudinal momentum fraction \(x\) at a specific transverse distance \(b_{\perp}\) inside the nucleon [5]. Thus, we can define the transverse parton density's \(x\)-dependent squared radius [75] \[\left\langle b_{\perp}^{2}\right\rangle^{q/g}(x)=\frac{\int\mathrm{d}^{2}b_{ \perp}\left(b_{\perp}\right)^{2}\mathcal{H}\left(x,0,b_{\perp}\right)}{\int \mathrm{d}^{2}b_{\perp}\mathcal{H}\left(x,0,b_{\perp}\right)}, \tag{14}\] which can also be written through the GPD \(H^{q/g}\left(x,0,t\right)\) as: \[\left\langle b_{\perp}^{2}\right\rangle^{q/g}(x)=-4\frac{\partial}{\partial(-t )}\mathrm{ln}H\left(x,0,t\right). 
\tag{15}\] Meanwhile the \(\mathcal{E}(x,b_{\perp})\) illustrates a deformation of the density of the unpolarized parton in the transversely polarized proton [6]. \(\widetilde{\mathcal{H}}\) is responsible for the density of longitudinally polarized parton in the unpolarized proton. We also determine the contribution of each parton flavor to the total spin of the proton by utilizing the spin sum rule. The nucleon spin sum rule relates the first moment of GPDs to the total angular momentum of the proton [66] \[J^{z}=\frac{1}{2}\int\mathrm{d}x\ x\left[H(x,0,0)+E(x,0,0)\right]. \tag{16}\] ## 4 Numerical results The transverse and longitudinal truncation parameters are set to \(\mathcal{N}=9\) and \(\mathcal{K}=16.5\), respectively, throughout the entire calculation. We choose the harmonic oscillator scale parameter \(b=0.70\) GeV and the UV cutoff for the instantaneous interaction \(b_{\mathrm{inst}}=3.00\) GeV. The model parameters of the Hamiltonian are \(\{m_{u},m_{d},m_{g},\kappa,m_{f},g_{c}\}=\{0.31,0.25,0.50,0.54,1.80,2.40\}\), with all values in GeV unit, except for \(g_{c}\). These values are obtained by fitting the proton mass (\(M\)), its electromagnetic properties, and flavor FFs [67]. With the above parameters set, we numerically solve the eigenvalue equation to obtain the corresponding boost-invariant LFWF, denoted by \(\Psi_{N,\lambda_{i}}^{M_{J}}(\{x_{i},p_{i}^{\perp}\})\) at the model scale \(\mu_{0}^{2}=0.23\sim 0.25\) GeV\({}^{2}\)[67]. It is important to note that LFWFs should exhibit parity symmetry (P); however, this is disrupted by Fock space truncation. Nevertheless, mirror parity, represented by \(\widehat{P}_{x}=\widehat{R}_{x}(\pi)P\)[76], can be used as a substitute for parity. When applying the mirror parity transformation, eigenvectors corresponding to \(M_{J}=-\frac{1}{2}\) can be derived from the eigenvector associated with \(M_{J}=\frac{1}{2}\) using the following relationship: \(\psi_{N}^{\downarrow}\left(\{x_{i},n_{i},m_{i},\lambda_{i}\}\right)=(-1)^{ \sum_{i}m_{i}+c(N)}\psi_{N}^{\uparrow}\left(\{x_{i},n_{i},-m_{i},-\lambda_{i}\}\right)\), where \(c(3)=1\) and \(c(4)=0\). We subsequently utilize the obtained LFWFs in conjunction with the expressions from Eqs. (9) to (11) to calculate the gluon GPDs in both momentum space and impact parameter space. ### The gluon GPDs in the momentum space In this study, we focus on the case of zero skewness and consequently omit the skewness argument when writing the GPDs. We present three-dimensional (3D) plots for the three non-zero chiral-even gluon GPDs, \(H_{g}(x,t)\), \(E_{g}(x,t)\), and \(\widetilde{H}_{g}(x,t)\), in Fig. 1. Both \(H_{g}(x,t)\) and \(\widetilde{H}_{g}(x,t)\) exhibit positive peaks along the \(-t\) direction, while \(E_{g}(x,t)\) displays negative peaks. The maximum peak value for all three distributions occurs at the forward limit \(-t=0.00~{}\text{GeV}^{2}\). Notably, the largest peak for \(H_{g}(x,t)\) is significantly greater than those for \(E_{g}(x,t)\) and \(\widetilde{H}_{g}(x,t)\). The maximum peak magnitudes for both \(E_{g}(x,t)\) and \(\widetilde{H}_{g}(x,t)\) are comparable. We also notice that the magnitudes of all the GPDs decrease and the peaks along \(x\) move towards larger values of \(x\) as the momentum transfer \(-t\) increases similar to that observed for the quark GPDs in the proton [77; 78; 79; 26; 80; 81; 82; 83; 84; 85; 86; 87] as well as in the light mesons [88; 89; 86; 87]. 
The forward limits of \(H_{g}(x,0)\) and \(\widetilde{H}_{g}(x,0)\) correspond to the ordinary spin-independent and spin-dependent gluon PDFs, respectively, which have been reported within our BLFQ approach in Ref. [67]. We note that \(E_{g}(x,t)\) decouples in the forward limit, as demonstrated in Eq. (10), so no such limit exists for \(E_{g}(x,t)\) [1]. When comparing with the results from the light-cone spectator model [50], a significant difference can be observed. Contrary to our case, where both \(H_{g}(x,t)\) and \(\widetilde{H}_{g}(x,t)\) represent positive definite quantities, the spectator model displays regions around small \(x\) that are negative. However, the qualitative nature of \(E_{g}(x,t)\) as shown in Ref. [50] aligns with our findings. The sum rule in Eq. (16), which pertains to the forward limit of the GPDs, relates to the z-component of the total angular momentum of partons within a nucleon polarized in the z-direction. In Fig. 2, we plot the distribution \(\mathcal{J}^{z}(x)=\frac{1}{2}x\left[H(x,0,0)+E(x,0,0)\right]\) as a function of \(x\), displaying results for both valence quarks and gluons. The \(u\) quark and the gluon contributions are positive, while the \(d\) quark contribution is predominantly negative across the \(x\) range. We present the \(J^{z}\) results for valence quarks and gluons in Table 1. The \(u\) quark exhibits a positive and dominant contribution, as shown in Fig. 2. The \(d\) quark provides a negative contribution, and the gluon contributes positively with a value of \(J^{z}_{g}\approx 0.066\) at the scale \(\mu_{0}^{2}=0.23\sim 0.25\) GeV\({}^{2}\), accounting for approximately 13% of the total \(J^{z}=0.5\). Note that at the scale 4 GeV\({}^{2}\), gluonic contributions to the proton total angular momentum of \(J_{g}^{z}\approx 0.194\) (38%) [51] and \(J_{g}^{z}\approx 0.187\) (37%) [52] have been reported by the DSE approach and the lattice QCD simulation, respectively. Figure 1: 3D plots for the three non-zero chiral-even gluon GPDs, \(H_{g}(x,t)\), \(E_{g}(x,t)\), and \(\widetilde{H}_{g}(x,t)\) as functions of \(x\) and \(-t\) at the scale \(\mu_{0}^{2}=0.23\sim 0.25~{}\text{GeV}^{2}\). All results are shown at zero skewness. Figure 2: Plot of the density function \(\mathcal{J}^{z}(x)\) vs \(x\) for valence quarks and the dynamical gluon. The black solid curve is for the \(d\)-quark, the dot-dashed magenta curve is for the \(u\)-quark and the blue dashed curve is for the gluon. ### The gluon GPDs in the impact parameter space The GPDs in the impact parameter space (IPS) provide insight into the distribution of partons with a specific longitudinal momentum fraction \(x\) within the transverse position or IPS variable \(b_{\perp}\). Unlike GPDs, these distributions in the IPS obey certain positivity conditions and can be given a probabilistic interpretation [6]. In Fig. 3, we present our results for the GPDs in the IPS, displaying them as functions of \(x\) and \(b_{\perp}\). We observe that the GPD \(\mathcal{H}_{g}(x,b_{\perp})\) satisfies positivity constraints, such that \(\mathcal{H}_{g}(x,b_{\perp})\geq 0\), thereby permitting a probabilistic interpretation [6]. The peak at \(b_{\perp}=0\) indicates the highest probability of finding a gluon with a momentum fraction of \(x=5/16.5\). As anticipated, the gluon density gradually decreases as we move away from the proton's center (\(b_{\perp}=0\)). The GPD \(\mathcal{E}_{g}(x,b_{\perp})\) is negative, displaying a negative peak for \(x=5/16.5\) at \(b_{\perp}=0\).
For a probabilistic interpretation, it is essential to consider amplitudes where the initial and final states share the same helicity. However, since the GPD \(E_{g}(x,t)\) in momentum space is associated with states possessing different helicities in the initial and final states (see Eq. (10)), developing a probabilistic interpretation for \(\mathcal{E}_{g}(x,b_{\perp})\) is challenging. Nevertheless, a density interpretation of \(\mathcal{E}_{g}(x,b_{\perp})\) is possible by considering the superposition of transversely localized nucleon states with opposite helicities [6]. The GPD \(\widetilde{\mathcal{H}}_{g}(x,b_{\perp})\) in Fig. 3 also obeys the positivity constraint. It represents the density difference between positive-helicity and negative-helicity gluons. For \(\widetilde{\mathcal{H}}_{g}(x,b_{\perp})\), we find that the peak at \(b_{\perp}=0\) resides at \(x=6/16.5\). We further notice in Fig. 3 that the width of all the GPDs in the transverse IPS decreases with increasing \(x\). This implies that the distributions are more concentrated and the gluon is more localized near the center of momentum (\(b_{\perp}=0\)) when it carries a higher longitudinal momentum. Meanwhile, the peaks of all the IPS GPDs move toward lower values of \(x\) as \(b_{\perp}\) increases. This characteristic of the GPDs in the \(b_{\perp}\)-space is reassuring, since the gluon GPDs in momentum space become wider in \(-t\) as \(x\) increases, as can be seen from Fig. 1. On the light front, we can understand this as follows: the larger \(x\), the smaller the kinetic energy carried by the gluon. As the total kinetic energy remains limited, the distribution in the transverse momentum broadens at higher longitudinal momentum fraction, reflecting the trend to carry a larger portion of the kinetic energy. As a consequence, these general features should be nearly model-independent characteristics of the GPDs and, indeed, they are also noticed in other theoretical investigations of the GPDs [6; 81; 82; 83; 84; 85; 80]. In Fig. 4, we illustrate the \(x\)-dependent squared radius of the quark and gluon densities in the transverse plane as a function of \(x\). The term \(\left\langle b_{\perp}^{2}\right\rangle^{q/g}(x)\) characterizes the transverse size of the hadron and demonstrates an increase in the transverse radius as the parton momentum fraction \(x\) decreases [91]. For a given value of \(x\), the transverse size of the \(u\) quark is slightly smaller than that of the \(d\) quark. At lower \(x\) values, the transverse size of the gluon is larger than that of the quark. However, at higher \(x\) values, the gluon's transverse size is smaller than the quark's. \begin{table} \begin{tabular}{c|c||c|c||c|c} \hline \hline \(J_{d}^{z}\) & \(-0.039\) & \(J_{u}^{z}\) & \(0.473\) & \(J_{g}^{z}\) & \(0.066\) \\ \hline \(d\%\) & \(-7.8\%\) & \(u\%\) & \(94.6\%\) & \(g\%\) & \(13.2\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: The partonic contributions to the total angular momentum of the proton. Figure 3: 3D plots for the FT of the gluon GPDs, \(\mathcal{H}_{g}(x,b_{\perp})\), \(\mathcal{E}_{g}(x,b_{\perp})\) and \(\widetilde{\mathcal{H}}_{g}(x,b_{\perp})\) in the impact parameter space as functions of \(x\) and the transverse impact parameter \(b_{\perp}\) at the scale \(\mu_{0}^{2}=0.23\sim 0.25\) GeV\({}^{2}\). The FT is with respect to the transverse momentum transfer at zero skewness.
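To illustrate how curves like those in Fig. 4 follow from Eq. (15), the squared radius can be evaluated from any parametrization of \(H(x,0,t)\) by a numerical \(t\)-derivative of \(\ln H\) near \(t=0\). The sketch below uses a simple dipole placeholder for \(H\) (not the BLFQ result) purely to show the mechanics; the conversion factor \((\hbar c)^{2}\simeq 0.0389\) GeV\({}^{2}\,\)fm\({}^{2}\) turns GeV\({}^{-2}\) into fm\({}^{2}\).

```python
import numpy as np

HBARC2 = 0.0389379  # (hbar*c)^2 in GeV^2 fm^2

def H_model(x, t, Lambda2=0.8):
    """Placeholder dipole GPD H(x,0,t); NOT the BLFQ result, just a toy profile."""
    q = x**0.5 * (1.0 - x)**3              # toy forward distribution q(x)
    return q / (1.0 - t / Lambda2)**2      # dipole t-dependence, with t <= 0

def b2_radius(x, du=1e-4):
    """<b_perp^2>(x) = -4 d ln H / d(-t) at t -> 0, via a forward finite difference in u = -t."""
    dlnH_du = (np.log(H_model(x, -du)) - np.log(H_model(x, 0.0))) / du
    return -4.0 * dlnH_du * HBARC2         # convert GeV^-2 to fm^2

for x in (0.1, 0.3, 0.6):
    print(f"x = {x:.1f}  <b_perp^2> ~ {b2_radius(x):.3f} fm^2")
```

For the dipole placeholder the finite difference reproduces the analytic value \(8/\Lambda^{2}\) (times \((\hbar c)^{2}\)) independently of \(x\); a realistic \(H\) with an \(x\)-dependent \(t\)-slope produces the decrease with \(x\) seen in Fig. 4.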
For the \(x\)-dependent squared radius of the proton distributions, we compare the BLFQ prediction with the available data extracted from the DVCS process within the range \(0.05\leq x\leq 0.2\) [91]. The \(\left\langle b_{\perp}^{2}\right\rangle(x)\) describes the transverse size of the nucleon and decreases for both the up and down quarks with increasing value of the quark momentum fraction \(x\). We evaluate the proton's transverse squared radius by combining the PDFs \(f^{q}(x)\) following Ref. [91], \[\left\langle b_{\perp}^{2}\right\rangle=\sum_{q}e_{q}\int_{0}^{1}\mathrm{d}xf^ {q}(x)\left\langle b_{\perp}^{2}\right\rangle^{q}(x). \tag{17}\] In our approach, we obtain the squared radius of the proton, \(\left\langle b_{\perp}^{2}\right\rangle=0.473\) fm\({}^{2}\), around 10% above the experimental value [91]: \(\left\langle b_{\perp}^{2}\right\rangle_{\mathrm{exp}}=0.43\pm 0.01\) fm\({}^{2}\). ## 5 Summary In this work, we compute the leading twist chiral-even gluon GPDs of the proton utilizing the BLFQ framework. This is achieved by numerically solving the light-front bound state eigenvalue equation, using an effective QCD Hamiltonian incorporating three-dimensional confinement in the leading Fock sector and fundamental QCD interactions for the one dynamical gluon Fock sector. We analyze the three non-zero chiral-even GPDs at zero skewness, both in momentum space and impact parameter space. Our results show that \(H_{g}\) and \(\widetilde{H}_{g}\) are positive, while \(E_{g}\) is negative. The peak magnitudes of \(E_{g}\) and \(\widetilde{H}_{g}\) are similar, whereas \(H_{g}\) exhibits a notably larger peak magnitude. Comparable observations are made in the impact parameter space, where \(\mathcal{H}_{g}\) and \(\widetilde{\mathcal{H}}_{g}\) satisfy positivity constraints, thus offering a density interpretation. We perform a comparison study of the transverse squared radius, \(\left\langle b_{\perp}^{2}\right\rangle^{q/g}(x)\), for both the gluon and the quarks, as a function of the longitudinal momentum fraction \(x\). As anticipated, the transverse size decreases with increasing \(x\); as \(x\to 1\), the proton behaves like a point-like object in the transverse plane. Additionally, using the nucleon spin sum rule, we calculate the gluon and quark contributions to the proton's total angular momentum. The \(u\) quark provides the dominant contribution, while the gluon's contribution is \(J_{g}=0.066\). The investigation of nonzero skewness GPDs as well as the chiral-odd sector will be the subject of future research. ## Acknowledgements We thank Zhimin Zhu, Ziqi Zhang and Yiping Liu for helpful discussions at the Institute of Modern Physics, University of Chinese Academy of Sciences. S. N. and C. M. thank the Chinese Academy of Sciences Presidents International Fellowship Initiative for their support via Grants No. 2021PM0021 and 2021PM0023, respectively. C. M. is supported by new faculty start-up funding from the Institute of Modern Physics, Chinese Academy of Sciences, Grant No. E129952YR0. X. Z. is supported by new faculty start-up funding from the Institute of Modern Physics, Chinese Academy of Sciences, by the Key Research Program of Frontier Sciences, Chinese Academy of Sciences, Grant No. ZDBS-LY-7020, by the Natural Science Foundation of Gansu Province, China, Grant No. 20JR10RA067, by the Foundation for Key Talents of Gansu Province, by the Central Funds Guiding the Local Science and Technology Development of Gansu Province, Grant No.
22ZY1QA006, by the Gansu International Collaboration and Talents Recruitment Base of Particle Physics (2023-2027), by the International Partnership Program of the Chinese Academy of Sciences, Grant No. 016GJHZ2022103FN, and by the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB34000000. J. P. V. is supported by the Department of Energy under Grants No. DE-SC0023692 and DE-SC0023707. A major portion of the computational resources was also provided by the Sugon Advanced Computing Center. Figure 4: Plot of the \(x\)-dependent squared radius \(\left\langle b_{\perp}^{2}\right\rangle(x)\) of the parton density in the transverse plane. The black solid curve is for the \(d\)-quark, the dot-dashed magenta curve is for the \(u\)-quark and the blue dashed curve is for the gluon.
2310.12701
Parity Games on Temporal Graphs
Temporal graphs are a popular modelling mechanism for dynamic complex systems that extend ordinary graphs with discrete time. Simply put, time progresses one unit per step and the availability of edges can change with time. We consider the complexity of solving $\omega$-regular games played on temporal graphs where the edge availability is ultimately periodic and fixed a priori. We show that solving parity games on temporal graphs is decidable in PSPACE, only assuming the edge predicate itself is in PSPACE. A matching lower bound already holds for what we call punctual reachability games on static graphs, where one player wants to reach the target at a given, binary encoded, point in time. We further study syntactic restrictions that imply more efficient procedures. In particular, if the edge predicate is in $P$ and is monotonically increasing for one player and decreasing for the other, then the complexity of solving games is only polynomially increased compared to static graphs.
Pete Austin, Sougata Bose, Patrick Totzke
2023-10-19T12:53:57Z
http://arxiv.org/abs/2310.12701v4
# Parity Games on Temporal Graphs ###### Abstract Temporal graphs are a popular modelling mechanism for dynamic complex systems that extend ordinary graphs with discrete time. Simply put, time progresses one unit per step and the availability of edges can change with time. We consider the complexity of solving \(\omega\)-regular games played on temporal graphs where the edge availability is ultimately periodic and fixed a priori. We show that solving parity games on temporal graphs is decidable in PSPACE, only assuming the edge predicate itself is in PSPACE. A matching lower bound already holds for what we call _punctual_ reachability games on static graphs, where one player wants to reach the target at a given, binary encoded, point in time. We further study syntactic restrictions that imply more efficient procedures. In particular, if the edge predicate is in P and is monotonically increasing for one player and decreasing for the other, then the complexity of solving games is only polynomially increased compared to static graphs. Keywords:Temporal graphs Reachability Games Complexity Timed automata ## 1 Introduction Temporal graphs are graphs where the edge relation changes over time. They are often presented as a sequence \(G_{0},G_{1},\ldots\) of graphs over the same set of vertices. We find it convenient to define them as pairs \(G=(V,E)\) consisting of a set \(V\) of vertices and associated edge availability predicate \(E:V^{2}\to 2^{\mathbb{N}}\) that determines at which integral times a directed edge can be traversed. This model has been used to analyse dynamic networks and distributed systems in dynamic topologies, such as gossiping and information dissemination [36, 24]. There is also a large body of work that considers temporal generalisations of various graph-theoretic notions and properties [32, 14, 10]. Related algorithmic questions include graph colouring [30], exploration [12], travelling salesman [33], maximum matching [29], and vertex-cover [2]. The edge relation is often deliberately left unspecified and sometimes only assumed to satisfy some weak assumptions about connectedness, frequency, or fairness to study the worst or average cases in uncontrollable environments. Depending on the application, one distinguishes between "online" questions, where the edge availability is revealed stepwise, as opposed to the "offline" variant where all is given in advance. We refer to [17, 31] for overviews of temporal graph theory and its applications. Two player zero-sum verification games on directed graphs play a central role in formal verification, specifically the reactive synthesis approach [34]. Here, a controllable system and an antagonistic environment are modeled as a game in which two opposing players jointly move a token through a graph. States are either owned by Player 1 (the system) or Player 2 (the environment), and the owner of the current state picks a valid successor. Such a play is won by Player 1 if, and only if, the constructed path satisfies a predetermined _winning condition_ that models the desired correctness specification. The winning condition is often given either in a temporal logic such as Linear Temporal Logic (LTL) [35], or directly as \(\omega\)-automaton whose language is the set of infinite paths considered winning for Player 1. The core algorithmic problem is solving games: to determine which player has a strategy to force a win, and if so, how. 
Determining the complexity of solving games on static graphs has a long history and continues to be an active area of research. We refer to [1, 13] for introductions on the topic and recall here only that solving reachability games, where Player 1 aims to eventually reach a designated target state, is complete for polynomial time. The precise complexity of solving parity games is a long-standing open question. It is known to be in \(\mathsf{UP}\cap\mathsf{coUP}\)[22], and so in particular in \(\mathsf{NP}\) and \(\mathsf{coNP}\), and recent advances have led to quasi-polynomial time algorithms [6, 23, 26, 9, 25]. Related Work.Periodic temporal graphs were first studied by Floccchini, Mans, and Santoro in [14], where they show polynomial bounds on the length of explorations (paths covering all vertices). Recently, De Carufel, Flocchini, Santoro, and Simard [10] study Cops & Robber games on periodic temporal graphs. They provide an algorithm for solving one-cop games that is only quadratic in the number of vertices and linear in the period. Games on temporal graphs with maximal age, or period of some absolute value \(K\) given in binary are games on exponentially succinctly presented arenas. Unfolding them up to time \(K\) yields an ordinary game on the exponential sized graph which allows to transfer upper bounds, that are not necessarily optimal. In a similar vein, Avni, Ghorpade, and Guha [4] have recently introduced types of games on exponentially succinct arenas called pawn games. Similar to our results, their findings provide improved \(\mathsf{PSPACE}\) upper bounds for reachability games. Parity games on temporal graphs are closely related to timed-parity games, which are played on the configuration graphs of timed automata [3]. However, the time in temporal graphs is discrete as opposed to the continuous time semantics in timed automata. Solving timed parity games is complete for \(\mathsf{EXP}\)[28, 8] and the lower bound already holds for reachability games on timed automata with only two clocks [21]. Unfortunately, a direct translation of (games on) temporal graphs to equivalent timed automata games requires at least two clocks: one to hold the global time used to check the edge predicate and one to ensure that time progresses one unit per step. Contributions.We study the complexity of solving parity games on temporal graphs. As a central variant of independent interest are what we call _punctual_ reachability games, that are played on a static graph and player wants to reach a target vertex at a given binary encoded time. We show that solving such games is already hard for \(\mathsf{PSPACE}\), which provides a lower bound for all temporal graph games we consider. As our second, and main result, we show how to solve parity games on (ultimately) periodic temporal graphs. The difficulty to overcome here is that the period may be exponential in the number of vertices and thus a naively solving the game on the unfolding only yields algorithms in exponential space. Our approach relies on the existence of polynomially sized summaries that can be verified in \(\mathsf{PSPACE}\) using punctual reachability games. We then provide a sufficient syntactic restriction that avoids an increased complexity for game solving. 
In particular, if the edge predicate is in polynomial time and is monotonically increasing for one player and decreasing for the other, then the cost of solving reachability or parity games on temporal graphs increases only polynomially in the number of vertices compared to the cost of solving these games on static graphs. None of our upper bounds rely on any particular representation of the edge predicate. Instead, we only require that the representation ensures that checking membership (if an edge is traversable at a given time) has suitably low complexity. That is, our approach to solve parity games only requires that the edge predicate is in \(\mathsf{PSPACE}\), and polynomial-time verifiable edge predicates suffice to derive \(\mathsf{P}\)-time upper bounds for monotone reachability games. These conditions are met for example if the edge predicate is defined as semilinear set given as an explicit union of linear sets (\(\mathsf{NP}\) in general and in \(\mathsf{P}\) for singleton sets of periods), or by restricted Presburger formulae: the quantifier-free fragment is in \(\mathsf{P}\), the existential fragment is in \(\mathsf{NP}\) but remains in \(\mathsf{P}\) if the number of variables is bounded [37]. See for instance [15] and contained references. The rest of the paper is structured as follows. We recall the necessary notations in Section 2 and then discuss reachability games in Section 3. Section 4 presents the main construction for solving parity games and finally, in Section 5, we discuss improved upper bounds for monotone temporal graphs. ## 2 Preliminaries Definition 1 (Temporal Graphs): A temporal graph \(G=(V,E)\) is a directed graph where \(V\) are vertices and \(E:V^{2}\to 2^{\mathbb{N}}\) is the edge availability relation that maps each pair of vertices to the set of times at which the respective directed edge can be traversed. If \(i\in E(s,t)\) we call \(t\) an i-successor of \(s\) and write \(s\,\smash{\mathop{\longrightarrow}\limits^{i}}\,t\). The _horizon_ of a temporal graph is \(h(G)=\sup_{s,t\in V}(E(s,t))\), the largest finite time at which any edge is available, or \(\infty\) if no such finite time exists. A temporal graph is _finite_ if \(h(G)\in\mathbb{N}\) i.e., every edge eventually disappears forever. A temporal graph is _periodic_ with period \(K\in\mathbb{N}\) if for all nodes \(s,t\in V\) it holds that \(E(s,t)=E(s,t)+K\cdot\mathbb{N}\). We call \(G\)_static_ if it has period \(1\). Naturally, one can unfold a temporal graph into its _expansion_ up to some time \(T\in\mathbb{N}\cup\{\infty\}\), which is the graph with nodes \(V\times\{0,1,\ldots,T\}\) and directed edges \((s,i)\to(t,i+1)\) iff \(i\in E(s,t)\). In order for algorithmic questions to be interesting, we assume that temporal graphs are given in a format that is more succinct than the expansion up to their horizon or period. We only require that the representation ensures that checking if an edge is traversable at a given time can be done reasonably efficiently. We will henceforth use formulae in the existential fragment of Presburger arithmetic, the first-order theory over natural numbers with equality and addition. That is, the \(\exists\)PA formula \(\Phi_{s,t}(x)\) with one free variable \(x\) represents the set of times at which an edge from \(s\) to \(t\) is available as \(E(s,t)=\{n\mid\Phi_{s,t}(n)\equiv true\}\). We use common syntactic sugar including inequality and multiplication with (binary encoded) constants. 
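To make Definition 1 concrete, a temporal graph can be held as a plain vertex set together with one availability predicate per edge. The snippet below is a minimal illustrative sketch (the class and function names are ours, not part of any existing library), using predicates in the spirit of the Presburger-definable examples discussed next.

```python
from typing import Callable, Dict, Optional, Set, Tuple

Vertex = str
EdgePredicate = Callable[[int], bool]   # time -> is the edge traversable at that time?

class TemporalGraph:
    """A temporal graph (V, E) with E given as per-edge availability predicates."""

    def __init__(self, vertices: Set[Vertex],
                 edges: Dict[Tuple[Vertex, Vertex], EdgePredicate],
                 period: Optional[int] = None):
        self.vertices = vertices
        self.edges = edges
        self.period = period  # a K with E(s,t) = E(s,t) + K*N, if one is known

    def available(self, s: Vertex, t: Vertex, i: int) -> bool:
        pred = self.edges.get((s, t))
        return pred is not None and pred(i)

    def successors(self, s: Vertex, i: int) -> Set[Vertex]:
        """All i-successors of s."""
        return {t for (u, t) in self.edges if u == s and self.available(s, t, i)}

# Example availability predicates, mirroring the formulae discussed below.
G = TemporalGraph(
    vertices={"s", "t", "u"},
    edges={
        ("s", "t"): lambda x: 5 <= x <= 10,              # a finite window
        ("t", "u"): lambda x: x % 7 == 0 and x > 100,    # multiples of 7 greater than 100
        ("u", "s"): lambda x: True,                      # always available
    },
)
print(G.successors("s", 7))                                     # {'t'}
print(G.available("t", "u", 105), G.available("t", "u", 106))   # True False
```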
For instance, \(\Phi_{s,t}(x)\stackrel{{\mbox{\tiny{\rm{\tiny def}}}}}{{=}}5 \leq x\wedge x\leq 10\) means the edge is available at times \(\{5,6,7,8,9,10\}\); and \(\Phi_{s,t}(x)\stackrel{{\mbox{\tiny{\rm{\tiny def}}}}}{{=}} \exists y.(x=y\cdot 7)\wedge\neg(x\leq 100)\) means multiples of 7 greater than 100. Definition 2 (Parity Games): A _parity game_ is a zero-sum game played by two opposing players on a directed graph. Formally, the game is given by a game graph \(G=(V,E)\), a partitioning \(V=V_{1}\uplus V_{2}\) of vertices into those owned by Player 1 and Player 2 respectively, and a colouring \(col:V\to C\) of vertices into a finite set \(C\subsetneq\mathbb{N}\) of colours. The game starts with a token on an initial vertex \(s_{0}\in V\) and proceeds in turns where in round \(i\), the owner of the vertex occupied by the token moves it to some successor. This way both players jointly agree on an infinite path \(\rho=s_{0}s_{1}\ldots\) called a _play_. A play is winning for Player 1 if \(\max\{c\mid\forall i\exists j.col(s_{j})=c\}\), the maximum colour seen infinitely often, is even. A _strategy_ for Player \(i\) is a recipe for how to move. Formally, it is a function \(\sigma_{i}:V^{*}V_{i}\to V\) from finite paths ending in a vertex \(s\) in \(V_{i}\) to some successor. We call \(\sigma\) positional if \(\sigma(\pi s)=\sigma(\pi^{\prime}s)\) for any two prefixes \(\pi,\pi^{\prime}\in V^{*}\). A strategy is _winning from vertex \(s\)_ if Player \(i\) wins every play that starts in vertex \(s\) and during which all decisions are made according to \(\sigma\). We call a vertex \(s\) winning for Player \(i\) if there exists a winning strategy from \(s\), and call the subset of all such vertices the _winning region_ for that player. Parity games enjoy the following property (See [13, Theorem 15] for details). Proposition 1: _Parity games are uniformly positionally determined: For every game \((V\!=\!V_{1}\!\uplus\!V_{2},E,col)\) there is a pair \(\sigma_{1},\sigma_{2}\) of positional strategies so that \(\sigma_{i}\) is winning for Player \(i\) from every vertex in the winning region of Player \(i\)._ A _temporal parity game_ is a parity game played on the infinite expansion of a temporal graph \(G=(V,E)\), where the ownership and colouring of vertices are given with respect to the underlying directed graph \(V\!=\!V_{1}\!\uplus\!V_{2}\) and \(col:V\to C\). The ownership and colouring are lifted to the expansion so that vertices in \(V_{i}\times\mathbb{N}\) are owned by Player \(i\) and vertex \((s,n)\) has colour \(col(s)\). Example 1: Consider the temporal parity game shown in Fig. 1. We will draw Player 1 states as diamond and those controlled by Player 2 as squares and sometimes write modulo expressions to define the edge availability. For example, the constraint on the edge from \(u\) to \(v\) can be written as the \(\exists\)PA-formula as \(\exists y.(x=3y)\vee(x=3y+1)\) and so this edge is available at times \(0,1,3,4,6,\dots\). The temporal graph underlying this game has period \(15\). Player 1 has a winning strategy starting from \((s,i)\) in the expansion by staying in state \(s\) until time \(i^{\prime}\geq i\) with \(i^{\prime}\equiv 0\mod 5\) and then following the edge to \((t,i^{\prime}+1)\). If Player 2 ever chooses to move to \(r\), he is trapped in an even-coloured cycle; if he stays in \(t\) forever, then the resulting game sees only colour \(2\) and is losing for him. 
Otherwise, if the game continues at \((s,i^{\prime}+2)\), Player 1 repeats as above (and wins plays that see both states \(s\) and \(t\)). The example shows that Player 1's strategies depend on the time and are not positional in the vertices alone, even if the winning set has period \(1\). Indeed, the only possible vertex-positional strategy (cycle in \(s\)) is losing. The vertices \(\{s,t\}\) shaded in blue represent the vertices from which Player 1 can win starting at any time, following the strategy described above. From the vertices shaded in red, Player 2 can win starting at certain times. For example, Player 2 has a winning strategy from \((u,i)\) if, and only if, \(i\equiv 0\mod 3\) or \(i\equiv 1\mod 3\), by moving to \((v,i+1)\). Notice that this edge is not available at times \(x\equiv 2\mod 3\), and thus Player 2 is forced to move to \(t\) at those times. In particular, therefore, Player 1 wins from \((v,0)\). The winning region for Player 1 is \(\{(s,k),(t,k),(r,k),(u,3k+2),(v,3k),(w,3k+1)\ \mid\ k\in\mathbb{N}\}\). The algorithmic question we consider is determining the set of vertices from which Player 1 wins starting at time \(0\). Figure 1: An example of a temporal parity game. Player 1 controls the diamond vertices \(V_{1}=\{s,v\}\) and Player 2 controls square vertices \(V_{2}=\{r,t,u,w\}\). Edge labels are Presburger formula constraints denoting when an edge is available; edges without constraints are always available. The grey label next to each node denotes its colour. E.g., \(col(s)=1\in C=\{1,2,3,4\}\). ## 3 Reachability Games We discuss a variant of temporal games that turns out to be central both for upper and lower bounds for solving games on temporal graphs. We call these _punctual reachability games_, which are played on a static graph and in which Player 1 aims to reach the target precisely at a target time. Definition 3: A _punctual_ reachability game \(G=(V,E,s_{0},F)\) is a game played on a static graph with vertices \(V=V_{1}\uplus V_{2}\), edges \(E\subseteq V^{2}\), an initial state \(s_{0}\) and a set of target vertices \(F\subseteq V\). An additional parameter is a target time \(T\in\mathbb{N}\) given in binary. Player 1 wins a play if and only if a vertex in \(F\) is reached at time \(T\). Punctual reachability games are really just a reformulation of the membership problem for alternating finite automata (AFA) [7] over a unary input alphabet. Player 1 wins the punctual reachability game with target \(T\) if, and only if, the word \(a^{T}\) is accepted by the AFA described by the game graph. Checking if a given unary word \(a^{T}\) is accepted by an AFA is complete for polynomial time if \(T\) is given in unary [20]. We first observe that it is \(\mathsf{PSPACE}\)-hard if \(T\) is given in binary. We write in the terminology of punctual reachability games but the main argument is by reduction from the emptiness problem for unary AFA, which is \(\mathsf{PSPACE}\)-complete [18, 19]. We rely on the fact that the shortest word accepted by an AFA is at most exponential in the number of states. Lemma 1: _Let \(G=(V,E,s_{0},F)\) be a reachability game on a static graph. If there exists \(T\in\mathbb{N}\) so that Player 1 wins the punctual reachability game at target time \(T\), then there exists some such \(T\leq 2^{|V|}\)._ Proof: Assume towards contradiction that \(T\geq 2^{|V|}\) is the smallest number such that Player 1 wins the punctual reachability game and consider some winning strategy \(\sigma\).
For any time \(k\leq T\) we can consider the set \(S_{k}\subseteq V\) of vertices occupied on any branch of length \(k\) on \(\sigma\). By the pigeonhole principle, there are times \(k<k^{\prime}\leq T\) with \(S_{k}=S_{k^{\prime}}\), which allows us to create a strategy \(\sigma^{\prime}\) that follows \(\sigma\) until time \(k\), then continues (and wins) according to \(\sigma\) as if it had just seen a length \(k^{\prime}\) history leading to the same vertex. This shows that there exists a winning strategy for target time \(T-(k^{\prime}-k)\), which contradicts the assumption. A lower bound for solving punctual reachability games is now immediate. Lemma 2: _Solving punctual reachability games with target time \(T\) encoded in binary is \(\mathsf{PSPACE}\)-hard._ Proof: We reduce from the non-emptiness problem of AFA over unary alphabets. In our terminology this is the problem of deciding whether, for a given reachability game \(G=(V,E,s_{0},F)\), there exists some \(T\in\mathbb{N}\) so that Player 1 wins the punctual reachability game at target time \(T\). This problem is \(\mathsf{PSPACE}\)-complete [18]. By Lemma 1, positive instances can be witnessed by a small target \(T\leq 2^{|V|}\) and so we know that it is \(\mathsf{PSPACE}\)-hard to determine the existence of such a small target time that allows Player 1 to win. Consider now the punctual reachability game \(G^{\prime}\) that extends \(G\) by a new initial vertex \(s^{\prime}_{0}\) that is owned by Player 1 and which has a self-loop as well as an edge to the original initial vertex \(s_{0}\), with target time \(T^{\prime}\stackrel{{\mbox{\tiny{\rm def}}}}{{=}}2^{|V|}\). In \(G^{\prime}\), Player 1 selects some number \(T\leq T^{\prime}\) by waiting in the initial vertex for \(T^{\prime}-T\) steps and then starts the game \(G\) with the target time \(T\). Therefore, Player 1 wins in \(G^{\prime}\) for target \(T^{\prime}\) if, and only if, she wins for some \(T\leq 2^{|V|}\) in \(G\). Corollary 1: _Solving reachability games on finite temporal graphs is \(\mathsf{PSPACE}\)-hard._ Proof: We reduce the punctual reachability game with target \(T\) to an ordinary reachability game on a finite temporal graph. This can be done by introducing a new vertex \(u\) as the only target vertex, so that it is only reachable via edges from vertices in \(F\) at time exactly \(T\). That is, \(E(s,u)\stackrel{{\mbox{\tiny{\rm def}}}}{{=}}\{T\}\) for every \(s\in F\), and \(E(s,t)=[0,T]\) for every original edge between \(s,t\in V\setminus\{u\}\). Now Player 1 wins the reachability game for target \(u\) if, and only if, she wins the punctual reachability game with target \(F\) at time \(T\). A matching \(\mathsf{PSPACE}\) upper bound for solving punctual reachability games, as well as reachability games on finite temporal graphs, can be achieved by computing the winning region backwards as follows. For any game graph with vertices \(V\!=\!V_{1}\uplus V_{2}\), set \(S\subseteq V\) and \(i\in\{1,2\}\), let \(Pre_{i}(S)\subseteq V\) denote the set of vertices from which Player \(i\) can force to reach \(S\) in one step. \[Pre_{i}(S)\stackrel{{\mbox{\tiny{\rm def}}}}{{=}}\{v\in V_{i}\mid\exists(v,v^{\prime})\in E.v^{\prime}\in S\}\cup\{v\in V_{3-i}\mid\forall(v,v^{\prime})\in E.v^{\prime}\in S\}\] A straightforward induction on the duration \(T\) shows that Player 1 wins the punctual reachability game with target time \(T\) from vertex \(s\) if, and only if, \(s\in Pre_{1}^{T}(F)\), the \(T\)-fold iteration of \(Pre_{1}\) applied to the target set \(F\).
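For concreteness, this backward computation can be sketched as follows (Python; an illustration only, with function names of our own choosing, and assuming every vertex has at least one outgoing edge). It keeps a single set in memory and applies \(Pre_{1}\) exactly \(T\) times.

```python
def pre1(V1, V2, E, S):
    """One-step forced predecessors for Player 1 on a static game graph.
    V1, V2: vertex sets of the two players; E: set of directed edges (v, w).
    Assumes no deadlocks, i.e. every vertex has at least one successor."""
    succ = {v: {w for (u, w) in E if u == v} for v in V1 | V2}
    existential = {v for v in V1 if succ[v] & S}   # Player 1 can pick a successor in S
    universal = {v for v in V2 if succ[v] <= S}    # Player 2 cannot avoid S
    return existential | universal

def wins_punctual(V1, V2, E, F, s0, T):
    """Does Player 1, starting in s0 at time 0, force a visit to F at time exactly T?"""
    S = set(F)
    for _ in range(T):                 # O(T) iterations, only the current set is stored
        S = pre1(V1, V2, E, S)
    return s0 in S

# Tiny example: Player 1 owns a, c, f; Player 2 owns b; target vertex f.
V1, V2 = {"a", "c", "f"}, {"b"}
E = {("a", "a"), ("a", "c"), ("c", "f"), ("c", "b"), ("b", "b"), ("f", "f")}
for T in (1, 2, 3):
    print(T, wins_punctual(V1, V2, E, {"f"}, "a", T))   # False, True, True
```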
Notice that knowledge of \(Pre_{i}^{k}(S)\) is sufficient to compute \(Pre_{i}^{k+1}(S)\). We can therefore compute \(Pre_{1}^{T}(F)\) from \(Pre_{1}^{0}(F)=F\) in \(\mathcal{O}(T)\) time using only \(\mathcal{O}(\log(|V|)+\log(T))\) space. Together with Lemma 2 we conclude the following. Theorem 2.1: _Solving punctual reachability games with target time \(T\) encoded in binary is \(\mathsf{PSPACE}\)-complete._ The same approach works for reachability games on finite temporal graphs if applied to the expansion up to horizon \(h(G)\), leading to the same time and space complexity upper bounds. The only difference is that computing \(Pre_{1}^{k}(F\times\{T\})\) requires to check edge availability at time \(T-k\). Theorem 2.2: _Solving reachability games on finite temporal graphs is \(\mathsf{PSPACE}\)-complete._ Proof: Consider a temporal game with vertices \(V\!=\!V_{1}\uplus V_{2}\), edges \(E:V^{2}\to 2^{\mathbb{N}}\) target vertices \(F\subseteq V\) and where \(T=h(G)\) is the latest time an edge is available. We want to check if starting in an initial state \(s_{0}\) at time \(0\), Player \(1\) can force to reach \(F\) at time \(T\). In other words, for the game played on the expansion up to time \(T\) we want to decide if \((s_{0},0)\) is contained in \(Pre_{1}^{T}(F\times\{T\})\). By definition of the expansion, we have \(Pre_{1}(S\times\{n\})\subseteq V\times\{n-1\}\) for all \(S\subseteq V\) and \(n\leq T\). Since we can check the availability of an edge at time \(n\) in polynomial space, we can iteratively compute \(Pre_{1}^{n}(F\times\{T\})\) backwards, starting with set \(Pre_{1}^{0}(F\times\{T\})=F\times\{T\}\), and only memorising the current iteration \(n\leq T\) and a set \(W_{n}\subseteq V\) representing \(Pre_{1}^{n}(F\times\{T\})=W_{n}\times\{T-n\}\). ## 4 Parity Games We consider Parity games played on periodic temporal graphs. As input we take a temporal graph \(G=(V,E)\) with period \(K\), a partitioning \(V\!=\!V_{1}\!\uplus\!V_{2}\) of the vertices, as well as a colouring \(col:V\to C\) that associates a colour out of a finite set \(C\subset\mathbb{N}\) of colours to every state. It will be convenient to write \(col(\pi)\stackrel{{\mbox{\tiny def}}}{{=}}\max\{col(s_{i})\mid 0 \leq i\leq k\}\) for the maximal colour of any vertex visited along a finite path \(\pi=(s_{0},0)(s_{1},1)\ldots(s_{k},k)\). The following relations \(R_{s}^{\sigma}\) capture the guarantees provided by a strategy \(\sigma\) if followed for one full period from vertex \(s\). Definition 4: For a strategy \(\sigma\) and vertex \(s\in V\) define \(R_{s}^{\sigma}\subseteq V\times C\) be the relation containing \((t,c)\in R_{s}^{\sigma}\) if, and only if, there exists a finite play \(\pi=(s,0)\ldots(t,K)\) consistent with \(\sigma\), that starts in \(s\) at time \(0\), ends in \(t\) at time \(K\), and the maximum colour seen on the way is \(col(\pi)=c\). We call \(R_{s}^{\sigma}\) the _summary_ of \(s\) with respect to strategy \(\sigma\). A relation \(B\subseteq V\times C\) is \(s\)-realisable if there is a strategy \(\sigma\) with \(B=R_{s}^{\sigma}\). Example 2: Consider the game in Fig. 2 where vertex \(u\in V_{2}\) has colour \(2\) and all other vertices have colour \(1\). The graph has period \(K=2\). The relations \(\{(t,1)\}\) and \(\{(t,2),(t^{\prime},2)\}\) are \(s\)-realisable, as witnessed by the strategies \(\sigma(s)=t\) Figure 2: The game from Example 2. Labels on vertices and edges denote colours and available times, respectively. The graph has period \(2\). 
In two rounds, Player \(1\) can force to end in \(t\) having seen colour \(1\), or in either \(t\) or \(t^{\prime}\) but having seen a better colour \(2\). and \(\sigma(s)=u\)), respectively. However, \(\{(t,2)\}\) is not \(s\)-realisable as no Player 1 strategy guarantees to visit \(s\) then \(u\) then \(t\). Lemma 3: _Given \(B\subseteq V\times C\), checking realisability of \(B\) is in \(\mathsf{PSPACE}\)._ Proof: We reduce checking realisability to solving a reachability game on a temporal graph that is only polynomially larger. More precisely, given a game \(G=(V,E,col)\) consider the game \(G^{\prime}=(V^{\prime},E^{\prime},col^{\prime})\) over vertices \(V^{\prime}\stackrel{{\mbox{\tiny def}}}{{=}}V\times C\) that keep track of the maximum colour seen so far. That is, the ownership of vertices and colours are lifted directly as \((s,c)\in V^{\prime}_{1}\iff s\in V_{1}\) and \(col^{\prime}(s,c)\stackrel{{\mbox{\tiny def}}}{{=}}col(s)\), and for any \(i\in\mathbb{N}\), \(s,t,s_{0}\in V\), \(c,d\in C\), we let \((t,d)\) be an \(i\)-successor of \((s,c)\) if, and only if, both \(t\) is an \(i\)-successor of \(s\) and \(d=\max\{c,col(t)\}\). Consider some relation \(B\subseteq V\times C\). We have that \(B\) is \(s\)-realisable if, and only if, Player 1 wins the punctual reachability game on \(G^{\prime}\) from vertex \((s,col(s))\) at time 0, towards target vertices \(B\subseteq V^{\prime}\) at target time \(K\). Indeed, any winning Player 1 strategy in this game witnesses that \(B\) is \(s\)-realisable and vice versa. By Theorem 3.1, the existence of such a winning strategy can be verified in polynomial space by backwards-computing the winning region. The following defines a small, and \(\mathsf{PSPACE}\)-verifiable certificate for Player 1 to win the parity game on a periodic temporal graph. Definition 5 (Certificates): Given temporal parity game \((V,E,col)\) with period \(K\), a _certificate_ for Player 1 winning the game from initial vertex \(s_{0}\in V\) is a multigraph where the vertex set \(V^{\prime}\subseteq V\) contains \(s_{0}\), and edges \(E^{\prime}\subseteq V^{\prime}\times C\times V^{\prime}\) are labelled by colours, such that 1. For every \(s\in V^{\prime}\), the set \(\mathit{Post}(s)\stackrel{{\mbox{\tiny def}}}{{=}}\{(t,c)\mid(s,c,t)\in E^{\prime}\}\) is \(s\)-realisable. 2. The maximal colour on every cycle reachable from \(s_{0}\) is even. Notice that condition 1 implies that no vertex in a certificate is a deadlock. A certificate intuitively allows to derive Player 1 strategies based on those witnessing the realisability condition. Example 3: Consider the game from Example 1 played on the temporal graph with period 15. A certificate for Player 1 winning from state \(v\) at time 0 is depicted in Fig. 3. Indeed, the Player 1 strategy mentioned in Example 1 (aim to alternate between \(s\) and \(t\)) witnesses that \(Post(v)=\{(s,3),(t,3),(r,4)\}\) is \(v\)-realisable because it allows Player 1 to enforce that after \(K=15\) steps from \(v\), the game ends up in one of those states via paths whose colour is dominated by \(col(v)=3\) or \(col(r)=4\). Lemma 4: _Player 1 wins the parity game on \(G\) from vertex \(s_{0}\) if, and only if, there exists a certificate._ Proof: For the backward implication we argue that a certificate \(C\) allows to derive a winning strategy for Player 1 in the parity game \(G\). 
By the realisability assumption (1), for each vertex \(s\in V\) there must exist a Player 1 strategy \(\sigma_{s}\) with \(R_{s}^{\sigma_{s}}=Post(s)\) that tells her how to play in \(G\) for \(K\) rounds if the starting time is a multiple of \(K\). Moreover, suppose she plays according to \(\sigma_{s}\) for \(K\) rounds and let \(t\) and \(c\) be the vertex reached and maximal colour seen on the way. Then by definition of the summaries, \((t,c)\in R_{s}^{\sigma_{s}}=Post(s)\) and so in the certificate \(C\) there must be some edge \(s\stackrel{{ c}}{{\longrightarrow}}t\). Suppose Player 1 continues to play in \(G\) like this forever: From time \(i\cdot K\) to \((i+1)\cdot K\) she plays according to some strategy \(\sigma_{s_{i}}\) determined by the vertex \(s_{i}\) reached at time \(i\cdot K\). Any consistent infinite play \(\rho\) in \(G\), chosen by her opponent, describes an infinite walk \(\rho^{\prime}\) in \(C\) such that the colour seen in any step \(i\in\mathbb{N}\) of \(\rho^{\prime}\) is precisely the dominant colour on \(\rho\) between rounds \(iK\) and \((i+1)K\). Therefore the dominant colours seen infinitely often on \(\rho\) and \(\rho^{\prime}\) are the same and, by certificate condition (2) on the colouring of cycles, even. We conclude that the constructed strategy for Player 1 is winning. For the forward implication, assume that Player 1 wins the game on \(G\) from vertex \(s\) at time \(0\). Since the game \(G\) is played on a temporal graph with period \(K\), its expansion up to time \(K-1\) is an ordinary parity game on a static graph with vertices \(V\times\{0,1,\ldots,K-1\}\) where the second component indicates the time modulo \(K\). Therefore, by positional determinacy of parity games (Proposition 1), we can assume that Player 1 wins in \(G\) using a strategy \(\sigma\) that is itself periodic. That is, \(\sigma(hv)=\sigma(h^{\prime}v)\) for any two histories \(h,h^{\prime}\) of lengths \(|h|\equiv|h^{\prime}|\mod K\). Moreover, we can safely assume that \(\sigma\) is uniform, meaning that it is winning from any vertex \((s,0)\) for which a winning strategy exists. Such a strategy induces a multigraph \(C=(V,E^{\prime})\) where the edge relation is defined by \((s,c,t)\in E^{\prime}\iff(t,c)\in R_{s}^{\sigma}\). It remains to show the second condition for \(C\) to be a certificate, namely that any cycle in \(C\), reachable from the initial vertex \(s_{0}\), has an even maximal colour. Suppose otherwise, that \(C\) contains a reachable cycle whose maximal colour is odd. Then there must be play in \(G\) that is consistent with \(\sigma\) and which sees the same (odd) colour infinitely often. But this contradicts the assumption that \(\sigma\) was winning in \(G\) in the first place. Figure 3: A certificate that Player 1 wins the game in Example 1 from state \(v\) at time \(0\). Our main theorem is now an easy consequence of the existence of small certificates. Theorem 4.1: _Solving parity games on periodic temporal graphs is \(\mathsf{PSPACE}\)-complete._ Proof: Hardness already holds for reachability games Lemma 2. For the upper bound we show membership in \(\mathsf{NPSPACE}\) and use Savitch's theorem. By Lemma 4 it suffices to guess and verify a candidate certificate \(C\). These are by definition polynomial in the number of vertices and colours in the given temporal parity game. Verifying the cycle condition (2) is trivial in polynomial time and verifying the realisability condition (1) is in \(\mathsf{PSPACE}\) by Lemma 3. 
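Condition (2) of Definition 5 is also easy to check algorithmically. The sketch below (Python; purely illustrative, operating on a made-up toy certificate rather than the one of Fig. 3) uses the observation that a reachable cycle with odd maximal colour \(c\) exists if, and only if, some edge of odd colour \(c\), whose source is reachable from \(s_{0}\), can be closed into a cycle using only edges of colour at most \(c\).

```python
from collections import deque

def reachable(edges, source):
    """Vertices reachable from source using any certificate edge (s, c, t)."""
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for (s, c, t) in edges:
            if s == u and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def has_path(edges, start, goal, max_colour):
    """Is there a path from start to goal using only edges of colour <= max_colour?"""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        if u == goal:
            return True
        for (s, c, t) in edges:
            if s == u and c <= max_colour and t not in seen:
                seen.add(t)
                queue.append(t)
    return False

def cycle_condition_holds(edges, s0):
    """Every cycle reachable from s0 has an even maximal colour (condition (2))."""
    live = reachable(edges, s0)
    for (s, c, t) in edges:
        # A violating cycle with odd maximal colour c contains an edge (s, c, t)
        # and closes back from t to s using colours <= c only.
        if c % 2 == 1 and s in live and has_path(edges, t, s, c):
            return False
    return True

# Toy certificates: edges are (source, colour, target) triples.
good = {("v", 3, "s"), ("v", 3, "t"), ("v", 4, "r"), ("s", 2, "t"), ("t", 2, "s"), ("r", 4, "r")}
bad = good | {("t", 3, "v")}   # creates a reachable cycle whose maximal colour 3 is odd
print(cycle_condition_holds(good, "v"), cycle_condition_holds(bad, "v"))  # True False
```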
Remark 1: The \(\mathsf{PSPACE}\) upper bound in Theorem 4.1 can easily be extended to games on temporal graphs that are _ultimately_ periodic, meaning that there exist \(T,K\in\mathbb{N}\) so that for all \(n\geq T\), \(s\mathop{\longrightarrow}\limits^{n}t\) implies \(s\mathop{\longrightarrow}\limits^{n+K}t\). Such games can be solved by first considering the periodic suffix according to Theorem 4.1 thereby computing the winning region for Player 1 at time exactly \(T\), and then solving the temporal reachability game with horizon \(T\). ## 5 Monotonicity In this section, we consider the effects of monotonicity assumptions on the edge relation with respect to time on the complexity of solving reachability games. We first show that reachability games remain \(\mathsf{PSPACE}\)-hard even if the edge relation is decreasing (or increasing) with time. We then give a fragment for which the problem becomes solvable in polynomial time. Increasing and Decreasing temporal graphs:Let the edge between vertices \(u,v\in V\) of a temporal graph be referred to as _decreasing_ if \(u\mathop{\longrightarrow}\limits^{i+1}v\) implies \(u\mathop{\longrightarrow}\limits^{i}v\) for all \(i\in\mathbb{N}\), i.e. edges can only disappear over time. Similarly, call the edge _increasing_ if for all \(i\in\mathbb{N}\) we have that \(u\mathop{\longrightarrow}\limits^{i}v\) implies \(u\mathop{\longrightarrow}\limits^{i+1}v\); i.e. an edge available at current time continues to be available in the future. A temporal graph is decreasing (increasing) if all its edges are. We assume that the times at which edge availability changes are given in binary. More specifically, every edge is given as inequality constraint of the form \(\Phi_{u,v}(x)\stackrel{{\mbox{\tiny{\rm{def}}}}}{{=}}x\leq n\) (respectively \(x\geq n\)) for some \(n\in\mathbb{N}\). Although both restrictions imply that the graph is ultimately static, we observe that solving reachability games on such monotonically increasing or decreasing temporal graphs remains \(\mathsf{PSPACE}\)-complete. Theorem 5.1: _Solving reachability on decreasing (respectively increasing) temporal graphs is \(\mathsf{PSPACE}\)-complete._ Proof: The upper bound holds for temporal parity games Theorem 3.1. For the lower bound we reduce from punctual reachability games which are \(\mathsf{PSPACE}\)-hard by Lemma 2. Consider a (static) graph \(G\) and a target time \(T\in\mathbb{N}\) given in binary. Without loss of generality, assume that the target vertex \(v\) has no outgoing edges. We convert \(G\) into a temporal graph \(G^{\prime}\) with \(V^{\prime}=V\cup\{w,\top,\bot\}\), \(V^{\prime}_{1}=(V_{1}\setminus\{v\})\cup\{w\}\), \(V^{\prime}_{2}=V^{\prime}\setminus V^{\prime}_{1}\) and new target \(\top\). The vertex \(\bot\) is a sink state and the original target vertex \(v\) is now controlled by Player 2. Edge availabilities are \(v\mathop{\longrightarrow}\limits^{x}\bot\) if \(x\leq T-1\), \(v\mathop{\longrightarrow}\limits^{x}w\) if \(x\leq T+1\), \(w\mathop{\longrightarrow}\limits^{x}\top\) if \(x\leq T+1\), and all other edges disappear after time \(T+1\). The constructed temporal graph is finite and decreasing. See Fig. 4. The construction ensures that the only way to reach \(\top\) is to reach \(v\) at time \(T\), \(w\) at time \(T+1\) and take the edge from \(w\) to \(\top\) at time \(T+1\). Player 1 wins in \(G^{\prime}\) if and only if she wins the punctual reachability game on \(G\). 
A similar reduction works in the case of increasing temporal graphs by switching the ownership of vertices \(v\) and \(w\). The vertex \(v\), now controlled by Player 1, has the edge \(v\mathop{\longrightarrow}\limits^{x}w\) at times \(x\geq T\) and the edge \(v\longrightarrow\bot\) at all times. The vertex \(w\), now controlled by Player 2, has the edge \(w\longrightarrow\top\) available at all times, but the edge \(w\mathop{\longrightarrow}\limits^{x}\bot\) only becomes available at times \(x\geq T+2\). Figure 4: Reduction from a punctual reachability game to a reachability game on a temporal graph that is finite and decreasing, see Theorem 3.1. Components added are shown in red. Declining and improving temporal games: We now consider the restriction where all edges controlled by one player are increasing and those of the other player are decreasing. Taking the perspective of the system (Player 1), we call a game on a temporal graph _declining_ if all edges \(u\mathop{\longrightarrow}v\) with \(u\in V_{1}\) are decreasing and all edges \(u\mathop{\longrightarrow}v\) with \(u\in V_{2}\) are increasing. Note that _declining_ is a property of the game and not of the graph, as the definition requires a distinction based on the ownership of vertices, which is specified by the game and not the underlying graph. From now on, we refer to such games as declining temporal reachability (or parity) games. Notice that Player 1 has fewer, and Player 2 has more, choices to move at later times. Analogously, call the game _improving_ if the conditions are reversed, i.e., all edges \(u\mathop{\longrightarrow}v\) with \(u\in V_{1}\) are increasing and all edges \(u\mathop{\longrightarrow}v\) with \(u\in V_{2}\) are decreasing. We show that declining (and improving) temporal reachability games can be solved in polynomial time. Theorem 4.1: _Solving declining (respectively improving) temporal reachability games is in \(\mathsf{P}\)._ Proof: We first give the proof for declining games. Consider the reachability game on the expansion with vertices \(V\times\mathbb{N}\) such that the target set is \(F\times\mathbb{N}\). For \(k\in\mathbb{N}\) let \(W_{k}\subseteq V\) be the set of those vertices \(u\) such that Player 1 has a winning strategy from \((u,k)\). We first show that \[W_{i+1}\subseteq W_{i} \tag{1}\] For the sake of contradiction, suppose there exists \(u\in W_{i+1}\setminus W_{i}\). Let \(\sigma^{1}_{i+1}\) be a (positional) winning strategy from \((u,i+1)\) for Player 1 in the expansion. Since \(u\not\in W_{i}\), by positional determinacy of reachability games (Proposition 1), Player 2 has a winning strategy \(\sigma^{2}_{i}\) from \((u,i)\). Consider a strategy \(\sigma^{1}_{i}\) for Player 1, such that for all \(v\in V_{1}\), \(\sigma^{1}_{i}(v,k)\stackrel{{\mbox{\tiny def}}}{{=}}\sigma^{1}_{i+1}(v,k+1)\), for all \(k\geq i\). Similarly, let \(\sigma^{2}_{i+1}\) be the strategy for Player 2 such that for all \(v\in V_{2}\), \(\sigma^{2}_{i+1}(v,k+1)=\sigma^{2}_{i}(v,k)\), for all \(k\geq i\). Note that this is well defined by the definition of declining games: \(v\mathop{\longrightarrow}\limits^{k+1}u\) implies \(v\mathop{\longrightarrow}\limits^{k}u\) for all \(v\in V_{1}\), and \(v\mathop{\longrightarrow}\limits^{k}u\) implies \(v\mathop{\longrightarrow}\limits^{k+1}u\) for all \(v\in V_{2}\).
Starting from the vertex \((u,i+1)\), the pair of strategies \((\sigma^{1}_{i+1},\sigma^{2}_{i+1})\) defines a unique play \(\pi_{i+1}\), which is winning for Player 1. Similarly, the pair of strategies \((\sigma^{1}_{i},\sigma^{2}_{i})\) define a play \(\pi_{i}\) which is winning for Player 2 starting from \((u,i)\). However, the two plays visit the same set of states, particularly, \((v,k)\) is visited in \(\pi_{i}\) if and only if \((v,k+1)\) is visited in \(\pi_{i+1}\). Therefore, either both are winning for Player 1 or both are losing for Player 2, which is a contradiction. Let \(N\subseteq\mathbb{N}\) be the set of times at which the graph changes, i.e. \[N=\{c\ \mid\ \exists\Phi_{u,v}(x)=x\triangleleft c,\,\mbox{where}\triangleleft \in\{\leq,\geq\}\}\}\] Let \(m\stackrel{{\mbox{\tiny def}}}{{=}}\max(N)\) be the latest time any edge availability changes. We show that \(W_{m}=W_{k}\) for all \(k\geq m\). To see this, note that \(W_{m}\) is equal to the winning region for Player 1 in the (static) reachability game played on \(G_{m}=(V,E_{m})\), where \(E_{m}=\{(u,v)\ \mid\ \ u\mathop{\longrightarrow}\limits^{m}v\}\). Consider a (positional) winning strategy \(\sigma_{m}\) for Player 1 in \(G_{m}\) and define a positional strategy \(\sigma(v,k)=\sigma_{m}(v)\), for \(k\geq m\). Since the graph is static after time \(m\), this is well defined. Starting from a vertex \((u,k)\), a vertex \((v,k+k^{\prime})\) is visited on a \(\sigma\)-consistent path if and only if there is a \(\sigma_{m}\)-consistent path \(u\mathop{\longrightarrow}\limits_{k^{\prime}}v\). Therefore, \(\sigma\) is a winning strategy from any vertex \((v,k)\) such that \(k\geq m\) and \(v\in W_{m}\). Moreover, the set \(W_{m}\) can be computed in time \(\mathcal{O}(|V|^{2})\) by solving the reachability game on \(G_{m}\)[13, Theorem 12]. To solve reachability on declining temporal games, we can first compute the winning region \(W_{m}\) in the stabilised game \(G_{m}\). This means \(W_{m}\times[m,\infty)\) is winning for Player 1. To win the declining temporal reachability game, Player 1 can play the punctual reachability game with target set \(W_{m}\) at target time \(m\). The winning region for Player 1 at time 0 can therefore be computed as \(Pre_{1}^{m}(W_{m}\times\{m\})\) as outlined in the proof of Theorem 2.1. Note that naively this only gives a \(\mathsf{PSPACE}\) upper bound as in the worst case, we would compute \(Pre_{1}\) an exponential (\(m\)) times. To overcome this, note that in the expansion graph \(Pre_{1}^{i}(W_{m}\times\{m\})=W_{m-i}\times\{m-i\}\). According to Eq. (1), \(W_{m-i}\subseteq W_{m-i^{\prime}}\) for \(i^{\prime}>i\). Let \(i,i^{\prime}\) be such that \(m-i\) and \(m-i^{\prime}\) are both consecutive change points, i.e, \(m-i,m-i^{\prime}\in N\) and \(\forall\ell\in N.\ell<m-i^{\prime}\vee\ell>m-i\). Since the edge availability of the graph does not change between time \(m-i^{\prime}\) and \(m-i\), we have \(W_{m-i-1}=W_{m-i}\) implies \(W_{m-i^{\prime}}=W_{m-i}\). Therefore, we can accelerate the \(Pre_{1}\) computation and directly move to the time step \(m-i^{\prime}\), i.e, the \(i^{\prime}\)th iteration in the computation. This case is illustrated at time \(n^{\prime}=m-i^{\prime}\) in Fig. 5. With this change, our algorithm runs the \(Pre_{1}\) computation at most \(|V|+|N|\), as each \(Pre_{1}\) computation either corresponds to a step a time in \(N\) when the graph changes, or a step in which the winning region grows such as at time \(n\) in Fig. 5. 
Since each \(Pre_{1}\) computation can be done in polynomial time, we get a PTIME algorithm in this case, shown in Algorithm 1. ``` \(W\leftarrow\mathrm{Solve}(G_{m})\)\(\triangleright\) Computes Player 1 winning region in \(G_{m}\) while\(N\neq\emptyset\)do \(n\gets max(N)\) if\((Pre_{1}(W\times\{n\})=W\)then \(N\gets N\setminus n\)\(\triangleright\) Accelerate to next change time else \(W\gets Pre_{1}(W)\) \(N\gets N\cup\{n-1\}\setminus\{n\}\) endif endwhile ``` **Algorithm 1** Algorithm for declining games with set of change times \(N\) and \(m=\max(N)\) The case for improving temporal reachability games can be solved similarly. Instead of computing the winning region for Player 1 in \(G_{m}\), we start with computing the winning region \(W_{m}^{2}\) for Player 2 in \(G_{m}\) and switch the roles of Player 1 and Player 2, i.e, Player 2 has the punctual reachability objective with target set \(W_{m}^{2}\) and target time \(m\), which can be solved as above. This gives us an algorithm to compute the winning region for Player 2 and by determinacy of reachability games on infinite graphs, we can compute the winning region for Player 1 at time 0 as well. Remark 2: Algorithm 1 also works for parity objectives by changing step 1, where \(\mathrm{Solve}(G_{m})\) would amount to solving the parity game on the static graph \(G_{m}\). This can be done in quasi-polynomial time and therefore gives a quasi-polynomial time algorithm to solve declining (improving) temporal parity games and in particular, gives membership in the complexity class \(\mathsf{NP}\cap\mathsf{coNP}\). Since the declining (improving) restriction on games on temporal graphs allow for improved algorithms, a natural question is to try to lift this approach to a larger class of games on temporal graphs. Note that the above restrictions are a special case of eventually periodic temporal graphs with a prefix of time \(m\) followed by a periodic graph with period \(1\). Now, we consider temporal graphs of period \(K>1\) such that the game arena is declining (improving) within each period. Formally, a game on a temporal graph \(G\) is _periodically declining_ (improving) if there exists a period \(K\) such that for all \(k\in\mathbb{N}\), \(k\in E(u,v)\) if and only if \(k+K\in E(u,v)\); and the game on the finite temporal graph resulting from \(G\) by making the graph constant from time \(K\) onwards, is declining (improving). We prove that this case is \(\mathsf{PSPACE}\)-hard, even with reachability objectives. Theorem 4.1: _Solving periodically declining (improving) temporal reachability games is \(\mathsf{PSPACE}\)-complete._ Proof: The upper bound follows from the general case of parity games on periodic temporal graphs in Theorem 4.1. The lower bound is by reduction from punctual reachability games. See Fig. 6. Given a (static) graph \(G\) with target state \(v\) and target time \(T\), we obtain a periodically declining game \(G^{\prime}\) with period \(K=T+1\), vertices \(V\cup\{w,\bot,\top\}\), new target \(\top\), such that \(V^{\prime}_{1}=V_{1}\cup\{w,\bot,\top\}\) and \(V^{\prime}_{2}=V_{2}\). We assume without loss of generality that the original target \(v\) is a Player \(1\) vertex, i.e, \(v\in V_{1}\). We describe the edge availability in \(G^{\prime}\) up to the period \(K=T+1\). For all edges \((s,t)\) of the original graph \(G\), such that \(s\in V_{1}\), the edge \(s\stackrel{{ x}}{{\longrightarrow}}t\) is available if and only if \(x<T\). 
Moreover for all \(s\in V_{1}\setminus\{v\}\), there is a new edge \(s\stackrel{{ x}}{{\longrightarrow}}\bot\) available at all times \(x\leq T\). For all \(s\in V_{2}\), there is an edge \(s\stackrel{{ x}}{{\longrightarrow}}t\) is available at all times (until end of period) and \(s\stackrel{{ x}}{{\longrightarrow}}\bot\) is available after time \(x\geq T\). These edges ensure that if a play in the original punctual reachability game ends in a vertex of the game other than \(v\) at time \(T\), then Player \(2\) can force the play to reach the sink state \(\bot\) and win. Figure 5: Illustration of Algorithm 1. The blue vertices at time \(i\) denote the winning region \(W_{i}\) for Player \(1\). The times \(n,n^{\prime}\in N\) and \(Pre_{1}\) computation at change point \(n\) increases the winning region but is stable at time \(n^{\prime}\). From the original target \(v\), there is an edge to the new state \(w\) at all times. From the state \(w\), there are edges \(w\longrightarrow\bot\) at all times and \(w\stackrel{{ x}}{{\longrightarrow}}\top\) if \(x=0\). If the state \(w\) is reached at time \(k\) such that \(1<k<T+1\), then the play is forced to go to \(\bot\). The only winning strategy for Player 1 is to reach \(v\) at time \(T\), \(w\) at time \(T+1\) at which the time is reset due to periodicity. The edge \(w\stackrel{{ T+1}}{{\longrightarrow}}\top\) is now available for Player 1 and they can reach the new target \(\top\). The lower bound for the case of periodically increasing temporal reachability games follows by the same construction and using the duality between improving and declining games on temporal graphs. Given a punctual reachability game \(G\) with vertices \(V=V_{1}\uplus V_{2}\) with target set \(F\), we obtain the dual punctual reachability game \(\hat{G}\) with same target time by first switch the ownership of vertices, i.e, \(\hat{V}_{i}=V_{3-i}\), \(i\in\{1,2\}\) and make the new target as \(V\setminus F\). It is easy to see that Player 1 wins \(G\) if and only if Player 2 wins \(\hat{G}\). Applying the same construction as shown in Fig. 6 to \(\hat{G}\), we obtain a periodically declining temporal reachability game \(\hat{G}^{\prime}\), preserving the winner. Now switching the ownership of vertices in \(\hat{G}^{\prime}\) yields a periodically improving temporal reachability game \(G^{\prime}\) which is winning for Player 1 if and only if Player 1 wins \(G\). ## 6 Conclusion In this work we showed that parity games on ultimately periodic temporal graphs are solvable in polynomial space. The lower bound already holds for the very special case of punctual reachability games, and the \(\mathsf{PSPACE}\) upper bound, which improves on the naive exponential-space algorithm on the unfolded graph, is achieved by proving the existence of small, \(\mathsf{PSPACE}\)-verifiable certificates. We stress again that all constructions are effective no matter how the temporal graphs are defined, as long as checking edge availability for binary encoded times is no obstacle. In the paper we use edge constraints given in the existential fragment of Presburger arithmetic but alternate representations, for example us Figure 6: Reduction from a punctual reachability game to a reachability game on a temporal graphs that is periodic and declining, see Theorem 5.1. Parts added are shown in red. ing compressed binary strings of length \(h(G)\) given as Straight-Line Programs [5, Section 3] would equally work. 
Checking existence of edge at time \(i\) would correspond to querying whether the \(i^{th}\) bit is 1 or not which is \(\mathsf{P}\)-complete [27, Theorem 1]. The games considered here are somewhat orthogonal to parity games played on the configuration graphs of timed automata, where time is continuous, and constraints are _quantifier-free_ formulae involving possibly more than one variable (clocks). Solving parity games on timed automata with two clocks is complete for \(\mathsf{EXP}\) but is in \(\mathsf{P}\) if there is at most one one clock [2][16, Contribution 3(d)]. Games on temporal graphs with quantifier-free constraints corresponds to a subclass of timed automata games with two-clocks, with intermediate complexity of \(\mathsf{PSPACE}\). This is because translating a temporal graph game to a timed automata game requires two clocks: one to hold the global time used to check the edge predicate and one to ensure that time progresses one unit per step. An interesting continuation of the work presented here would be to consider mean-payoff games [11] played on temporal graphs, possibly with dynamic step-rewards depending on the time. If rewards are constant but the edge availability is dynamic, then our arguments for improved algorithms on declining/improving graphs would easily transfer. However, the \(\mathsf{PSPACE}\) upper bound using summaries seems trickier, particularly checking realisability of suitable certificates. ###### Acknowledgements. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), grant EP/V025848/1. We thank Viktor Zamaraev and Sven Schewe for fruitful discussions and constructive feedback.
2301.04622
Ion filling of a one-dimensional nanofluidic channel in the interaction confinement regime
Ion transport measurements are widely used as an indirect probe for various properties of confined electrolytes. It is generally assumed that the ion concentration in a nanoscale channel is equal to the ion concentration in the macroscopic reservoirs it connects to, with deviations arising only in the presence of surface charges on the channel walls. Here, we show that this assumption may break down even in a neutral channel, due to electrostatic correlations between the ions arising in the regime of interaction confinement, where Coulomb interactions are reinforced due to the presence of the channel walls. We focus on a one-dimensional channel geometry, where an exact evaluation of the electrolyte's partition function is possible with a transfer operator approach. Our exact solution reveals that in nanometre-scale channels, the ion concentration is generally lower than in the reservoirs, and depends continuously on the bulk salt concentration, in contrast to conventional mean-field theory that predicts an abrupt filling transition. We develop a modified mean-field theory taking into account the presence of ion pairs that agrees quantitatively with the exact solution and provides predictions for experimentally-relevant observables such as the ionic conductivity. Our results will guide the interpretation of nanoscale ion transport measurements.
Paul Robin, Adrien Delahais, Lydéric Bocquet, Nikita Kavokine
2023-01-11T18:23:02Z
http://arxiv.org/abs/2301.04622v1
# Ion filling of a one-dimensional nanofluidic channel in the interaction confinement regime ###### Abstract Ion transport measurements are widely used as an indirect probe for various properties of confined electrolytes. It is generally assumed that the ion concentration in a nanoscale channel is equal to the ion concentration in the macroscopic reservoirs it connects to, with deviations arising only in the presence of surface charges on the channel walls. Here, we show that this assumption may break down even in a neutral channel, due to electrostatic correlations between the ions arising in the regime of interaction confinement, where Coulomb interactions are reinforced due to the presence of the channel walls. We focus on a one-dimensional channel geometry, where an exact evaluation of the electrolyte's partition function is possible with a transfer operator approach. Our exact solution reveals that in nanometre-scale channels, the ion concentration is generally lower than in the reservoirs, and depends continuously on the bulk salt concentration, in contrast to conventional mean-field theory that predicts an abrupt filling transition. We develop a modified mean-field theory taking into account the presence of ion pairs that agrees quantitatively with the exact solution and provides predictions for experimentally-relevant observables such as the ionic conductivity. Our results will guide the interpretation of nanoscale ion transport measurements. ## I Introduction A channel connects two reservoirs filled with a salt solution at concentration \(c_{\rm out}\). What is the salt concentration \(c_{\rm in}\) inside the channel? The straightforward answer \(c_{\rm in}=c_{\rm out}\) is challenged as soon as the channel's dimensions are at the nanometre scale [1]. A deviation typically occurs because of the presence of a surface charge density \(\Sigma\) on the channel walls. Indeed, a sufficiently long channel must remain electrically neutral [2], which results in an imbalance of the concentrations \(c_{\rm in}^{\pm}\) of the positive and negative ions. In a cylindrical channel of radius \(R\) that is smaller than the electrolyte's Debye length, the concentrations are given by the famous Donnan equilibrium result [3]: \[c_{\rm in}^{\pm}=\sqrt{c_{\rm out}^{2}+(2\Sigma/R)^{2}}\pm 2\Sigma/R. \tag{1}\] Eq. (1) is widely used to infer a channel's surface charge from measurements of its conductivity at different salt concentrations. For sufficiently small surface charges (\(2\Sigma/R\ll c_{\rm out}\)), Eq. (1) predicts \(c_{\rm in}=c_{\rm out}\) even at extreme nanoscales. Importantly, this prediction underlies the method for extracting confined ion mobilities from transport measurements, which has been applied down to 7-A-wide two-dimensional channels [4]. Yet, physically, \(c_{\rm in}=c_{\rm out}\) stems from the assumption that the electrolyte solutions, both in the reservoirs and in the channel, behave as ideal gases of non-interacting ions. While such a description is valid in the bulk reservoirs at reasonable salt concentrations [5], it must be challenged in the nanometre-scale channel which is subject to _interaction confinement_[6] - a reinforcement of the effective Coulomb interactions between the ions due to the dielectric contrast between the solvent (water) and the channel wall [3; 7; 8; 9; 10; 11; 12; 13; 14]. Due to interaction confinement, ions face a _self-energy barrier_\(E_{\rm s}\) when entering the channel [7; 8]. 
It was first noted by Parsegian [7] that this should result in ion exclusion: the salt concentration within the channel is then given by an Arrhenius scaling \(c_{\rm in}=c_{\rm out}e^{-E_{\rm s}/k_{\rm B}T}\) under the assumption of non-interacting ions. However, the result becomes more subtle as the confinement-reinforced ionic interactions are taken into account. Within a mean-field description of a spherical nanopore, Dresner [15] predicted an abrupt filling transition, where \(c_{\rm in}\) was a discontinuous function of \(c_{\rm out}\). Later, Palmeri and coworkers [16; 17] recovered a similar transition using a three-dimensional model of a cylindrical channel, treated within the variational field theory formalism of Netz and Orland [18]. While this approach could be applied to a realistic geometry, it took into account electrostatic correlations only approximately. An exact treatment of electrostatic correlations is possible upon simplification of the geometry to a purely one-dimensional model, with the channel wall being taken into account by introducing an effective confined Coulomb interaction. The 1D electrolyte can then be mapped onto an Ising or 1D Coulomb-gas-type model; the transfer matrix solution of such models was used, for example, to discuss the capacitance of nanoporous systems [19; 20; 21]. The lattice models may be taken to the continuum limit, and the resulting path integral solutions have been used to discuss various ion-exchange phase transitions that arise in the presence of fixed discrete charges inside the channel [22; 23; 9] and the ionic Coulomb blockade phenomenon [13]. Such models are particularly rich theoretically, as they support a mapping to non-Hermitian quantum mechanics [24]. Nevertheless, to our knowledge, the fundamental problem of ion filling in an uncharged channel has not been tackled within this framework. In this paper, we treat the ion-filling problem in the interaction confinement regime using an exactly-solvable one-dimensional model. We find that the value of \(c_{\rm in}\) is strongly affected by the formation of Bjerrum pairs - pairs of oppositely charged ions - within the channel, which preclude the occurence of an abrupt filling transition. This is in contrast to the prediction of Palmeri and coworkers [16; 17], and to the result of conventional mean-field theory. We then build on our exact results to propose a modified mean-field model that accounts for the relevant physical ingredients, and, particularly, for the presence of ion pairs. The paper is organized as follows. In Section II, we present the one-dimensional model and its solution within a path-integral formalism. The reader interested only in the physical outcomes may skip directly to Section III, where we discuss the model's prediction for the ion concentration within the channel, compare it to the mean-field solution, and interpret it in terms of tightly bound Bjerrum pairs. In Section IV, we establish a modified mean-field theory, based on the notion of _phantom pairs_, that reproduces our exact solution. The mean-field theory allows us to determine the number of unpaired ions and produces experimentally relevant predictions for a nanochannel's ionic conductance. Section V establishes our conclusions. ## II 1D Coulomb gas model ### Confined interaction We consider a cylindrical channel of radius \(R\) and length \(L\), connected to macroscopic reservoirs (Fig. 1**A**). 
We first assume for simplicity that the channel is filled with water that has isotropic dielectric permittivity \(\epsilon_{\rm w}=80\), and that it is embedded in an insulating medium with much lower permittivity \(\epsilon_{\rm m}\) (for a lipid membrane [7], \(\epsilon_{m}\sim 2\)). The effective Coulomb interaction \(V(x)\) between two monovalent ions separated by a distance \(x\) on the channel axis can be computed exactly by solving Poisson's equation [12; 13; 8]. A simple approximate expression can be obtained for \(x\sim R\) (ref. [3]): \[V(x)\approx\frac{e^{2}\alpha}{2\pi\epsilon_{0}\epsilon_{\rm w}R}e^{-|x|/( \alpha R)}, \tag{2}\] where \(\alpha\) is a numerical coefficient that depends on the ratio \(\epsilon_{\rm w}/\epsilon_{\rm m}\) (\(\alpha=6.3\) for \(\epsilon_{\rm w}/\epsilon_{\rm m}=40\)). The reinforcement of electrostatic interactions compared to the usual \(e^{2}/4\pi\epsilon_{0}\epsilon_{\rm w}r\) Coulomb interaction that ions experience in bulk water can be interpreted in terms of images charges within the channel walls (Fig. 1**B**). Two confined ions interact not only with each other, but also with their respective image charges. We introduce the parameters \(\xi\equiv\alpha R\) and \(x_{T}\equiv 2\pi\epsilon_{0}\epsilon_{\rm w}R^{2}k_{\rm B}T/e^{2}\): both have the dimension of a length. Figure 1: **Ion filling in the interaction confinement regime.****A**. Schematic of the ion filling problem: a cylindrical nanochannel (radius \(R\sim 1\,\)nm) is connected to macroscopic reservoirs of aqueous electrolyte. The salt concentration inside the channel, \(c_{\rm in}\), may differ from that in the reservoirs, \(c_{\rm out}\). **B**. Physics of interaction confinement. When a charged species enters a nanochannel, the dielectric contrast between water (\(\epsilon_{\rm w}\sim 80\)) and walls (\(\epsilon_{\rm m}\sim 2\)) constraints the electric field lines to remain within the channel. This process can be interpreted in terms of image charges inside the channel walls, and results in an electrostatic self-energy barrier for ions to enter the channel, and reinforced interactions between ions. With these notations, \[V(x)=k_{\rm B}T\frac{\xi}{x_{T}}e^{-|x|/\xi}. \tag{3}\] The effects of ion valence and of anisotropic dielectric response of confined water can be taken into account by adjusting \(\xi\) and \(x_{T}\)[13]. Formally, the expression in Eq. (2) is valid for any channel radius. Yet, it is only physically relevant if at \(x\sim R\) the interaction is significant compared to \(k_{\rm B}T\), which restricts in practice the applicability of Eq. (2) to \(R\lesssim 2\) nm. In such extreme 1D confinement, we may neglect the ions' degrees of freedom perpendicular to the channel axis and assume that they are constrained to move in one dimension. The partition function of such a 1D electrolyte may be computed exactly, as detailed in the next section. ### Path integral formalism Here, we detail the analytical solution for the partition function of a 1D Coulomb gas-like system that was first introduced in ref. [13]. We set \(k_{\rm B}T=1\) until the end of Sec. II. We start from a lattice model, in order to rigorously establish a path integral description in the continuum limit. Our computation is inspired by the original solution of the 1D Coulomb gas model by Lenard and Edwards [25], and subsequent studies by Demery, Dean and coworkers [19; 21; 26; 27], as well as Shklovskii and coworkers [22; 23]. 
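Before setting up the lattice computation, the magnitude of the confined interaction of Eqs. (2)-(3) can be checked numerically. The sketch below is our own illustration, using only quantities quoted above (\(\epsilon_{\rm w}=80\), \(\alpha=6.3\), \(R=1\) nm); it is not part of the original material.

```python
import numpy as np

# Physical constants (SI) and parameters quoted in the text
e, eps0, kB, T = 1.602176634e-19, 8.8541878128e-12, 1.380649e-23, 300.0
eps_w, alpha, R = 80.0, 6.3, 1e-9     # water permittivity, coefficient for eps_w/eps_m = 40, 1 nm radius

xi = alpha * R                                            # interaction range xi = alpha * R
x_T = 2 * np.pi * eps0 * eps_w * R**2 * kB * T / e**2     # thermal length x_T

def V_confined(x):
    """Confined ion-ion interaction of Eq. (3), in units of k_B T."""
    return (xi / x_T) * np.exp(-np.abs(x) / xi)

def V_bulk(r):
    """Bare Coulomb interaction in bulk water, in units of k_B T."""
    return e**2 / (4 * np.pi * eps0 * eps_w * r) / (kB * T)

print(f"xi = {xi * 1e9:.1f} nm, x_T = {x_T * 1e9:.2f} nm")
print(f"V_confined(R) = {V_confined(R):.1f} kT  vs  V_bulk(R) = {V_bulk(R):.1f} kT")
```

With these numbers, two ions one radius apart interact roughly ten times more strongly inside the channel than they would in bulk water, which is the reinforcement discussed above.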
We consider a one-dimensional lattice with sites \(1,\ldots,M\) as a model for the nanochannel of radius \(R\) and length \(L\). Each lattice site \(i\) carries a spin \(S_{i}\), which takes the values \(\{0,1,-1\}\), corresponding respectively to no ion, a positive ion, or a negative ion occupying the site. We model the surface charge distribution as an extra fixed charge \(q_{i}\) added at each lattice site. The spins interact with the Hamiltonian \[\mathcal{H}(\{S_{i}\})=\frac{\xi}{2x_{T}}\sum_{i,j=1}^{M}(S_{i}+q_{i})(S_{j}+ q_{j})e^{-|i-j|/\xi}\equiv\frac{1}{2x_{T}}(S+q)^{T}C(S+q). \tag{4}\] The system is in contact with a particle reservoir at concentration \(c_{\rm out}\). Here the parameters \(\xi\) and \(x_{T}\) are dimensionless, expressed in number of lattice sites. The grand partition function is given by \[\Xi=\sum_{S_{1},\ldots,S_{M}}z^{\sum_{i}|S_{i}|}e^{-\frac{1}{2x_{T}}(S+q)^{T} C(S+q)}, \tag{5}\] with \(z=c_{\rm out}\pi R^{2}L/M\) the fugacity. The matrix \(C\) can be analytically inverted: \[C^{-1}=\frac{1}{2\xi\sinh(1/\xi)}\cdot\left(\begin{array}{cccccccc}e^{1/\xi }&-1&0&0&\ldots&0&0\\ -1&2\cosh(1/\xi)&-1&0&\ldots&0&0\\ \vdots&\ddots&\ddots&\ddots&&\vdots&\vdots\\ \vdots&&\ddots&\ddots&\ddots&&\vdots\\ \vdots&&&\ddots&\ddots&\ddots&&\vdots\\ 0&0&\ldots&0&-1&2\cosh(1/\xi)&-1\\ 0&0&\ldots&\ldots&0&-1&e^{1/\xi}\end{array}\right). \tag{6}\] Hence we can carry out a Hubbard-Stratonovich transformation, that is rewrite the partition function as a gaussian integral, introducing the integration variable \(\varphi\): \[\Xi=\sqrt{\frac{x_{T}^{M}}{(2\pi)^{M}\mathrm{det}(C)}}\cdot\sum_{S_{1},\ldots, S_{M}}z^{\sum_{i}|S_{i}|}\int\mathrm{d}\varphi e^{-\frac{x_{T}}{2}\varphi^{T} C^{-1}\varphi+i(S+q)^{T}\varphi}, \tag{7}\] with \(\mathrm{det}(C)=\frac{e^{1/\xi}}{2\sinh(1/\xi)}\cdot\left[\xi(1-e^{-2/\xi}) \right]^{M}\). After performing the sum over the spins, which is now decoupled, we obtain \[\Xi =\sqrt{\frac{x_{T}^{M}}{(2\pi)^{M}\mathrm{det}(C)}}\cdot\int \mathrm{d}\varphi_{1}\ldots\mathrm{d}\varphi_{M}\prod_{j=1}^{M}(1+2z\cos \varphi_{j})\prod_{j=1}^{M}e^{iq_{j}\varphi_{j}}\ldots \tag{8}\] \[\ldots\exp\left(-\frac{x_{T}}{4\xi\sinh(1/\xi)}\left[\sum_{j=1}^{ M-1}(\varphi_{j+1}-\varphi_{j})^{2}+2(\cosh(1/\xi)-1)\sum_{j=2}^{M-1}\varphi_{j}^{2}+(e ^{1/\xi}-1)(\varphi_{1}^{2}+\varphi_{M}^{2})\right]\right).\] We now take a continuum limit of the lattice model. We call \(a\) the physical lattice spacing and let \(\tilde{\xi}=a\xi\), \(\tilde{x}_{T}=ax_{T}\) and \(\tilde{z}=Mz/L\). We then let \(a\to 0\) and \(M\to\infty\) while keeping the physical length of the system \(L=aM\) constant. We then drop the tilde sign to lighten the notation and obtain \[\Xi=\int\mathrm{d}\varphi(0)e^{-x_{T}\varphi(0)^{2}/4\xi}\int[\mathrm{d} \varphi]e^{-S[\varphi]}\int\mathrm{d}\varphi(L)e^{-x_{T}\varphi(L)^{2}/4\xi} \tag{9}\] with \[S[\varphi]=\int_{0}^{L}\mathrm{d}x\left[\frac{x_{T}}{4}\left(\frac{\mathrm{d} \varphi}{\mathrm{d}x}\right)^{2}+\frac{x_{T}}{4\xi^{2}}\varphi(x)^{2}-iq(x) \varphi(x)-2z\cos\varphi(x)\right]\equiv\int_{0}^{L}\mathcal{L}(\varphi,\dot{ \varphi}). \tag{10}\] \(q(x)\) is the one-dimensional density corresponding to the surface charge, and \(z\equiv\pi R^{2}c_{\mathrm{out}}\). At this point \(\xi\) and \(x_{T}\) have the dimension of length. The path integral measure is defined as \[[\mathrm{d}\varphi]=\lim_{\begin{subarray}{c}M\to 0\\ L=aM\end{subarray}}\left[\prod_{j=1}^{M}\sqrt{\frac{x_{T}}{4\pi a}}\mathrm{d} \varphi_{j}\right]. 
\tag{11}\] We now define the propagator \(P(\varphi,x|\varphi_{0},0)\), or simply \(P(\varphi,x)\), as \[P(\varphi,x)=\int\mathrm{d}\varphi(x)\delta(\varphi(x)-\varphi)\int[\mathrm{d} \varphi]e^{-\int_{0}^{x}\mathcal{L}(\varphi,\dot{\varphi})}\int\mathrm{d} \varphi(0)\delta(\varphi(0)-\varphi_{0}). \tag{12}\] Considering an infinitesimal displacement \(\Delta x\), \[\begin{split} P(\varphi,x)=\sqrt{\frac{x_{T}}{4\pi\Delta x}}\int \mathrm{d}(\Delta\varphi)& P(\varphi-\Delta\varphi,x-\Delta x) \ldots\\ &\ldots\exp\left(-\int_{x-\Delta x}^{x}\mathrm{d}x^{\prime}\left[ \frac{x_{T}}{4}\left(\frac{\Delta\varphi}{\Delta x}\right)^{2}+\frac{x_{T}}{4 \xi^{2}}\varphi^{2}-iq(x)\varphi-2z\cos\varphi\right]\right).\end{split} \tag{13}\] Expanding the propagator as \(P(\varphi-\Delta\varphi,x-\Delta x)=P(\varphi,x)-\Delta x\partial P/\partial x -\Delta\varphi\partial P/\partial\varphi+(1/2)(\Delta\varphi^{2})\partial^{2 }P/\partial\varphi^{2}\), and carrying out the gaussian integrals, we obtain \[\begin{split} P(\varphi,x)=&\left(P(\varphi,x)- \Delta x\frac{\partial P}{\partial x}+O(\Delta x^{2})\right)\left(1-\Delta x \left[\frac{x_{T}}{4\xi^{2}}\varphi^{2}-iq(x)\varphi-2z\cos\varphi\right]+O( \Delta x^{2})\right)\\ &+\frac{\Delta x}{x_{T}}\frac{\partial^{2}P}{\partial x^{2}}(1+O( \Delta x)).\end{split} \tag{14}\] \(P(\varphi,x)\) thus solves the partial differential equation \[\frac{\partial P}{\partial x}=\frac{1}{x_{T}}\frac{\partial^{2}P}{\partial \varphi^{2}}+\left(iq\varphi-\frac{x_{T}}{4\xi^{2}}\varphi^{2}+2z\cos\varphi \right)P, \tag{15}\] with initial condition \(P(\varphi,0)=\delta(\varphi-\varphi_{0})\), which is the equivalent of a Schrodinger equation for the path integral representation (9). The partition function can thus be computed as \[\Xi=\int\mathrm{d}\varphi(L)e^{-x_{T}\varphi^{2}/4\xi}P(\varphi,L|f_{0}), \tag{16}\] where \(P(\varphi,L|f_{0})\) is the solution of (15) with initial condition \(P(\varphi,0)=f_{0}(\varphi)\equiv e^{-x_{T}\varphi^{2}/4\xi}\). ### Transfer operator We introduce the Fourier transform of \(P\) with respect to \(\varphi\): \[\tilde{P}(k,x)=\frac{1}{\sqrt{2\pi}}\int\mathrm{d}\varphi e^{-ik\varphi}P( \varphi,x). \tag{17}\] Then \(\tilde{P}(k,x)\) satisfies \[\frac{\partial\tilde{P}}{\partial x}=-\frac{k^{2}}{x_{T}}\tilde{P}-q\frac{ \partial\tilde{P}}{\partial k}+\frac{x_{T}}{4\xi^{2}}\frac{\partial^{2}\tilde{ P}}{\partial k^{2}}+z\left[\tilde{P}(k+1,x)+\tilde{P}(k-1,x)\right]. \tag{18}\] From now on, we restrict ourselves to an uncharged channel (\(q=0\)). We then define the operator \(\mathcal{T}\) such that \[[\mathcal{T}(\tilde{P})](k)=-\frac{k^{2}}{x_{T}}\tilde{P}+\frac{x_{T}}{4\xi^{ 2}}\frac{\partial^{2}\tilde{P}}{\partial k^{2}}+z\left[\tilde{P}(k+1,x)+ \tilde{P}(k-1,x)\right], \tag{19}\] which plays the role of a functional transfer matrix. Recalling eq. (16), the partition function then reads \[\Xi=\langle f_{0}|e^{L\mathcal{T}}|f_{0}\rangle \tag{20}\] with \(f_{0}(k)=e^{-\xi k^{2}/x_{T}}\) and \(\langle f(k)|g(k)\rangle\equiv\int\mathrm{d}kf^{*}(k)g(k)\). Now, in the limit \(L\to\infty\), we may consider the largest eigenvalue \(\lambda\) of the operator \(\mathcal{T}\), and the associated eigenfunction \(\chi\): \[[\mathcal{T}(\chi)](k)=\lambda\chi(k). \tag{21}\] Then, up to an exponentially small correction, \[\Xi=|\langle f_{0}|\chi\rangle|^{2}\langle\chi|\chi\rangle e^{\lambda L}. 
\tag{22}\] ### Ion concentration Our aim is to compute the salt concentration \(c_{\mathrm{in}}\) in the nanoscale channel given a salt concentration \(c_{\mathrm{out}}\) in the reservoir. At the level of the lattice model, the probability to find, say, a positive ion at position \(k\), can be computed by replacing a factor \((1+2z\cos\varphi_{k})\) by \(ze^{i\varphi_{k}}\) in Eq. (8). In the continuum limit, we obtain the positive (negative) ion linear density at position \(x\) by inserting the operator \(ze^{i\varphi}\) (\(ze^{-i\varphi}\)) at position \(x\): \[\pi R^{2}\langle c_{\mathrm{in}}^{\pm}(x)\rangle=\frac{1}{\Xi}\int\mathrm{d} \varphi(0)\mathrm{d}\varphi(x)\mathrm{d}\varphi(L)e^{-x_{T}\varphi(0)^{2}/4 \xi}P(\varphi(x),x|\varphi(0),0)ze^{\pm i\varphi(x)}P(\varphi(L),L|\varphi(x ),x)e^{-x_{T}\varphi(L)^{2}/4\xi}, \tag{23}\] Upon Fourier-transformation, the insertion of \(e^{i\varphi}\) amounts to a shift by unity. Introducing the operator, \[S_{Q}:f\mapsto(g:k\mapsto f(k-Q)), \tag{24}\] the concentrations are given by \[\langle c_{\mathrm{in}}^{\pm}(x)\rangle=\frac{z}{\pi R^{2}}\frac{\langle f_{ 0}|e^{x\mathcal{T}}S_{\pm 1}e^{(L-x)\mathcal{T}}|f_{0}\rangle}{\Xi}=c_{ \mathrm{out}}\frac{\langle f_{0}|e^{x\mathcal{T}}S_{\pm 1}e^{(L-x)\mathcal{T}}|f_{0} \rangle}{\Xi}, \tag{25}\] since \(z=c_{\mathrm{out}}\pi R^{2}\). In the thermodynamic limit, and using Eq. (22) for the partition function, we obtain \[\langle c_{\mathrm{in}}^{\pm}\rangle=c_{\mathrm{out}}\frac{\langle\chi(k)| \chi(k\mp 1)\rangle}{\langle\chi(k)|\chi(k)\rangle}. \tag{26}\] Eq. (26) is the main result of our exact computation. In practice, the function \(\chi(k)\) is determined numerically, by finite-difference integration of Eq. (18). ## III Physics of ion filling ### Debye-Huckel solution We now go back to the ion filling problem (Fig. 1A) and present first a one-dimensional mean-field solution. Typically, the mean-field solution of an electrolyte problem is obtained by solving the Poisson-Boltzmann equation [28; 29]. For the conventional Poisson-Boltzmann equation to apply, we would need to consider the full three-dimensional geometry of our problem, and the effective interaction of Eq. (3) would be introduced implicitly through the boundary conditions at the channel walls [15]. In order to obtain a mean-field solution directly in the 1D geometry, we need to introduce a modified Poisson's equation for the electrostatic potential \(\Phi\) whose Green's function coincides with Eq. (3): \[\left(\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}-\frac{1}{\xi^{2}}\right)\phi=-2\pi R ^{2}\frac{c_{+}-c_{-}}{x_{T}}, \tag{27}\] with \(\phi\equiv e\Phi/k_{\mathrm{B}}T\) the dimensionless potential. Imposing that the ions follow a Boltzmann distribution (\(c_{\pm}=c_{\mathrm{in}}e^{\mp\phi}\), where \(c_{\mathrm{in}}\) is understood as the average concentration inside the channel), we obtain the analogue of the Poisson-Boltzmann equation in our 1D geometry: \[\left(\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}-\frac{1}{\xi^{2}}\right)\phi=2\pi R ^{2}\frac{c_{\mathrm{in}}}{x_{T}}\mathrm{sinh}\,\phi. \tag{28}\] In order to proceed analytically, we make a Debye-Huckel-type approximation and linearize Eq. (28) with respect to \(\phi\). Then, the potential around an ion placed in the channel at \(x=0\) is given by \[\phi(x)=\frac{\xi_{\mathrm{eff}}}{x_{T}}e^{-|x|/\xi_{\mathrm{eff}}}, \tag{29}\] with \[\xi_{\mathrm{eff}}^{2}=\frac{\xi^{2}}{1+4\pi R^{2}c_{\mathrm{in}}\xi^{2}/x_{ T}}. 
\tag{30}\] The chemical potential inside the channel is the sum of an ideal gas entropic part and of an excess part due to interactions: \[\mu_{\mathrm{in}}=\mu_{\mathrm{ent}}+\mu_{\mathrm{ex}}, \tag{31}\] with \[\mu_{\mathrm{ent}}=k_{\mathrm{B}}T\log c_{\mathrm{out}}\Lambda^{3}, \tag{32}\] \(\Lambda\) being the De Broglie thermal wavelength of the ions. \(\mu_{\mathrm{ex}}\) can be obtained via a Debye charging process [30]: \[\frac{\mu_{\mathrm{ex}}}{k_{\mathrm{B}}T}=\int_{0}^{1}\phi_{\lambda}(0) \mathrm{d}\lambda,\ \phi_{\lambda}(0)=\frac{\lambda\xi/x_{T}}{\sqrt{1+4\lambda\pi R^{2}c_{ \mathrm{in}}\xi^{2}/x_{T}}}. \tag{33}\] We determine \(c_{\mathrm{in}}\) by imposing equality of the chemical potentials between the channel and the reservoir: \[\mu_{\mathrm{out}}=k_{\mathrm{B}}T\log c_{\mathrm{out}}\Lambda^{3}=\mu_{ \mathrm{in}}, \tag{34}\] which yields \[c_{\mathrm{in}}=c_{\mathrm{out}}e^{-\mu_{\mathrm{ex}}/k_{\mathrm{B}}T}. \tag{35}\] Evaluating analytically the integral in Eq. (33), we obtain an implicit equation for \(c_{\mathrm{in}}\). With the notation \(\hat{c}_{\mathrm{in}}\equiv\pi R^{2}c_{\mathrm{in}}\), \[\begin{split} c_{\mathrm{in}}=c_{\mathrm{out}}\exp& \left(-\frac{\xi}{2x_{T}}\times\frac{x_{T}^{2}}{6\xi^{2}\hat{c}_{ \mathrm{in}}^{2}\xi^{2}}\left[1-\frac{3}{2}(1+4\hat{c}_{\mathrm{in}}\xi^{2}/x_ {T})^{1/2}\right.\right.\\ &\left.\left.+\frac{1}{2}(1+4\hat{c}_{\mathrm{in}}\xi^{2}/x_{T}) ^{3/2}\right]\right).\end{split} \tag{36}\] Figure 2: **Comparing mean-field approximations with the exact Coulomb gas solution.****A**. Schematic description of the mean-field approaches. The chemical potential of confined ions is determined by solving the (linear or nonlinear) Poisson-Boltzmann equation around a given ion, interacting with an oppositely charged Debye cloud. **B**. Dependence of the channel salt concentration \(c_{\mathrm{in}}\) on the reservoir salt concentration \(c_{\mathrm{out}}\), in a weakly-interacting case (\(R=1\,\mathrm{nm}\), \(\xi=7\,\mathrm{nm}\), \(x_{T}=7\,\mathrm{nm}\), \(E_{\mathrm{s}}=0.5\,k_{\mathrm{B}}T\)). We plot four different predictions for the ratio \(c_{\mathrm{in}}/c_{\mathrm{out}}\): the exact field-theoretical solution (Eq. (26), blue circles), its low concentration expansion (Eq. (47), black line), the mean-field predictions from solving the full Poisson-Boltzmann equation (Eq. (40), orange curve) or from its Debye-Hückel linearization (Eq. (36), yellow line). The two mean-field predictions are indistinguishable. In all cases, the naive estimate \(c_{\mathrm{in}}=c_{\mathrm{out}}\) is recovered for high enough concentrations. In the dilute limit, the concentration inside the channel is well approximated by the Arrhenius scaling \(c_{\mathrm{in}}=c_{\mathrm{out}}e^{-E_{\mathrm{s}}/k_{\mathrm{B}}T}\). **C**. Dependence of the channel salt concentration \(c_{\mathrm{in}}\) on the reservoir salt concentration \(c_{\mathrm{out}}\), in a strongly-interacting case (\(R=1\,\mathrm{nm}\), \(\xi=7\,\mathrm{nm}\), \(x_{T}=0.6\,\mathrm{nm}\), \(E_{\mathrm{s}}=6\,k_{\mathrm{B}}T\)). The color code is the same as in **B**. Here, the mean-field predictions strongly deviate from the exact solution, with the Debye-Hückel model predicting an abrupt filling transition. This discrepancy is due to the formation of Bjerrum pairs at intermediate concentrations, as evidenced by the scaling \(c_{\mathrm{in}}\propto c_{\mathrm{out}}^{2}\) in the exact solution. In Fig. 
2**B** and **C**, we plot the ratio \(c_{\rm in}/c_{\rm out}\) as a function of \(c_{\rm out}\), as obtained by numerically solving Eq. (36). We fix \(\xi=7\) nm (which corresponds to a channel with \(R\approx 1\) nm and strong dielectric contrast), and vary \(x_{T}\) to set the ionic interaction strength. The interaction strength may be quantified through the self-energy barrier, \(E_{\rm s}=k_{\rm B}T\times\xi/(2x_{T})\). The limiting behavior of \(c_{\rm in}/c_{\rm out}\) may be understood directly from Eq. (36). When \(c_{\rm in}\) is small, Eq. (36) reduces to the Arrhenius scaling \(c_{\rm in}=c_{\rm out}e^{-E_{\rm s}/k_{\rm B}T}\): this results typically holds for biological ion channels which may contain either 0 or 1 ion at any given time, and the effect of inter-ionic interactions is negligible. When \(c_{\rm in}\) is large, we recover \(c_{\rm in}=c_{\rm out}\). Indeed, the excess term in the chemical potential vanishes at high concentrations, which is then dominated by the entropic term. The fact that \(\mu_{\rm ex}\to 0\) as \(c_{\rm in}\to\infty\) is non-trivial: it can be seen, physically, as resulting from the Coulomb potential of each ion being perfectly screened by the other ions. At small values of \(E_{\rm s}\), Eq. (36) has a single solution for all values of \(c_{\rm out}\), which interpolates smoothly between the two limiting regimes. However, for \(E_{\rm s}\gtrsim 5k_{\rm B}T\), it has three solutions in a certain range of \(c_{\rm out}\), pointing to a pseudo-first-order phase transition between a low-concentration and a high-concentration phase, similar to the one predicted by Dresner [15] and Palmeri _et al._[16]. The transition occurs at \(\hat{c}_{\rm in}\sim x_{T}/\xi^{2}\): as per Eq. (30), this corresponds to the concentration where the effect of the screening cloud on an ion's Coulomb potential becomes significant. ### Full Poisson-Boltzmann solution The physical content of the mean-field solution presented above is similar to the one of Dresner, based on a linearized Poisson-Boltzmann equation [15]. The difference in geometry, and the fact that he foregoes the use of the Debye charging process, do not seem to play a significant qualitative role. The solution of Palmeri _et al._[16] takes ionic correlations into account to some extent, yet it still involves a Debye-Huckel-type linear equation for the mean-field interaction potential between the ions. One may ask whether the same phenomenology persists if one does not linearize the Poisson-Boltzmann equation. The full Poisson-Boltzmann equation cannot be solved analytically, but supports the following integral form: \[\left(\frac{\mathrm{d}\phi}{\mathrm{d}x}\right)^{2}-\frac{1}{\xi^{2}}\phi^{2} =4\pi R^{2}\frac{c_{\rm in}}{x_{T}}\left(\cosh\phi-1\right), \tag{37}\] where we have used the fact that \(\phi\) should vanish at \(x\to\infty\). For \(x\to 0\), the solution of Eq. (37) should reduce to the unscreened potential in Eq. (3) up to an additive constant, so that \[\frac{1}{x_{T}^{2}}-\frac{1}{\xi^{2}}\phi^{2}(0)=4\pi R^{2}\frac{c_{\rm in}}{ x_{T}}\left(\cosh\phi(0)-1\right). \tag{38}\] Once again, one may express the excess chemical potential of the confined ions through a Debye charging process: \[\frac{\mu_{\rm ex}}{k_{\rm B}T} =\int_{0}^{1}\phi_{\lambda}(0)\mathrm{d}\lambda, \tag{39}\] \[\frac{\lambda^{2}}{x_{T}^{2}}-\frac{1}{\xi^{2}}\phi_{\lambda}^{2} (0) =4\pi R^{2}\frac{\lambda c_{\rm in}}{x_{T}}\left(\cosh\phi_{\lambda }(0)-1\right).\] This result is the analogue of Eq. 
(33), with \(\phi_{\lambda}(0)\) now being the solution of an implicit non-linear equation, so that \(\mu_{\rm ex}\) must be determined numerically. As before, the concentration inside the channel is then given by: \[c_{\rm in}=c_{\rm out}e^{-\mu_{\rm ex}/k_{\rm B}T}. \tag{40}\] The prediction of the full Poisson-Boltzmann equation is shown in Fig. 2**B** and **C**: we find \(c_{\rm in}\) to be a smooth function of \(c_{\rm out}\) for all values of parameters, in contrast to the linearized solution. We may not, however, unambiguously conclude that the filling transition is an artifact of linearization, since the non-linear solution still involves a mean-field approximation and is not guaranteed to yield the correct result. Interestingly, the "physically-motivated" mean-field solution in Eq. (28) differs from the mean-field limit of our exact solution. It is obtained by taking the saddle-point approximation in the path-integral expression of the partition function (Eq. (9)). The Euler-Lagrange equation for the minimizer \(\varphi(x)\) of the action \(S[\varphi]\) in Eq. (10) is, upon identifying \(\phi=-i\varphi\), \[\left(\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}-\frac{1}{\xi^{2}}\right)\phi=2 \pi R^{2}\frac{c_{\rm out}}{x_{T}}{\rm sinh}\,\phi. \tag{41}\] This is Eq. (28) with \(c_{\rm in}\) replaced with \(c_{\rm out}\), and corresponds to a first order treatment of interactions. Indeed, if the ions are non-interacting, \(c_{\rm in}=c_{\rm out}\). By solving the mean-field equation, we determine how the ions' chemical potential is affected by Debye screening, which then results in value of \(c_{\rm in}\) that is different from \(c_{\rm out}\). Within a straightforward interaction expansion procedure, one should determine the effect of screening assuming the zeroth order value for the ion concentration inside the channel, which is \(c_{\rm out}\): this corresponds to Eq. (41). Eq. (28) contains an additional self-consistency condition, as it assumes the actual value \(c_{\rm in}\) for the ion concentration, which is not known until Eq. (28) is solved. One may draw a loose condensed matter physics analogy, where Eq. (41) resembles the Born approximation for impurity scattering, while Eq. (28) is analogous to the self-consistent Born approximation. [31] ### Exact solution We now turn to the exact solution obtained in Sec. II to unambiguously solve the ion filling problem. We determine \(c_{\rm in}\) according to Eq. (26): \[\langle c_{\rm in}^{\pm}\rangle=c_{\rm out}\frac{\langle\chi(k)|\chi(k\mp 1) \rangle}{\langle\chi(k)|\chi(k)\rangle}, \tag{42}\] where \(\chi(k)\) is the highest eigenfunction of the transfer operator in Eq. (19), determined in practice by numerical integration. The exact results for \(c_{\rm in}\), with the same parameter values as for the mean-field solution, are shown in Fig. 2**B** and **C**. When interactions are weak (small values of \(E_{s}\), Fig. 2**B**), the exact and mean-field solutions are in good agreement. Notably, all solutions smoothly interpolate between the bulk scaling \(c_{\rm in}=c_{\rm out}\) at high concentration, and the Arrhenius scaling \(c_{\rm in}=c_{\rm out}e^{-E_{s}/k_{\rm B}T}\) at low concentration. Conversely, in the strongly-interacting case (large \(E_{s}\), Fig. 2**C**), the exact result yields a much larger ion concentration that the mean-field solutions for intermediate values of \(c_{\rm out}\). 
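For concreteness, here is one way to carry out this numerical step. The sketch below is our own finite-difference discretisation, not the authors' code; the grid sizes and the parameter values (taken from the strongly-interacting case of Fig. 2C) are illustrative. The transfer operator of Eq. (19) is built as a matrix in \(k\), its top eigenvector gives \(\chi(k)\), and Eq. (42) then yields \(c_{\rm in}\).

```python
import numpy as np

# Illustrative parameters (strongly-interacting case of Fig. 2C), lengths in nm
R, xi, x_T = 1.0, 7.0, 0.6
M_TO_NM3 = 0.6022   # ions per nm^3 for a 1 mol/L solution

def c_in_exact(c_out_molar, K=10.0, n=40):
    """Channel concentration from the transfer operator, Eqs. (18)-(19) and (26)/(42).
    k is discretised on [-K, K] with n points per unit of k, so that the shift
    operators S_{+1}, S_{-1} displace a function by exactly n grid points."""
    dk = 1.0 / n
    k = np.arange(-K, K + dk / 2, dk)
    N = k.size
    z = np.pi * R**2 * M_TO_NM3 * c_out_molar      # fugacity z = pi R^2 c_out, in nm^-1

    # Transfer operator of Eq. (19) as a real symmetric matrix
    lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / dk**2
    T = np.diag(-k**2 / x_T) + (x_T / (4.0 * xi**2)) * lap
    shift = np.zeros((N, N))
    rows = np.arange(N - n)
    shift[rows, rows + n] = 1.0                    # S_{+1}: f(k) -> f(k + 1)
    T += z * (shift + shift.T)                     # z [S_{+1} + S_{-1}]

    # Highest eigenvector is chi(k); Eq. (26) gives c_in from the shifted overlap
    chi = np.linalg.eigh(T)[1][:, -1]
    return c_out_molar * np.dot(chi[n:], chi[:-n]) / np.dot(chi, chi)

for c in (1e-4, 1e-2, 1e-1, 1.0):
    print(f"c_out = {c:6.4f} M   c_in/c_out = {c_in_exact(c) / c:.3g}")
```

At the lowest concentrations this reproduces the Arrhenius ratio \(e^{-E_{\rm s}/k_{\rm B}T}\) discussed above, and the ratio then rises smoothly with \(c_{\rm out}\).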
In this intermediate regime, \(c_{\rm in}\) remains a smooth function of \(c_{\rm out}\), and obeys the scaling \(c_{\rm in}\propto c_{\rm out}^{2}\). Such a scaling is the signature of the formation of tightly bound Bjerrum pairs of positive and negative ions - strongly-correlated configurations that are not taken into account by mean-field solutions. Indeed, let us assume that the channel contains an ideal gas of ion pairs at concentration \(c_{\rm in}\). We further assume that in a pair, the distance between the two ions is uniformly distributed in the interval \([-x_{T}/2,x_{T}/2]\), and the binding energy of a pair is \(k_{\rm B}T\xi/x_{T}=2E_{s}\). Then, the grand partition function reads \[\Xi =\sum_{N}(ze^{-\beta E_{s}})^{2N}\frac{1}{N!}\prod_{i=1}^{N}L\int_ {-x_{T}/2}^{x_{T}/2}\mathrm{d}x\,e^{2\beta E_{s}} \tag{43}\] \[=\sum_{N}\frac{(z^{2}Lx_{T})^{N}}{N!}=e^{z^{2}Lx_{T}}, \tag{44}\] where we recall that \(z=\pi R^{2}c_{\rm out}\) and \(\beta\equiv 1/(k_{\rm B}T)\). Using that \[\pi R^{2}c_{\rm in}=\frac{1}{L}\frac{\partial\log\Xi}{\partial(\beta\mu)}= \frac{z}{L}\frac{\partial\log\Xi}{\partial z}, \tag{45}\] we obtain \[c_{\rm in}=\frac{2z^{2}x_{T}}{\pi R^{2}}=2\pi R^{2}x_{T}c_{\rm out}^{2}. \tag{46}\] We recover indeed the quadratic scaling. We may check that the prefactor in Eq. (46) is the correct one by evaluating analytically the expression in Eq. (26) in the low concentration limit \(z_{T}\equiv zx_{T}\ll 1\). An analytical expansion of the function \(\chi(k)\) in powers of \(z_{T}\) was derived in ref. [13]. Substituting it into Eq. (26), we obtain \[\begin{split}\pi R^{2}c_{\rm in}&=z(e^{-\beta E_{ s}}+2z_{T}-\frac{13}{2}z_{T}^{2}e^{-\beta E_{s}}\\ &\qquad-7z_{T}^{3}+O(z_{T}^{4})+O(e^{-2\beta E_{s}})).\end{split} \tag{47}\] The first term in the expansion corresponds to \(c_{\rm in}=c_{\rm out}e^{-\beta E_{s}}\). At the lowest salt concentrations, forming Bjerrum pairs is too entropically unfavorable, and the concentration inside the channel is controlled by the self-energy barrier. However, as the salt concentration increases, there is no abrupt transition to a highly-screened concentrated phase inside the channel; instead, the channel is progressively filled by Bjerrum pairs. This corresponds to the quadratic term in the expansion, with the prefactor agreeing indeed with Eq. (46).1 The expansion in Eq. (47) reproduces quite well the low-concentration behavior of the exact solution as shown in Fig. 2**B** and **C**. However, it fails at high concentrations, where it does not recover \(c_{\rm in}=c_{\rm out}\). Footnote 1: This justifies _a posteriori_ our choice of \([-x_{T}/2,x_{T}/2]\) as the interval in which a paired-up ion is allowed to move. Our exact analysis of the ion statistics in a nanoscale channel has revealed that Bjerrum pairs are a crucial ingredient of the filling process. We now develop a modified mean-field theory that accounts the presence of Bjerrum pairs and compare it to the exact solution. ## IV Pair-Enhanced Mean-Field Theory ### Debye-Huckel-Bjerrum theory The traditional mean-field treatment of electrolytes is incapable of taking Bjerrum pairs into account, as it naturally neglects any strong ion-ion correlations - pairing being a fundamentally discrete phenomenon. An idea proposed by Bjerrum to amend the Debye-Huckel theory was to introduce ion pairs as a separate species encapsulating all "strong" ion-ion correlations [32]. 
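The crossover between the Arrhenius-dominated and pair-dominated regimes can be read off directly from the expansion in Eq. (47). The few lines below are a sketch of that evaluation (our own, with the Fig. 2C parameter values); as noted above, the expansion is only meaningful at low concentration.

```python
import numpy as np

R, xi, x_T = 1.0, 7.0, 0.6                   # nm, Fig. 2C parameters
M_TO_NM3 = 0.6022                            # ions per nm^3 for 1 mol/L
E_s = xi / (2.0 * x_T)                       # self-energy barrier in units of k_B T

def c_in_expansion(c_out_molar):
    """Low-concentration expansion of Eq. (47) for pi R^2 c_in, converted back to mol/L."""
    z = np.pi * R**2 * M_TO_NM3 * c_out_molar
    zT = z * x_T
    val = z * (np.exp(-E_s) + 2.0 * zT - 6.5 * zT**2 * np.exp(-E_s) - 7.0 * zT**3)
    return val / (np.pi * R**2 * M_TO_NM3)

for c in (1e-4, 1e-2, 1e-1):
    print(f"c_out = {c:6.4f} M   c_in/c_out = {c_in_expansion(c) / c:.3g}")
```

The first term dominates at the lowest concentrations (Arrhenius exclusion), while the \(2z_T\) term, i.e. the Bjerrum-pair contribution of Eq. (46), takes over as the concentration grows.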
More precisely, any two oppositely charged ions that are closer than some minimum distance can be considered as a single neutral entity - a Bjerrum pair. The remaining "free" ions should then only experience weak interactions with each other, and can be treated at the mean-field level. Importantly, this last remark justifies the Debye-Huckel linearization, as all non-linear effects are assumed to be hidden in the definition of ion pairs. As before, we consider that pairs behave like particles of an ideal gas, and that their maximum extension is given by \(x_{T}\). Defining \(c_{\rm in}^{\rm p}\) the concentration pairs inside the channel, the chemical potential of pairs is given by: \[\mu_{\rm in}^{\rm p}=k_{\rm B}T\log\frac{c_{\rm in}^{\rm p}\Lambda^{6}}{2\pi x _{T}R^{2}}, \tag{48}\] where the geometrical factor inside the logarithm accounts for the internal degrees of freedom of a pair. The chemical potential only has an entropic term, because the binding energy of the pair exactly compensates the self-energy of the two separate ions. The chemical equilibrium between free ions and pairs inside the channel can be written as: \[\mu_{\text{in}}^{+}+\mu_{\text{in}}^{-}=2\mu_{\text{in}}=\mu_{\text{in}}^{\text{ p}}, \tag{49}\] where \(\mu_{\text{in}}^{+}\) and \(\mu_{\text{in}}^{-}\) are the chemical potentials of cations and anions, respectively. We then obtain, using the Debye-Huckel solution for \(\mu_{\text{in}}\) (equations (31) to (33)): \[c_{\text{in}}^{\text{p}}=2\pi R^{2}x_{T}c_{\text{out}}^{2}, \tag{50}\] which is the result obtained in the previous section. The average concentration in free ions \(c_{\text{in}}^{\text{f}}\) is not modified compared to the Debye-Huckel solution, and is therefore the solution of the self-consistent Eq. (36). One can then compute the total concentration inside the channel as \(c_{\text{in}}=c_{\text{in}}^{\text{f}}+c_{\text{in}}^{\text{p}}\), or, explicitly \[c_{\text{in}}=c_{\text{out}}e^{-\mu_{\text{ex}}(c_{\text{in}}^{\text{f}})/k_{ \text{B}}T}+2\pi R^{2}x_{T}c_{\text{out}}^{2}. \tag{51}\] In other words, the only impact of pairs in Bjerrum's computation is to add a quadratic term \(2\pi R^{2}x_{T}c_{\text{out}}^{2}\) to the Debye-Huckel result, matching with the expansion (47) of the exact solution up order 2 in the bulk concentration. We compare the two predictions on Fig. 3**B**. The Debye-Huckel-Bjerrum solution is found to match the exact one quite well at low and intermediate concentrations. This result is, however, unphysical for \(c_{\text{out}}\gtrsim 1/\pi R^{2}x_{T}\): \(c_{\text{in}}\) is found to grow much faster than the bulk concentration. One solution would be to consider higher-order terms in the mean-field treatment through the inclusion of triplets, quadruplets, etc. of ions, and all possible interactions between these entities. Truncating the sum at any finite order, however, would not yield a solution valid in the entire range of concentrations, nor is it guaranteed to converge to the exact solution. This approach is also unsatisfactory as it would not yield a closed-form expression for \(c_{\text{in}}\) and would not allow for qualitative understanding of the underlying physics. Instead, we develop a different method that, through physics-driven arguments, prevents the divergence of \(c_{\text{in}}\) at high bulk concentrations and reproduces quantitatively the exact solution. ### Phantom pairs Eq. 
(51) overestimates the number of Bjerrum pairs in the channel because it fails to account for the presence of Bjerrum pairs in the reservoir. The electrolyte in the reservoir is treated as an ideal gas : the ions are non-interacting and they cannot form actual tightly-bound pairs. Nevertheless, we have defined any two oppositely charged ions that find themselves in a cylinder of radius \(R\) and length \(x_{T}\) to be a separate chemical species. Such configurations may arise in the reservoir simply out of statistical chance: we dub them _phantom pairs_. For our Figure 3: **Pair-enhanced mean-field theory.****A**. Treatment of ion pairing in mean-field approaches. Top panel: Mean-field theories inevitably underestimate ion-ion correlations. To circumvent this problem, two ions that are distant by less than \(x_{T}\) are considered to form an ion pair, which is treated as a separate chemical species. Bottom panel: schematic representation of ion distribution around a fixed positive ion. The distribution is very peaked close to the central ion, due to the formation of an ion pair, and then relaxes smoothly to the mean value \(c_{\text{in}}\). **B**. Evolution of channel concentration \(c_{\text{in}}\) as function of reservoir concentration \(c_{\text{out}}\), in a strongly-interacting case (\(R=1\,\text{nm}\), \(\xi=7\,\text{nm}\), \(x_{T}=0.6\,\text{nm}\), \(E_{\text{s}}=6\,k_{\text{B}}T\)). We plot the ratio \(c_{\text{in}}/c_{\text{out}}\) obtained from three different models taking Bjerrum pairs into account: the exact field-theoretical solution (Eq. (26), blue circles), the Debye-Hückel-Bjerrum mean-field theory (Eq. (51), red line) and our modified mean-field theory based on the notion of phantom pairs (Eq. (55), orange line), which reproduces the exact solution quantitatively for all values of parameters. At high concentration, the Debye-Hückel-Bjerrum prediction fails due to the uncontrolled proliferation of Bjerrum pairs. **C**. Formation of phantom pairs inside the nanochannel. At low concentration (top panel), pairs are well-separated and ions forming a pair are tightly bound to each other. At high concentration (bottom panel), ionic interactions are weakened as a result of Debye screening, and two quasi-non-interacting ions may find themselves within a distance \(x_{T}\) of each other without actually binding: this is a phantom pair. mean-field theory to be consistent, these phantom pairs need to be taken into account. Let \(c_{\rm out}^{\rm p}\) be the concentration of phantom pairs in the reservoir. The chemical equilibrium between phantom pairs and free ions imposes \[c_{\rm out}^{\rm p}=2\pi R^{2}x_{T}(c_{\rm out}^{\rm f})^{2}. \tag{52}\] In addition, one has \(c_{\rm out}^{\rm f}+c_{\rm out}^{\rm p}=c_{\rm out}\), since an ion must either be free or part of a pair. Imposing this condition yields: \[c_{\rm out}^{\rm f}=\frac{\sqrt{1+8\pi c_{\rm out}x_{T}R^{2}}-1}{4x_{T}\pi R ^{2}}. \tag{53}\] We use this result to control the proliferation of pairs in the channel: we now equilibrate the free ions inside the nanochannel with only the free ions in the reservoir: \[c_{\rm in}^{\rm f}=c_{\rm out}^{\rm f}e^{-\mu_{\rm ex}(c_{\rm in}^{\rm f})/k_{ \rm B}T}, \tag{54}\] which corresponds to Eq. (35) with \(c_{\rm out}\) replaced by \(c_{\rm out}^{f}\). Eq. (54) is again a self-consistent equation, this time on the concentration of free ions \(c_{\rm in}^{\rm f}\), that must be solved numerically. 
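A minimal sketch of that numerical solution is given below (our own implementation, with illustrative parameters from Fig. 3B, not the authors' code): the excess chemical potential is evaluated at the Debye-Hückel level through the charging integral of Eq. (33), the free-ion reservoir concentration follows Eq. (53), and Eq. (54) is solved by damped fixed-point iteration.

```python
import numpy as np

R, xi, x_T = 1.0, 7.0, 0.6                   # nm, strongly-interacting case of Fig. 3B
M_TO_NM3 = 0.6022                            # ions per nm^3 for 1 mol/L

def mu_ex_DH(c_in_f, n_lambda=201):
    """Excess chemical potential (in k_B T) from the Debye charging process, Eq. (33).
    c_in_f is the free-ion concentration inside the channel, in nm^-3."""
    lam = np.linspace(0.0, 1.0, n_lambda)
    phi0 = (lam * xi / x_T) / np.sqrt(1.0 + 4.0 * lam * np.pi * R**2 * c_in_f * xi**2 / x_T)
    return np.trapz(phi0, lam)

def free_ion_concentrations(c_out_molar, n_iter=300):
    """Free-ion concentrations outside (Eq. 53) and inside (Eq. 54) the channel, in nm^-3.
    Eq. (54) is solved by damped fixed-point iteration from the Arrhenius guess; in a
    parameter region where it admits several solutions, proper root bracketing is safer."""
    c_out = c_out_molar * M_TO_NM3
    a = np.pi * R**2 * x_T
    c_out_f = (np.sqrt(1.0 + 8.0 * a * c_out) - 1.0) / (4.0 * a)          # Eq. (53)
    c_in_f = c_out_f * np.exp(-xi / (2.0 * x_T))                          # starting guess
    for _ in range(n_iter):
        c_in_f = 0.5 * c_in_f + 0.5 * c_out_f * np.exp(-mu_ex_DH(c_in_f))  # Eq. (54)
    return c_out_f, c_in_f

c_out_f, c_in_f = free_ion_concentrations(0.1)
print(f"c_out_f = {c_out_f / M_TO_NM3:.3g} M,  c_in_f = {c_in_f / M_TO_NM3:.3g} M")
```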
Lastly, equilibrating pairs with free ions inside the channel (or, equivalently, pairs inside with pairs outside), we obtain: \[c_{\rm in}=c_{\rm in}^{\rm f}+2\pi R^{2}x_{T}(c_{\rm out}^{\rm f})^{2}, \tag{55}\] where the second term corresponds again to Bjerrum pairs. Eqs. (53) to (55) constitute the main result of our modified mean-field theory. Note that \(\mu_{\rm ex}\) may be determined at the Debye-Huckel level (Eq. (33)), or by solving the full Poisson-Boltzmann equation (Eq. (39)). In what follows, we will only discuss the latter, as it offers greater accuracy; however, the Debye-Huckel prediction provides reasonable results even in the case of strong interactions, and yields for a convenient analytical expression for \(\mu_{\rm ex}\) as function of \(c_{\rm in}^{\rm f}\). The prediction of our phantom pair Poisson-Boltzmann model is compared to the exact solution (26) in Fig. 3**B**. The two solutions are found to be in near perfect agreement for all values of parameters, even in strong coupling limit \(E_{\rm s}\gg k_{\rm B}T\). In the next two sections, we use our modified mean-field model to predict the conductance of a nanochannel, first in the case of a neutral channel, and then in presence of a surface charge. ### Conductance One strength of our modified mean-field model is that it offers insight into the physical properties of the confined system beyond the value of the ionic concentration. In particular, the decomposition of the electrolyte into free ions and bound pairs allows us to estimate the channel's conductance. Tightly bound Bjerrum pairs are electrically neutral, so that they do not contribute to the ionic current to first order in applied electric field: it would then be straightforward to assume that the channel's conductance is proportional to the concentration of free ions. However, the reasoning needs to be more subtle, since the channel, in the same way as the reservoir, may contain non-interacting phantom pairs. Indeed, we have decomposed the confined electrolyte into tightly bound pairs, that have no ionic atmosphere, and free ions that are dressed by a Debye screening cloud. As the concentration increases, the interaction between dressed ions becomes weak, and two of them may find themselves within a distance \(x_{T}\) without actually binding. Such a phantom pair is expected to still contribute to the conductance. The concentration of phantom pairs in the channel is obtained by imposing their chemical equilibrium with the free ions treated as an ideal gas. Thus, we estimate the channel's conductance as: \[G=2\frac{e^{2}D}{k_{\rm B}T}\frac{\pi R^{2}}{L}\left(c_{\rm in}^{\rm f}+2x_{T }\pi R^{2}(c_{\rm in}^{\rm f})^{2}\right), \tag{56}\] where \(D\) is the diffusion coefficient of ions; the second term corresponds to the contribution of phantom pairs. In Fig. 4**A**, we compare this result to the Ohm's law prediction where pairs are neglected and one assumes \(c_{\rm in}=c_{\rm out}\). Ohm's law is found to greatly overestimate the conductance at low concentration. In the dilute limit, we instead recover the Arrhenius scaling, where one assumes \(c_{\rm in}=c_{\rm out}e^{-E_{\rm s}/k_{\rm B}T}\). Finally, we stress that Eq. (56) only accounts for the electrophoresis of free ions, and is therefore only valid in the limit of weak external electric fields. Stronger voltage drops will result in the breaking of ion pairs, causing a conductivity increase in a process known as the second Wien effect. This phenomenon is described in refs. 
[13; 14], and has been used to create solid-state voltage-gated nanochannels [33]. ### Effect of a surface charge Up till now, we have restricted ourselves to channels with uncharged walls. However, in most experimentally relevant situations, the channel walls bear a surface charge density \(\Sigma\), which strongly impacts nanofluidic transport. While introducing a surface charge is tedious within the exact framework, we may readily assess the effect of surface charge in the interaction confinement regime using our pair-enhanced mean-field theory. In the limit where the channel's radius is smaller than the Debye length, we assume that the presence of the surface charge amounts to a homogeneous Donnan potential drop \(V_{\rm D}\) inside the channel, which we do not need to determine explicitly. Then, the chemical potential of ions inside the channel reads: \[\mu_{\rm in}^{\pm}=\mu_{\rm ex}\pm eV_{\rm D}+k_{\rm B}T\log c_{\rm in}^{\pm} \Lambda^{3}. \tag{57}\] Note that the concentration in free anions \(c_{\text{in}}^{-}\) and cations \(c_{\text{in}}^{+}\) are now distinct, so that \(\mu_{\text{ex}}\) is defined as a function of the average free ion concentration \(c_{\text{in}}^{\text{f}}=(c_{\text{in}}^{+}+c_{\text{in}}^{-})/2\). In a channel that is sufficiently long for local electroneutrality to hold, \[c_{\text{in}}^{+}-c_{\text{in}}^{-}+2\Sigma/R=0. \tag{58}\] Imposing chemical equilibrium with the reservoir, we obtain a modified version of the Donnan result (Eq. (1)): \[\left\{\begin{aligned} & c_{\text{in}}=c_{\text{in}}^{\text{f}}+c_{ \text{in}}^{\text{p}}\\ & c_{\text{in}}^{\text{f}}=\sqrt{\left(c_{\text{out}}^{\text{f}} e^{-\beta\mu_{\text{ex}}(c_{\text{in}}^{\text{f}})}\right)^{2}+\left(\frac{2\Sigma}{R} \right)^{2}},\\ & c_{\text{in}}^{\text{p}}=2\pi R^{2}x_{T}(c_{\text{out}}^{\text{ f}})^{2},\end{aligned}\right. \tag{59}\] with \(c_{\text{out}}^{\text{f}}\) given by Eq. (53). One can again obtain the channel's conductance through Eq. (56), which we compare to the Donnan / Ohm's law result in Fig. 4**B**. Importantly, the Donnan result predicts that conductance becomes independent of concentration for \(c_{\text{out}}\sim 2\Sigma/R\) (see Eq. (1)). In practice, this result is commonly used to estimate experimentally the surface charge as \(\Sigma\sim Rc^{*}/2\), where \(c^{*}\) is the reservoir concentration for which conductance levels off. In contrast, in the interaction confinement regime, we predict that the transition occurs instead at \(c_{\text{in}}^{\text{f}}\sim 2\Sigma/R\) - corresponding to a higher reservoir concentration, due to the self-energy barrier. In this case, Donnan's prediction overestimates the surface charge by typically one order of magnitude, as shown in Fig. 4**B**. Finally, let us stress that we considered here a charge homogeneously distributed along the channel's surface. This assumption is relevant in the case of conducting wall materials, such as systems where the charge is imposed via a gating electrode connected to the channel walls. This situation, however, may be different in experimentally-available devices, where the surface charge generally consists in localized charged groups and defects on the channel walls. In this case, the physics become more involved as ions may form bound pairs with the fixed surface charges. 
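As a complement, the conductance estimate of Eq. (56) and the charged-channel balance of Eq. (59) can be sketched along the same lines, re-using `mu_ex_DH` and `free_ion_concentrations` from the sketch above (they read the module-level parameters, reset here to the values quoted for Fig. 4). This is again our own illustration; in particular, \(\Sigma\) is given as a number of elementary charges per nm², which is our own unit choice.

```python
import numpy as np

# Constants and illustrative Fig. 4 parameters: T = 300 K, D = 1e-9 m^2/s, L = 100 nm
e, kB = 1.602176634e-19, 1.380649e-23
T, D, L_nm = 300.0, 1e-9, 100.0
R, xi, x_T = 1.0, 7.0, 0.7                    # nm (Fig. 4A values)

def conductance(c_in_f_nm3):
    """Channel conductance of Eq. (56); c_in_f_nm3 is the free-ion concentration in nm^-3."""
    c_SI = c_in_f_nm3 * 1e27                              # nm^-3 -> m^-3
    geometry = np.pi * (R * 1e-9)**2 / (L_nm * 1e-9)      # pi R^2 / L
    phantom = 2.0 * (x_T * 1e-9) * np.pi * (R * 1e-9)**2 * c_SI**2   # phantom-pair term
    return 2.0 * e**2 * D / (kB * T) * geometry * (c_SI + phantom)

def free_ions_charged(c_out_f, sigma_per_nm2, n_iter=300):
    """Free-ion concentration (nm^-3) in a charged channel, second line of Eq. (59),
    solved by damped fixed-point iteration; mu_ex_DH is taken from the sketch above."""
    donnan = 2.0 * sigma_per_nm2 / R                      # 2 Sigma / R, in nm^-3
    c_in_f = c_out_f * np.exp(-xi / (2.0 * x_T))
    for _ in range(n_iter):
        target = np.sqrt((c_out_f * np.exp(-mu_ex_DH(c_in_f)))**2 + donnan**2)
        c_in_f = 0.5 * (c_in_f + target)
    return c_in_f

c_out_f, _ = free_ion_concentrations(0.1)                 # Eq. (53), from the previous sketch
c_in_f = free_ions_charged(c_out_f, sigma_per_nm2=6e-3)   # ~1e-3 C/m^2 expressed as charges/nm^2
print(f"G ~ {conductance(c_in_f):.3g} S")
```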
Some of these physics have been revealed by the exact computations of Shklovskii and coworkers [9; 22]; a technically simpler approach to these physics using our pair-enhanced mean-field theory would be possible, but extends beyond the scope of the present work. ## V Discussion and perspectives We have determined the salt concentration inside a nanometric channel connected to reservoirs filled with electrolyte. In the case of a fully 1D geometry, corresponding to a nanotube of radius \(R\sim 1\)nm, we developed an exact field-theoretical solution that allowed us to compute the channel concentration \(c_{\text{in}}\) as a function of the reservoir concentration \(c_{\text{out}}\). This solution clears up the ambiguities of pre-existing mean-field theories, and contradicts the naive expectation \(c_{\text{in}}=c_{\text{out}}\). In particular, the concentration inside the nanochannel is found to be always lower than in the bulk, as the confinement of electrostatic interactions creates an energy barrier for ions to enter the channel. Figure 4: **Channel conductance in the pair-enhanced mean-field model.****A**. Conductance of a nanochannel (\(R=1\,\text{nm}\), \(\xi=7\,\text{nm}\), \(x_{T}=0.7\,\text{nm}\), \(E_{\text{s}}=10\,k_{\text{B}}T\)) as a function of the reservoir concentration. The red line corresponds to the prediction of the phantom pair mean-field model (Eq. (56)) for \(T=300\,\text{K}\), \(D=10^{-9}\,\text{m}^{2}/\text{s}\) and \(L=100\,\text{nm}\). The Ohm’s law bulk prediction (\(c_{\text{in}}=c_{\text{out}}\), blue line) and the Arrhenius model (\(c_{\text{in}}=c_{\text{out}}e^{-E_{\text{s}}/k_{\text{B}}T}\), yellow line) are also represented for comparison. **B**. Conductance of a nanochannel with a weak surface charge \(\Sigma=10^{-3}\,\text{C}/\text{m}^{2}\). We represented the predictions of the conventional Donnan equilibrium (Eq. (1), blue line) and of the phantom pair mean-field theory (equations (56) and (59), red line). Because interaction confinement results in a lower ion concentration in the channel, the usual formula \(\Sigma\sim Rc^{*}/2\), where \(c^{*}\) is the reservoir concentration for which conductance levels off, overestimates the surface charge by one order of magnitude, as indicated on the plot. Yet, we found that \(c_{\rm in}\) is in fact higher than the prediction of the mean-field Debye-Huckel theory, as ion pairing counterbalances to some extent the energy cost of interaction confinement. Such strong ion-ion correlations cannot be directly accounted for in a mean-field theory, and the filling transition that emerges in Debye-Huckel theory appears to be an artefact of linearization. To overcome this issue, one can add Bjerrum pairs as a separate chemical species within the Debye-Huckel model. Carefully accounting for the statistical formation of unbound _phantom pairs_, we obtain a modified mean-field theory that reproduces the result of the exact computation with nearly-perfect accuracy, and that can be extended to account for a non-zero surface charge on the channel wall. Despite the concurring results, the two original formalisms developed in this work serve distinct purposes. The field-theoretical solution plays the role of a touchstone model, owing to its exact treatment of all many-body interactions. Modeling electrolytes is a notoriously hard problem in statistical physics, and simplified models often lack a reference solution for benchmarking their approximations.
This difficulty is lifted in the 1D geometry: thanks to the existence of the exact solution, we have been able to build a quantitatively precise mean-field model, adding step-by-step the qualitative ingredients necessary to reproduce the exact result. Moreover, the field theory formalism gives access to the entire statistics of the system, including finite-size effects which elude any mean-field treatment. The latter are expected to be relevant in many experimental situations, as a substantial amount of current works focuses on short pores, where the length of the channel is comparable to its radius. For instance, one can expect shorter channels to deviate from electroneutrality [2] - something entirely impossible in the limit of infinitely long channels. On the other hand, our modified mean-field formalism has the advantage of mathematical simplicity, allowing for convenient physical interpretations. The simple distinction between free ions and Bjerrum pairs can be used to straightforwardly estimate the channel's conductance. The influence of ion-ion correlations on conductivity is of particular importance as conductance measurements underpin many nanofluidic experiments. In contrast, the exact solution does not provide any such insight on transport properties, as it is limited to thermal equilibrium. Furthermore, the mean-field model may easily be adapted to other geometries, whereas an exact treatment is only possible in the strictly 1D case. Extensions of our results to 2D nanochannels would be of significant interest. In particular, 2D nanochannels can be made out of various materials with different electronic properties, which directly impact the confined ionic interactions [6]. Therefore, 2D nanochannels could serve as a platform for exploring the impact of wall metallicity on the ion filling problem. Both our exact and mean-field solutions can be expected to fail at very high concentrations. Indeed, our work relies on a simplified picture of electrolytes, where all steric effects are discarded. We considered point-like ions with no short-distance repulsion; therefore, no effect like saturation or layering can be accounted for. Similarly, we neglected any interaction with the solvent - for example, we did not consider the decrement in relative permittivity at high salt concentrations [34]. However, since all electrostatic interactions are screened in the limit of high concentrations, such considerations should not impact the conclusions of the present work: particularly, we would still expect that \(c_{\rm in}=c_{\rm out}\) at high concentration. Lastly, let us briefly recall our results for the ion filling problem. In channels larger than a few nanometers, the conventional mean-field picture is valid, so that in absence of any surface charge the salt concentration inside the channel equals that of the reservoirs: \(c_{\rm in}=c_{\rm out}\). For nanometre-scale confinement and low concentrations, interaction confinement amounts to a finite energy barrier for ions to enter the channel: \(c_{\rm in}=c_{\rm out}e^{-E_{\rm s}/k_{\rm B}T}\). As concentration increases, more ions are able to overcome the barrier by forming Bjerrum pairs, neutralizing the electrostatic cost of confinement, at the price of entropy: \(c_{\rm in}\propto c_{\rm out}^{2}\). Only at high concentrations can one recover the intuitive estimate \(c_{\rm in}=c_{\rm out}\), as intense screening cancels out all electrostatic interactions. 
Overall, interaction confinement has a significant impact on the properties of nanofluidic systems, and the assumption \(c_{\rm in}=c_{\rm out}\) should be questioned any time the system's size reaches the nanometre scale. ###### Acknowledgements. N.K. acknowledges support from a Humboldt fellowship. L.B. acknowledges funding from the EU H2020 Framework Programme/ERC Advanced Grant agreement number 785911-Shadoks. The Flatiron Institute is a division of the Simons Foundation. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2305.07804
Improving Small Language Models on PubMedQA via Generative Data Augmentation
Large Language Models (LLMs) have made remarkable advancements in the field of natural language processing. However, their increasing size poses challenges in terms of computational cost. On the other hand, Small Language Models (SLMs) are known for their efficiency, but they often struggle with limited capacity and training data, especially in specific domains. In this paper, we introduce a novel method aimed at improving SLMs in the medical domain using LLM-based generative data augmentation. The objective of our approach is to develop more efficient and capable models that are specifically tailored for specialized applications. Through experiments conducted on the PubMedQA dataset, we demonstrate the effectiveness of LLMs in refining and diversifying existing question-answer pairs. This refinement process leads to improved performance in a significantly smaller model after fine-tuning. Notably, our best SLM, with under 1.6 billion parameters, outperforms the few-shot GPT-4 on the PubMedQA dataset. Our code and generated data are publicly available to facilitate further explorations.
Zhen Guo, Peiqi Wang, Yanwei Wang, Shangdi Yu
2023-05-12T23:49:23Z
http://arxiv.org/abs/2305.07804v4
# Improving Small Language Models on PubMedQA via Generative Data Augmentation ###### Abstract Large Language Models (LLMs) have made remarkable advancements in the field of natural language processing. However, their increasing size poses challenges in terms of computational cost. On the other hand, Small Language Models (SLMs) are known for their efficiency, but they often struggle with limited capacity and training data, especially in specific domains. In this paper, we introduce a novel method aimed at improving SLMs in the medical domain using LLM-based generative data augmentation. The objective of our approach is to develop more efficient and capable models that are specifically tailored for specialized applications. Through experiments conducted on the PubMedQA dataset, we demonstrate the effectiveness of LLMs in refining and diversifying existing question-answer pairs. This refinement process leads to improved performance in a significantly smaller model after fine-tuning. Notably, our best SLM, with under 1.6 billion parameters, outperforms the few-shot GPT-4 on the PubMedQA dataset. Our code and generated data are publicly available to facilitate further explorations [1]. large language models, small language models, medical question-answering, data augmentation
Recent work has explored parameter-efficient fine-tuning techniques, such as Prefix Tuning and Low-rank Adaptation, as alternatives to
traditional fine-tuning methods that update the model's weights entirely. Prefix tuning [16] adapts the behavior of a language model to specific tasks without modifying its pre-trained weights, while low-rank adaptation [17] allows the model to capture the essential characteristics of the data and adapt to domain-specific tasks effectively by decomposing the weight updates into smaller low-rank matrices. ### Data Augmentation using LLMs LLMs serve as powerful tools to generate realistic text samples based on existing data. For NLP tasks, generating data with an LLM can involve paraphrasing text, creating alternative question-answer pairs, or generating new sentences [18]. Producing diverse representations of input data enables models to learn various ways to express the same underlying concepts, increasing their adaptability to real-world data variations. For our preliminary study on the PubMedQA dataset, we used GPT-3.5 Turbo and GPT-4 to either rewrite existing medical question-answering pairs or generate new pairs from the training dataset (with a size of 450) with zero-shot prompting. This approach helped improve the diversity and coverage of the training data, ultimately improving the performance of the medical question-answering model trained on the augmented dataset.
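To make the zero-shot rewriting step concrete, the sketch below shows one way such a request could be issued through the OpenAI chat completion API (pre-1.0 `openai` client). It is illustrative only: the prompt wording, the `rewrite_qa_pair` helper, and the temperature value are our assumptions rather than the exact settings used for the experiments.

```python
import openai  # assumes the pre-1.0 openai client with an API key already configured

# Hypothetical prompt for the rewriteQA-style augmentation: paraphrase an existing pair.
REWRITE_PROMPT = (
    "Rewrite the following biomedical question and its answer so that the meaning is "
    "preserved but the wording differs. Return the result as 'Question: ...' and 'Answer: ...'.\n\n"
    "Question: {question}\nAnswer: {answer}"
)

def rewrite_qa_pair(question: str, answer: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask an LLM to paraphrase one PubMedQA-style question-answer pair (rewriteQA)."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": REWRITE_PROMPT.format(question=question, answer=answer)}],
        temperature=0.7,  # illustrative value; some randomness encourages lexical diversity
    )
    return response["choices"][0]["message"]["content"]

# The newQA variant would instead prompt the model (e.g., GPT-4) to produce an entirely new
# question-answer pair rather than a paraphrase; combinedQA in Table 3 presumably mixes both
# kinds of generated pairs with the 450 original training examples.
```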
## 3 Experimental settings We performed experiments on the MIT Supercloud [19], using PyTorch 1.12 and Python 3.8 with eight NVIDIA V100 GPUs and Intel Xeon Gold 6248 processors. We investigated the effectiveness of prefix tuning and low-rank adaptation on BioGPT-Large [20], LLaMA-7b [21], and Alpaca-7b [22] for medical question-answering tasks. The evaluation was carried out on the PubMedQA dataset [12], splitting it into 450 training, 50 validation, and 500 test samples. Accuracy and F1 score were calculated based on a hard match between predicted and ground truth answers. For prefix tuning, we followed the original implementation [16] and explored a token range of 16 to 512, while low-rank adaptation varied alpha from 16 to 512 with a fixed rank of 4. Fine-tuning employed a learning rate of 5e-5, the AdamW optimizer [23], a linear warm-up scheduler [24], gradient accumulation of 32 steps [25], and a batch size of 1024 tokens. During inference, we applied techniques including a Repetition Penalty Logits Processor (penalty factor of 2.0), a Temperature Logits Warper (temperature of 0.8), and beam search decoding with a beam size of 5 to ensure output quality. ## 4 Results ### Low-rank Adaptation outperforms Prefix Tuning We compared the performance of two techniques, Low-rank Adaptation and Prefix Tuning, for BioGPT-Large (Figure 1). We observed that Low-rank Adaptation demonstrated stable performance across its hyperparameter range (alpha from 16 to 512), while Prefix Tuning showed sensitivity to the virtual token range. This finding suggests that Low-rank Adaptation is more robust and less sensitive to hyperparameter selection, providing consistent and reliable performance for efficient fine-tuning. For all the results below, Low-rank Adaptation is the default fine-tuning technique. ### Instruction-tuning constrains domain adaptability of language models In Table 2, we present a comparison of BioGPT-Large, LLaMA-7b, and Alpaca-7b, all fine-tuned on the original PubMedQA dataset without data augmentation as the baseline. Alpaca-7b, a derivative of LLaMA-7b, is an instruction-tuned LLM designed to improve task-specific performance by following instructions. However, this approach restricts its adaptability to other domain-specific tasks compared to a naive pre-trained model. In our experiments, LLaMA-7b shows superior generalizability compared to BioGPT-Large by exhibiting a higher F1 score when fine-tuned only on the original PubMedQA dataset. The reported accuracy for BioGPT-Large is lower than the numbers reported by Luo et al. [20] because we use different fine-tuning settings: while Luo et al. [20] inserted the virtual tokens right before the answer token, we inserted the virtual tokens before the question token to avoid the risk of overfitting.

Figure 1: Comparison between two fine-tuning techniques for BioGPT-Large.

### Comparison between generative data augmentation approaches In Table 3, we provide a comparison of LLaMA-7b and BioGPT-Large fine-tuned on the augmented PubMedQA dataset. Our experiments demonstrate the efficacy of utilizing LLMs such as ChatGPT for refining and expanding question-answer pairs to enhance domain-specific QA datasets, even when the LLM exhibits near-random performance in generating answers (as is the case for gpt-3.5-turbo). The resulting alternative representations of questions and answers facilitated the construction of more diverse and robust training datasets suitable for SLMs. However, we found that instructing an LLM (gpt-3.5-turbo) lacking domain knowledge to generate entirely new question-answer pairs did not lead to an improvement and instead degraded the downstream task performance of the fine-tuned SLM. This observation suggests that while LLMs are effective in refining and diversifying existing question-answer pairs, their ability to create novel, high-quality pairs for domain-specific tasks remains limited. On the other hand, recent advances in LLMs such as GPT-4, which have domain-specific knowledge and question-answering capacity for PubMedQA, can generate useful new training data. By incorporating new question-answer pairs from GPT-4 into the training process, we can significantly improve the performance of the fine-tuned smaller models. This finding highlights the importance of LLMs with domain-specific knowledge in enhancing domain-specific QA datasets and improving the performance of downstream tasks. Finally, not surprisingly, when BioGPT is fine-tuned on an augmented dataset, it outperforms LLaMA-7b. This is consistent with the previous finding [20] and highlights the effectiveness of pretraining with domain-specific data, enabling BioGPT to better understand and excel in domain-specific tasks. Leveraging domain-specific knowledge during fine-tuning improves the model's accuracy and contextual relevance, resulting in superior performance for domain-specific questions or tasks. ## 5 Future works A promising direction for future work is to investigate the application of knowledge distillation, a popular technique that trains a smaller language model to mimic the behavior of a larger language model, on medical question-answering tasks. Another potential approach is contrastive learning. By training an SLM with contrastive learning on medical question-answering data, a contrastive loss can help the model learn to identify similarities and differences between instances of data and improve its ability to generalize to new and unseen data. ## 6 Conclusion Our research highlights the effectiveness of LLM-based generative data augmentation in enhancing domain-specific question answering datasets.
However, instructing LLMs without domain knowledge, such as GPT-3.5-turbo, to generate new question-answer pairs resulted in decreased performance for fine-tuned smaller models. Conversely, leveraging LLMs with domain-specific knowledge, like GPT-4, significantly improved the performance of fine-tuned models by generating valuable new training data. These findings underscore the importance of incorporating domain-specific knowledge when applying generative data augmentation techniques. ## Acknowledgments We thank Prof. Yoon Kim at MIT CSAIL for his guidance and feedback. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing computing resources. We would also like to acknowledge OpenAI for providing access to their API. \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **Accuracy** & **Macro-F1** \\ \hline BioGPT-Large & 0.630 & 0.387 \\ LLaMA-7b & 0.594 & 0.495 \\ Alpaca-7b & 0.380 & 0.335 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of BioGPT-Large, LLaMA-7b and Alpaca-7b fine-tuned on the original PubMedQA dataset. \begin{table} \begin{tabular}{l l l l l} \hline \hline **SLM** & **LLM** & **Augment.** & **Acc. (best)** & **F1 (best)** \\ \hline \multirow{8}{*}{LLaMA} & \multirow{4}{*}{GPT-3.5} & none & 0.594 & 0.495 \\ & & **rewriteQA** & **0.642** & **0.497** \\ & & newQA & 0.552 & 0.460 \\ & & combinedQA & 0.582 & 0.485 \\ \cline{2-5} & \multirow{4}{*}{GPT-4} & rewriteQA & 0.540 & 0.463 \\ & & **newQA** & **0.576** & **0.451** \\ & & combinedQA & 0.506 & 0.446 \\ \hline \multirow{8}{*}{BioGPT} & \multirow{4}{*}{GPT-3.5} & none & 0.630 & 0.387 \\ & & **rewriteQA** & **0.720** & **0.498** \\ & & newQA & 0.718 & 0.491 \\ & & combinedQA & 0.714 & 0.493 \\ \cline{2-5} & \multirow{4}{*}{GPT-4} & rewriteQA & 0.654 & 0.471 \\ & & **newQA** & **0.754** & **0.520** \\ & & combinedQA & 0.708 & 0.518 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of LLaMA-7b and BioGPT-Large fine-tuned on augmented PubMedQA (_Acc._ stands for accuracy, _GPT-3.5_ refers to GPT-3.5-turbo, _BioGPT_ represents BioGPT-Large, and _F1_ denotes macro F1 Score).
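For completeness, the fine-tuning recipe reported in Section 3 (rank-4 low-rank adaptation, learning rate 5e-5, AdamW, linear warm-up, 32-step gradient accumulation, beam-search decoding with repetition penalty 2.0 and temperature 0.8) could be assembled roughly as below with the Hugging Face `peft` and `transformers` libraries. This is a sketch under stated assumptions: the checkpoint identifier, the targeted attention projections, the dropout, and the particular `lora_alpha` value are our choices, not settings confirmed by the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from peft import LoraConfig, get_peft_model

model_name = "microsoft/BioGPT-Large"  # assumed Hugging Face checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Low-rank adaptation with rank 4 (the paper sweeps alpha from 16 to 512; 128 is one such value).
lora_config = LoraConfig(
    r=4,
    lora_alpha=128,
    lora_dropout=0.1,                      # assumed; dropout is not reported
    target_modules=["q_proj", "v_proj"],   # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
# A linear warm-up schedule and gradient accumulation over 32 steps would be handled by the
# training loop (for example, transformers.Trainer with matching TrainingArguments).

# Decoding settings reported in Section 3.
generation_config = GenerationConfig(repetition_penalty=2.0, temperature=0.8, num_beams=5)
```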
2309.00266
Functional Deutsch Uncertainty Principle
Let $\{f_j\}_{j=1}^n$ and $\{g_k\}_{k=1}^m$ be Parseval p-frames for a finite dimensional Banach space $\mathcal{X}$. Then we show that \begin{align} (1) \quad\quad\quad\quad \log (nm)\geq S_f (x)+S_g (x)\geq -p \log \left(\displaystyle\sup_{y \in \mathcal{X}_f\cap \mathcal{X}_g, \|y\|=1}\left(\max_{1\leq j\leq n, 1\leq k\leq m}|f_j(y)g_k(y)|\right)\right), \quad \forall x \in \mathcal{X}_f\cap \mathcal{X}_g, \end{align} where \begin{align*} &\mathcal{X}_f:= \{z\in \mathcal{X}: f_j(z)\neq 0, 1\leq j \leq n\}, \quad \mathcal{X}_g:= \{w\in \mathcal{X}: g_k(w)\neq 0, 1\leq k \leq m\},\\ &S_f (x):= -\sum_{j=1}^{n}\left|f_j\left(\frac{x}{\|x\|}\right)\right|^p\log \left|f_j\left(\frac{x}{\|x\|}\right)\right|^p, \quad S_g (x):= -\sum_{k=1}^{m}\left|g_k\left(\frac{x}{\|x\|}\right)\right|^p\log \left|g_k\left(\frac{x}{\|x\|}\right)\right|^p, \quad \forall x \in \mathcal{X}_g. \end{align*} We call Inequality (1) as \textbf{Functional Deutsch Uncertainty Principle}. For Hilbert spaces, we show that Inequality (1) reduces to the uncertainty principle obtained by Deutsch \textit{[Phys. Rev. Lett., 1983]}. We also derive a dual of Inequality (1).
K. Mahesh Krishna
2023-09-01T05:51:44Z
http://arxiv.org/abs/2309.00266v1
**FUNCTIONAL DEUTSCH UNCERTAINTY PRINCIPLE** **K. MAHESH KRISHNA** Post Doctoral Fellow Statistics and Mathematics Unit Indian Statistical Institute, Bangalore Centre Karnataka 560 059, India Email: [email protected] Date: September 4, 2023 **Abstract**: Let \(\{f_{j}\}_{j=1}^{n}\) and \(\{g_{k}\}_{k=1}^{m}\) be Parseval p-frames for a finite dimensional Banach space \(\mathcal{X}\). Then we show that \[(1)\ \log(nm)\geq S_{f}(x)+S_{g}(x)\geq-p\log\left(\sup_{y\in\mathcal{X}_{f}\cap\mathcal{X}_{g},\|y\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|f_{j}(y)g_{k}(y)|\right)\right),\quad\forall x\in\mathcal{X}_{f}\cap\mathcal{X}_{g},\] where \[\mathcal{X}_{f}\coloneqq\{z\in\mathcal{X}:f_{j}(z)\neq 0,1\leq j\leq n\},\quad\mathcal{X}_{g}\coloneqq\{w\in\mathcal{X}:g_{k}(w)\neq 0,1\leq k\leq m\},\] \[S_{f}(x)\coloneqq-\sum_{j=1}^{n}\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p}\log\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p},\quad S_{g}(x)\coloneqq-\sum_{k=1}^{m}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\log\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p},\quad\forall x\in\mathcal{X}_{g}.\] We call Inequality \((1)\) the **Functional Deutsch Uncertainty Principle**. For Hilbert spaces, we show that Inequality \((1)\) reduces to the uncertainty principle obtained by Deutsch _[Phys. Rev. Lett., 1983]_. We also derive a dual of Inequality \((1)\). **Keywords**: Uncertainty Principle, Orthonormal Basis, Parseval Frame, Hilbert space, Banach space. **Mathematics Subject Classification (2020)**: 42C15. ###### Contents * 1 Introduction * 2 Functional Deutsch Uncertainty Principle ## 1. Introduction Let \(d\in\mathbb{N}\) and \(\widehat{\cdot}\colon\mathcal{L}^{2}(\mathbb{R}^{d})\to\mathcal{L}^{2}(\mathbb{R}^{d})\) be the unitary Fourier transform obtained by extending uniquely the bounded linear operator \[\widehat{\cdot}\colon\mathcal{L}^{1}(\mathbb{R}^{d})\cap\mathcal{L}^{2}(\mathbb{R}^{d})\ni f\mapsto\widehat{f}\in C_{0}(\mathbb{R}^{d});\quad\widehat{f}\colon\mathbb{R}^{d}\ni\xi\mapsto\widehat{f}(\xi)\coloneqq\int\limits_{\mathbb{R}^{d}}f(x)e^{-2\pi i\langle x,\xi\rangle}\,dx\ \in\mathbb{C}.\] The **Shannon entropy** at a function \(f\in\mathcal{L}^{2}(\mathbb{R}^{d})\setminus\{0\}\) is defined as \[S(f)\coloneqq-\int\limits_{\mathbb{R}^{d}}\left|\frac{f(x)}{\left\|f\right\|}\right|^{2}\log\left|\frac{f(x)}{\left\|f\right\|}\right|^{2}\,dx\] (with the convention \(0\log 0=0\)) [19]. In 1957, Hirschman proved the following result [13]. **Theorem 1.1**.: _[_13_]_ _(**Hirschman Inequality**) For all \(f\in\mathcal{L}^{2}(\mathbb{R}^{d})\setminus\{0\}\),_ \[S(f)+S(\widehat{f})\geq 0. \tag{2}\] In the same paper [13] Hirschman conjectured that Inequality (2) can be improved to \[S(f)+S(\widehat{f})\geq d(1-\log 2),\quad f\in\mathcal{L}^{2}(\mathbb{R}^{d})\setminus\{0\}. \tag{3}\] Inequality (3) was proved independently in 1975 by Beckner [2] and Bialynicki-Birula and Mycielski [5]. **Theorem 1.2**.: _[_2, 5_]_ _(**Hirschman-Beckner-Bialynicki-Birula-Mycielski Uncertainty Principle**) For all \(f\in\mathcal{L}^{2}(\mathbb{R}^{d})\setminus\{0\}\),_ \[S(f)+S(\widehat{f})\geq d(1-\log 2).\] Now one naturally asks whether there is a finite dimensional version of Shannon entropy and uncertainty principle. Let \(\mathcal{H}\) be a finite dimensional Hilbert space.
Given an orthonormal basis \(\{\tau_{j}\}_{j=1}^{n}\) for \(\mathcal{H}\), the **(finite) Shannon entropy** at a point \(h\in\mathcal{H}_{\tau}\) is defined as \[S_{\tau}(h)\coloneqq-\sum_{j=1}^{n}\left|\left\langle\frac{h}{\left\|h\right\| },\tau_{j}\right\rangle\right|^{2}\log\left|\left\langle\frac{h}{\left\|h \right\|},\tau_{j}\right\rangle\right|^{2}\geq 0,\] where \(\mathcal{H}_{\tau}\coloneqq\{h\in\mathcal{H}:\langle h,\tau_{j}\rangle\neq 0,1\leq j\leq n\}\)[11]. In 1983, Deutsch derived following uncertainty principle for Shannon entropy which is fundamental to several developments in Mathematics and Physics [11]. **Theorem 1.3**.: _[_11_]_ _(**Deutsch Uncertainty Principle**) Let \(\{\tau_{j}\}_{j=1}^{n}\), \(\{\omega_{j}\}_{j=1}^{n}\) be two orthonormal bases for a finite dimensional Hilbert space \(\mathcal{H}\). Then_ \[2\log n\geq S_{\tau}(h)+S_{\omega}(h)\geq-2\log\left(\frac{1+\max_{1\leq j,k \leq n}|\langle\tau_{j},\omega_{k}\rangle|}{2}\right)\geq 0,\quad\forall h\in \mathcal{H}_{\tau}. \tag{4}\] Recently, author derived Banach space versions of Donoho-Stark-Elad-Bruckstein-Ricaud-Torresani uncertainty principle [16], Donoho-Stark approximate support uncertainty principle [15] and Ghobber-Jaming uncertainty principle [17]. We then naturally ask what is the Banach space version of Inequality (4)? In this paper, we are going to answer this question. ## 2. Functional Deutsch Uncertainty Principle In the paper, \(\mathbb{K}\) denotes \(\mathbb{C}\) or \(\mathbb{R}\) and \(\mathcal{X}\) denotes a finite dimensional Banach space over \(\mathbb{K}\). Dual of \(\mathcal{X}\) is denoted by \(\mathcal{X}^{*}\). We need the notion of Parseval p-frames for Banach spaces. **Definition 2.1**.: _[_1, 8_]_ _Let \(\mathcal{X}\) be a finite dimensional Banach space over \(\mathbb{K}\). A collection \(\{f_{j}\}_{j=1}^{n}\) in \(\mathcal{X}^{*}\) is said to be a **Parseval p-frame** (\(1\leq p<\infty\)) for \(\mathcal{X}\) if_ \[\|x\|^{p}=\sum_{j=1}^{n}|f_{j}(x)|^{p},\quad\forall x\in\mathcal{X}. \tag{5}\] Note that (5) says that \(\|f_{j}\|\leq 1\) for all \(1\leq j\leq n\). Given a Parseval p-frame \(\{f_{j}\}_{j=1}^{n}\) for \(\mathcal{X}\), we define the **(finite) p-Shannon entropy** at a point \(x\in\mathcal{X}_{f}\) as \[S_{f}(x)\coloneqq-\sum_{j=1}^{n}\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^ {p}\log\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p}\geq 0,\] where \(\mathcal{X}_{f}\coloneqq\{x\in\mathcal{X}:f_{j}(x)\neq 0,1\leq j\leq n\}\). Following is the fundamental result of this paper. **Theorem 2.2**.: _(**Functional Deutsch Uncertainty Principle**) Let \(\{f_{j}\}_{j=1}^{n}\) and \(\{g_{k}\}_{k=1}^{m}\) be Parseval p-frames for a finite dimensional Banach space \(\mathcal{X}\). Then_ \[\frac{1}{(nm)^{\frac{1}{p}}}\leq\sup_{y\in\mathcal{X},\|y\|=1}\left(\max_{1 \leq j\leq n,1\leq k\leq m}|f_{j}(y)g_{k}(y)|\right)\] _and_ \[\log(nm)\geq S_{f}(x)+S_{g}(x)\geq-p\log\left(\sup_{y\in\mathcal{X}_{f}\cap \mathcal{X}_{g},\|y\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|f_{j}(y)g_{k} (y)|\right)\right)>0,\quad\forall x\in\mathcal{X}_{f}\cap\mathcal{X}_{g}. \tag{6}\] Proof.: Let \(z\in\mathcal{X}\) be such that \(\|z\|=1\). 
Then \[1 =\left(\sum_{j=1}^{n}|f_{j}(z)|^{p}\right)\left(\sum_{k=1}^{m}|g_{ k}(z)|^{p}\right)=\sum_{j=1}^{n}\sum_{k=1}^{m}|f_{j}(z)g_{k}(z)|^{p}\] \[\leq\sum_{j=1}^{n}\sum_{k=1}^{m}\left(\sup_{y\in\mathcal{X},\|y\| =1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|f_{j}(y)g_{k}(y)|\right)\right)^{p}\] \[=\left(\sup_{y\in\mathcal{X},\|y\|=1}\left(\max_{1\leq j\leq n,1 \leq k\leq m}|f_{j}(y)g_{k}(y)|\right)\right)^{p}mn\] which gives \[\frac{1}{mn}\leq\left(\sup_{y\in\mathcal{X},\|y\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|f_{j}(y)g_{k}(y)|\right)\right)^{p}.\] Since \(1=\sum_{j=1}^{n}\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p}\) for all \(x\in\mathcal{X}\setminus\{0\}\), \(1=\sum_{k=1}^{m}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\) for all \(x\in\mathcal{X}\setminus\{0\}\) and log function is concave, using Jensen's inequality (see [20]) we get \[S_{f}(x)+S_{g}(x) =\sum_{j=1}^{n}\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p} \log\left(\frac{1}{\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p}}\right)+ \sum_{k=1}^{m}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\log\left( \frac{1}{\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}}\right)\] \[\leq\log\left(\sum_{j=1}^{n}\left|f_{j}\left(\frac{x}{\|x\|} \right)\right|^{p}\frac{1}{\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p}} \right)+\log\left(\sum_{k=1}^{m}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right| ^{p}\frac{1}{\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}}\right)\] \[=\log n+\log m=\log(nm),\quad\forall x\in\mathcal{X}_{f}\cap \mathcal{X}_{g}.\] Let \(x\in\mathcal{X}_{f}\cap\mathcal{X}_{g}\). Then \[S_{f}(x)+S_{g}(x) =-\sum_{j=1}^{n}\sum_{k=1}^{m}\left|f_{j}\left(\frac{x}{\|x\|}\right) \right|^{p}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\left[\log\left|f_ {j}\left(\frac{x}{\|x\|}\right)\right|^{p}+\log\left|g_{k}\left(\frac{x}{\|x\|} \right)\right|^{p}\right]\] \[=-\sum_{j=1}^{n}\sum_{k=1}^{m}\left|f_{j}\left(\frac{x}{\|x\|} \right)\right|^{p}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\log\left| f_{j}\left(\frac{x}{\|x\|}\right)g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\] \[=-p\sum_{j=1}^{n}\sum_{k=1}^{m}\left|f_{j}\left(\frac{x}{\|x\|} \right)\right|^{p}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\log \left|f_{j}\left(\frac{x}{\|x\|}\right)g_{k}\left(\frac{x}{\|x\|}\right)\right|\] \[\geq-p\sum_{j=1}^{n}\sum_{k=1}^{m}\left|f_{j}\left(\frac{x}{\|x\|} \right)\right|^{p}\left|g_{k}\left(\frac{x}{\|x\|}\right)\right|^{p}\log \left(\sup_{y\in\mathcal{X}_{f}\cap\lambda_{g},\|y\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|f_{j}(y)g_{k}(y)|\right)\right)\] \[=-p\log\left(\sup_{y\in\mathcal{X}_{f}\cap\lambda_{g},\|y\|=1} \left(\max_{1\leq j\leq n,1\leq k\leq m}|f_{j}(y)g_{k}(y)|\right)\right).\] **Corollary 2.3**.: _Theorem 1.3 follows from Theorem 2.2._ Proof.: Let \(\{\tau_{j}\}_{j=1}^{n}\), \(\{\omega_{j}\}_{j=1}^{n}\) be two orthonormal bases for a finite dimensional Hilbert space \(\mathcal{H}\). 
Define \[f_{j}:\mathcal{H}\ni h\mapsto\langle h,\tau_{j}\rangle\in\mathbb{K};\quad g_{j }:\mathcal{H}\ni h\mapsto\langle h,\omega_{j}\rangle\in\mathbb{K},\quad\forall 1 \leq j\leq n.\] Now by using Buzano inequality (see [12, 6]) we get \[\sup_{h\in\mathcal{H},\|h\|=1}\left(\max_{1\leq j,k\leq n}|f_{j}( h)g_{k}(h)|\right) =\sup_{h\in\mathcal{H},\|h\|=1}\left(\max_{1\leq j,k\leq n}|\langle h,\tau_{j}\rangle||\langle h,\omega_{k}\rangle|\right)\] \[\leq\sup_{h\in\mathcal{H},\|h\|=1}\left(\max_{1\leq j,k\leq n} \left(\|h\|^{2}\frac{\|\tau_{j}\|\|\omega_{k}\|+|\langle\tau_{j},\omega_{k} \rangle|}{2}\right)\right)\] \[=\frac{1+\max_{1\leq j,k\leq n}|\langle\tau_{j},\omega_{k} \rangle|}{2}.\] Theorem 2.2 brings the following question. **Question 2.4**.: _Given \(p\), \(m\), \(n\) and a Banach space \(\mathcal{X}\), for which pairs of Parseval p-frames \(\{f_{j}\}_{j=1}^{n}\) and \(\{g_{k}\}_{k=1}^{m}\) for \(\mathcal{X}\), we have equality in Inequality (6)?_ Next we derive a dual inequality of (6). For this we need dual of Definition 2.1. **Definition 2.5**.: _[_7, 21, 22_]_ _Let \(\mathcal{X}\) be a finite dimensional Banach space over \(\mathbb{K}\). A collection \(\{\tau_{j}\}_{j=1}^{n}\) in \(\mathcal{X}\) is said to be a **Parseval p-frame** (\(1\leq p<\infty\)) for \(\mathcal{X}^{*}\) if_ \[\|f\|^{p}=\sum_{j=1}^{n}|f(\tau_{j})|^{p},\quad\forall f\in\mathcal{X}^{*}. \tag{7}\] Note that (7) says that \[\|\tau_{j}\|=\sup_{f\in\mathcal{X}^{*},\|f\|=1}|f(\tau_{j})|\leq\sup_{f\in \mathcal{X}^{*},\|f\|=1}\left(\sum_{j=1}^{n}|f(\tau_{j})|^{p}\right)^{\frac{1}{p} }=\sup_{f\in\mathcal{X}^{*},\|f\|=1}\|f\|=1,\quad\forall 1\leq j\leq n.\] Given a Parseval p-frame \(\{\tau_{j}\}_{j=1}^{n}\) for \(\mathcal{X}^{*}\), we define the **(finite) p-Shannon entropy** at a point \(f\in\mathcal{X}^{*}_{\tau}\) as \[S_{\tau}(f)\coloneqq-\sum_{j=1}^{n}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p} \log\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\geq 0,\] where \(\mathcal{X}^{*}_{\tau}\coloneqq\{f\in\mathcal{X}^{*}:f(\tau_{j})\neq 0,1\leq j \leq n\}\). We now have the following dual to Theorem 2.2. **Theorem 2.6**.: _(**Functional Deutsch Uncertainty Principle**) Let \(\{\tau_{j}\}_{j=1}^{n}\) and \(\{\omega_{k}\}_{k=1}^{m}\) be two Parseval p-frames for the dual \(\mathcal{X}^{*}\) of a finite dimensional Banach space \(\mathcal{X}\). Then_ \[\frac{1}{(nm)^{\frac{1}{p}}}\leq\sup_{g\in\mathcal{X}^{*},\|g\|=1}\left(\max_ {1\leq j\leq n,1\leq k\leq m}|g(\tau_{j})g(\omega_{k})|\right)\] _and_ \[\log(nm)\geq S_{\tau}(f)+S_{\omega}(f)\geq-p\log\left(\sup_{g\in\mathcal{X}^{ *}_{\tau}\cap\mathcal{X}^{*}_{\omega},\|g\|=1}\left(\max_{1\leq j\leq n,1\leq k \leq m}|g(\tau_{j})g(\omega_{k})|\right)\right)>0,\quad\forall f\in\mathcal{X} ^{*}_{\tau}\cap\mathcal{X}^{*}_{\omega}.\] Proof.: Let \(h\in\mathcal{X}^{*}\) be such that \(\|h\|=1\). 
Then \[1 =\left(\sum_{j=1}^{n}|h(\tau_{j})|^{p}\right)\left(\sum_{k=1}^{m}|h(\omega_{k})|^{p}\right)=\sum_{j=1}^{n}\sum_{k=1}^{m}|h(\tau_{j})h(\omega_{k})|^{p}\] \[\leq\sum_{j=1}^{n}\sum_{k=1}^{m}\left(\sup_{g\in\mathcal{X}^{*},\|g\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|g(\tau_{j})g(\omega_{k})|\right)\right)^{p}\] \[=\left(\sup_{g\in\mathcal{X}^{*},\|g\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|g(\tau_{j})g(\omega_{k})|\right)\right)^{p}mn\] which gives \[\frac{1}{mn}\leq\left(\sup_{g\in\mathcal{X}^{*},\|g\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|g(\tau_{j})g(\omega_{k})|\right)\right)^{p}.\] Since \(1=\sum_{j=1}^{n}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\) for all \(f\in\mathcal{X}^{*}\setminus\{0\}\), \(1=\sum_{k=1}^{m}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\) for all \(f\in\mathcal{X}^{*}\setminus\{0\}\) and \(\log\) function is concave, using Jensen's inequality we get \[S_{\tau}(f)+S_{\omega}(f) =\sum_{j=1}^{n}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\log\left(\frac{1}{\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}}\right)+\sum_{k=1}^{m}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\log\left(\frac{1}{\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}}\right)\] \[\leq\log\left(\sum_{j=1}^{n}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\frac{1}{\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}}\right)+\log\left(\sum_{k=1}^{m}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\frac{1}{\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}}\right)\] \[=\log n+\log m=\log(nm),\quad\forall f\in\mathcal{X}^{*}_{\tau}\cap\mathcal{X}^{*}_{\omega}.\] Let \(f\in\mathcal{X}^{*}_{\tau}\cap\mathcal{X}^{*}_{\omega}\). Then
\[S_{\tau}(f)+S_{\omega}(f) =-\sum_{j=1}^{n}\sum_{k=1}^{m}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\left[\log\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}+\log\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\right]\] \[=-\sum_{j=1}^{n}\sum_{k=1}^{m}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\log\left|\frac{f(\tau_{j})}{\|f\|}\frac{f(\omega_{k})}{\|f\|}\right|^{p}\] \[=-p\sum_{j=1}^{n}\sum_{k=1}^{m}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\log\left|\frac{f(\tau_{j})}{\|f\|}\frac{f(\omega_{k})}{\|f\|}\right|\] \[\geq-p\sum_{j=1}^{n}\sum_{k=1}^{m}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\log\left(\sup_{g\in\mathcal{X}^{*}_{\tau}\cap\mathcal{X}^{*}_{\omega},\|g\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|g(\tau_{j})g(\omega_{k})|\right)\right)\] \[=-p\log\left(\sup_{g\in\mathcal{X}^{*}_{\tau}\cap\mathcal{X}^{*}_{\omega},\|g\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|g(\tau_{j})g(\omega_{k})|\right)\right)\sum_{j=1}^{n}\sum_{k=1}^{m}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p}\left|\frac{f(\omega_{k})}{\|f\|}\right|^{p}\] \[=-p\log\left(\sup_{g\in\mathcal{X}^{*}_{\tau}\cap\mathcal{X}^{*}_{\omega},\|g\|=1}\left(\max_{1\leq j\leq n,1\leq k\leq m}|g(\tau_{j})g(\omega_{k})|\right)\right).\] Theorem 2.6 again gives the following question. **Question 2.7**.: _Given \(p\), \(m\), \(n\) and a Banach space \(\mathcal{X}\), for which pairs of Parseval p-frames \(\{\tau_{j}\}_{j=1}^{n}\) and \(\{\omega_{k}\}_{k=1}^{m}\) for \(\mathcal{X}^{*}\), we have equality in Inequality (8)?_ The author is aware of the improvement of Theorem 1.3 by Maassen and Uffink [18] (cf. [10]) (motivated by a conjecture of Kraus [14]) but is unable to derive the Maassen-Uffink uncertainty principle from Theorem 2.2. Motivated by the Renyi entropy, we can easily generalize the notion of p-Shannon entropy as follows. Given a Parseval p-frame \(\{f_{j}\}_{j=1}^{n}\) for \(\mathcal{X}\), we define the **(finite) p-Renyi entropy of order \(\alpha\in(0,\infty)\)**, \(\alpha\neq 1\), at a point \(x\in\mathcal{X}_{f}\) as \[R_{f,\alpha}(x)\coloneqq\frac{1}{1-\alpha}\log\left(\sum_{j=1}^{n}\left|f_{j}\left(\frac{x}{\|x\|}\right)\right|^{p\alpha}\right).\] Given a Parseval p-frame \(\{\tau_{j}\}_{j=1}^{n}\) for \(\mathcal{X}^{*}\), we define the **(finite) p-Renyi entropy of order \(\alpha\in(0,\infty)\)**, \(\alpha\neq 1\), at a point \(f\in\mathcal{X}^{*}_{\tau}\) as \[R_{\tau,\alpha}(f)\coloneqq\frac{1}{1-\alpha}\log\left(\sum_{j=1}^{n}\left|\frac{f(\tau_{j})}{\|f\|}\right|^{p\alpha}\right).\] Using L'Hopital's rule, we have \[\lim_{\alpha\to 1}R_{f,\alpha}(\cdot)=S_{f}(\cdot),\quad\lim_{\alpha\to 1}R_{\tau,\alpha}(\cdot)=S_{\tau}(\cdot).\] Theorem 2.2 and Theorem 2.6 result in the following problems. **Problem 2.8**.: _Given a finite dimensional Banach space \(\mathcal{X}\), let \(\mathcal{P}(\mathcal{X})\) be the set of all finite Parseval p-frames for \(\mathcal{X}\).
What is the best function \(\Psi:((0,1)\cup(1,\infty))\times\mathcal{P}(\mathcal{X})\times\mathcal{P}(\mathcal{X})\rightarrow(0,\infty)\) satisfying the following: If \(\{f_{j}\}_{j=1}^{n}\) and \(\{g_{k}\}_{k=1}^{m}\) are Parseval p-frames for \(\mathcal{X}\), then_ \[R_{f,\alpha}(x)+R_{g,\alpha}(x)\geq\Psi(\alpha,\{f_{j}\}_{j=1}^{n},\{g_{k}\}_{k=1}^{m}),\quad\forall x\in\mathcal{X}_{f}\cap\mathcal{X}_{g}.\] **Problem 2.9**.: _Given a finite dimensional Banach space \(\mathcal{X}\), let \(\mathcal{P}(\mathcal{X}^{*})\) be the set of all finite Parseval p-frames for \(\mathcal{X}^{*}\). What is the best function \(\Psi:((0,1)\cup(1,\infty))\times\mathcal{P}(\mathcal{X}^{*})\times\mathcal{P}(\mathcal{X}^{*})\rightarrow(0,\infty)\) satisfying the following: If \(\{\tau_{j}\}_{j=1}^{n}\) and \(\{\omega_{k}\}_{k=1}^{m}\) are Parseval p-frames for \(\mathcal{X}^{*}\), then_ \[R_{\tau,\alpha}(f)+R_{\omega,\alpha}(f)\geq\Psi(\alpha,\{\tau_{j}\}_{j=1}^{n},\{\omega_{k}\}_{k=1}^{m}),\quad\forall f\in\mathcal{X}^{*}_{\tau}\cap\mathcal{X}^{*}_{\omega}.\] Based on the breakthrough result of Berta, Christandl, Colbeck, Renes, and Renner [3, 4] (which was later generalized by Coles and Piani [9]), we also set the following problem. **Problem 2.10**.: _What is the finite dimensional Banach space analogue of the Berta-Christandl-Colbeck-Renes-Renner Uncertainty Principle (in the presence of quantum memory)?_
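The following worked example is ours and is intended only as a numerical illustration of Theorem 2.2 and Corollary 2.3. Take \(\mathcal{X}=\mathbb{R}^{2}\) with the Euclidean norm, \(p=2\), and the Parseval 2-frames \(f_{j}(\cdot)=\langle\cdot,\tau_{j}\rangle\), \(g_{k}(\cdot)=\langle\cdot,\omega_{k}\rangle\) induced by the standard basis \(\tau_{1}=(1,0)\), \(\tau_{2}=(0,1)\) and the Hadamard basis \(\omega_{1}=\frac{1}{\sqrt{2}}(1,1)\), \(\omega_{2}=\frac{1}{\sqrt{2}}(1,-1)\). Writing a unit vector as \(y=(\cos\theta,\sin\theta)\), \[\sup_{\|y\|=1}\left(\max_{1\leq j,k\leq 2}|f_{j}(y)g_{k}(y)|\right)=\max_{\theta}\frac{|\cos\theta|\,|\cos\theta+\sin\theta|}{\sqrt{2}}=\frac{1+\sqrt{2}}{2\sqrt{2}}=\frac{2+\sqrt{2}}{4}\approx 0.854,\] so Inequality (6) gives \(S_{f}(x)+S_{g}(x)\geq-2\log\frac{2+\sqrt{2}}{4}\approx 0.317\) (natural logarithm) for every \(x\) with nonzero coordinates in both bases, while the upper bound is \(\log 4\approx 1.386\). Since \(\max_{1\leq j,k\leq 2}|\langle\tau_{j},\omega_{k}\rangle|=\frac{1}{\sqrt{2}}\) and \(\frac{1}{2}\left(1+\frac{1}{\sqrt{2}}\right)=\frac{2+\sqrt{2}}{4}\), the lower bound coincides, for this particular pair of bases, with the bound of Theorem 1.3.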
2306.01925
Improving the generalizability and robustness of large-scale traffic signal control
A number of deep reinforcement-learning (RL) approaches propose to control traffic signals. In this work, we study the robustness of such methods along two axes. First, sensor failures and GPS occlusions create missing-data challenges and we show that recent methods remain brittle in the face of these missing data. Second, we provide a more systematic study of the generalization ability of RL methods to new networks with different traffic regimes. Again, we identify the limitations of recent approaches. We then propose using a combination of distributional and vanilla reinforcement learning through a policy ensemble. Building upon the state-of-the-art previous model which uses a decentralized approach for large-scale traffic signal control with graph convolutional networks (GCNs), we first learn models using a distributional reinforcement learning (DisRL) approach. In particular, we use implicit quantile networks (IQN) to model the state-action return distribution with quantile regression. For traffic signal control problems, an ensemble of standard RL and DisRL yields superior performance across different scenarios, including different levels of missing sensor data and traffic flow patterns. Furthermore, the learning scheme of the resulting model can improve zero-shot transferability to different road network structures, including both synthetic networks and real-world networks (e.g., Luxembourg, Manhattan). We conduct extensive experiments to compare our approach to multi-agent reinforcement learning and traditional transportation approaches. Results show that the proposed method improves robustness and generalizability in the face of missing data, varying road networks, and traffic flows.
Tianyu Shi, Francois-Xavier Devailly, Denis Larocque, Laurent Charlin
2023-06-02T21:30:44Z
http://arxiv.org/abs/2306.01925v2
# Improving the generalizability and robustness of large-scale traffic signal control ###### Abstract A number of deep reinforcement-learning (RL) approaches propose to control traffic signals. Compared to traditional approaches, RL approaches can learn from higher-dimensionality input road and vehicle sensors and better adapt to varying traffic conditions resulting in reduced travel times (in simulation). However, these RL methods require training from massive traffic sensor data. To offset this relative inefficiency, some recent RL methods have the ability to first learn from small-scale networks and then generalize to unseen city-scale networks without additional retraining (_zero-shot transfer_). In this work, we study the robustness of such methods along two axes. First, sensor failures and GPS occlusions create missing-data challenges and we show that recent methods remain brittle in the face of these missing data. Second, we provide a more systematic study of the generalization ability of RL methods to new networks with different traffic regimes. Again, we identify the limitations of recent approaches. We then propose using a combination of distributional and vanilla reinforcement learning through a policy ensemble. Building upon the state-of-the-art previous model which uses a decentralized approach for large-scale traffic signal control with graph convolutional networks (GCNs), we first learn models using a distributional reinforcement learning (DisRL) approach. In particular, we use implicit quantile networks (IQN) to model the state-action return distribution with quantile regression. For traffic signal control problems, an ensemble of standard RL and DisRL yields superior performance across different scenarios, including different levels of missing sensor data and traffic flow patterns. Furthermore, the learning scheme of the resulting model can improve zero-shot transferability to different road network structures, including both synthetic networks and real-world networks (e.g., Luxembourg, Manhattan). We conduct extensive experiments to compare our approach to multi-agent reinforcement learning and traditional transportation approaches. Results show that the proposed method improves robustness and generalizability in the face of missing data, varying road networks, and traffic flows. keywords: Distributional reinforcement learning, Graph neural networks, Policy ensemble, Robustness, Generalizability, Traffic signal control. ## 1 Introduction As the number of cars on our roads continues to rise it is imperative to adapt road networks to minimize congestion. Developing robust yet efficient traffic control strategies is a powerful mitigator (Wei et al., 2018; Devailly et al., 2021; Wei et al., 2019). Powerful traffic signal control (TSC) methods, for example, based on deep reinforcement learning Silver et al. (2017), now exist to optimize the control signal phase (e.g., red or green). They learn from and use available historical and real-time traffic and vehicle data (Shi et al., 2019; Essa and Sayed, 2020; Wei et al., 2019; Varaiya, 2013). Real-time data can be collected from the built-in sensors of the vehicles and then transmitted to the control system to help in decision-making (e.g., to free busy lanes by changing the phase of the TSC) (Zhang et al., 2020). However, missing values in the collected data from vehicles (Nanthawichit et al., 2003), (e.g., caused by GPS occlusions and transmission delays) -- are common. 
Downstream, missing data will introduce uncertainty in the observations of the system, which will then be challenging for the decision-making module. Controlling traffic signals under these exogenous sources of uncertainty requires robust control policies. A second challenge is that traffic conditions can be non-stationary because of singular events such as accidents and construction and also due to recurring patterns (e.g., periodic daily and weekly ones). They can also evolve over time as a result of other infrastructure changes (e.g., new roads nearby). As a result, it is advantageous to use control policies that can adapt to new scenarios, varying traffic-flow patterns, and even allow deployment across networks of different scales. The ability to obtain policies that are both robust (to sensor failures) and that can generalize to new situations (traffic and networks) is important for deploying control policies in complex road systems that are ubiquitous in our cities. Current methods do not yield policies with both desiderata (we show this below). This is the gap we address in this paper. Next, we introduce the classes of existing approaches for traffic signal control. First, hand-crafted policies for TSCs form a class of traditional approaches. For example, _fixed-time_ approaches (Koonce and Rodegerdts, 2008) define a fixed cycle length and phase time for each intersection based on the road configuration. Greedy (Varaiya, 2013) maximizes the throughput of the road networks by greedily picking the phase that can maximize the pressure. In principle, hand-crafted policies generalize across networks and traffic conditions. However, they rely on unrealistic assumptions, such that the road lanes have unlimited capacity and that the traffic flow is constant. As a result, their application in real-world and complex road networks is limited (Varaiya, 2013). Reinforcement learning (RL), a formalism for sequential decision-making, is proving to be an effective tool to learn complex policies for diverse traffic-control problems (Wei et al., 2018, 2019; Chu et al., 2019). RL models traffic signals as agents that use the current _state_ of the environments (e.g., the position of all nearby vehicles) to control the light phase. Reinforcement learning agents are trained to maximize a utility function called a _reward_. For traffic-signal control, rewards are often taken to be proxies of the traffic efficiency, measured, for example, as the inverse (vehicle) delay or queue length. In simulation, RL has been trained to control traffic lights in real-world road networks and outperforms hand-crafted policies (Wei et al., 2018; Koonce and Rodegerdts, 2008). RL has shown robustness in small-scale road networks (one to five intersections). In particular, the standard Deep Q-Networks (DQNs) for RL, using a replay buffer to store previous experiences, have demonstrated a level of generalizability for different traffic demands. (Rodrigues and Azevedo, 2019; Zhang et al., 2020). Figure 1 shows that DQNs still suffer from a performance decrease when faced with missing data. The performance further decreases in larger road networks. Generalizability is also important for RL policies since training RL agents is computationally costly even for small-scale networks. To scale agents to larger-scale road networks (of the order of neighborhoods or whole cities) with different traffic flow patterns, Wei et al. (2019) and Devailly et al. 
(2021) explore scalable and decentralized multi-agent reinforcement learning (MARL) approaches. In particular, to encourage better utilization of the spatial-temporal information, researchers model the road network using graph neural networks (Zhou et al., 2018) trained with RL to encourage cooperation (Wei et al., 2019) and improve transferability (Devailly et al., 2021). We are interested in further studying these approaches. In particular, we investigate their robustness to missing data as well as their ability to generalize to larger-size networks with different traffic regimes. We introduce an initial experiment to demonstrate the limitation of current deep-reinforcement learning approaches. We learn a traffic signal control agent based on decentralized independent deep reinforcement learning (Rodrigues and Azevedo, 2019). We also add a few standard Deep RL tricks: Double Q-Learning (Hasselt, 2010) to prevent overestimation and to stabilize the learning process, and parameter noise for exploration (Fortunato et al., 2017). The experiment compares the performance of this Deep RL agent trained on a small network with 3 intersections and tested on the same small network as well as a larger one with 30 intersections. Sensor failures are also presented in the test scenarios (the exact setup is described later 4.1). As noted above, we find that faced with sensor failures, the RL agent performs comparatively worse in a large road network versus in a small one (Figure 1). Furthermore, we find that when demand surges,1 the performance decreases more in the large road network (Figure 2). This result demonstrates that a shift in the distribution of network architectures and the distribution of demand hinders the robustness of reinforcement learning approaches. These observations 1 and 2 motivate the development of robust and transferable Deep RL-based methods for traffic signal control. Footnote 1: The heavy traffic regime is simulated by doubling the number of cars in the network. In this work, we propose RGLight, a method that can further improve both the robustness and generalizability of traffic-signal controllers compared to previous works (as shown in Table 1). RGLight uses distributional RL (DisRL) (Bellemare et al., 2017; Dabney et al., 2018). Compared to standard RL that estimates the mean value of _returns_ (actions in each state), DisRL constructs a (full) distribution over returns. DisRL tends to improve the stability of the learning process, i.e., improve convergence, especially in dynamic environments (Bellemare et al., 2017; Lyle et al., 2019). Until now, DisRL instantiations focus on the single-agent setting without exogenous uncertainty. We conjecture that DisRL can also improve the Figure 1: Sensor failures can create larger delays in large networks compared to small networks. In the experiment, the large-scale network has 30 intersections while the small-scale network has 3 intersections. We tune the traffic demand parameter so that both small and large networks have a similar queue length. As a result, we can obtain a comparable baseline (shown in green). Figure 2: The comparison of different road networks given different traffic demands. In the test, we tune the arrival rate to make two networks have similar congestion (i.e., average queue length across the whole simulation steps), then increase the traffic regime (density) by two times to simulate the demand surge. learning stability in multi-agent settings and in particular in large-scale traffic signal control settings. 
Building upon the prior work of IGRL (Devailly et al., 2021), we find that a policy ensemble that combines distributional and deterministic modeling further boosts the generalizability of IGRL across a number of scenarios. We also propose several criteria to evaluate the robustness and generalizability of the learned policies and conduct extensive experiments to evaluate RGLight in both real-world settings and synthetic settings. Results show that RGLight improves the robustness and generalizability of traffic signal control compared to several state-of-the-art baselines. To summarize, our main contributions are: * A method based on a policy ensemble of distributional RL and standard graph-based RL for traffic signal control. Our approach focuses on improving the overall generalization performance and robustness of the trained RL policies. * An empirical evaluation with different types of missing values, flow patterns, and network structures using both synthetic and real-world road networks. We compare approaches using an _evaluation matrix_ to provide a more systematic analysis of the generalization ability of different models. We highlight that RGLight outperforms several state-of-the-art baselines. ## 2 Background and Related work ### RL-based Traffic Signal Control The very first implementation of RL in TSC uses tabular Q-Learning to learn from a single intersection (Wiering et al., 2004). Cai et al. (2009) then uses RL with function approximations. However, most previous investigations are limited to toy scenarios. To develop RL methods for more realistic traffic data, researchers turned their attention to deep RL. Wei et al. (2018); Shabestary and Abdulhai (2022) show that deep reinforcement learning can dynamically adjust to real-time traffic. However, the high dimension of the joint action space still limits the scalability of centralized RL approaches. ### Large-Scale Traffic Signal Control Multi-agent Reinforcement Learning (MARL) is introduced to improve the scalability of RL agents by using a decentralized control framework. Chu et al. (2019) use advantage actor-critic (A2C) as a large-scale TSC method. To be specific, neighbors' information is adapted to improve sample efficiency and promote cooperative strategy. Furthermore, a spatial discount factor is introduced to improve the learning efficiency, i.e. to reduce fitting difficulty. To enable cooperation of traffic signals, recent works study how to encourage cooperation through graph representation learning. Wei et al. (2019) propose to use a graph attention neural network in the setting of large-scale road networks with hundreds of traffic signals. They model each TSC as an agent. Agents learn to communicate by attending to the representations of neighboring intersections. Their results demonstrate the effectiveness of the attention mechanism to help cooperation and achieve superior performance over state-of-the-art methods. Concurrently, Devailly et al. (2021) further exploit the vehicular data at its finest granularity by representing every vehicle as a node. They demonstrate the flexibility of GCNs, which can enable transferability to unseen road networks. However, neither of these works evaluates their methods under exogenous uncertainties. ### Robustness in Traffic Signal Control There are several factors that could affect the model's robustness, such as sensor failures and demand surges. 
In transportation research, a very straightforward way to solve the exogenous uncertainty problem from sensor failure is to use imputation methods (Tang et al., 2015; Chen et al., 2019, 2021). For example, recent work uses a variational Bayes approach to predict missing values accurately (Chen et al., 2019). Graph Neural Network (GNN) can also be an efficient and effective tool for recovering information from malfunctioning sensors (Wu et al., 2020). Bayesian multiple imputation and bootstrap have also been used to approximate the distribution of the training set in order to estimate the state-action value function given missing data (Lizotte et al., 2008). Such methods are tailored to sensor failures and do not solve problems related to demand surges and different road networks. Therefore, we do not focus on imputation methods here. Recently, deep RL has proved to be robust in small-scale networks under the impact of special events, such as demand surges, sensor failures, and partial detection. Rodrigues and Azevedo (2019) developed the callback-based framework to enable flexible evaluation of different deep RL configurations under special events. They concluded that when training in scenarios with sensor failures, the RL approach can be quite robust to the wide sensor failure and demand surge problems. Zhang et al. (2020) demonstrate that deep RL agents can be robust within the partially detected intelligent transportation systems (PDITS), which is a partially observable Markov decision process (POMDP) in the RL community, in which only part of vehicle information can be acquired. They have conducted experiments under different detection rates and report that the RL-based control method can improve travel efficiency even with a low detection rate. However, their evaluation scenario is limited to one to five intersection cases. Most importantly, they have not further discussed how to improve the robustness based on previous reinforcement learning methods. Our model can be extended to a large-scale network. Ghanadbashi et al. (2023) introduces a model called OnCertain to improve decision-making in self-adaptive systems that interact with each other in dynamic environments. The proposed system can handle uncertainty caused by unpredictable and rare events while having limited information about the environment. ### Generalization in Traffic Signal Control The training mechanism for Deep RL follows a trial-and-error approach and is computationally expensive (see chapter 4 in Sutton and Barto (2018)). For traffic signal control, training models on large-scale networks or using a variety of different traffic demands quickly becomes prohibitive (Wei et al., 2019). As a result, designing methods that can learn on smaller networks and transfer their knowledge to large-scale ones can be beneficial. Recently, meta-RL2 has been applied to traffic signal control problems. Zang et al. (2020) propose to use value-based meta-reinforcement learning for traffic signal control which includes periodically alternating individual-level adaptation and global-level adaptation. Based on the previous work (Zang et al., 2020), Zhu et al. (2023) take the policies of neighbor agents into consideration and consider learning a latent variable to represent task-specific information to not only balance exploration and exploitation but also help learn the shared structures of reward and transition across tasks. Zhang et al. 
(2020) design a WGAN-based (Arjovsky et al., 2017) flow generator to generate different traffic flows to improve the generalization ability of TSC models to different traffic flow environments. However, MetaLight (Zang et al., 2020) considers training on larger-scale networks, then testing on a subset of training networks or smaller networks. Recently, GNNs have demonstrated generalizability to different road structures and traffic flow rates or demands. Nishi et al. (2018) stack multiple GCN layers onto neural networks to improve the generalizability to different vehicle generation rates during training. Wei et al. (2019) use graph attentional networks to facilitate communication and promote cooperation among intersections. Devailly et al. (2021) represent traffic entities as nodes in the graph to enable generalizability to new road networks, traffic distributions, and traffic regimes. Footnote 2: meta-RL: a learning-to-learn approach that involves learning on training tasks in order to ease training on test tasks drawn from the same family of problems. ### Summary of Previous Work on Robustness and Generalizability for Traffic Signal Control Table 1 summarizes and compares the previous works with respect to the following aspects: 1. Generalizability to different networks and traffic flows or demands, and 2. Robustness to sensor failures (noise). Deep reinforcement learning methods have demonstrated robustness to sensor failures (Tan et al., 2020; Rodrigues and Azevedo, 2019). Furthermore, by using the transfer learning technique (Tan et al., 2020), the trained model can also handle demand surges. However, the above methods do not adapt to new road networks. At best these methods require a fine-tuning step before being deployed on a new network. Some work proposes using meta-learning to improve the generalizability to different road networks and traffic flow distributions (Zang et al., 2020; Zhu et al., 2023; Zhang et al., 2020a). However, the training data sets usually include more scenarios than the testing sets, or the testing sets are a subset of training sets (Zang et al., 2020). Furthermore, MetaLight (Zang et al., 2020) still needs to re-train its model parameter on new intersections. As a result, they cannot perform zero-shot transfer to new road networks. Recently, graph-convolutional networks have demonstrated their ability to further improve generalizability, enabling zero-shot transfer learning to new road structures and traffic settings that have never been experienced during training. In summary, IGRL Devailly et al. (2021) is the only work that can enable zero-shot transfer learning for new scenarios. Therefore, we choose the IGRL model and its variant as our reinforcement learning baseline methods. In this work, we build upon the previous work (Devailly et al., 2021) and systematically evaluate the transferability of IGRL. We are the first to jointly improve generalizability to different networks and robustness to sensor failures and demand surges. ## 3 Methodology The proposed framework is shown in Figure 3. Like Devailly et al. (2021), we first encode the road network around each TSC including the moving components as a graph with nodes and edges. We abstract each vehicle feature (V), lane feature (L), connection feature (C), and traffic signal controller (TSC) feature as nodes of the graph (Section 3.1). Then a representation of the graph is learned using a graph convolutional network (GCN), see Section 3.2. 
We train the GCN to estimate state-action values (or returns) either using a standard RL objective (Section 3.2) or a DisRL objective (Section 3.3). In standard RL, the GCN provides a graph representation embedding \(\psi\) (Figure 3 right branch). In DisRL, we combine the embedding with an _embedding function_ \(\phi(\cdot)\) (Figure 3 left branch). We then combine the values of the returns estimated by the DisRL and the standard RL objectives (Section 3.4). The combined estimated returns can then be decoded (greedily) to obtain the agent's action. Once an action \(a_{t}\) is executed, the environment changes (e.g., following a micro-traffic simulator) and the agent can then pick its next action (\(a_{t+1}\)). In practice, we assume that the agent can execute an action every second (i.e., a timestep lasts one second). From Figure 3, we can find that on the right (traditional DQN/IGRL), pointwise estimates of state-action returns are used (one point per action/color) while on the left, multiple samples (i.e. multiple points per action/color) are drawn from quantiles and implicitly define the distribution of state-action returns for all actions.

Table 1: Previous works address generalization and robustness separately. RGLight, the method proposed in this paper, studies their combination.

### Agent Design #### 3.1.1 State space Given the state observation for each signal controller \(i\), the state-action pairs for each TSC are denoted \((s_{i},a_{i})\in S\times A,i=1,\ldots,K\). We assume that there are \(K\) intersections in the system and each agent, i.e., TSC, can observe part of the system state \(s\in S\). The number of layers in the GCN defines how large the observable part of the state space is for a given agent. For instance, when using only 2-3 layers, given the architecture of the GCN, only information regarding a local intersection (connectivity features corresponding to controllable connections and traffic features corresponding to immediately inbound and outbound lanes) is perceivable to that intersection's agent. Based on (Devailly et al., 2021), we consider the following features in each entity: * _TSC feature:_ represents the state of a controller.
The features are the number of seconds since a traffic controller performed its last phase switch. * _Connection feature:_ represents the state of an existing link between an entry lane and an exit lane. For example, the connection exists between an entry lane A and an exit lane B if a vehicle on lane A is allowed to continue its travel to lane B. The features in the connection feature are whether a connection is opened under the current phase; whether an open connection between an entry and an exit lane has priority or not; the number of switches the controller has to perform before the next opening of a given connection; and whether the next opening of the connection will have priority or not. Figure 3: Framework overview (inspired by Dabney et al. (2018)). The graph (nodes and edges) encodes the structure of the road network. The current state of the road network at each time step is encoded as node features in this graph. The graph is modeled using a graphical convolutional network (GCN). The parameters of the GCN are learned using one of two objectives. Either the standard RL objective (Devailly et al., 2021) which estimates pointwise state-action returns. Either the distributional RL objective for which multiple samples (left branch, multiple points per action/color) are drawn from quantiles and implicitly define the distribution of state-action returns for all actions (right branch, one point per action/color). In both cases, an embedding function \(\psi\) is used followed by a non-linear layer (not represented on the figure) to provide the value function \(Q(s,a)\). In the distributional RL case, the embedding is combined with a quantile embedding \(\phi\). Mathematical details are provided in Sections 3.2 and 3.3. * _Lane feature:_ represents the state of a lane. It includes the length of the lane. * _Vehicle feature:_ represents the state of a vehicle which includes its current speed and position on the current lane as a feature. #### 3.1.2 Action space At every intersection of the road network, there is a predefined logical program, composed of a given number of phases, depending on the roads, lanes, and the connection information. The program is given by the road network. The binary action of the agent is either to switch to the next phase or prolong the current phase. This modelling is compatible with TSCs using different programs. #### 3.1.3 Reward function Each agent \(i\) obtains a reward \(r_{i}^{t}\) at time \(t\) from the environment. In this paper, we want to minimize the travel time of the vehicles. The reward is defined as the negative sum of total queue lengths per intersection \(q\), \(r_{i}^{t}=-\sum_{l}q_{i,l}^{t}\). where \(q_{i,l}^{t}\) is the queue length on the lane \(l\) at time \(t\). ### Graph Representation Learning on Different Nodes #### 3.2.1 Graph representation using a GCN As in Devailly et al. (2021), we encode the state of the network as a graph. Traffic signal controllers, lanes, connections between lanes, and vehicles are nodes in this graph. Edges connect nodes that are adjacent on the road network (e.g., a vehicle node to its current lane node or a lane node to its connections with a neighbor lane). The graph is encoded using its adjacency matrix \(A\) and it is processed by a graph convolutional network (GCN) (Kipf and Welling, 2017; Liu and Zhou, 2020). 
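To make the preceding state and reward definitions concrete before describing the propagation rule, the following minimal Python sketch shows one possible way to gather the four node types and compute the queue-based reward of Section 3.1.3. It is only an illustration under our own assumptions about the snapshot format; names such as `Snapshot` and `lane_queue_lengths` are ours and do not refer to any released code.

```python
from dataclasses import dataclass

# Minimal illustrative sketch (not the authors' implementation): the snapshot
# format and all names below are our own assumptions, used only to make the
# entity/feature layout of Section 3.1 concrete.

@dataclass
class Snapshot:
    tsc_time_since_switch: dict   # tsc_id -> seconds since the last phase switch
    connections: dict             # conn_id -> (is_open, has_priority, switches_before_next_opening, next_opening_priority)
    lane_lengths: dict            # lane_id -> lane length
    vehicles: dict                # veh_id -> (current_speed, position_on_lane)
    lane_queue_lengths: dict      # (tsc_id, lane_id) -> number of queued vehicles

def build_node_features(s: Snapshot):
    """Collect TSC, connection, lane and vehicle entities as typed feature nodes."""
    nodes = []
    for tsc_id, dt in s.tsc_time_since_switch.items():
        nodes.append(("tsc", tsc_id, [float(dt)]))
    for conn_id, feats in s.connections.items():
        nodes.append(("connection", conn_id, [float(f) for f in feats]))
    for lane_id, length in s.lane_lengths.items():
        nodes.append(("lane", lane_id, [float(length)]))
    for veh_id, (speed, pos) in s.vehicles.items():
        nodes.append(("vehicle", veh_id, [float(speed), float(pos)]))
    return nodes  # edges would connect vehicles to lanes, lanes to connections, and connections to TSCs

def reward(s: Snapshot, tsc_id) -> float:
    """r_i^t = - sum_l q_{i,l}^t : negative total queue length at one intersection."""
    return -sum(q for (i, _lane), q in s.lane_queue_lengths.items() if i == tsc_id)
```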
The GCN propagates information between nodes to obtain a representation \(H^{n}\) at each layer \(n\): \[H^{n+1}=\sigma\left(D^{-\frac{1}{2}}AD^{-\frac{1}{2}}H^{n}W^{n}\right), \tag{1}\] where \(D\) is a (diagonal) degree matrix (\(D_{ii}=\sum_{j}A_{ij}\)) which normalizes \(A\) using its number of neighbors, \(W^{n}\) are learned parameters and \(\sigma\) is the sigmoid activation function (Kipf and Welling, 2017). Along with the graph structure, nodes and edges can have features \(X\). These features are used to obtain the first-layer representation: \[H^{0}=\sigma(W^{0\top}X+b^{0}) \tag{2}\] where \(W^{0}\) and \(b^{0}\) are learned parameters. Assuming \(N\) hidden layers, we use the last-layer representation \(H^{N}\) to predict a value function. Let \(\psi:\mathcal{X}\rightarrow\mathbb{R}^{d}\) be an embedding function parameterized by the GCN layers. We add a subsequent fully-connected layer to map \(\psi(x)\) to the estimated action values, such that \(Q(x,a)\equiv f(\psi(x))_{a}\), where \(a\) in \(f(\cdot)_{a}\) indexes the output action. We can get the estimated Q values as: \[Q(s,a)=(H^{N}W_{p}+b_{p})_{(s,a)}, \tag{3}\] where \(W_{p}\in R^{c\times p}\) and \(b_{p}\in R^{p}\) are parameters of the neural networks, and \(p\) is the number of phases (action space). In Deep RL, the objective to optimize at each time step \(t\) is \[\mathcal{L}(\theta)=(y_{t}-Q\left(s_{t},a_{t};\theta\right))^{2}, \tag{4}\] where \(y_{t}=r_{t}+\gamma max_{a}Q(s_{t+1},a_{t+1})\), \(\theta\) represents all trainable parameters \((b^{0},W^{0,\ldots,N-1},b_{p},W_{p})\) and \(\gamma\) is the (fixed) discount factor. The (greedy) action associated with the value function can be obtained for each state as: \[\pi(s)=\underset{a\in A}{arg\,max}\ Q(s,a). \tag{5}\] where \(\pi(s)\) denotes the policy in state \(s\). #### 3.2.2 Parameter sharing Each TSC learns to maximize its local reward and as such TSCs are independent. However, the parameters of all TSCs are shared to encourage learning parameters that transfer to a variety of situations. In particular, nodes of the same type both within the same TSC and across TSCs share the same parameters. Parameter sharing also reduces the memory footprint of the system (since the number of parameters is now independent of the number of TSCs). The system can then scale to very large networks (Devailly et al., 2021). ### Distributional RL The previous section introduces standard RL for GCNs (4). Now, we discuss learning the GCN model using distributional RL (DisRL). Compared to traditional RL, DisRL models the distribution over returns. The expectation of that distribution yields the standard value function. In this work, we use implicit quantile networks (Dabney et al., 2018), a distributional version of Deep Q-Networks (Silver et al., 2017). Implicit quantile networks can approximate any distribution over returns and show superior performance compared to other DisRL methods (Bellemare et al., 2017; Dabney et al., 2018). Implicit quantile networks define an implicit distribution using samples \(\tau\) from a base distribution \(\tau\sim U([0,1])\). The implicit distribution is parameterized using \(\phi:[0,1]\to R^{d}\). The function \(\phi\) provides the embedding for quantile \(\tau\). 
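As a point of reference for the distributional variant below, the standard (pointwise) branch of Equations (1)-(5) can be sketched as follows. This is a simplified NumPy illustration with dense matrices, no minibatching, and the input layer of Equation (2) omitted; it is not the training code used in the experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gcn_layer(A, H, W):
    """Eq. (1): H_{n+1} = sigma(D^{-1/2} A D^{-1/2} H_n W_n), with D the degree matrix of A."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return sigmoid(D_inv_sqrt @ A @ D_inv_sqrt @ H @ W)

def q_values(H_last, W_p, b_p):
    """Eq. (3): map the last-layer representation to one Q-value per phase (action)."""
    return H_last @ W_p + b_p

def td_loss(q_sa, r, q_next, gamma):
    """Eq. (4): squared TD error with target y = r + gamma * max_a Q(s_{t+1}, a)."""
    y = r + gamma * np.max(q_next)
    return (y - q_sa) ** 2

def greedy_action(q_row):
    """Eq. (5): the greedy policy picks the action with the largest estimated return."""
    return int(np.argmax(q_row))
```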
This embedding \(\phi\) is combined with the GCN's output embedding \(\psi\) to form the approximation of the distributional Q-values (see Figure 3 (a)): \[Z_{\tau}(s,a)\equiv f(\psi(s)\odot\phi(\tau))_{a}, \tag{6}\] where \(\odot\) represents the element wise product, the \(a\) on the RHS indexes the output of the function \(f\). We use the same embedding function as in (Dabney et al., 2018): \[\phi_{j}(\tau):=\text{ReLU}\left(\sum_{i=0}^{n-1}\cos(\pi i\tau)w_{ij}+b_{j} \right), \tag{7}\] where \(n\) is the size of the input embedding, \(j\in{1,\ldots,d}\) indexes different units (neurons), and \(w_{ij}\) and \(b_{j}\) are parameters shared across all TSCs (much like parameters of the GCN Equation (1) are also shared across TSCs). As a result, the state-action value function can be represented as the expectation: \[Q(s,a):=\underset{\tau\sim U([0,1])}{E}\left[Z_{(\tau)}(s,a)\right], \tag{8}\] and its associated greedy policy can be obtained from Equation (5). In DisRL, we want to minimize the distance between two distributions so as to minimize the temporal difference error (TD-Error). For two samples \(\tau,\tau^{\prime}\sim U([0,1])\), and policy \(\pi\), the TD-Error at time step \(t\) can be computed as: \[\delta_{t}^{\tau,\tau^{\prime}}=r_{t}+\gamma Z_{\tau^{\prime}}\left(s_{t+1}, \pi\left(s_{t+1}\right)\right)-Z_{\tau}\left(s_{t},a_{t}\right). \tag{9}\] Furthermore, the random return is approximated by a uniform mixture of \(K\) Dirac delta function: \[Z(s,a):=\frac{1}{K}\sum_{i=1}^{K}\delta_{\mu_{i}(s,a)}, \tag{10}\] where each \(\mu_{i}\) assigned a fixed quantile target. The quantile target's estimations are trained using the Huber loss (Crow and Siddiqui, 1967) with threshold \(\lambda\). As a result, the distributional version of loss function is formulated as: \[\mathcal{L}_{dis}\left(\theta\right)=\frac{1}{M^{\prime}}\sum_{i=1}^{M}\sum_{j =1}^{M^{\prime}}\rho_{\tau_{i}}^{\lambda}\left(\delta_{t}^{\tau_{i},\tau_{j}^{ \prime}}\right), \tag{11}\] with \(\rho_{\tau_{i}}^{\lambda}\) is the quantile regression term (Dabney et al., 2018), \(M\) and \(M^{\prime}\) the number of samples used to evaluate the TD-error. ### RGLight In the previous sections, we introduce two different reinforcement learning formulations for learning TSC policies (see Figure 3). Our initial experiments show important empirical differences between the two approaches. First, we find that distributional RL converges faster than classical RL in our domain. We also note that the embeddings learned by both approaches are different (see Figure 6 in the supplementary material for an example). We suspect a combination of the learned policy might yield the best of both worlds. To do so, we train both approaches separately and then combine their (estimated) Q-values (during testing) (see Figure 3). Given a set of actions \(A(s_{t})=\{a[1],...,a[n]\}\), The estimated Q-value for action \(a_{i}\) is \(Q(s_{t},a_{i})\) at time \(t\). We first normalize the Q values of both methods. We find that exponentiating the values first yields better results (Wiering and Van Hasselt, 2008): \[\tilde{Q}(s,a)=\frac{e^{Q(s,a)/T}}{\sum_{i}e^{Q(s,a_{i})/T}}. \tag{12}\] We then obtain \(\tilde{Q}^{RG}\) the Q-value used by RGLight as a convex combination of the normalized Q-values of the two methods: \[\tilde{Q}^{RG}=\kappa\tilde{Q}^{deter}+(1-\kappa)\tilde{Q}^{dis}, \tag{13}\] where we dropped the \(s\) and \(a\) indexes for clarity and \(\kappa\in[0,1]\) is the relative importance of the standard RL approach. 
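For concreteness, the quantile branch of Equations (6)-(8) and the combination of Equations (12)-(13) can be sketched as follows. This is a minimal NumPy illustration under our own simplifications (a single state, dense arrays, Monte-Carlo estimation of the expectation); the helper names are ours, and the defaults for \(\kappa\) and \(T\) anticipate the values retained below.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantile_embedding(tau, w, b):
    """Eq. (7): phi_j(tau) = ReLU( sum_{i=0}^{n-1} cos(pi * i * tau) * w_ij + b_j )."""
    n = w.shape[0]
    cos_features = np.cos(np.pi * np.arange(n) * tau)   # shape (n,)
    return np.maximum(cos_features @ w + b, 0.0)         # shape (d,)

def distributional_q(psi, w, b, f, num_taus=32):
    """Eqs. (6) and (8): Q(s,a) = E_{tau ~ U[0,1]}[ f(psi * phi(tau))_a ], estimated by sampling."""
    taus = rng.uniform(0.0, 1.0, num_taus)
    samples = [f(psi * quantile_embedding(t, w, b)) for t in taus]  # each: one value per action
    return np.mean(samples, axis=0)

def rglight_combine(q_deter, q_dis, kappa=0.6, T=5.0):
    """Eqs. (12)-(13): temperature-softmax normalization of each head, then a convex combination."""
    def normalize(q):
        e = np.exp(np.asarray(q) / T)
        return e / e.sum()
    return kappa * normalize(q_deter) + (1.0 - kappa) * normalize(q_dis)
```

In this sketch, `f` plays the role of the final fully-connected layer that maps the combined embedding to per-action values.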
We ensemble the prediction results from two frameworks to improve the robustness and generalizability of our model. Based on preliminary simulations, we find that \(\kappa=0.6\) and \(T=5\) offer more consistent and higher performance across experiments. ## 4 Experiments In this section, we study the effectiveness of the RGLight method for multi-agent TSC. We aim at answering the following questions: * How does the proposed method perform compared with other state-of-the-art baselines? (Section 4.2.1 and Section 4.2.2) * Is the proposed method more robust to sensor failure problems compared to other baseline methods? (Section 4.2.1 and Section 4.2.2) * Can the proposed method generalize to different road network structures and traffic regimes? (Section 4.3) * How can we balance the trade-off between representation capacity and learning stability to improve the overall robustness and generalizability? (Section 4.3 and Section 4.2.2) ### Experiment Setup The scenario we study is one where a system learns in a "controlled environment" on synthetic networks with no missing data. Then the performance, robustness, and generalizability of the system are tested by "deploying" it in a more realistic scenario that involves new networks (synthetic or from the real world), different traffic regimes (demand surges), and missing data. A visualization of the learning setup is shown in Figure 4. To be more precise, we train RL methods (DGRL, IGRL, and GNN-TSC) on synthetic road networks for 60 episodes without missing data or demand surge. Then we test their performance on either other synthetic networks or, perform zero-shot generalization by controlling the TSCs of two real-world networks (a part of Luxembourg and Manhattan). All of our studies use the simulation of urban mobility (SUMO) (Krajzewicz et al., 2002) micro simulator. #### 4.1.1 Background and Assumption * **Sensor Failures:** In all of our experiments, we assume that _we know the lane each vehicle is in_. We imagine, for example, that on each traffic signal controller, there would be a camera/detector that can sense which vehicle has entered which lane, and it is not likely to fail (Wu et al., 2020). The most common cause of missing data comes from the sensor failure of probed vehicles, which means that the system detects the vehicle, but does not get its current speed and exact position (Lu et al., 2008; Qiu et al., 2010). We assume faulty vehicle sensors provide a value of zero. * **Traffic flows:** We consider different traffic flows as both different traffic distributions and traffic demands. Particularly, different traffic demands are based on the arrival rate. For all these experiments, the trip is generated by SUMO's trip generator.3 The arrival rate is controlled by the option _period_ in SUMO (Krajzewicz et al., 2002). By default, this generates vehicles with a constant period and arrival rate of (1/period) per second. Note that for different scales of road networks, the same arrival rate will end up with different traffic signal performances.4 For the trip distribution, the number of departures per second will be drawn from a binomial distribution. In our experiment setting, the trip distribution (the probability of a successful departure) will be changed every 120 seconds. As a result, both the traffic distribution and the traffic demands can be changed in our study. 
Footnote 3: [https://sumo.dlr.de/docs/Tools/Trip.html](https://sumo.dlr.de/docs/Tools/Trip.html) * **Evaluation metrics:** We discuss the performance of the methods using several standard evaluation metrics (Devailly et al. (2021); Wei et al. (2018)). #### Travel time The travel time is defined as the time duration between the real departure time and the time the vehicle has arrived. The information is generated for each vehicle as soon as the vehicle arrives at its destination and is removed from the network. #### Queue length The queue length is calculated at the lane level using the end of the last standing vehicle. This criterion measures congestion, representing whether it significantly slowed close to an intersection. #### Delay The delay \(d_{t}\) measures the gap between the current speed of the vehicle and its maximum theoretically reachable speed, which is constrained by the type of the vehicle and the maximum allowed speed on the current lane \[s_{v}^{*}=\min\left(s_{v^{*}},s_{l}\right), \tag{14}\] \[d_{t}=\sum_{v\in V}\left(s_{v}^{*}-s_{vt}\right)/s_{v}^{*} \tag{15}\] where \(V\) is the total number of vehicles traveling in the current network, \(s_{v^{*}}\) is the maximum speed that the vehicle can reach, \(s_{l}\) is the speed limit of this road, and \(s_{vt}\) is the vehicle speed at time step \(t\) and \(d_{t}\) denotes the delay at time \(t\). Instantaneous delay for 1 vehicle is how far it currently is from its optimal theoretically reachable speed #### 4.1.2 Datasets We evaluate the different methods using both synthetic networks with synthetic data and real-world networks with real-world traffic routes. * Synthetic networks: We use the same approach to generate the synthetic networks as in IGRL (Devailly et al., 2021). The structure of the synthetic road networks is generated at random using the SUMO simulator, the number of intersections varies between two and ten; the length of every edge is between 100 and 300 meters, and the number of lanes per route is between one and four. Some examples of the generated networks can be seen in Figure 4. We try to maximize the variability of the training networks by generating random networks to cover the most typical cases in real-world networks. * Real-world networks: We use representative traffic data5 from part of Luxembourg and Manhattan to evaluate the performance of our model in real-world settings. Manhattan has a grid-like road network and contains 75 traffic lights and 550 intersections. The Luxembourg network contains 22 traffic lights and 482 intersections. It is also more irregular than Manhattan. Both networks have different traffic demand evolution characteristics as shown in Figure 1 and 2 in the supplementary material. Footnote 5: Luxembourg: [https://github.com/lcodeca/LuSTScenario](https://github.com/lcodeca/LuSTScenario), Manhattan: [https://traffic-signal-control.github.io/](https://traffic-signal-control.github.io/) #### 4.1.3 Baselines We compare our method with several state-of-the-art methods, including both classical transportation methods and learned ones. **Transportation Methods**: * Fixed time Baseline (Koonce and Rodegerdts, 2008): It uses a predetermined plan for cycle length and phase time. This technique is widely used when the traffic flow is steady (Koonce and Rodegerdts, 2008). 
* Max-moving-car-dynamic-heuristic (Greedy): This dynamic heuristic-based method aims at ensuring that as many vehicles as possible are moving on inbound lanes at any given time, in the spirit of the popular baseline _Greedy_(Variya, 2013) under a cyclic setting. Controllers switch to the next phase if, on inbound lanes, the number of stopped vehicles is superior to the number of moving vehicles, and prolongs the current phase otherwise. **Reinforcement Learning Methods**: Figure 4: Learning scheme for our model. Diverse synthetic road networks are used for the training set while real-world road networks are used for the testing set. * Inductive Graph Reinforcement Learning (IGRL) (Devailly et al., 2021): This recent approach uses graph convolutional networks with a decentralized RL objective. The authors show that their approach can scale and transfer to massive-scale networks. Our robust learning framework is based on IGRL. We compare against their best-performing model IGRL-V which models vehicles as nodes. * Graph Neural Networks for TSC (GNN-TSC) (Wei et al., 2019): Similar to IGRL, the authors propose a GNN-based RL-trained model. Compared to IGRL (Devailly et al., 2021), the method does not consider individual vehicles as nodes in the graph. Instead, they model information at the lane level. With that in mind, we use IGRL-L, a version of IGRL that models lane nodes rather than vehicles as nodes. This version is similar to the CoLight method (Wei et al., 2019).6 Footnote 6: The authors of (Wei et al., 2019) rely on the CityFlow simulator [https://cityflow-project.github.io/](https://cityflow-project.github.io/), we use SUMO, which prevents a direct comparison without a major code rewrite. * Independent Reinforcement Learning (IRL): An independent deep Q-Learning (DQN) agent can be used to model each TSC. DQNs have som level of robustness given demand surges and sensor failures (Rodrigues and Azevedo, 2019; Zhang et al., 2020). Further, the IRL baseline couples DQNs with recent developments for improved robustness: double Q-Learning (Hasselt, 2010), a dueling architecture (Wang et al., 2016), and noisy layers (Fortunato et al., 2017). ### Performance Comparison In this section, we compare the performance of the above baselines to the performance of RGLight with respect to different traffic regimes and sensor failures. All experiments are repeated 30 times with different random seeds for trip generations and the average results are presented. For every evaluation metric, we report the sum of a 1,000-time-step simulation. Note that for each criterion, for readability, the obtained value is divided by 100 in the tables. We also provide a video illustrating the different methods.7 Footnote 7: Simulation video link: [https://youtu.be/wTUkoXvVghs](https://youtu.be/wTUkoXvVghs) #### 4.2.1 Comparison under Different Traffic Regime in Synthetic Networks Table 2 reports the performance of different methods for both normal and heavy traffic regimes in synthetic networks.8 We use the same road network (not seen in the training set) in tests for all methods with 30 random seeds for trips. Footnote 8: We conduct the demand surge experiment in a synthetic network because it is difficult to control the demand parameter in real networks with real traffic demand. Overall, RGLight outperforms others in the normal regime across the three metrics except in terms of travel time where IGRL does as well. RGLight also shines in a heavy regime showing that it is more robust to demand surges. 
We see that Fixed time does not perform as well as _Greedy_ in normal traffic regimes but better than _Greedy_ in heavy traffic regimes. In terms of travel time, RGLight performs about the same as IGRL in the \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{5}{c}{Normal regime} & \multicolumn{3}{c}{Heavy regime} \\ \hline Methods & Delay & Queue length & Travel time & Delay & Queue length & Travel time \\ \hline Fixed time & 789.26(36.36) & 588.88(53.35) & 1182.26(132.57) & 4095.91(108.54) & 4553.34(112.34) & 13901.72(922.15) \\ _Greedy_ & 379.91(12.22) & 191.91(10.14) & 670.28(232.55) & 6201.11(188.23) & 6685.94(190.42) & 15150.86(743.36) \\ \hline IRL & 1257.58(31.84) & 1013.89(29.40) & 1242.38(46.78) & 5257.58(152.62) & 6670.75(160.25) & 14112.98(498.12) \\ GNN-TSC & 311.85(4.32) & 210.43(10.53) & 517.15(34.32) & 2998.63(614.47) & 3645.75(92.68) & 6092.63(428.75) \\ IGRL & 288.16(8.60) & 128.89(7.72) & **501.36(22.22)** & 2962.92(81.81) & 3515.23(860.04) & 6081.32(355.51) \\ RGLight & **244.15(4.25)** & **80.11(2.74)** & 501.95(20.77) & **2503.96(71.91)** & **3029.45(76.75)** & **5030.31(318.82)** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison result under different traffic regimes (average and standard deviation in seconds). In this experiment, we use synthetic traffic data to better control the traffic demand surge, where the heavy regime’s traffic demand is twice the normal traffic regime. Lower is better, and the best mean value is bolded. normal regime. As shown in Figure 7, although IGRL and RGLight provide similar average travel times, the empirical distribution of their difference is skewed to the right. This seems to indicate that under this evaluation RGLight is more equitable. In a heavy traffic regime, we see that RGLight outperforms IGRL by a large margin. #### 4.2.2 Comparison under Sensor Failures in Different Real-world Road Networks In this experiment, we test our model's performance with two real-world road networks using real traffic demand (see Figure 1 and 2 in supplementary material). The IRL method does not scale to such large networks (the parameters increase linearly with the number of TSCs) and so we cannot report its performance. Transportation baselines do not consider speed or vehicle position and so their performance is robust to noisy sensors. We first discuss the performance in the Manhattan road network from table 3. We find RGLight outperforms other methods. It is also more robust in scenarios with higher proportions of missing data compared to the other RL baselines. Second, we study methods on Luxembourg's road network. Results in table 4 are similar to previous ones. RGLight outperforms other methods, especially as missing data increases. However, given higher probabilities of missing data, i.e., 60%, both IGRL, and GAT-TSC perform worse than the Fixed time method, which might limit their usefulness. Contrary to the Manhattan study, _Greedy_ performs worse than the Fixed time method. This result suggests that when the road network becomes more irregular as is the case for Luxembourg, _Greedy_ tends to fail. To confirm, we tested the _Greedy_ method on two synthetic networks with the same number of intersections, one with irregular road patterns (more similar to Luxemburg) and the second one laid out as a grid (similar to Manhattan). We confirm that _Greedy_ performs better on the latter. To visualize the performance of the learned policy, we collect the average delays per time step in two road networks. 
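For reference, the delay we plot per time step is the quantity of Equations (14)-(15); a minimal sketch of its computation (variable names are ours) is:

```python
def instantaneous_delay(vehicles):
    """Eqs. (14)-(15): d_t = sum_v (s_v* - s_vt) / s_v*, with s_v* = min(vehicle max speed, lane speed limit)."""
    total = 0.0
    for v in vehicles:  # each v: dict with keys "current_speed", "max_speed", "lane_speed_limit"
        s_star = min(v["max_speed"], v["lane_speed_limit"])
        total += (s_star - v["current_speed"]) / s_star
    return total
```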
We select the best RL baseline and two transportation baselines. In Figure 5, we see that RGLight better mitigates the effect of demand surge compared to other baselines. Moreover, from Figure 6, faced with a more challenging demand evolution in the Luxembourg road network, RGLight also demonstrates the overall best robustness. \begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Missing Probability (20/ 60 \%)} \\ \cline{2-3} & Delay & Queue Length & Travel Time \\ \hline Fixed time & 594.22(16.24) & 509.79(14.33) & 620.98(68.54) \\ _Greedy_ & 754.27(22.16) & 663.01(39.97) & 781.38(13.184) \\ \hline GNN-TSC & 489.50 (6.38) / 595.544(8.82) / 723.65(10.79) & 385.65 (5.06) / 511.68 (8.71) / 627.66(10.59) & 554.16(29.69) / 651.36(46.48) / 721.98(58.02) \\ IGRL & 438.26 (8.31) / 531.25(9.30) / 678.75(14.37) & 373.33 (4.89) / 460.07 (6.23) / 589.61(7.35) & 527.38(31.20) / 591.92(22.71) / 683.22(40.51) \\ RGLight & **419.43(6.28)** / **501.86(7.12)** / **545.68(8.36)** & **356.28(3.27)** / **421.85(5.71)** / **460.28(7.91)** & **467.94(16.35)** / **535.66(23.98)** / **572.67(28.01)** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison Result under missing values in Luxembourg road Network. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Missing Probability (20/ 40/ 60 \%)} \\ \cline{2-3} & Delay & Queue Length & Travel Time \\ \hline Fixed time & 1364.45(41.29) & 937.47(40.48) & 1871.36 (238.90) \\ _Greedy_ & 1144.30(34.32) & 907.24(44.43) & 1630.67(264.48) \\ \hline GNN-TSC & 484.49(48.4) / 497.18(96.61) / 696.15(9.82) & 409.75(7.84) / 573.98(7.68) / 612.96(5.24) & 973.46(27.23) / 1273.31(12.67) / 1346.75(41.45) \\ RGL & 413.94(39.94) / 518.14(11.87) / 653.28(13.76) & 314.74(3.96) / 417.53(3.86) / 499.80(3.55) & 906.65(25.47) / 1163.89(10.83) / 1206.46(18.27) \\ RGLight & **364.23(9.95)** / **307.91(4.05)** / **492.89(9.12)** & **311.99(3.01)** / **363.00(3.17)** / **403.11(3.22)** & **954.28(15.66)** / **1032.56(13.63)** / **1088.7(17.36)** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison result under missing values in Manhattan road network (average and standard deviation in seconds). These two experiments with real-world road networks can test not only test the robustness of different methods but also test how they generalize to different road networks since we train our model on smaller synthetic networks. Figure 5: Average delays evolution in Manhattan road network. Figure 6: Average delays evolution in Luxembourg road network. ### Generalizability analysis Now we test more systematically the ability of the models to generalize to networks of different shapes and scales and under different traffic demands. This departs from most previous works (Wei et al., 2019; Zhang et al., 2020; Oroojlooy et al., 2020) that keep training and testing conditions similar. We also introduce DGRL, a pure distributional baseline version of IGRL, obtained by setting \(k=0\) in Equation 13. We train models on _irregular_ synthetic networks with 2 to 6 intersections. The horizontal direction on each sub-figure in Figures 8 and 9 represents different traffic demands (0.5, 1, 2, 4), and the vertical direction represents different grid network scales, that is, how many columns and rows in the grid network (4, 6, 8). In total, we test 16 different scenarios for each model to evaluate its generalizability. We use the average delay over the whole simulation process to evaluate model performance. 
Furthermore, we normalize the average delay of each method for readability: \[x_{i}^{\prime}=\frac{x_{i}-x_{min}}{x_{max}-x_{min}}\times 10,000 \tag{16}\] where \(x_{i}\) is the average delay calculated from method \(i\), \(x_{max}\) and \(x_{min}\) are the maximum and minimum delay calculated across all methods given the specific scenario. Then we can use the normalized average delay to plot the colormap in Figure 8. The values of \(x_{i}^{\prime}\) range between 0 and 10,000 and smaller values indicate better performances. Figure 8 shows that all methods tend to perform worse for heavy-traffic regimes in small networks (upper-left corner). This matches common knowledge about network traffic capacity (Loder et al., 2019). We also find that the _Greedy_ baseline performs relatively well in small-scale networks but performs worse in large-scale networks. We hypothesize it assumes that the downstream lanes have an unlimited capacity which makes it not very realistic in large-scale networks. As a result, we can see that the model's performance worsens when the network scale increases. This is similar to the finding in Wei et al. (2019). On the other hand, we find that RL-based methods (i.e., IGRL and DGRL) are less sensitive to network scale change compared to the transportation method. This result demonstrates that RL methods can better generalize to different network structures than standard transportation baselines. We now focus on the reinforcement-learning methods. In the bottom right corner, IGRL performs better than DGRL, but DGRL performs better than IGRL in the upper-left corner (i.e., smaller network with higher demand). These results indicate the weaker generalization ability of IGRL since its performance tends to decrease in test scenarios that are very different from the training scenarios (e.g., a small network under a heavy-traffic regime). We also find that DGRL performs better than IGRL in a small network with a heavy-traffic regime. We suspect that since the distributional approach uses a robust loss it might be less sensitive to outliers. However, in a normal traffic regime with a larger network, DGRL performs worse than IGRL. These findings further motivate the policy ensemble approach. Overall, we find that the RGLight Figure 7: Differences of paired trips travel time compared to RGLight. We report the difference between RGLight and the method (i.e. RGLight - method) and so numbers higher than 0 indicate the method being outperformed by RGLight. The y-axis is normalized. Figure 8: Comparison of generalizability using delay for different methods. The lateral direction on each sub-figure represents different traffic demands and the longitudinal direction represents different grid network scales (how many columns and rows are in the grid network). For example, in a scenario with a network scale of 4 and a demand of 0.5, we have a grid network with 4 columns and 4 rows and the arrival rate is 1/0.5=2 veh/seconds. The shading can only be compared across the methods by using the same scenario configuration (network scale and demand). For example, in a scenario with a network scale of 2 and a demand of 0.5, the Fixed time approach performs the worst so the color is darker compared to corresponding cells in other methods. method performs best across most scenarios. This result indicates that an ensemble of policies can boost generalizability. 
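For completeness, the per-scenario normalization of Equation (16) used to produce the shading in Figure 8 can be sketched as follows (a minimal illustration; variable names are ours):

```python
import numpy as np

def normalize_delays(delays_by_method):
    """Eq. (16): per-scenario min-max normalization of average delays, scaled to [0, 10000]."""
    methods = list(delays_by_method)
    x = np.array([delays_by_method[m] for m in methods], dtype=float)
    x_scaled = (x - x.min()) / (x.max() - x.min()) * 10_000  # smaller is better
    return dict(zip(methods, x_scaled))
```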
### Interpretation of Learned Policies To further analyze the characteristics of the policies learned by the RL methods, we examine the switch rates of IGRL, DGRL, and RGLight. Recall that the actions are binary and correspond to either switching to the next phase in a signal's program (action 1) or not switching (action 0). The switching rate is the ratio of signals that perform a phase switch (action 1) in a single timestep across all intersections. Using a similar matrix across network scale and demand as before, Figure 9 reports the average switch rate across methods. Comparing Figure 9 (b) and (c), we see that overall IGRL exhibits a higher switch rate compared to DGRL. In contrast, RGLight is often in-between IGRL and DGRL except when the demand is the highest (first column) and it switches more often than both. This seems to indicate that RGLight attains states Figure 9: Comparison of switch rates for different methods. We also use the same strategy to normalize the switch rate. Values closer to 1 indicate a higher switch rate. The numbers on each cell stand for the average switch rate multiplied by 1000. that are different than the two other methods. We further discuss the scenario with a 2x2 network and a demand of 1800 veh/h. By considering Figure 8 (a) and Figure 9 (a) together, we observe that RGLight does best. Further, its switch rate (58) is in-between IGRL's (109.4) and DGRL's (30.62). We provide a video demonstration of this simulation.9 In the video we notice that a policy that switches too often (IGRL) leads to a shock wave or gridlock. On the other hand, switching too slowly (DGRL) ends up preventing significant traffic from passing to allow less busy lanes to advance. RGLight seems to have found a good comprise. We believe it is worth further investigating how to design the signal phase and the action space based on these types of results. Footnote 9: Simulation video link: [https://youtu.be/-n_LUbNjIUs](https://youtu.be/-n_LUbNjIUs) ## 5 Conclusions and Discussion Motivated by gaps in the current literature (Table 1), we propose RGLight, an RL approach that combines two reinforcement learning agents and that provides more generalizable and robust policies. Further, we conduct a series of experiments on two different real-world networks with real traffic demands and show that our method outperforms several state-of-the-art baselines. In future work, we plan to study the empirical and theoretical properties of RGLight to model multi-agent systems in other similar domains. Such general multi-agent settings include connected and automated vehicles environment (Wang et al., 2020) and traffic junction environment (Liu et al., 2020). As a second avenue, we will investigate combinations of RGLight (model-free) and model-based reinforcement learning that can both improve performance and also (training) data efficiency (Schrittwieser et al., 2020). ## Acknowledgment This research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, Mitacs Canada, the Canada Foundation for Innovation (CFI), and LC is supported by a Canada AI CIFAR Chair.
2307.08396
Schur property for jump parts of gradient measures
We consider weakly null sequences in the Banach space of functions of bounded variation $\mathrm{BV}(\mathbb{R}^d)$. We prove that for any such sequence $\{f_n\}$ the jump parts of the gradients of functions $f_n$ tend to $0$ strongly as measures. It implies that Dunford--Pettis property for the space $\mathrm{SBV}$ is equivalent to the Dunford--Pettis property for the Sobolev space $W^{1,1}.$
Krystian Kazaniecki, Anton Tselishchev, Michał Wojciechowski
2023-07-17T11:23:06Z
http://arxiv.org/abs/2307.08396v2
# Schur property for jump parts of gradient measures ###### Abstract. We consider weakly null sequences in the Banach space of functions of bounded variation \(\mathrm{BV}(\mathbb{R}^{d})\). We prove that for any such sequence \(\{f_{n}\}\) the jump parts of the gradients of functions \(f_{n}\) tend to \(0\) strongly as measures. Key words and phrases:Bounded variation, gradient measures, weak convergence 2020 Mathematics Subject Classification: 26B30, 46E35 This research was supported by the National Science Centre, Poland, and Austrian Science Foundation FWF joint CEUS programme. National Science Centre project no. 2020/02/Y/ST1/00072 and FWF project no. I5231. ### Basic definitions and formulation of the main result The space \(\operatorname{BV}(\mathbb{R}^{d})\) is the space of functions \(u\) in \(L^{1}(\mathbb{R}^{d})\) such that their distributional gradient \(Du\) is a (vector-valued) measure. The norm on this space is defined in a following way: \(\|u\|_{\operatorname{BV}}=\|u\|_{L^{1}}+\|Du\|\). Here and everywhere below for any measure \(\mu\) (real or vector-valued) notation \(\|\mu\|\) stands for its total variation. We present certain definitions and facts about \(BV\) functions here (which are essentially well known); all of them are taken from [3, Chapter 3]. The gradient \(Du\) of any function \(u\in\operatorname{BV}\) can be written as a sum of its absolutely continuous and singular parts: \(Du=D^{a}u+D^{s}u\). The singular part can be further decomposed as the sum of the Cantor and jump parts. We proceed with the description of this decomposition. We denote by \(B_{\rho}(x)\) the ball in \(\mathbb{R}^{d}\) of radius \(\rho\) with center at the point \(x\). For any vector \(\nu\) (say, of unit length) we will also use the following convenient notation for two halves of the ball \(B_{\rho}(x)\): \[B_{\rho}^{+}(x,\nu) =\{y\in B_{\rho}(x):(y-x)\cdot\nu>0\};\] \[B_{\rho}^{-}(x,\nu) =\{y\in B_{\rho}(x):(y-x)\cdot\nu<0\}.\] The set \(J_{u}\) is now defined as the set of all approximate jump points of \(u\), that is, \(x\in J_{u}\) if there exist numbers \(a\neq b\) and the unit vector \(\nu\) such that \[\lim_{\rho\to+0}\frac{1}{|B_{\rho}^{+}(x,\nu)|}\int_{B_{\rho}^{ +}(x,\nu)}|u(y)-a|\,dy =0;\] \[\lim_{\rho\to+0}\frac{1}{|B_{\rho}^{-}(x,\nu)|}\int_{B_{\rho}^{-} (x,\nu)}|u(y)-b|\,dy =0.\] Here the triple \((a,b,\nu)\) is uniquely determined by these conditions up to a permutation of \(a\) and \(b\) and change of sign of the vector \(\nu\). We denote \(\nu=\nu_{u}(x)\), \(u^{+}(x)=a\), \(u^{-}(x)=b\). The "jump part" of \(Du\) is defined as \(D^{j}u=(D^{s}u)|_{J_{u}}\). We will also call the set \(J_{u}\) a "jump set" of a \(BV\) function \(u\). The following identity holds for a jump part of the gradient (see [3, Theorem 3.77]): \[D^{j}u=(u^{+}-u^{-})\otimes\eta_{u}\mathcal{H}^{d-1}|_{J_{u}}.\] We say that \(u\) has an approximate limit in \(x\in\mathbb{R}^{d}\) if there exists \(z\in\mathbb{R}\) such that \[\lim_{\rho\to 0+}\frac{1}{|B_{\rho}(x)|}\int_{B_{\rho}(x)}|u(y)-z|\,dy=0.\] The set of all points which do not satisfy this property is called an approximate discontinuity set and denoted by \(S_{u}\). The set \(S_{u}\) is countably \(\mathcal{H}^{d-1}\)-rectifiable, i.e. it means that up to \(\mathcal{H}^{d-1}\)-negligible set it is covered by \(\bigcup_{k\in\mathbb{N}}\Phi_{k}([0,1]^{d-1})\) where \(\Phi_{k}:[0,1]^{d-1}\to\mathbb{R}^{d}\) are Lipschitz functions. Besides that, we have \(\mathcal{H}^{d-1}(S_{u}\setminus J_{u})=0\) (see [3, Theorem 3.78]). 
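To fix ideas, we recall the standard model example of a purely jump-type gradient; it is stated here only to illustrate the notation just introduced. Let \(u=\chi_{E}\) be the characteristic function of a bounded set \(E\subset\mathbb{R}^{d}\) with smooth boundary. Then \(S_{u}=J_{u}=\partial E\), and at every \(x\in\partial E\) we may take \(\nu_{u}(x)\) to be the unit normal pointing into \(E\), so that \(u^{+}(x)=1\) and \(u^{-}(x)=0\). Consequently
\[Du=D^{j}u=\nu_{u}\,\mathcal{H}^{d-1}|_{\partial E},\qquad\|Du\|=\mathcal{H}^{d-1}(\partial E),\]
and \(D^{a}u=0\).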
The Cantor part of the gradient is defined as \(D^{c}u=(D^{s}u)|_{\mathbb{R}^{d}\setminus S_{u}}\). For simplicity, we may assume that \(u\) is a "precise representative" of an element of \(\operatorname{BV}\), i.e. \(u^{+}=u^{-}=u\) outside of the set \(S_{u}\). We will use the following fact: for \(u\in\operatorname{BV}(\Omega)\) there exists a sequence of smooth functions \(u_{n}\) such that \(u_{n}\to u\) in \(L^{1}(\Omega)\) and \(\|Du_{n}\|_{L^{1}(\Omega)}\to\|Du\|\). This convergence is called "strict convergence" (see [3, Theorem 3.9]). From above definitions (and the fact that \(Du\) vanishes on \(\mathcal{H}^{d-1}\)-negligible set \(S_{u}\setminus J_{u}\)) it follows that if \(u\) is a \(BV\) function then its gradient has the following canonical decomposition: \[Du=D^{a}u+D^{c}u+D^{j}u.\] Our main theorem states that if a sequence of functions \(f_{n}\) converges _weakly_ in the space \(BV\) then the jump parts of the gradients of these functions converge _strongly_ (as measures). **Theorem 1**.: _Let \(\{f_{n}\}_{n\in\mathbb{N}}\) be a sequence of functions in \(\mathrm{BV}\). If \(\{f_{n}\}\) converges weakly in \(\mathrm{BV}(\mathbb{R}^{d})\) to a function \(f\) then_ \[\lim_{n\to\infty}\|\left(D^{j}(f-f_{n})\right)\|=0.\] **Remark 1.** We would like to point out that considering only the jump part of the gradient in Theorem 1 is crucial: it is not true for the whole singular part of the gradient. Indeed, let \(f\) be a Cantor function. We can view the Cantor set as the group \(\mathbb{Z}_{2}^{\omega}\). It is well know that Walsh functions can be viewed as the characters of this group. Let \(r_{n}\) be the corresponding Rademacher functions (we view them as the functions on the unit interval supported on the \(n\)-th generation of the Cantor set). Now we put \(f_{n}(x,y)=f(x)r_{n}(x)\Phi(y)\) where \(\Phi\) is a non-negative smooth function with compact support such that \(\int\Phi=1\). Clearly, \[D^{s}f_{n}=D^{c}f_{n}=r_{n}e_{1}\,d\,\mu\otimes\Phi\,d\lambda,\] where \(d\,\mu\) is the Cantor measure, \(d\,\lambda\) is a one-dimensional Lebesgue measure and \(e_{1}\) is an element of the standard basis in \(\mathbb{R}^{2}\). Observe that \(\|D^{s}f_{n}\|=1\). To check that \(f_{n}\) tends weakly to \(0\) in \(\mathrm{BV}\) it is enough to check that \(f_{n}\) and \(D^{a}f_{n}\) tend weakly to \(0\) in \(L^{1}(\mathbb{R}^{2})\) and \(D^{s}f_{n}\) tends to \(0\) weakly in \(L^{1}(d\,\mu\otimes d\,\lambda)\). Since the measure of the supports of functions \(r_{n}\) tends to \(0\), we see that \(\|f\|_{L^{1}(\mathbb{R}^{2})}\to 0\) and \(\|D^{a}f_{n}\|_{L^{1}(\mathbb{R}^{2})}\to 0\). Now we take an \(\mathbb{R}^{2}\)-valued function \(g=(g_{1},g_{2})\in L^{\infty}(d\,\mu\otimes d\,\lambda)\) and observe that \[\int gD^{s}f_{n}\,d\,\mu\,d\,\lambda=\int r_{n}\widetilde{g}\,d\,\mu, \tag{1.1}\] where \(\widetilde{g}(x)=\int g_{1}(x,y)\Phi(y)\,d\,\lambda(y)\). Gelfand's transform of the function \(\widetilde{g}\) is a function from \(C_{0}(\bigoplus_{n\in\mathbb{N}}\mathbb{Z}_{2})\) and therefore the right-hand side of the equation (1.1) tends to \(0\). **Remark 2.** It is well known that if \(\Gamma\) is a Lipschitz curve then there exists a trace operator \(\mathrm{Tr}:\mathrm{BV}(\mathbb{R}^{d})\to L^{1}(\Gamma)\). It is easy to see that Theorem 1 implies complete continuity of this operator (note that such operator can not be compact). The remaining part of the paper is dedicated to the proof of Theorem 1. ## 2. 
Scheme of proof and certain technical simplifications Let us briefly describe the scheme of proof of Theorem 1. It is enough to prove it when \(f\) is equal to \(0\). Suppose that the statement of the theorem does not hold. Then, extracting a subsequence, we may assume that a sequence \(\{f_{n}\}\) converges weakly in \(\mathrm{BV}\) to \(0\) but \(\|D^{j}f_{n}\|\geq c>0\). After normalization we may also assume that \(\|D^{j}f_{n}\|=1\). We will start with the case of dimension \(d=2\) (the case of arbitrary dimension is similar, we address it in the final section). At first, we will work with the jump sets of our functions and do some technical simplifications. In particular, we will show that (up to a small negligible error) we may assume that the sets \(J_{f_{n}}\) have a nice structure: that is, each of them is contained in a finite union of compact Lipschitz graphs. Next, we show that these sets stabilize in a certain sense: that is, there exists such number \(N\) that the sets \(J_{f_{1}},\ldots,J_{f_{N}}\) cover a "large part" of each jump set \(J_{f_{n}}\) with \(n>N\). After that, since locally Lipschitz graphs are intervals we prove our theorem under the assumption that there exists one interval that contains a "large portion" of the jump parts of the gradients of functions in our sequence. Finally, we will use our assumptions in order to find a lot of small sets where our functions highly oscillate. That will mean that their gradients have large norm and it will give us the desired contradiction. Let us start implementing the above strategy of proof. For each set \(J_{k}\) we know that \[\mathcal{H}^{1}\Big{(}J_{k}\setminus\bigcup_{m=1}^{\infty}\Gamma_{k,m}\Big{)}=0\] where each \(\Gamma_{k,m}\) is a compact Lipschitz graph. Fix any small number \(\varepsilon_{k}>0\). Due to the regularity of the Hausdorff measure, we can find such number \(N(k)\) that \[|D^{j}f_{k}|\Big{(}J_{k}\setminus\bigcup_{m=1}^{N(k)}\Gamma_{k,m}\Big{)}< \varepsilon_{k}.\] Now we can find the functions \(u_{k}\) such that \(D^{j}u_{k}=D^{j}f_{k}\) on the set \(J_{k}\setminus\bigcup_{m=1}^{N(k)}\Gamma_{k,m}\) and \(\|u_{k}\|_{\mathrm{BV}}\leq C\varepsilon_{k}\) (see [3, Theorem 4.6]; it is worth noting that, multiplying by appropriate smooth functions with compact supports, we may assume that the supports of all our functions are compact). If we choose the numbers \(\varepsilon_{k}\) tending to \(0\) then we see that \(f_{k}-u_{k}\to 0\) weakly in BV, \(\|D^{j}(f_{k}-u_{k})\|\geq 1/2\) and \[\mathcal{H}^{d-1}\Big{(}\mathrm{supp}\,D^{j}(f_{k}-u_{k})\setminus\bigcup_{m= 1}^{N(k)}\Gamma_{k,m}\Big{)}=0.\] It means that from the beginning we may assume that for each function in our sequence the support of the jump part of its gradient is contained (up to \(\mathcal{H}^{d-1}\)-negligible set) in a finite union of compact Lipschitz graphs. In what follows we will always use this assumption. ## 3. Stabilization of jump sets Our next goal is to prove the following simple statement about the jump sets of functions in our sequence. **Lemma 1**.: _For any \(\varepsilon>0\) there exists \(N\) such that for \(k>N\) we have_ \[|D^{j}f_{k}|\Big{(}J_{k}\setminus\bigcup_{i=1}^{N}J_{i}\Big{)}<\varepsilon.\] Proof.: Assume the opposite. 
It means that for some \(\varepsilon>0\) there exists a subsequence \(f_{n_{k}}\) such that \[|D^{j}f_{n_{k}}|\Big{(}J_{n_{k}}\setminus\bigcup_{i=1}^{n_{k}-1}J_{i}\Big{)} \geq\varepsilon.\] We would like to construct a bounded linear functional \(\phi\) on BV such that \(|\phi(f_{n_{k}})|\geq\varepsilon\); then we will get a contradiction with the assumption of the weak convergence. Let us define the following bounded function on the set \(\bigcup_{k,m}\Gamma_{k,m}\): \[h=\epsilon_{k}\mathrm{sign}(f_{n_{k}}^{+}-f_{n_{k}}^{-})\nu_{f_{n_{k}}}\text{ on the set }J_{n_{k}}\setminus\bigcup_{i=1}^{n_{k}-1}J_{i}.\] We inductively choose the sign \(\epsilon_{k}=\pm 1\) in this formula for each \(k\) so that \[\Big{|}\int_{\bigcup J_{n}}(f_{n_{k}}^{+}-f_{n_{k}}^{-})h\cdot\nu_{f_{n_{k}}} \,d\mathcal{H}^{1}\Big{|}\geq\varepsilon. \tag{3.1}\] We indeed can choose such sign \(\epsilon_{k}\): we only need it to satisfy the condition \[\mathrm{sign}\,\Big{(}\int_{\bigcup_{i=1}^{k-1}J_{n_{i}}}(f_{n_{k}}^{+}-f_{n_ {k}}^{-})h\cdot\nu_{f_{n_{k}}}\,d\mathcal{H}^{1}\Big{)}=\mathrm{sign}\,\Big{(} \int_{J_{n_{k}}}(f_{n_{k}}^{+}-f_{n_{k}}^{-})h\cdot\nu_{f_{n_{k}}}\,d\mathcal{ H}^{1}\Big{)}.\] We put \(h=0\) outside the set where we have just defined it. For any function \(f\in\mathrm{BV}\) such that \(\mathrm{supp}\,D^{j}f\subset\bigcup_{k,m}\Gamma_{k,m}\) me can now define the functional \(\psi\) as follows: \[\psi(f)=\int_{J_{f}}(f^{+}-f^{-})\nu_{f}\cdot h\,d\mathcal{H}^{1}.\] Obviously, for each such function we have \(|\psi(f)|\leq\|D^{j}f\|\leq\|f\|_{\mathrm{BV}}\). Therefore, by Hahn-Banach theorem this functional can be extended to a bounded linear functional \(\phi\) on the whole space \(\mathrm{BV}\). But the inequality (3.1) implies that \(|\phi(f_{n_{k}})|\geq\varepsilon\) and we get the desired contradiction. Summarizing the results of the last two subsections, we see that without loss of generality we can assume that \(\|D^{j}f_{n}\|=1\) and we have a finite set of compact Lipschitz graphs \(\{\Gamma_{k}\}_{k=1}^{N}\) such that for every \(n\) \[|D^{j}f_{n}|\Big{(}\bigcup_{k=1}^{N}\Gamma_{k}\Big{)}\geq 1-\varepsilon.\] ## 4. Reasoning for one interval ### Preparation and outline of the proof Application of the above lemma for a small number \(\varepsilon\) (say, \(\varepsilon=\frac{1}{100}\) will suffice) we get that the jump parts of the gradients of our functions concentrate on the finite union of Lipschitz graphs. Since locally every Lipschitz graph "looks like an interval", we will now proceed under the assumption that they really concentrate on one interval, namely, interval \(I\) such that \[|D^{j}f_{n}|(I)\geq c.\] This will allow us to present the main idea of the proof. We will discuss how to pass to a general case in the next subsection. It is worth noting that the constant \(c\) here is actually equal to \(\frac{99}{100}\). For simplicity of notation, assume that \(I\subset\mathbb{R}\times\{0\}\). Consider the partition of \(I\) into \(2^{m}\) intervals of equal lengths; we denote the set of intervals in such partition by \(\mathcal{D}_{m}\): \[\mathcal{D}_{m}=\{I_{i,m}\}_{i=1}^{2^{m}}.\] Consider now the strip \(I\times[-\gamma,\gamma]\). We will specify the choice of a small number \(\gamma\) in a moment. We have the following inequality for any function \(u\in\operatorname{BV}(\mathbb{R}^{2})\) and almost every \(t\in(0,\gamma)\): \[\int_{I}|u^{+}(x,0)-u(x,t)|\,dx\leq\int_{I\times(0,\gamma]}d\,|Du|. \tag{4.1}\] Several remarks are in order here. 
The functions in \(\operatorname{BV}(\mathbb{R}^{2})\) do not have values at points but they are well defined \(\mathcal{H}^{1}\)-a.e. (see [3, Remark 3.79]); we will assume that we work with precise representatives of our functions. If the function \(u\) is smooth then the inequality (4.1) is obvious: it is just a consequence of fundamental theorem of calculus. For general \(\operatorname{BV}\) functions this inequality can be proved using the definition of the values \(u^{+}\) and \(u^{-}\) using the approximation with respect to strict convergence of \(\operatorname{BV}\) function \(u\) by smooth functions. For \(0<t<\gamma\) put \(g_{n}(x,t)=f_{n}(x,t)-f_{n}(x,-t)\); also for \(t=0\) put \(g_{n}(x,0)=f_{n}^{+}(x,0)-f_{n}^{-}(x,0)\). Then we have: \[\begin{split}\int_{I}|g_{n}(x,0)-g_{n}(x,t)|&\,dx \leq\int_{I\times[-\gamma,\gamma]}|D^{a}f_{n}|\,dx\,dy\\ &+\int_{I\times[-\gamma,\gamma]}d|D^{c}f_{n}|+|D^{j}f_{n}|(I \times[-\gamma,\gamma]\setminus I\times\{0\}).\end{split} \tag{4.2}\] The following Lemma shows that the right-hand side of this inequality can be made arbitrarily small. **Lemma 2**.: _Under our assumptions for every \(\delta>0\) there exists \(\gamma\) such that \(|Df_{n}|(I\times[-\gamma,\gamma]\setminus I\times\{0\})\leq\delta\)._ We postpone the proof of this Lemma untill the end of the present section. Now we use this Lemma and choose \(\gamma\) sufficiently small. Summing up, we see that for almost every \(0<t<\gamma\) we have \[\int_{I}|g_{n}(x,t)-g_{n}(x,0)|\,dx\leq\varepsilon \tag{4.3}\] and hence \[\int_{I}|g_{n}(x,t)|\,dx\geq c-\varepsilon.\] Let us now briefly describe the main idea of the remaining part of the proof. Since the jump part of the gradient converges weakly to \(0\) in \(L^{1}\) on the interval \(I\times\{0\}\), it oscillates. Using the inequality (4.2) we will transfer this oscillation to \(I\times\{t\}\) for a.e. \(t\in(0,\gamma]\). This will imply that the total variation of gradients of functions in our sequence is unbounded. ### Intervals with big averages of absolute values Arguing as in the proof of Lemma 1, we see that if we treat the jump parts of the derivatives \(D^{j}f_{n}\) as \(L^{1}\) functions on the union of all Lipschitz graphs containing the jump sets \(J_{f_{n}}\) (with respect to \(\mathcal{H}^{1}\) measure) then these functions converge weakly to \(0\) in \(L^{1}\) (every bounded function on the union of these Lipschitz graphs gives rise to a bounded linear functional on \(\operatorname{BV}(\mathbb{R}^{2})\)). It means that the functions \(f_{n}^{+}-f_{n}^{-}\) are uniformly integrable, that is, there exists such number \(p\) that if for some set \(A\subset\bigcup_{m=1}^{\infty}\Gamma_{m}\) we have \[\int_{A}|f_{n}^{+}-f_{n}^{-}|\,d\mathcal{H}^{1}>\frac{1}{100} \tag{4.4}\] then \[\mathcal{H}^{1}(A)\geq 3p. \tag{4.5}\] We fix this number \(p\). 
Let us define the set \(L_{n}^{(m)}(t)\subset\{1,\dots,2^{m}\}\): \[i\in L_{n}^{(m)}(t)\quad\text{if}\quad\frac{1}{\mathcal{H}^{1}(I_{i,m})}\int_{I _{i,m}}|g_{n}(x,t)|\,dx\geq\frac{c-\varepsilon}{2\mathcal{H}^{1}(I)}.\] Clearly, it follows that if \(i\in L_{n}^{(m)}(t)\), then \[\frac{1}{\mathcal{H}^{1}(I_{i,m})}\int_{I_{i,m}}|g_{n}(x,t)|\,dx\geq\frac{c}{4 \mathcal{H}^{1}(I)}.\] The following notation will also be convenient for us: \[L_{n}^{(m)}(t)^{*}=\bigcup_{i\in L_{n}^{(m)}(t)}I_{i,m}.\] Then we have: \[c-\varepsilon\leq\int_{I}|g_{n}(x,t)|\,dx=\sum_{i\in L_{n}^{(m)} (t)}\int_{I_{i,m}}|g_{n}(x,t)|\,dx+\sum_{i\not\in L_{n}^{(m)}(t)}\int_{I_{i,m} }|g_{n}(x,t)|\,dx\\ \leq\int_{L_{n}^{(m)}(t)^{*}}|g_{n}(x,t)|\,dx+\frac{c-\varepsilon }{2}.\] From here we deduce that \[\int_{L_{n}^{(m)}(t)^{*}}|g_{n}(x,t)|\,dx\geq\frac{c-\varepsilon}{2}.\] Using the inequality (4.3) (for the set \(L_{n}^{(m)}(t)^{*}\), which is a union of finite number of intervals, instead of \(I\)), we deduce that then \[\int_{L_{n}^{(m)}(t)^{*}}|g_{n}(x,0)|\,dx\geq\frac{c-3\varepsilon}{2}\geq \frac{c}{4}\] provided that \(\varepsilon<\frac{c}{12}\). Note however that the functions \(g_{n}(x,0)\) are exactly the jump parts of the gradients of the functions \(f_{n}\) (restricted to \(I\)) and therefore (see (4.4) and (4.5)) we have \(\mathcal{H}^{1}(L_{n}^{(m)}(t)^{*})\geq 3p\). So, we obtained the following estimate: \[\#L_{n}^{(m)}(t)\geq 2^{m}3p\mathcal{H}^{1}(I)^{-1}. \tag{4.6}\] ### Intervals with small averages Now we apply Lemma 2 once again, this time with the parameter \(3\varepsilon_{0}\) where \(\varepsilon_{0}\leq\frac{pc}{300\mathcal{H}^{1}(I)}\). It follows that there exists \(\gamma_{0}\) such that for a. e. \(0<t_{0}<\gamma_{0}\) we have \[\int_{I}|g_{n}(x,t)-g_{n}(x,0)|\,dx\leq 3\varepsilon_{0}. \tag{4.7}\] Now, for every small \(\delta_{0}>0\) there exists a number \(n(\delta_{0},m)\) such that for \(n>n(\delta_{0},m)\) we have \(\Big{|}\int_{I_{i,m}}\,dD^{j}f_{n}\Big{|}<2^{-m}\mathcal{H}^{1}(I)\delta_{0}\) for every \(i\). We can also rewrite this inequality as follows: \[\frac{1}{\mathcal{H}^{1}(I_{i,m})}\Big{|}\int_{I_{i,m}}(f_{n}^{+}(x,0)-f_{n}^{ -}(x,0))\,dx\Big{|}<\delta_{0}.\] We will choose \(\delta_{0}=\varepsilon_{0}/p\). Let us denote \[\alpha_{i,n}^{(m)}(t)=\frac{1}{\mathcal{H}^{1}(I_{i,m})}\int_{I_{i,m}}g_{n}(x,t)\,dx.\] Then for sufficiently big \(n\) we have \(|\alpha_{i,n}^{(m)}(0)|<\delta_{0}\). Fix yet another small number \(\widetilde{\varepsilon}=\frac{3\varepsilon_{0}}{p}\) and define the following set \(K_{n}^{(m)}(t)\subset\{1,\ldots,2^{m}\}\): \[i\in K_{n}^{(m)}(t)\quad\text{if}\quad|\alpha_{i,n}^{(m)}(t)|\leq\delta_{0}+ \widetilde{\varepsilon}=\frac{4\varepsilon_{0}}{p}\leq\frac{c}{75\mathcal{H} ^{1}(I)}.\] Clearly, for \(i\not\in K_{n}^{(m)}(t)\) we have \(|\alpha_{i,n}^{(m)}(t)-\alpha_{i,n}^{(m)}(0)|\geq\widetilde{\varepsilon}\). We can now use the inequality (4.7) and this observation and write an easy estimate: \[3\varepsilon_{0}\cdot 2^{m}\mathcal{H}^{1}(I)^{-1}\geq\sum_{i\in\{1,\ldots,2^{m} \}\setminus K_{n}^{(m)}(t)}|\alpha_{i,n}^{(m)}(t)-\alpha_{i,n}^{(m)}(0)|\geq \widetilde{\varepsilon}(2^{m}-\#K_{n}^{(m)}(t)).\] Hence \[\#K_{n}^{(m)}(t)\geq 2^{m}(1-p\mathcal{H}^{1}(I)^{-1}). \tag{4.8}\] ### The end of proof Denote \(S_{n}^{(m)}(t)=L_{n}^{(m)}(t)\cap K_{n}^{(m)}(t)\). Comparing the estimates (4.6) and (4.8) we see that \[\#S_{n}^{(m)}(t)\geq 2^{m+1}p\mathcal{H}^{1}(I)^{-1}. \tag{4.9}\] #### 4.4.1. 
Proof for smooth functions If all our functions \(f_{n}\) were smooth (outside of \(I\)), then the contradiction would follow almost immediately. Indeed, if \(i\in S_{n}^{(m)(t)}\), then we have: \[\frac{1}{\mathcal{H}^{1}(I_{i,m})}\int_{I_{i,m}}|g_{n}(x,t)|\,dx \geq\frac{c}{4\mathcal{H}^{1}(I)} \tag{4.11}\] \[\Big{|}\frac{1}{\mathcal{H}^{1}(I_{i,m})}\int_{I_{i,m}}g_{n}(x,t) \,dx\Big{|} \leq\frac{c}{75\mathcal{H}^{1}(I)}. \tag{4.10}\] It means that the function \(x\mapsto g_{n}(x,t)\) oscillates on the interval \(I_{i,m}\). To be more precise, the following inequality follows from here: \[\int_{I_{i,m}}|Dg_{n}(x,t)|\,dx\geq\frac{21c}{100\mathcal{H}^{1}(I)}\geq\frac{ c}{5\mathcal{H}^{1}(I)} \tag{4.12}\] The reason for it is elementary for smooth functions: from (4.10) we see that there exists a point \(x_{1}\in I_{i,m}\) such that \(|g_{n}(x_{1},t)|\geq\frac{c}{4\mathcal{H}^{1}(I)}\). Without loss of generality we may assume that \(g_{n}(x_{1},t)>0\). Then using (4.11) we find a point \(x_{2}\in I_{i,m}\) such that \(g(x_{2},t)\leq\frac{c}{25\mathcal{H}^{1}(I)}\) and then apply the fundamental theorem of calculus to the segment between these two points. Summing the inequalities (4.12) over all \(i\in S_{n}^{(m)}(t)\) and applying (4.9), we get: \[\int_{I}|Dg_{n}(x,t)|\,dx\geq\frac{2^{m+1}pc}{5}\mathcal{H}^{1}(I)^{-2}.\] By Fubini's theorem, we get from here that \[\int_{I\times[\gamma_{0}/2,\gamma_{0}]}|Dg_{n}(x,t)|\,dx\,dt\geq\gamma_{0} \frac{pc}{5}2^{m}\mathcal{H}^{1}(I)^{-2}. \tag{4.13}\] Making \(m\) large (recall that \(n\) here depends on \(m\)) we see that \[2\|f_{n}\|_{\mathrm{BV}}\geq\|g_{n}\|_{\mathrm{BV}}\to\infty,\] and it clearly contradicts the assumption of the weak convergence. In our case (when functions \(f_{n}\) are non-smooth) we could write a similar proof (using for example [10, Theorem 5.3.5] instead of Fubini's theorem); however, it seems to be difficult to generalize such proof to the case of arbitrary dimension. So, we proceed with a different proof which works in any dimension. #### 4.4.2. Proof for functions in \(\mathrm{BV}\) In a general case (when functions \(f_{n}\) are not necessarily smooth) we still would like to prove an inequality similar to (4.13). Let us consider the partition of the segment \([0,\gamma_{0}]\) into intervals of lengths between \(2^{-m-1}\mathcal{H}^{1}(I)\) and \(2^{-m+1}\mathcal{H}^{1}(I)\): \[0=t_{0}<t_{1}<t_{2}<\ldots<t_{N}=\gamma_{0}.\] Recall that the sets \(S_{n}^{(m)}(t)\) are defined only for a.e. \(t\); obviously, we can assume that they are defined for all \(t\in\{t_{0},\ldots,t_{N}\}\). Now the rectangle \(I\times[0,\gamma_{0}]\) is divided into strips of the form \(I\times[t_{k},t_{k+1}]\). Consider one such strip which is in turn divided into rectangles (that are actually "almost squares") \(I_{i,m}\times[t_{k},t_{k+1}]\). Let us define yet another set of indices \(G_{n}^{(m)}(t_{k})\) as follows: \[G_{n}^{(m)}(t_{k})=\Big{\{}i\in S_{n}^{(m)}(t_{k}):\,|Dg_{n}|(I_{i,m}\times[t_ {k},t_{k+1}])\leq 2^{-m}\frac{c}{100}\Big{\}}. \tag{4.14}\] We will call the elements of \(G_{n}^{(m)}(t_{k})\) the _good_ numbers. Recall that \(\#S_{n}^{(m)}(t_{k})\geq 2^{m+1}p\mathcal{H}^{1}(I)^{-1}\). Since \[|Dg_{n}|(I\times[t_{k},t_{k+1}])\leq|Dg_{n}|(I\times[0,\gamma_{0}])\leq 3 \varepsilon_{0}\leq\frac{pc}{100}\mathcal{H}^{1}(I)^{-1},\] there are at least \(2^{m}p\mathcal{H}^{1}(I)^{-1}\) good numbers. Take any good number \(i\in G_{n}^{(m)}(t_{k})\). 
We can rewrite inequalities (4.10) and (4.11) as follows: \[\int_{I_{i,m}}|g_{n}(x,t_{k})|\,dx \geq\frac{c}{4}2^{-m},\] \[\Big{|}\int_{I_{i,m}}g_{n}(x,t_{k})\,dx\Big{|} \leq\frac{c}{75}2^{-m}.\] Using (4.14) and applying the same estimate as (4.2) to the rectangle \(I_{i,m}\times[t_{k},t_{k+1}]\) we obtain that for a.e. \(t\in[t_{k},t_{k+1}]\) the following two estimates hold: \[\int_{I_{i,m}}|g_{n}(x,t)|\,dx \geq\frac{c}{4}2^{-m}-\frac{c}{100}2^{-m}=\frac{26c}{100}2^{-m},\] \[\Big{|}\int_{I_{i,m}}g_{n}(x,t)\,dx\Big{|} \leq\frac{c}{75}2^{-m}+\frac{c}{100}2^{-m}\leq\frac{3c}{100}2^{-m}.\] Denote the rectangle \(I_{i,m}\times[t_{k},t_{k+1}]\) by \(Q_{i,m}^{(k)}\). Recall that \[2^{-m-1}\mathcal{H}^{1}(I)\leq|t_{k+1}-t_{k}|\leq 2^{-m+1}\mathcal{H}^{1}(I).\] Applying Fubini's theorem we get that \[\int_{Q_{i,m}^{(k)}}|g_{n}(x,t)|\,dx\,dt \geq\frac{26c}{100}2^{-2m-1}\mathcal{H}^{1}(I)=\frac{13c}{100}2^{-2m}\mathcal{H}^{1}(I), \tag{4.15}\] \[\Big{|}\int_{Q_{i,m}^{(k)}}g_{n}(x,t)\,dx\,dt\Big{|} \leq\frac{3c}{100}2^{-2m+1}\mathcal{H}^{1}(I)=\frac{6c}{100}2^{-2m}\mathcal{H}^{1}(I). \tag{4.16}\] Now we use the Poincaré inequality (see [3, Theorem 3.44 and Remark 3.45]) for each rectangle \(Q^{(k)}_{i,m}\); recall that the lengths of its sides are comparable to \(2^{-m}\mathcal{H}^{1}(I)\) and therefore it is a bi-Lipschitz image of a ball of radius \(2^{-m}\mathcal{H}^{1}(I)\): \[|Dg_{n}|(Q^{(k)}_{i,m})\gtrsim 2^{m}\mathcal{H}^{1}(I)^{-1}\int_{Q^{(k)}_{i,m}}\,\Big{|}g_{n}(x,t)-\fint_{Q^{(k)}_{i,m}}g_{n}\Big{|}\,dx\,dt\] \[\geq 2^{m}\mathcal{H}^{1}(I)^{-1}\Big{(}\int_{Q^{(k)}_{i,m}}|g_{n}(x,t)|\,dx\,dt-\Big{|}\int_{Q^{(k)}_{i,m}}g_{n}(x,t)\,dx\,dt\Big{|}\Big{)}\geq\frac{7c}{100}2^{-m}. \tag{4.17}\] This inequality holds for at least \(2^{m}p\mathcal{H}^{1}(I)^{-1}\) rectangles in one strip and the number of strips is (comparable to) \(2^{m}\gamma_{0}\mathcal{H}^{1}(I)^{-1}\). Summing all these inequalities, we get: \[|Dg_{n}|(I\times[0,\gamma_{0}])\gtrsim\gamma_{0}\frac{7c}{100}2^{m}p\mathcal{H}^{1}(I)^{-2}. \tag{4.18}\] As before, it means that \(2\|f_{n}\|_{\mathrm{BV}}\geq\|g_{n}\|_{\mathrm{BV}}\to\infty\) and we get a contradiction. ### Proof of Lemma 2 It remains only to prove Lemma 2. We will treat the absolutely continuous, Cantor and jump parts of the gradient separately. #### 4.5.1. Estimate for absolutely continuous parts Note that the functions \(D^{a}f_{n}\) weakly converge to \(0\) in \(L^{1}\): indeed, for any \(\mathbb{R}^{2}\)-valued function \(g\in L^{\infty}(\mathbb{R}^{2})\) the functional \[f\mapsto\int_{\mathbb{R}^{2}}g\cdot D^{a}f\] is a bounded linear functional on \(\mathrm{BV}\). Therefore, the functions \(D^{a}f_{n}\) are uniformly integrable and hence if \(\gamma\) is sufficiently small then \[\int_{I\times[-\gamma,\gamma]}|D^{a}f_{n}|\,dx\leq\delta/10 \tag{4.19}\] for every \(n\). #### 4.5.2. Estimate for jump parts Let us turn to the jump parts. We apply Lemma 1 once again, this time with the parameter \(\delta/20\). We get that there exists a compact set \(K\) (a finite union of Lipschitz graphs) such that \(|D^{j}f_{n}|(\mathbb{R}^{2}\setminus K)\leq\delta/20\).
We have: \[|D^{j}f_{n}|(I\times[-\gamma,\gamma]\setminus I\times\{0\}) \leq|D^{j}f_{n}|(\mathbb{R}^{2}\setminus K)+|D^{j}f_{n}|((I\times[-\gamma,\gamma]\setminus I\times\{0\})\cap K)\] \[\leq\delta/20+|D^{j}f_{n}|((I\times[-\gamma,\gamma]\setminus I\times\{0\})\cap K).\] Note that the functions \(f_{n}^{+}-f_{n}^{-}\) are uniformly integrable on \(K\) (with respect to the measure \(\mathcal{H}^{1}\) on \(K\)): indeed, we can argue as in the proof of Lemma 1 to see that every function \(g\in L^{\infty}(K)\) gives rise to a bounded linear functional on \(\mathrm{BV}\) (at first we define it on a closed subspace of \(\mathrm{BV}\) which contains functions \(f\) such that \(\mathrm{supp}\,D^{j}f\subset K\) in an obvious way and then extend it to the whole space). Therefore, the second summand in our inequality will be smaller than \(\delta/20\) for all \(n\) if \(\gamma\) is sufficiently small: since the measure \(\mathcal{H}^{1}\) on \(K\) is finite, we can make the quantity \[\mathcal{H}^{1}((I\times[-\gamma,\gamma]\setminus I\times\{0\})\cap K)\] arbitrarily small and then apply the uniform integrability. We see now that for sufficiently small \(\gamma\) and every \(n\) we have \[|D^{j}f_{n}|(I\times[-\gamma,\gamma]\setminus I\times\{0\})\leq\delta/10. \tag{4.20}\] #### 4.5.3. Estimate for Cantor parts Finally, we show how to treat the Cantor parts. It is more difficult because there is no notion of uniform integrability which we used for the two other parts. We have already found a number \(\gamma=\gamma_{1}\) such that the estimates (4.19) and (4.20) hold. Assume that the statement of the Lemma is false. Then there exists a number \(n_{1}\) such that \[|D^{c}f_{n_{1}}|(I\times[0,\gamma_{1}])\geq 8\delta/10.\] Now we find a small number \(\gamma_{2}\) such that \[|D^{c}f_{l}|(I\times[0,\gamma_{2}])\leq\delta/10\text{ for }l=1,2,\ldots,n_{1}.\] We continue this process and construct in this way a subsequence \(f_{n_{k}}\) such that \[|D^{c}f_{n_{k}}|(I\times[0,\gamma_{k}])\geq 8\delta/10\] and \[|D^{c}f_{n_{k}}|(I\times[0,\gamma_{k+1}])\leq\delta/10. \tag{4.21}\] Note that these two conditions imply that \[|D^{c}f_{n_{k}}|(I\times[\gamma_{k+1},\gamma_{k}])\geq 7\delta/10.\] Now we need to define a functional on BV that will lead us to the contradiction. The main idea is similar to the one we used in the proof of Lemma 1; however, here we need to be more careful in order to make a correct definition. Let us introduce the following measure on \(\mathbb{R}^{2}\): \[\mu=\lambda+\sum_{k=1}^{\infty}\frac{1}{2^{k}}|Df_{n_{k}}|,\] where \(\lambda\) is the Lebesgue measure on \(\mathbb{R}^{2}\). It is a positive finite Radon measure on \(\mathbb{R}^{2}\).
Besides that, all our gradients are absolutely continuous with respect to this measure, which means that there exist measurable functions \(g^{a}_{n_{k}}\), \(g^{c}_{n_{k}}\) and \(g^{j}_{n_{k}}\) (all of them \(\mathbb{R}^{2}\)-valued) such that \[D^{a}f_{n_{k}}=g^{a}_{n_{k}}d\mu,\quad D^{c}f_{n_{k}}=g^{c}_{n_{k}}d\mu,\quad D^{j}f_{n_{k}}=g^{j}_{n_{k}}d\mu.\] Also, the measures \(f_{n_{k}}\,d\lambda\) are of course absolutely continuous with respect to \(\mu\), which means that there exist scalar-valued functions \(g_{n_{k}}\) such that \[f_{n_{k}}d\lambda=g_{n_{k}}d\mu.\] Therefore, we can naturally identify each function \(f_{n_{k}}\in\mathrm{BV}\) with an element of the space \(L^{1}(\mu;\mathbb{R}^{2})\oplus L^{1}(\mu)\); this identification is given by the following map: \[f_{n_{k}}\mapsto(g^{a}_{n_{k}}+g^{c}_{n_{k}}+g^{j}_{n_{k}},g_{n_{k}}).\] We can extend this map to the whole closed linear span of the sequence \(\{f_{n_{k}}\}\) in BV; besides that, this map is an isometry. Hence it is enough for us to provide a correct definition of a functional on the space \(L^{1}(\mu;\mathbb{R}^{2})\oplus L^{1}(\mu)\): then we will be able to extend it to the whole space BV by the Hahn-Banach theorem. In the proof only the functional on the space \(L^{1}(\mu;\mathbb{R}^{2})\) will be used, i.e. it will act only on the first components of the elements of the space \(L^{1}(\mu;\mathbb{R}^{2})\oplus L^{1}(\mu)\). We will construct a function \(h\in L^{\infty}(\mu;\mathbb{R}^{2})=L^{1}(\mu;\mathbb{R}^{2})^{*}\) inductively on every rectangle \(I\times[\gamma_{k+1},\gamma_{k}]\). On the first step put \(h_{1}=\frac{g_{n_{1}}^{c}}{|g_{n_{1}}^{c}|}\) on \(I\times[\gamma_{2},\gamma_{1}]\). Then by (4.5.3) we have: \[\int_{I\times[\gamma_{2},\gamma_{1}]}(g_{n_{1}}^{a}+g_{n_{1}}^{c}+g_{n_{1}}^{j})\cdot h_{1}\,d\mu=\int_{I\times[\gamma_{2},\gamma_{1}]}|g_{n_{1}}^{c}|\,d\mu\geq 7\delta/10.\] Besides that, if we put \(h=h_{1}\) on \(I\times[\gamma_{2},\gamma_{1}]\) and define \(h\) on \(I\times[0,\gamma_{2}]\) in any way so that \(|h|\leq 1\), then we will still have \[\Big{|}\int_{\mathbb{R}^{2}}(g_{n_{1}}^{a}+g_{n_{1}}^{c}+g_{n_{1}}^{j})\cdot h\,d\mu\Big{|}\geq\Big{|}\int_{I\times[\gamma_{2},\gamma_{1}]}(g_{n_{1}}^{a}+g_{n_{1}}^{c}+g_{n_{1}}^{j})\cdot h_{1}\,d\mu\Big{|}\\ -\Big{(}\int_{I\times[0,\gamma_{2}]}|g_{n_{1}}^{a}||h|\,d\mu+\int_{I\times[0,\gamma_{2}]}|g_{n_{1}}^{c}||h|\,d\mu+\int_{I\times[0,\gamma_{2}]}|g_{n_{1}}^{j}||h|\,d\mu\Big{)}.\] Each of the three summands on the second line here does not exceed \(\delta/10\): this is guaranteed by inequalities (4.19), (4.20) and (4.21). Overall, we see that \[\Big{|}\int_{\mathbb{R}^{2}}(g_{n_{1}}^{a}+g_{n_{1}}^{c}+g_{n_{1}}^{j})\cdot h\,d\mu\Big{|}\geq 4\delta/10.\] Now we define \(h_{2}=h_{1}\) on \(I\times[\gamma_{2},\gamma_{1}]\) and \(h_{2}=\pm\,\frac{g_{n_{2}}^{c}}{|g_{n_{2}}^{c}|}\) on \(I\times[\gamma_{3},\gamma_{2}]\).
As before, we have \[\Big{|}\int_{I\times[\gamma_{3},\gamma_{2}]}(g_{n_{2}}^{a}+g_{n_{2}}^{c}+g_{n_{2}}^{j})\cdot h_{2}\,d\mu\Big{|}\geq 7\delta/10,\] and similarly to what we have done in the proof of Lemma 1 we choose the sign in the above definition in such a way that \[\Big{|}\int_{I\times[\gamma_{3},\gamma_{1}]}(g_{n_{2}}^{a}+g_{n_{2}}^{c}+g_{n_{2}}^{j})\cdot h_{2}\,d\mu\Big{|}\geq 7\delta/10.\] Arguing as before, we see that if we put \(h=h_{2}\) on \(I\times[\gamma_{3},\gamma_{1}]\) and define \(h\) as any function with \(|h|\leq 1\) on \(I\times[0,\gamma_{3}]\) then we will have \[\Big{|}\int_{\mathbb{R}^{2}}(g_{n_{2}}^{a}+g_{n_{2}}^{c}+g_{n_{2}}^{j})\cdot h\,d\mu\Big{|}\geq 4\delta/10.\] We continue this process and construct in this way a function \(h\in L^{\infty}(\mu;\mathbb{R}^{2})=L^{1}(\mu;\mathbb{R}^{2})^{*}\) so that we have \[|\langle g_{n_{k}}^{a}+g_{n_{k}}^{c}+g_{n_{k}}^{j},h\rangle|\geq 4\delta/10.\] Since, as we mentioned before, we can extend the functional defined by the function \(h\) to the whole space \(\mathrm{BV}\) and get in this way a functional \(\phi\in\mathrm{BV}^{*}\) such that \(|\langle f_{n_{k}},\phi\rangle|\geq 4\delta/10\), we clearly get a contradiction with our assumption of weak convergence, and our Lemma is proved. ## 5. General case: passing from interval to Lipschitz graphs and proof in higher dimensions ### Lipschitz graphs In the previous section we presented the proof under the assumption that when we apply Lemma 1 (for the first time) the finite union of compact Lipschitz graphs is one interval. Now we address this issue. It is standard: the main idea is simply to "straighten" the Lipschitz graph; we will present some details here. We apply Lemma 1 and obtain a finite number of Lipschitz graphs \(\{\Gamma_{k}\}_{k=1}^{N}\) such that \[|D^{j}f_{n}|\Big{(}\bigcup_{k=1}^{N}\Gamma_{k}\Big{)}\geq 1-\varepsilon.\] It means that there exists a number \(l\in\{1,2,\ldots,N\}\) and a subsequence \(f_{n_{k}}\) such that \[|D^{j}f_{n_{k}}|(\Gamma_{l})\geq\frac{1}{2N}.\] Put \(\mathcal{U}=(0,1)\times(-1,1)\). Since \(\Gamma_{l}\) is a compact Lipschitz graph, there exists a neighborhood \(U\) of \(\Gamma_{l}\) and a bijective bi-Lipschitz function \(\varphi:\overline{\mathcal{U}}\to\overline{U}\) such that \(\varphi(I\times\{0\})=\Gamma_{l}\) where \(I\subset[0,1]\) is an interval. We put \[V=\varphi(I\times(-\tau,\tau))\] for some \(\tau<1\). Let \(\Psi\) be a smooth function which is equal to \(1\) on \(V\) and to \(0\) on \(\mathbb{R}^{2}\setminus U\). The multiplication operator \(M_{\Psi}:\mathrm{BV}(\mathbb{R}^{2})\to\mathrm{BV}(U)\) given by the formula \(M_{\Psi}(f)=\Psi f\) is bounded and hence weakly continuous. Therefore, the sequence \(\widetilde{f}_{n_{k}}=M_{\Psi}f_{n_{k}}\) converges weakly to \(0\) in \(\mathrm{BV}(U)\). Now consider the operator \(T_{\varphi}:\mathrm{BV}(U)\to\mathrm{BV}(\mathcal{U})\) given by the formula \(T_{\varphi}(f)=f\circ\varphi\) (strictly speaking, this is a composition with \(\varphi|_{\mathcal{U}}\)). This is indeed a bounded operator (see [3, Theorem 3.16]). Therefore, the sequence of functions \(\widetilde{\widetilde{f}}_{n_{k}}=T_{\varphi}\widetilde{f}_{n_{k}}\) converges weakly to \(0\) in \(\mathrm{BV}(\mathcal{U})\). Besides that, by [3, Theorem 3.16] (since \(\varphi\) is bi-Lipschitz) there exists a constant \(c>0\) such that \(|D\widetilde{\widetilde{f}}_{n_{k}}|(I\times\{0\})\geq c\).
Obviously, since the set \(I\times\{0\}\) has Hausdorff dimension \(1\), we have \(|D\widetilde{\widetilde{f}}_{n_{k}}|(I\times\{0\})=|D^{j}\widetilde{\widetilde{f}}_{n_{k}}|(I\times\{0\})\). Note that we can treat \(D^{j}\widetilde{\widetilde{f}}_{n_{k}}\) as functions in \(L^{1}(I\times\{0\})\) by putting them equal to \(0\) outside \(J_{\widetilde{\widetilde{f}}_{n_{k}}}\). After that we may repeat the proof from the previous section and get a contradiction. ### Higher dimensions Let us now describe how to modify the proof for the space \(\mathrm{BV}(\mathbb{R}^{d})\). As we mentioned, it is almost the same: we only need to modify the constants in the proof for the \(2\)-dimensional case. First of all, we may consider a \((d-1)\)-dimensional cube \(I\) with side length \(\ell(I)\) instead of an interval. We will divide it into \(2^{m(d-1)}\) equal dyadic subcubes. By the same arguments as above we get that the analogue of the formula (4.9) will be \(\#S_{n}^{(m)}(t)\geq 2^{m(d-1)+1}p\mathcal{H}^{d-1}(I)^{-1}.\) Next, the inequalities (4.15) and (4.16) in \(d\) dimensions will look as follows: \[\int_{Q_{i,m}^{(k)}}|g_{n}(x,t)|\,dx\,dt \geq\frac{13c}{100}2^{-md}\ell(I);\] \[\Big{|}\int_{Q_{i,m}^{(k)}}g_{n}(x,t)\,dx\,dt\Big{|} \leq\frac{6c}{100}2^{-md}\ell(I).\] Since the constant in the Poincaré inequality on a cube \(Q\) in \(d\) dimensions is comparable to \(\ell(Q)\), the inequality (4.17) takes the following form: \[|Dg_{n}|(Q^{(k)}_{i,m})\gtrsim 2^{m}\ell(I)^{-1}\int_{Q^{(k)}_{i,m}}\left|g_{n}(x,t)-\fint_{Q^{(k)}_{i,m}}g_{n}\right|dx\,dt\] \[\quad\geq 2^{m}\ell(I)^{-1}\Big{(}\int_{Q^{(k)}_{i,m}}|g_{n}(x,t)|\,dx\,dt-\Big{|}\int_{Q^{(k)}_{i,m}}g_{n}(x,t)\,dx\,dt\Big{|}\Big{)}\geq\frac{7c}{100}2^{-m(d-1)}.\] Since this inequality holds now for at least \(2^{m(d-1)}p\mathcal{H}^{d-1}(I)^{-1}\) parallelepipeds in one strip and the number of strips is (comparable to) \(2^{m}\gamma_{0}\ell(I)^{-1}\), we get the following analogue of the final estimate (4.18): \[|Dg_{n}|(I\times[0,\gamma_{0}])\gtrsim\gamma_{0}\frac{7c}{100}2^{m}p\,\ell(I)^{-d}.\] As before, this yields a contradiction. Finally, passing to the general case of Lipschitz graphs is the same.
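For the reader's convenience, the arithmetic behind this final bound can be spelled out: multiplying the per-parallelepiped estimate by the number of parallelepipeds in one strip and by the number of strips gives \[|Dg_{n}|(I\times[0,\gamma_{0}])\gtrsim\underbrace{\frac{7c}{100}2^{-m(d-1)}}_{\text{one parallelepiped}}\cdot\underbrace{2^{m(d-1)}p\,\ell(I)^{-(d-1)}}_{\text{parallelepipeds per strip}}\cdot\underbrace{2^{m}\gamma_{0}\,\ell(I)^{-1}}_{\text{strips}}=\gamma_{0}\frac{7c}{100}2^{m}p\,\ell(I)^{-d},\] which grows without bound as \(m\to\infty\), exactly as in the two-dimensional case.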
2302.14161
Object Reconfiguration with Simulation-Derived Feasible Actions
3D object reconfiguration encompasses common robot manipulation tasks in which a set of objects must be moved through a series of physically feasible state changes into a desired final configuration. Object reconfiguration is challenging to solve in general, as it requires efficient reasoning about environment physics that determine action validity. This information is typically manually encoded in an explicit transition system. Constructing these explicit encodings is tedious and error-prone, and is often a bottleneck for planner use. In this work, we explore embedding a physics simulator within a motion planner to implicitly discover and specify the valid actions from any state, removing the need for manual specification of action semantics. Our experiments demonstrate that the resulting simulation-based planner can effectively produce physically valid rearrangement trajectories for a range of 3D object reconfiguration problems without requiring more than an environment description and start and goal arrangements.
Yiyuan Lee, Wil Thomason, Zachary Kingston, Lydia E. Kavraki
2023-02-27T21:48:31Z
http://arxiv.org/abs/2302.14161v1
# Object Reconfiguration with Simulation-Derived Feasible Actions ###### Abstract 3D object reconfiguration encompasses common robot manipulation tasks in which a set of objects must be moved through a series of physically feasible state changes into a desired final configuration. Object reconfiguration is challenging to solve in general, as it requires efficient reasoning about environment physics that determine action validity. This information is typically manually encoded in an explicit transition system. Constructing these explicit encodings is tedious and error-prone, and is often a bottleneck for planner use. In this work, we explore embedding a physics simulator within a motion planner to implicitly discover and specify the valid actions from any state, removing the need for manual specification of action semantics. Our experiments demonstrate that the resulting simulation-based planner can effectively produce physically valid rearrangement trajectories for a range of 3D object reconfiguration problems without requiring more than an environment description and start and goal arrangements. ## I Introduction Robot manipulation planning is primarily a problem of finding a sequence of _valid_ actions that move a set of target objects to a given goal configuration. Actions are valid if they respect the problem's constraints, which may be task-specific (e.g., keeping a glass of water upright or maintaining dynamic stability of a stack of objects) or implicit and arising from the environment (e.g., avoiding collisions). Most approaches to manipulation planning (e.g., [1, 2, 3, 4, 5, 6, 7]) rely on explicitly specified problem constraints through formal languages like Linear Temporal Logic (LTL) [8] and the Planning Domain Definition Language (PDDL) [9, 10] or through natural language [11]. These manually created specifications identify both the valid, meaningful subsets of the state space and the valid transitions between these subsets. The resulting transition system guides a search for a sequence of valid actions that perform the given task when executed by the robot. However, such specifications are onerous and error-prone to construct [12], and may not capture the full set of possible actions. They must not only define the valid dynamics for a problem's environment, but also be rich enough to describe a wide range of problems and goals. Furthermore, problem specifications are not unique and the choice of specification can impact planning performance [13, 14, 15, 16]. Conversely, some manipulation planners forgo full generality for simplified problem specification and improved performance [1, 2]. These planners tend to be restricted to planar tabletop object rearrangement or similar problems [17]. We propose a middle ground: planners that can solve a broad set of classes of manipulation planning problems with no more problem specification than a typical low-level motion planning problem. Our insight is that the necessary transition systems can be _implicitly defined_ through an environment simulator, reducing the manual specification burden. This paper contributes a novel perspective on manipulation problem specification and manipulation planner design. This perspective centers around embedding an environment simulation in a sampling-based planning algorithm as an implicit specification of a problem's valid transition system. 
In support of these ideas, we contribute (1) the _arrangement space_, a novel planning space representing object arrangements and dynamically discovered low-level robot motions moving between them, (2) _Stable Arrangement Search Trees_ (SAST), an arrangement-space planner using embedded environment simulators to discover valid action sequences and the associated low-level motions (Sec. IV-A), and (3) a procedure to simplify the solutions found in the arrangement space. Concretely, we investigate the use of an embedded off-the-shelf physics simulator [18, 19] in SAST to efficiently find statically stable, collision-free states for 3D object reconfiguration problems without manually specifying action semantics. This setting is a specific instance of the broader manipulation planning paradigm that we propose. We demonstrate that our proposed framework can efficiently solve 3D object reconfiguration problems with physical constraints without requiring more than an environment description and start/goal configurations to specify a problem. These results argue for the viability of a family of planners based upon implicitly simulator-defined transition systems. Fig. 1: To rotate the pyramid, the robot must reason about the physical validity of sequences of manipulation actions. For example, removing cubes from the bottom of the pyramid before the cubes at the top will cause the structure to topple. Placing a cube at certain positions on the bumpy surface will affect the ability to transfer the other cubes. Explicitly specifying potential intermediate arrangements and action validity for this setting is tedious. Our approach leverages a physics simulator to discover the valid actions for a given arrangement and plan to reconfigure the objects. ## II Background and Related Work This paper proposes a novel perspective on planning with embedded simulators that combines and extends earlier uses of simulation in robot planning and control (Sec. II-A). SAST, an example of this perspective in practice, is an efficient sampling-based planning algorithm (Sec. II-B) that builds upon ideas from tabletop rearrangement planning and integrated task and motion planning (Sec. II-C) to solve dynamically-constrained object reconfiguration problems. ### _Simulation in robotics_ Simulation is widely used in robot control and learning. Model-predictive control (MPC) simulates control trajectories forward through time to choose optimal inputs [20]. Recent work improves MPC performance by integrating differentiable physics simulation [21, 22]. Efficient simulation for training [23, 24] has been core to learning-based methods for robot control [25, 26, 27], despite the challenge of translating controllers from simulation to the real world [28]. However, as noted by [29], simulation for planning is under-studied. Prior work combining manipulation planning and simulation restricts to specific motion primitives [30, 31] or 2D settings [32]; we operate in a 3D workspace and dynamically discover the valid motions available to the robot via a bi-level search over object arrangements and robot motions. [29] studied efficient planning with simulators by selectively simulating actions. [33] improved the long-horizon planning efficiency of a Monte-Carlo Tree Search-based planner by integrating parallel simulation. [34, 35] use simulators with different precisions and interleaved simulation and execution to improve manipulation planning performance and robustness. These approaches complement SAST.
We propose that an embedded simulator can be an effective implicit specification of a problem's constraints. ### _Sampling-based motion planning_ Sampling-based motion planning (SBMP) is a family of efficient robot motion planning techniques based upon constructing approximations of a robot's high-dimensional configuration space [36] from sampled configurations [37, 38, 39, 40]. Most SBMP algorithms operate by building up a graph [37] or tree [38] of valid configurations connected by short valid motions. RRTConnect [39] is among the fastest SBMP algorithms for many problems, due to its technique of growing two trees of valid motions, one from each of the start and the goal, toward each other using a local "extension" planner to control the trees' growth. SAST adapts the high-level planning loop of RRTConnect to search an expansive space of stable object arrangements (Sec. IV-A) using a simulation-based extension planner (Sec. IV-C). ### _Object reconfiguration_ Object reconfiguration has been studied in contexts including manipulation planning [4, 5, 41, 42], rearrangement planning [1, 2, 43, 44], and integrated task and motion planning (TAMP) [10]. These approaches span an axis ranging from problem specialization (i.e., planar rearrangement planners [1, 2, 43]) to relative generality (i.e., full TAMP solving [3, 6, 7]). This axis also corresponds to the relative _specification effort_ for each planner: a measure of the work a user must do to provide a given planner with the information it needs to operate. Planar rearrangement planners typically only specify the desired object arrangement (as well as the environment geometry), and exploit their assumption of planar problems to find solutions faster. TAMP solvers also rely on symbolic action specifications, mechanisms for discovering states that satisfy action preconditions, and more (e.g., explicit problem constraint specifications) [10]. We strike a balance: simulators still require manual effort to create, but are more broadly reusable across problems and domains than the specifications and samplers required by most TAMP solvers. Simulators can also implicitly encode a more general set of constraints than most rearrangement solvers, allowing for richer problems. Further, as progress in learning problem-specific dynamics models advances [45, 46, 47, 48], the effort required to create simulators for planning will decrease. SAST, like [2], relies on an arrangement-aware extension primitive to find valid action sequences. [32] also proposes a rearrangement planner incorporating a simplified 2D physics model to evaluate a predefined set of rearrangement actions. Similarly, [49] explores kinodynamic planning for planar rearrangement with a focus on reacting to unexpected events during rearrangement plan execution, and using a heuristic-based task specification. SAST uses full 3D physics, does not predefine motion primitives, and models dynamic constraints such as stability. In future work, synergistically combining SAST with the techniques of [32, 49] could allow SAST to use richer non-prehensile motions for manipulating objects. ## III Problem Formulation We demonstrate implicit constraint definition via embedded simulation in a specific application: 3D object reconfiguration with stability constraints, using pick-and-place actions. Consider a 3D workspace containing movable rigid-body _objects_, \(o\in\mathcal{O}\), and a known set of posed static _obstacle_ geometries. Objects have known 3D geometries and poses in \(\mathrm{SE}(3)\). 
An _arrangement_ assigns a pose to each object: **Definition 1:**_Arrangement_ An arrangement, \(\alpha\), prescribes a pose, \(\alpha[o]\in\mathrm{SE}(3)\), to each object in the workspace. Denote the _arrangement space_, the set of all arrangements, as \(\mathcal{A}\), and let \(\alpha\setminus o\) be arrangement \(\alpha\) with object \(o\in\mathcal{O}\) removed from consideration. Arrangements may be _valid_ or _invalid_. Valid arrangements are those that are both _collision-free_ and _statically stable_. **Definition 2:**_Valid arrangement_ Let \(\mathsf{CollisionFree}(\cdot)\) be a collision test for arrangements, such that \(\mathsf{CollisionFree}(\alpha)=\mathtt{True}\) if \(\alpha\) has no objects in collision. Similarly, let \(\texttt{Stable}(\alpha)\) be a static stability test for arrangements, such that \(\texttt{Stable}(\alpha)=\texttt{True}\) if \(\alpha\) is statically stable after a fixed duration. An arrangement, \(\alpha\in\mathcal{A}\), is valid if and only if \(\texttt{CollisionFree}(\alpha)=\texttt{True}\) and \(\texttt{Stable}(\alpha)=\texttt{True}\). We evaluate \(\texttt{CollisionFree}(\cdot)\) via a physics simulator's collision checker. We check \(\texttt{Stable}(\cdot)\) by stepping the simulator for a fixed number of time steps and verifying that all objects' displacements remain below a small heuristic threshold. Let \(r\) be a robot arm with a static base and joint configuration space \(\mathcal{Q}\) [36]. The arm is capable of two classes of motion: _Transit_ motions move the empty end effector along a collision-free path between two workspace poses. _Transfer_ motions grasp a target object and move it and the end effector to a new pose along a collision-free path [4]. **Definition 3:**_Transit motions_ A transit motion \(\texttt{TRANSIT}\left(\alpha,q_{i},q_{j}\right)\) is a continuous motion of the robot arm from initial configuration \(q_{i}\in\mathcal{Q}\) to \(q_{j}\in\mathcal{Q}\) that is collision-free with respect to \(\alpha\). **Definition 4:**_Transfer motions_ A transfer motion \(\texttt{TRANSFER}\left(\alpha_{i},o,q,q^{\prime},\alpha_{j}\right)\) is a continuous motion of the robot arm, holding object \(o\in\mathcal{O}\), from \(q\in\mathcal{Q}\) to \(q^{\prime}\in\mathcal{Q}\), that is collision-free with respect to \(\alpha_{i}\setminus o\). \(q\) and \(q^{\prime}\) must place object \(o\) at \(\alpha_{i}[o]\) and \(\alpha_{j}[o]\), respectively. Note that these motion classes do not predefine concrete motion primitives or actions. We are now equipped to formally state the object reconfiguration problem: **Definition 5:**_Object Reconfiguration Problem_ Given an initial valid arrangement (Def.
2), \(\alpha_{\mathrm{start}}\in\mathcal{A}\), an initial robot configuration, \(q_{\mathrm{start}}\in\mathcal{Q}\), and a valid goal arrangement \(\alpha_{\mathrm{goal}}\in\mathcal{A}\), the object reconfiguration problem is to find a sequence of objects and robot configurations, \([q_{1},o_{1},q^{\prime}_{1},\ldots,q_{n},o_{n},q^{\prime}_{n}]\), and corresponding alternating \(\texttt{TRANSIT}\) and \(\texttt{TRANSFER}\) motions such that the sequence: \[\texttt{TRANSIT}\left(\alpha_{\mathrm{start}},q_{\mathrm{start}},q_{1}\right)\] \[\rightarrow \texttt{TRANSFER}\left(\alpha_{0},o_{1},q_{1},q^{\prime}_{1},\alpha_{1}\right)\] \[\rightarrow \cdots\] \[\rightarrow \texttt{TRANSIT}\left(\alpha_{n-1},q^{\prime}_{n-1},q_{n}\right)\] \[\rightarrow \texttt{TRANSFER}\left(\alpha_{n-1},o_{n},q_{n},q^{\prime}_{n},\alpha_{n}\right)\] is valid and \(\alpha_{n}=\alpha_{\mathrm{goal}}\), where \(\alpha_{0}=\alpha_{\mathrm{start}}\) and \(\alpha_{i}\) is the arrangement after executing the \(i\)-th \(\texttt{TRANSFER}\) motion. This problem formulation is similar to that of [2], but adds a 3D workspace and consideration of stability constraints. ## IV Approach We propose to solve the reconfiguration problem with a bidirectional tree search algorithm, SAST, that operates in a given problem's arrangement space. SAST resembles RRTConnect, but operates in the arrangement space with a novel extension operator that exploits an embedded physics simulator (Sec. IV-C) to automatically discover valid actions. ### _Stable Arrangement Search Trees (SAST)_ SAST initializes two trees in the arrangement space, one rooted at the start arrangement, \(\alpha_{\mathrm{start}}\), and the other at the goal arrangement, \(\alpha_{\mathrm{goal}}\). Vertices in these trees represent valid arrangements (Def. 2); edges represent transformations between valid arrangements. In this work, we consider pick-and-place transformations which move _exactly_ one object. Given two valid arrangements \(\alpha_{i-1}\) and \(\alpha_{i}\), a connecting edge can be described as \((q_{i},o_{i},q^{\prime}_{i})\). This transformation corresponds to a \(\texttt{TRANSIT}\) motion of the robot to \(q_{i}\), followed by a stable \(\texttt{TRANSFER}\) motion moving \(o_{i}\) from its pose in \(\alpha_{i-1}\) to its pose in \(\alpha_{i}\) by grasping \(o_{i}\) and moving the robot from \(q_{i}\) to \(q^{\prime}_{i}\). Edges are bidirectional: the reverse transformation from \(\alpha_{i}\) to \(\alpha_{i-1}\) corresponds to a \(\texttt{TRANSIT}\) motion to \(q^{\prime}_{i}\), followed by a stable \(\texttt{TRANSFER}\) motion of \(o_{i}\) from its pose in \(\alpha_{i}\) to its pose in \(\alpha_{i-1}\) by grasping \(o_{i}\) and moving the robot from \(q^{\prime}_{i}\) to \(q_{i}\). In the arrangement space representation, a solution to a reconfiguration problem is a path of edges that connects \(\alpha_{\mathrm{start}}\) to \(\alpha_{\mathrm{goal}}\). Planning starts from the tree rooted at \(\alpha_{\mathrm{start}}\). Each iteration of the planning loop samples a random arrangement \(\alpha_{\mathrm{rand}}\) and finds its closest neighbor, \(\alpha_{\mathrm{nearest}}\), in the current tree (Alg. 1,
lines 4 and 5). This is done via spatial lookup on a GNAT [50] with arrangement distance defined as the summed \(\mathrm{SE}(3)\) distance1 between the respective poses of each object in the two arrangements. SAST then attempts to Extend the tree from \(\alpha_{\mathrm{nearest}}\) toward \(\alpha_{\mathrm{rand}}\) by growing a sequence of edges according to Alg. 3. If the resulting sequence of edges is non-empty (Alg. 1, line 7), we try to Connect the other tree to the terminal vertex of the extended trajectory (Alg. 1, line 8). This is done (Alg. 2) by repeatedly extending the closest arrangement on the other tree to the terminal vertex, until either the connection succeeds or the extension fails. If the connection succeeds, SAST has found a solution and terminates. Otherwise, it swaps the trees and repeats the planning loop. Footnote 1: \(\mathrm{SE}(3)\) distance is the sum of the Euclidean distance of the translational components and the angular distance of the rotational components. Fig. 2: Bidirectional search trees in the arrangement space. Each vertex represents a valid arrangement (Def. 2). An edge \((q_{i},o_{i},q^{\prime}_{i})\) represents a transformation between the two connected arrangements \(\alpha_{i-1}\) and \(\alpha_{i}\). This comprises a \(\texttt{TRANSIT}\) motion of the robot to configuration \(q_{i}\), followed by a stable \(\texttt{TRANSFER}\) motion of object \(o_{i}\) from its pose in \(\alpha_{i-1}\) to its pose in \(\alpha_{i}\) by grasping \(o_{i}\) and moving the robot from \(q_{i}\) to \(q^{\prime}_{i}\). Edges are bidirectional—one can also transform arrangement \(\alpha_{i}\) to \(\alpha_{i-1}\) using a \(\texttt{TRANSIT}\) motion to \(q^{\prime}_{i}\), followed by a stable \(\texttt{TRANSFER}\) motion of object \(o_{i}\) from its pose in \(\alpha_{i}\) to its pose in \(\alpha_{i-1}\) by grasping \(o_{i}\) and moving the robot from \(q^{\prime}_{i}\) to \(q_{i}\). ### _Sampling stable arrangements_ The SampleArrangement subroutine (Fig. 3) samples a valid arrangement for use with Extend. Here, we leverage the embedded physics simulator to find stable arrangements. First, SampleArrangement picks uniform-random 3D poses for each object within the workspace bounds, using rejection sampling to ensure that the objects do not intersect. Then, it simulates the dynamics of the arrangement forward for a fixed number of small timesteps, checking at fixed intervals if the objects have maintained zero displacement since the previous interval. If so, the arrangement resulting from the applied dynamics is kinematically valid and statically stable, and is returned as a result. Otherwise, this process repeats until a valid sample is found. SampleArrangement is easy to parallelize--our implementation of SAST uses multiple threads to sample stable arrangements. Uniform-random initial pose sampling trades off performance for ease of specification, avoiding the specialized samplers used by TAMP solvers to find states on low and zero-measure state manifolds. ### _Generating valid transformation actions_ The Extend subroutine (Alg. 3) searches for a sequence of valid edges that transform a given arrangement \(\alpha\) of \(k\) objects into a target arrangement \(\alpha^{\prime}\). A major contribution of our work is to use an embedded physics simulator in Extend to reason about the validity of these transformations. The simulator allows us to treat the physics of the environment as an implicit specification of the valid transformation actions from any state. We also ensure that a valid transformation has a valid instantiation with the robot by motion planning for its associated TRANSIT and TRANSFER motions.
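Before turning to the details of Extend, the sampling routine above can be summarized in a short sketch. The snippet below is illustrative only: it assumes a hypothetical stepping-simulator interface (`set_pose`, `get_pose`, `in_collision`, `step`) standing in for the embedded physics engine, and the step counts and displacement threshold are placeholder values rather than the paper's settings.

```python
import math
import random

def random_pose(bounds):
    """Illustrative helper: uniform position inside `bounds` plus a random yaw."""
    (xlo, xhi), (ylo, yhi), (zlo, zhi) = bounds
    return (random.uniform(xlo, xhi), random.uniform(ylo, yhi),
            random.uniform(zlo, zhi), random.uniform(-math.pi, math.pi))

def displacement(p, q):
    """Translational displacement between two (x, y, z, yaw) poses."""
    return math.dist(p[:3], q[:3])

def sample_stable_arrangement(sim, objects, bounds, max_settle_steps=600,
                              check_every=30, eps=1e-4, place_tries=100):
    """Sketch of SampleArrangement: rejection-sample non-intersecting poses,
    then forward-simulate until an interval shows (near-)zero displacement."""
    while True:
        # 1. Uniform-random poses, rejecting object intersections.
        placed = True
        for o in objects:
            for _ in range(place_tries):
                sim.set_pose(o, random_pose(bounds))
                if not sim.in_collision(o):
                    break
            else:
                placed = False  # give up on this draw and resample everything
                break
        if not placed:
            continue
        # 2. Step the dynamics; accept once displacements vanish over an interval.
        prev = {o: sim.get_pose(o) for o in objects}
        for step in range(1, max_settle_steps + 1):
            sim.step()
            if step % check_every == 0:
                cur = {o: sim.get_pose(o) for o in objects}
                if all(displacement(prev[o], cur[o]) < eps for o in objects):
                    return cur  # collision-free and statically stable arrangement
                prev = cur
        # did not settle within the budget; draw a fresh sample
```

In the paper this check is carried out with DART and run across multiple threads; the sketch omits that parallelism for brevity.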
Extend starts by selecting a random order to move the objects2 and setting the current arrangement, \(\alpha_{\mathrm{cur}}\), to the given start arrangement, \(\alpha\). It then tries to move each object in the chosen order to its target position in the given target arrangement, \(\alpha^{\prime}\), while maintaining stability of the other objects. Footnote 2: We choose a random order for simplicity, but could substitute a more sophisticated permutation selector for performance. For each object, \(o_{i}\), Extend creates a new arrangement \(\alpha_{\mathrm{next}}\) equal to \(\alpha_{\mathrm{cur}}\) with \(o_{i}\) at its pose in \(\alpha^{\prime}\) and samples collision-free robot configurations grasping \(o_{i}\) at its poses in \(\alpha_{\mathrm{cur}}\) and \(\alpha_{\mathrm{next}}\), using the same grasp. Then, it checks that: (1) \(\alpha_{\mathrm{next}}\) is collision-free, (2) \(\alpha_{\mathrm{cur}}\setminus o_{i}\) is stable, allowing \(o_{i}\) to be moved, and (3) \(\alpha_{\mathrm{next}}\setminus o_{i}\) is also stable, allowing \(o_{i}\) to be moved in the _reverse_ transformation. If these conditions are met, Extend attempts to find a valid TRANSIT motion between the preceding configuration of \(\alpha_{\mathrm{cur}}\)3 and the sampled grasp for \(o_{i}\)'s pose in \(\alpha_{\mathrm{cur}}\), and a valid TRANSFER motion between the sampled grasps for \(o_{i}\)'s pose in \(\alpha_{\mathrm{cur}}\) and \(\alpha_{\mathrm{next}}\), respectively, using a standard motion planner (Alg. 3, lines 9 and 10). These motions are considered infeasible if the sub-planner fails to find a solution within a predefined timeout. Footnote 3: If \(\alpha_{\mathrm{cur}}=\alpha_{\mathrm{start}}\), we select \(q_{0}\) as \(q^{\prime}_{\mathrm{prev}}\). If \(\alpha_{\mathrm{cur}}=\alpha_{\mathrm{goal}}\), we skip this check since there is no constraint on the robot configuration at the goal. If Extend finds the requisite TRANSIT and TRANSFER motions, then it either adds the discovered edge to the _current_ tree (Alg. 3, lines 17 to 19) and continues with the next object and \(\alpha_{\mathrm{cur}}=\alpha_{\mathrm{next}}\), or attempts to connect the newly-reached arrangement to the _other_ tree (Alg. 3, lines 11 to 14) if \(\alpha_{\mathrm{next}}\) is the target arrangement and a connection is desired. In the latter case, Extend returns the target arrangement (Alg. 3, line 15); otherwise, it iterates through the remaining objects and returns the last reached arrangement. Fig. 3: Stable arrangement sampling. The pose of each object is uniformly sampled within workspace bounds; rejection sampling ensures no object intersection. The dynamics of the world are stepped until the arrangement stabilizes, that is, zero displacement over a sufficient number of steps. ### _Solution simplification_ Although unnecessary for completeness, SAST applies heuristic simplifications to solutions to improve their quality. If an object \(o\) has been moved twice along a solution trajectory, one of these motions may be unnecessary. We can remove the first motion by altering the second motion to move \(o\) starting from the first motion's starting pose. Similarly, we can remove the second motion by altering the first motion to move \(o\) to the second motion's ending pose. Both cases modify the pose of \(o\) in the arrangements between the first and second motions. This requires recomputing the grasps and planning motions for these intermediate arrangements to validate the altered arrangement trajectory.
In a third case, motions may also be removed if the pickup and placement locations of the object are exactly the same. SAST iterates through these three simplification cases on solutions, rechecking for stability and recomputing the TRANSIT and TRANSFER motions after each modification to ensure that the solution remains feasible. This simplification process continues until no potentially redundant actions remain. Note that this heuristic set is non-exhaustive and does not guarantee optimal motions. ## V Experiments We evaluate SAST on a set of 3D tabletop rearrangement problems (Fig. 4)--Reverse (a-b), Transform (c-d), and Rotate (e-f). These problems involve using a single-arm manipulator to reconfigure cubes from one 3D structure to another. They require reasoning about the physical constraints between the cubes, as well as with the environment. The solutions are non-trivial, in that the robot must choose and move the objects through intermediate arrangements to achieve its goal. In addition, some problems contain obstacles such as tiles and bumps which complicate the validity of actions. Grounding these details in order to apply contemporary approaches would be tedious and challenging. ### _Implementation_ We use DART [19] as our embedded physics simulator and plan TRANSIT and TRANSFER motions via Robowflex [51] with the Open Motion Planning Library (OMPL) [52]. All experiments ran on an AMD 5900x desktop CPU with 12 cores at \(4.8\) GHz, using \(6\) parallel threads for sampling stable arrangements and inverse kinematics. ### _Planning performance_ We applied SAST to each test problem for \(50\) trials with a maximum timeout of \(300\) seconds per trial. In each trial, we randomize the start and goal positions of the structure to rearrange, together with the obstacle positions. Table I shows that SAST was almost always able to find a solution within the stipulated time limit. Solution times were also reasonable, taking no more than a minute per successful run despite having to invoke the simulator repeatedly for collision and static stability checking, and having to integrate the low-level motion planning of the TRANSIT and TRANSFER motions. The sizes of the search trees, in terms of the number of nodes, were also small, indicating that a sparse coverage of arrangements was sufficient to identify a solution. Across each problem and trial, we only had to specify the geometry and positions of the obstacles (steps and bumps) and the start and goal arrangement poses of the objects. This highlights the strength of our approach in using the physics simulator to automatically derive action validity without requiring any manual, explicit specification. Solution lengths, however, often require about twice the optimal number of steps. This is because SAST, like RRTConnect, is non-optimizing. Fig. 4: Test problems used in our experiments. (a-b) Reverse: The robot starts with a stack of 6 cubes and must re-stack them on the same base location in reversed order. (c-d) Transform: The robot must transform a stack of \(6\) cubes into a pyramid, centered at the same location. The table is covered in tiles of random height and size, which constrain the feasible intermediate arrangements. (e-f) Rotate: The robot is given a _diagonal_ pyramid of cubes and must manipulate the cubes into another pyramid with cubes stacked cross-diagonally.
The table is covered in bumps of random size and location, which prevent the cubes from being placed flat in intermediate arrangements and make it more difficult to find stable arrangements. In all problems, the robot starts with its arm tucked in (Fig. 1). Across runs, the \(x\) and \(y\) positions of the structures to reconfigure, as well as the tiles and bumps, are randomized. \begin{table} \begin{tabular}{c c c c} \hline \hline & Reverse & Transform & Rotate \\ \hline Success Rate & 0.96 (0.03) & 0.96 (0.03) & 1.00 (0) \\ Solve Time (s) & 16.5 (0.9) & 55.0 (5.6) & 61.5 (6.6) \\ Solution Length & 25.6 (0.7) & 23.7 (0.8) & 15.2 (0.5) \\ Num. Nodes & 44.1 (2.4) & 86.2 (9.6) & 57.4 (6.0) \\ \hline \hline \end{tabular} \end{table} Table I: Run-time metrics of SAST across the test problems. _Success Rate_ refers to the proportion of runs where SAST finds a solution within the given time limit. For successful runs, _Solve Time_ refers to the time taken to find a solution; _Solution Length_ refers to the solution length, in terms of the number of objects moved; _Num. Nodes_ refers to the total number of tree vertices created by SAST during search. Mean values are shown, with standard error shown in parentheses. ### _Simplification performance_ The results in Table I do not use the solution simplification heuristics of Sec. IV-D. Table II shows the results of applying these heuristics to the solutions found by each successful run, indicating that solution simplification usually terminated within \(40\) seconds. Most of the additional time comes from rechecking for stability and replanning for the low-level TRANSIT and TRANSFER motions, required whenever two actions moving the same object merge. This is done up to \(\mathcal{O}(n^{3})\) times, where \(n\) is the number of actions in the initial solution. Simplification usually decreased solution length by roughly half, reaching or coming close to the optimal solution length. Fig. 5 shows an example of a simplified Rotate solution. ### _How important is integrating motion planning?_ SAST verifies the feasibility of each TRANSIT and TRANSFER motion by planning a valid trajectory in the robot's configuration space for the motion. To investigate the impact of this integrated verification, we conducted an ablation experiment by removing these low-level feasibility checks. We find solutions in terms of sequences of object arrangements and object grasps, assuming that the transformations between arrangements are feasible. After finding a full solution, we attempt to compute a motion plan for each of the associated low-level motions to check solution validity. Table III shows a substantial drop in solution feasibility when low-level motion checks are skipped. Indeed, TRANSIT motions require checking that an object is reachable with a given grasp without the manipulator colliding with the other objects; TRANSFER motions require checking that an object can be pulled away without the object or the manipulator intersecting with the other nearby objects. The problems we consider have environment obstacles (table, tiles, and bumps) that do not interfere much with the robot's motion--in more constrained environments, such as in a cupboard or drawer, we could expect feasibility to worsen. ## VI Discussion This work contributes a novel perspective on manipulation planning that embeds a simulator to implicitly encode the valid actions and states for a problem domain.
We demonstrate this perspective for 3D object reconfiguration planning, where we are able to efficiently find statically stable object configurations that would otherwise be onerous to specify. SAST currently uses random sampling and extension to grow the arrangement space graph, but informed approaches like Expansive Space Trees [53] or Monte Carlo Tree Search [33] may discover solutions faster. SBMP advances such as biased samplers [54, 55] and optimizing planners [56, 57, 58, 59] may also complement embedded-simulator planning. Embedded-simulator planning is broadly applicable outside object reconfiguration, which poses several directions for future work. How can we use simulators to encode constraints beyond stability, such as orientation or contact dynamics? Similarly, what are the precise requirements for an embedded simulator? For some problems, precise physics simulation may be unnecessary; for others, non-standard physics can encode problem constraints. Further, how well do plans found via embedded simulation transfer to the real world? Finally, we wish to explore richer uses of the embedded simulator, including combining differentiable simulation with optimization techniques, to broaden the manipulation problem classes that we can efficiently solve. \begin{table} \begin{tabular}{c c c c} \hline & Reverse & Transform & Rotate \\ \hline Simplify Time (s) & 26.5 (1.4) & 41.8 (2.7) & 28.0 (2.0) \\ Simplified Solution Length & 12.1 (0.1) & 12.0 (0.1) & 8.0 (0.1) \\ Improvement (\%) & 51.5 (1.2) & 46.8 (1.7) & 44.9 (1.7) \\ \hline \end{tabular} \end{table} Table II: Solution simplification results. _Simplify Time_ is the time taken for the simplification procedure to terminate. _Simplified Solution Length_ is the length of the simplified solutions, in terms of the number of objects moved. _Improvement_ is the percentage of the original solution length that simplification eliminated. Mean values are shown, with standard error in parentheses. Fig. 5: Sequence of actions in a computed solution after simplification for the Rotate problem. The robot is able to identify actions that remove the blocks on top before removing those at the bottom. This ensures that blocks are not removed from the bottom that will cause those stacked on top to topple. The sequence shown achieves the minimum possible length for the given problem.
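To make the simplification procedure of Sec. IV-D concrete (the pass that produces shortened sequences such as the one in Fig. 5), a minimal sketch is given below. It is not the authors' implementation: the move representation and the `replan_segment` callback, which stands in for the stability rechecks and TRANSIT/TRANSFER replanning described earlier, are illustrative assumptions.

```python
def simplify(solution, replan_segment):
    """Redundancy-removal sketch: `solution` is a list of moves
    (obj, start_pose, end_pose); `replan_segment(candidate, i, j)` is an
    assumed callback that revalidates stability and re-plans the affected
    TRANSIT/TRANSFER motions, returning None on failure."""
    changed = True
    while changed:
        changed = False
        # Case 3: drop a move whose pickup and placement poses coincide.
        for k, (obj, src, dst) in enumerate(solution):
            if src == dst:
                del solution[k]
                changed = True
                break
        if changed:
            continue
        # Cases 1-2: merge two moves of the same object (here: keep the later
        # move's slot, but start it from the earlier move's starting pose).
        for i, (obj, src_i, _dst_i) in enumerate(solution):
            for j in range(i + 1, len(solution)):
                if solution[j][0] != obj:
                    continue
                _, _src_j, dst_j = solution[j]
                candidate = solution[:i] + solution[i + 1:j] \
                    + [(obj, src_i, dst_j)] + solution[j + 1:]
                if replan_segment(candidate, i, j) is not None:
                    solution = candidate
                    changed = True
                break  # only the next later move of the same object is considered
            if changed:
                break
    return solution
```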
2306.06092
Realistic Saliency Guided Image Enhancement
Common editing operations performed by professional photographers include the cleanup operations: de-emphasizing distracting elements and enhancing subjects. These edits are challenging, requiring a delicate balance between manipulating the viewer's attention while maintaining photo realism. While recent approaches can boast successful examples of attention attenuation or amplification, most of them also suffer from frequent unrealistic edits. We propose a realism loss for saliency-guided image enhancement to maintain high realism across varying image types, while attenuating distractors and amplifying objects of interest. Evaluations with professional photographers confirm that we achieve the dual objective of realism and effectiveness, and outperform the recent approaches on their own datasets, while requiring a smaller memory footprint and runtime. We thus offer a viable solution for automating image enhancement and photo cleanup operations.
S. Mahdi H. Miangoleh, Zoya Bylinskii, Eric Kee, Eli Shechtman, Yağız Aksoy
2023-06-09T17:52:34Z
http://arxiv.org/abs/2306.06092v1
# Realistic Saliency Guided Image Enhancement ###### Abstract Common editing operations performed by professional photographers include the cleanup operations: de-emphasizing distracting elements and enhancing subjects. These edits are challenging, requiring a delicate balance between manipulating the viewer's attention while maintaining photo realism. While recent approaches can boast successful examples of attention attenuation or amplification, most of them also suffer from frequent unrealistic edits. We propose a realism loss for saliency-guided image enhancement to maintain high realism across varying image types, while attenuating distractors and amplifying objects of interest. Evaluations with professional photographers confirm that we achieve the dual objective of realism and effectiveness, and outperform the recent approaches on their own datasets, while requiring a smaller memory footprint and runtime. We thus offer a viable solution for automating image enhancement and photo cleanup operations. ## 1 Introduction In everyday photography, the composition of a photo typically encompasses subjects on which the photographer intends to focus our attention, rather than other distracting things. When distracting things cannot be avoided, photographers routinely edit their photos to de-emphasize them. Conversely, when the subjects are not sufficiently visible, photographers routinely emphasize them. Among the most common emphasis and de-emphasis operations performed by professionals are the elementary ones: changing the saturation, exposure, or the color of each element. Although conceptually simple, these operations are challenging to apply because they must delicately balance the effects on the viewer attention with photo realism. To automate this editing process, recent works use saliency models as a guide [16, 17, 2, 4, 8, 1]. These saliency models [19, 3, 7, 10, 14] aim to predict the regions in the image that catch the viewer's attention, and saliency-guided image editing methods are optimized to increase or decrease the predicted saliency of a selected region. Optimizing solely based on the predicted saliency, however, often results in unrealistic edits, as illustrated in Fig. 1. This issue results from the instability of saliency models under the image editing operations, as saliency models are trained on unedited images. Unrealistic edits can have low predicted saliency even when they are highly noticeable to human observers, or vice versa. This was also noted by Aberman et al. [1], and is illustrated in Fig. 2. Previous methods tried to enforce realism using adversarial setups [2, 4, 8, 17], GAN priors [1, 8], or cycle consistency [2] but with limited success (Fig. 1). Finding the exact point when an image edit stops looking realistic is challenging. Rather than focusing on the entire image, in this work, we propose a method for measuring the realism of a local edit. To train our network, we generate realistic image edits by subtle perturbations to exposure, saturation, color or white balance, as well as very unrealistic edits by applying extreme adjustments. Although our network is trained with only positive and negative examples at the extremes, we successfully learn a continuous measure of realism for a variety of editing operations as shown in Fig. 3. We apply our realism metric to saliency-guided image editing by training the system to optimize the saliency of a selected region while being penalized for deviations from realism. 
We show that a combined loss allows us to enhance or suppress a selected region successfully while maintaining high realism. Our method can also be applied to multiple regions in a photograph as shown in Fig. 1. Evaluations with professional photographers and photo editors confirm our claim that we maintain high realism and succeed at redirecting attention in the edited photo. Further, our results are robust to different types of images including human faces, and are stable across different permutations of edit parameters. Taken together with our model size of 26Mb and run-time of 8ms, these results demonstrate that we have a more viable solution for broader use than the approaches that are available for these tasks to date. ## 2 Related Work Various image enhancement methods have been introduced in the literature to amplify a region of interest or de-emphasize distracting regions, improve image aesthetics, and redirect the viewer's attention. This task has also been referred to as _attention retargeting_ [15] or _re-attentionizing_ [18]. Earlier methods [5, 15, 20, 22, 23] incorporated prior knowledge of saliency cues (saturation, sharpness, color, gamut, etc.) to guide the editing process to achieve the desired change in saliency. However, relying solely on saliency cues both limits the diversity of generated edits and creates unrealistic edits due to the lack of semantic constraints. As our experiments show, OHR [15] tends to generate unrealistic color changes that are semantically incorrect, and WRS [23] is limited to contrast and saturation adjustments with limited effectiveness. Recent works leverage saliency estimation networks [3, 7, 10, 14, 19] to optimize for a desired saliency map instead of relying on prior saliency cues. Saliency models are trained to output a heatmap that represents where human gaze would be concentrated in an image. These models are not trained to respond to the realism of the input image. Hence they might predict an inconsistent decrease or increase in the saliency of a region when unrealistic or semantically implausible edits are applied, which would otherwise be jarring to human viewers (Fig. 2). Using saliency as the only supervision can result in unrealistic images. To prevent unrealistic edits, prior works enforce constraints on the allowable changes, use adversarial training [2, 4, 8, 17] or exploit learned priors from GAN-based models [1, 8]. For instance, Mechrez et al. [16] and Aberman et al. (Warping) [1] constrain the result to match the input content in order to maintain its appearance. Aberman et al. (CNN and Recolorization) [1] use a regularization term that limits the amount of change an image can undergo to maintain realism. Mejjati et al. [17] designed a global parametric approach to limit the editing operations to a set of common photographic ones. Chen et al. [2] exploit cycle consistency to keep the output within the domain of the input image. Gatys et al. [4] use a texture loss alongside the VGG perceptual loss as a proxy for realism. Lalonde et al. [11] argue that humans prefer color consistency within images, regardless of object semantics. They use color statistics to measure realism and use them to recolor images to match the background in the compositing task. Zhu et al. [26] train a network to discriminate between natural images and computer-generated composites and use it as a realism measure for the compositing task. Realism is also a crucial factor in GANs, as highlighted by [9].
Figure 2: Predicted saliency maps [7] for the original images and edited versions, with extreme edits applied. Note that saliency models are typically trained with realistic images. This makes them susceptible to inaccurate predictions for unrealistic inputs, such as the green woman in the top row being estimated to have low saliency. We present a new method for estimating the realism of a local edit. Combining our realism loss with saliency guidance, we show that we can successfully apply attention attenuation or amplification while keeping the final result realistic, without requiring data annotated with realism or bulky GAN priors to estimate realism. ## 3 Realism Network When editing specific regions in an image, it is challenging to maintain the overall realism of the photograph. How quickly realism starts to degrade depends on the contents and size of the image regions, the overall composition of the scene, as well as the type of edits being applied. This makes the problem of defining precisely when an image edit stops looking realistic particularly challenging. In this work, we propose to train a realism network using only realistic and unrealistic examples at the extremes. We generate realistic edits by slightly perturbing image values, and unrealistic edits by applying aggressive edits. We show that, despite being trained on binary data, our network can estimate continuous realism scores that adapt to different types of image regions and scenes. Our approach was inspired by the work of Zhu et al. [26], who similarly learn their realism measure from binary real and synthetic composites. To generate _real_ and _fake_ samples, we exploit different parameter ranges for commonly used editing operations: exposure, saturation, color curve, and white balancing (formal definitions in the Supplementary Material). For instance, increasing the exposure of a region too much can result in an unrealistic image, while a subtle increase to saturation will not significantly affect the realism. Based on experimentation, we determined the set of parameter ranges in Tab. 1 to apply to image regions to create our training data. To generate a training example, we first select a random number of edits (between 1 and 4), then an order for the edit operations (e.g., exposure, saturation, color curve, white balancing), and values for each of the operations, sampled uniformly at random from the pre-specified ranges in Tab. 1. We apply these edits in the selected order to a region segment in an MS-COCO image [12]. Fake examples are generated by purposefully selecting extreme values. Real examples are generated by sampling subtle edits within narrower ranges. Because of the semantic importance of human faces and increased sensitivity to edits in facial regions, we enforce smaller parameter ranges when applying edits to faces. Fig. 4 shows several examples. We use the Pix2Pix [6] network architecture followed by two MLP layers to estimate the realism score \(\mathcal{R}\) of the input. For our samples in the training data, \(\mathcal{R}\) is defined as 1 for real and 0 for fake samples. We also condition the output on the input region by feeding the region's mask \(M\) as input to the network. We use squared error [13] as the critic to compute the loss on the estimated value: \[\mathcal{L}_{\text{disc}}=\frac{1}{2}\mathcal{R}(I_{fake},M)^{2}+\frac{1}{2}(\mathcal{R}(I_{real},M)-1)^{2}, \tag{1}\] where \(I_{fake}\) and \(I_{real}\) are the generated fake and real samples.
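To make the data-generation recipe and the critic of Eq. (1) concrete, a minimal sketch is given below. The ranges mirror Tab. 1 for the generic (non-face) case, but the edit operators themselves and the realism network are assumed to be provided elsewhere; all names are illustrative rather than the authors' code.

```python
import random
import torch

# Parameter ranges from Tab. 1 (generic, non-face case).
REAL_RANGES = {"exposure": [(0.85, 1.15)], "saturation": [(0.85, 1.15)],
               "color_curve": [(0.85, 1.15)]}
FAKE_RANGES = {"exposure": [(0.5, 0.75), (1.5, 2.0)],
               "saturation": [(0.0, 0.5), (1.5, 2.0)],
               "color_curve": [(0.5, 2.0)],
               "white_balance": [(0.9, 1.0)]}

def sample_edit_sequence(real: bool):
    """Pick a random number, order, and strength of edits, as described above.
    Returns a list of (operation_name, parameter) pairs; applying them to a
    region segment is left to assumed edit operators."""
    ranges = REAL_RANGES if real else FAKE_RANGES
    n_edits = random.randint(1, 3) if real else random.randint(2, 4)
    ops = random.sample(list(ranges), k=min(n_edits, len(ranges)))
    return [(op, random.uniform(*random.choice(ranges[op]))) for op in ops]

def realism_disc_loss(realism_net, img_fake, img_real, mask):
    """Least-squares critic of Eq. (1): fake edits are pushed toward 0,
    real (subtle) edits toward 1."""
    r_fake = realism_net(img_fake, mask)
    r_real = realism_net(img_real, mask)
    return 0.5 * (r_fake ** 2).mean() + 0.5 * ((r_real - 1.0) ** 2).mean()
```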
To measure the effect of the edit on the realism of the image, we compute the difference between the scores \begin{table} \begin{tabular}{l|c c c c c} & Exposure & Saturation & Color curve & White balancing & Number of edits \\ \hline Real & \([0.85,1.15]\) & \([0.85,1.15]\) & \([0.85,1.15]\) & Not allowed & \([1,3]\) \\ Fake & \([0.5,0.75]\cup[1.5,2]\) & \([0,0.5]\cup[1.5-2]\) & \([0.5,2]\) & \([0.9,1]\) & \([2,4]\) \\ Fake(human specific) & \([0.5,0.75]\cup[1.25,1.5]\) & \([0.5,0.75]\cup[1.25,1.5]\) & \([0.5,2]\) & Not allowed & \([2,3]\) \\ \end{tabular} \end{table} Table 1: Parameter value ranges used to generate real and fake training images for the realism estimation network. Figure 3: The efficacy of the realism estimation network is illustrated over a range of exposure and saturation adjustments. Left is \(\Delta\mathcal{R}\) plotted (vertical axis) for example images (second column) when selected regions (third column) are edited. Right, the edited images are shown with the corresponding change in estimated realism (inset numbers), and the value of the editing parameter applied (underneath). estimated for the original image \(I\) and the edited image \(I^{\prime}\): \[\Delta\mathcal{R}(I^{\prime},I,M)=\mathcal{R}(I^{\prime},M)-\mathcal{R}(I,M), \tag{2}\] where the edited region is defined by the mask \(M\). As Fig. 3 demonstrates \(\Delta\mathcal{R}\) gives us continuous realism values for a range of edit parameters, despite the network being trained only on extreme cases. It also shows that the range of edits that are considered realistic by our network is not the same for each image and depends on the subject and editing operation. We show more examples of edits that are classified realistic or unrealistic by our network in Fig. 9 and the Supplementary Material. ## 4 Saliency Guided Image Enhancement We develop a saliency-guided image editing pipeline that enforces our realism loss to generate realistic and effective object enhancement or distractor suppression results for a given mask. Our system can estimate a set of editing parameters for any permutation of 4 editing operators: exposure, saturation, color curve, and white balancing. In constructing our system, we borrow many ideas from the existing saliency-guided image editing literature, and focus our design improvements on improving the realism of the results, especially by including our proposed realism loss. Since these edit operations are non-linear, different orderings of edits changes the end results. As a result, we condition the regressed parameters on the permutation of the edit operations by feeding the permutation as an input to the network. More details on the architecture of the network and the embedding used to encode the permutation is included in the Supplementary Material. Saliency LossA pretrained saliency model [7] (SalNet) is used as a proxy for the viewer attention that would be captured by image regions before and after applying the edits, to supervise the image editing process. We measure the change in the saliency of the region of interest as the expected value of its relative change within the masked region: \[S(I,I^{\prime},M)=\mathbb{E}_{M}\left[\frac{SalNet(I)-SalNet(I^{\prime})}{SalNet (I)}\right] \tag{3}\] where \(\mathbb{E}\) denotes the expectation and \(M\) is the region mask. As Fig. 2 shows, the predicted saliency heatmaps can change drastically when applied to unrealistic edits. 
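For concreteness, a minimal sketch of the two measurements in Eqs. (2)–(3) (assumed PyTorch-style; `sal_net` stands in for the pretrained saliency model [7] and `realism_net` for the fixed realism network of Section 3):

```python
import torch

def realism_change(realism_net, img, img_edited, mask):
    """Eq. (2): change in estimated realism caused by the edit."""
    return realism_net(img_edited, mask) - realism_net(img, mask)

def saliency_change(sal_net, img, img_edited, mask, eps=1e-6):
    """Eq. (3): expected relative saliency change inside the masked region."""
    s_before, s_after = sal_net(img), sal_net(img_edited)  # saliency heatmaps
    rel = (s_before - s_after) / (s_before + eps)
    return (rel * mask).sum() / (mask.sum() + eps)
```

Note that an implausible edit can drive the prediction for the edited image far up or down, so the raw relative change in Eq. (3) can become arbitrarily large in magnitude.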
As a result, relying on conventional metrics (e.g., absolute and relative differences by [1, 17], Binary cross entropy by [2] and KL-divergence by [4]) to measure the change in saliency can cause large rewards or penalties during optimization. Infinitely large rewards for an unreal edit reduces the effectiveness of the realism term in the final loss function. To tackle this issue we define our saliency loss function as: \[\mathcal{L}_{\text{sal}}=\exp\left(w_{\text{sal}}S(I,I^{\prime},M)\right) \tag{4}\] When saliency moves in the desired direction, the exponential squashes the loss, converging to the minimum and reducing the penalty quickly, acting as a soft margin. This converging behaviour prevents large rewards that can be generated by unrealistic edits during training. The exponential term imposes larger penalties when saliency moves in the wrong direction, providing robustness against outliers and faster convergence. \(w_{sal}\) controls the absolute value of the loss to balance the weight of saliency loss in our final loss, which we set to 5 and -1 for amplification and attenuation, respectively. Realism LossThe realism loss is defined as: \[\mathcal{L}_{\text{realism}}=ReLU(-\Delta\mathcal{R}(I^{\prime},I,M)-b_{r}) \tag{5}\] This loss is designed to penalize unrealistic edits, while giving no rewards for edits that improve the estimated realism score of the input. This prevents the network from being penalized by images that receive a low realism score even before any edits are applied. ReLU and offset \(b_{r}\) provide a margin that allows a slight decrease in realism without a penalty which we set to 0.1 in our experiments. We train two separate networks for each. The final network objective is the product of the two loss functions: \[\mathcal{L}=(1+\mathcal{L}_{\text{realism}})\times\mathcal{L}_{\text{sal}}. \tag{6}\] In this formulation, the realism score acts as a weight for the penalty imposed on the change in the saliency. This allows us to balance the realism and saliency objectives. Figure 4: Examples of fake and real images that are used to train the realism estimation network. See Section 3 for more details. We use an EfficientNet-lite3 [21] backbone and cascaded MLP layers as decoders to estimate parameters for each of the edit operations. A detailed explanation of the architecture specifics, datasets and training is provided in Supplementary Material. ## 5 Experiments and Results We compare our method against state of the art saliency based image editing approaches - Deepsal [1], Gazeshift [17] and MEC [16]. MEC provides results on their dataset alongside pre-computed results of WRS [23] and OHR [15] on the same dataset. We use this dataset to compare against WRS and OHR as well as MEC. 1 Footnote 1: Deepsal, WRS and MEC do not provide an open-source implementation. Hence, we relied on the results included on their project pages. Also, Deepsal authors kindly provided us with results on Adobe Stock dataset for their “convolutional network” variation. The EfficientNet [21] backbone used in our architecture is known for its small size. Our results are thus generated significantly faster than the other state-of-the-art (SOTA) methods with bulkier architectures and slower per-image optimizations. Based on speed measures reported in [17] Table 0(c), MEC takes more than a day, OHR needs 30 seconds and Gazeshift takes 8 seconds to process each image, while our model requires only 8ms per image. We present both qualitative and quantitative results. 
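Before turning to the comparisons, the training objective assembled from Eqs. (4)–(6) can be summarized in a short sketch, building on the `saliency_change` and `realism_change` helpers above; the margin \(b_{r}=0.1\) and the weights \(w_{\text{sal}}\in\{5,-1\}\) are the values quoted in the text, while the code itself is only an illustrative reconstruction:

```python
import torch
import torch.nn.functional as F

def total_loss(sal_net, realism_net, img, img_edited, mask,
               w_sal=5.0, b_r=0.1):
    """Combined objective of Eq. (6); w_sal = 5 for amplification,
    w_sal = -1 for attenuation."""
    S = saliency_change(sal_net, img, img_edited, mask)            # Eq. (3)
    loss_sal = torch.exp(w_sal * S)                                # Eq. (4)
    delta_r = realism_change(realism_net, img, img_edited, mask)   # Eq. (2)
    loss_realism = F.relu(-delta_r - b_r)                          # Eq. (5)
    return (1.0 + loss_realism) * loss_sal                         # Eq. (6)
```

When the edit moves saliency in the desired direction, the exponential term shrinks toward its minimum, while a drop in realism beyond the margin multiplies the remaining saliency penalty.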
Since our method takes the permutation of the edits as input during inference time, we select the permutation at random for the presented results unless mentioned otherwise. ### Qualitative Comparison Figs. 5, 6, and 7 illustrate our results compared to the SOTA. They show that our method performs different edits based on the contents of the image. It can apply more significant color changes that camouflage the distractor (\(2^{\text{nd}}\) and \(4^{\text{th}}\) rows of Fig. 6, \(3^{\text{rd}}\) row of Fig. 5) or very subtle edits for human faces (\(1^{\text{st}}\) row of Fig. 6). The intensity and characteristics of the applied edits depend on semantics. The use of adversarial loss in Gazeshift [17] and the regularization used in Deepsal [1] constrain the edits their methods apply without taking realism explicitly into account. As the results show, they often apply unrealistic edits (e.g., the camouflaged signs in Fig. 5 or the unnatural skin tone and the color artifacts in Fig. 6) or very subtle edits with lower effectiveness. MEC [16] reuses the color patterns and textures available in the image to update the target region. On the other hand, different regions and textures can correspond to different semantics. Consequently, as illustrated in Fig. 6(a), this method can apply incompatible color and texture values to produce unrealistic edits (green crocodile eye, orange traffic sign) or ineffective enhancements (brown bird). Fig. 6(b) provides a comparison on their _distractor suppression_ image set. Our method performs comparably in terms of effectiveness and generates realistic results consistently. OHR [15] tries to maximize the color distinction between the masked region and the rest of the image for the image enhancement task. Without explicit realism modeling, it tends to generate unrealistic colors (e.g., blue crocodile, bird, and horse in Fig. 6(a)). While incorrect colors increase the saliency of these regions, they do so at the cost of realism. For similar reasons, this method is ineffective at suppressing distractors (Fig. 6(b)). WRS [23] does not generate unrealistic images, but also makes edits that are hardly noticeable and less effective at enhancing or suppressing the target regions. We believe this is due to the purposely limited range of allowed edit parameters (luminance, saturation, and sharpness). ### What Do Photographers Think? To include the perspective of professional photographers in comparing our results to others, we ran a user study. We report our results using three choices of parameter orderings: choosing the one that achieves the _Best Saliency_, the one that generates the _Best Realism_ (according to our realism estimation network), and a permutation of parameters selected at _Random_, as used for the qualitative figures. User Study. We recruited 8 professionals from UpWork, all of whom have multiple years of experience with photography and with using Photoshop to edit photos. We used the Appen platform for hosting our rating tasks. Figure 5: Saliency attenuation compared to Deepsal [1] on the images provided by the authors on their project webpage. Our method is able to effectively attenuate the saliency of the target region without applying an unrealistic camouflage. Our study participants were presented with a panel of 3 images: the original image, the mask, and an edited result from one of the methods evaluated.
They were asked to _"rate each image based on 2 criteria"_ - effectiveness and realism, with the following definitions provided for the _attenuate_ version of the task: _"The images were edited to make certain objects and regions less distracting. An image edit is effective if the masked objects/regions have indeed become less distracting. An image edit is realistic if the photo does not look edited."_ For the _amplify_ version of the task, the wording for effectiveness was modified to: _"The images were edited to make certain objects and regions popout (more attention-capturing, or salient). An image edit is effective if the masked objects/regions have indeed become more attention-capturing."_ Images were randomly shuffled in each task, so the photographers rated the images and methods independently of each other. ResultsIn Tab. 2 we compare our approach to GazeShift and Deepsal on the 30 Adobe Stock images from [17]. We find that our approach achieves significantly higher scores for both effectiveness and realism compared to GazeShift in Figure 6: Saliency modulation compared to GazeShift [17] and Deepsal [1] on Adobe Stock images from [17]. Figure 7: Saliency modulation compared to MEC [16], WRS [23] and OHR [15] on the Mechrez dataset [16]. the attenuation task. This matches our qualitative observations that GazeShift is not successful at the task of attenuating distractor. GazeShift specializes in amplifying saliency in image regions, and we achieve similar performance on this task, while also maintaining significantly higher realism levels. In addition, results show a poor effectiveness score for Deepsal as a result of subtle edits in Fig. 6. Subtle edits mean the realism score remains high since the results are almost identical to the original images. Since Deepsal was ineffective on Adobe Stock images, to provide a fair comparison we also compare to Deepsal on 14 images they provided on their project page in Tab. 2(a). We achieve significantly higher realism scores while being similarly effective at reducing the saliency of the distractors. This matches our qualitative observations that Deepsal edits can be quite extreme and not always photo realistic. Tab. 2(b) shows user study results on Mechrez dataset [16].2 We used 77 images from the dataset to perform the user study. Results confirm that our results are superior in the realism while we achieve a comparable effectiveness compared MEC. WRS's low effectiveness yields a high realism score as its results are almost identical to the input; while the unrealistic color changes by OHR result in low realism and effectiveness scores. Footnote 2: Dataset has only 10 images for attenuation task, which is inadequate for a meaningful user study. Hence we only provide amplification results. ### Ablation Study We trained a variation of our method in which instead of a fixed realism score estimation model we used a discriminator as adversarial loss. We trained the discriminator as part of an adversarial training approach, similar to related work [2, 4, 17]. We used the input image as the real sample and the generated image as the fake sample during training. Fig. 8 illustrates sample results with this training strategy. Since the discriminator is trained to penalize "any edits" applied in the previous steps of training it encourages the network to apply subtle edits and hence a drop in effectiveness of the method. On the other hand, due to the lack of explicit realism training, the edits are unrealistic while the effectiveness is reasonable. 
Ratings reported in Tab. 4 also confirm our findings. ### Diversity and Optimality of Estimated Edits Fig. 8(b) illustrates the distribution of edit parameters estimated by our parameter estimation network for different images on ADE20K [24, 25] dataset. It shows that edit parameters are different for each image and is based on its content. Also, it shows that the range of estimated edits is not the same as the ranges used in Tab. 1 for real samples. To evaluate if the estimated edits are close to optimal with respect to realism, we provide Fig. 8(a). In the figure we show a realism heatmap obtained by adding a small additive constant to the estimated edit parameter of _saturation_ and _exposure_. Heatmaps shows the estimated edit parameters (center of the heatmap) are in the optimal realism region. Changing the edit parameters in each direction reduces the realism of the end result. ### Generalization to Multiple Image Regions Since our model only modifies the region of interest, and performs a forward pass efficiently, we can run it on multiple regions and multiple masks by generating edit parameters for each region, in an iterative manner. Examples are provided in Figs. 1 and 10. We used the same approach with Gazeshift [17], which edits the whole image by estimating two sets of edit parameters, one for the region of interest (foreground) and one for the background. This formulation of Gazeshift makes iterative editing impractical, since there would be contradictory objectives between the \begin{table} \begin{tabular}{l|c c} & \multicolumn{2}{c}{Saliency Attenuation} \\ Method & Effectiveness \(\uparrow\) & Realism \(\uparrow\) \\ \hline Adversarial Training & 5.06 (2.84) & 7.36 (3.07) \\ Ours - Random & 6.36 (2.79) & 6.34 (2.88) \\ \end{tabular} \end{table} Table 4: Photographer ratings as in Tab. 2 comparing our main method to a variation with adversarial training instead of our fixed realism network. Figure 8: When the model trained via adversarial training produces results that are effective at reducing saliency, the resulting images are not realistic according to our user study. \begin{table} \begin{tabular}{l|c c|c c c} & \multicolumn{2}{c|}{Saliency Attenuation} & \multicolumn{2}{c}{Saliency Amplification} \\ Method & Effectiveness \(\uparrow\) & Realism \(\uparrow\) & Effectiveness \(\uparrow\) & Realism \(\uparrow\) \\ \hline GazeShift [17] & 4.78 (2.89) & 5.93 (3.13) & 7.36 (2.37) & 7.07 (2.76) \\ Deepsal [1] & 4.04 (2.90) & 8.49 (2.72) & - & - \\ \hline Ours - Best Realism & 6.56 (2.73) & 6.78 (2.70) & 7.39 (2.17) & 8.31 (1.89) \\ Ours - Random & 6.36 (2.79) & 6.34 (2.88) & 7.36 (2.21) & 8.27 (1.94) \\ Ours - Best Saliency & 6.64 (2.79) & 6.31 (2.70) & 7.50 (2.08) & 8.15 (2.10) \\ \end{tabular} \end{table} Table 2: Photographer ratings (on a 1 to 10 scale, higher is better) of effectiveness (i.e., achieve objective of attenuation or amplification of saliency) and realism (i.e., photo looks natural) on the dataset of 30 Adobe stock images. Numbers are the mean score across 8 photographers, with standard deviation in parentheses. iterations (what is foreground in one iteration becomes a background in the next iteration). For a more practical comparison, we omit background edits when running Gazeshift. Figure 10 shows that Gazeshift performance suffers on an iterative saliency enhancement task, but our method is able to generalize to multiple regions robustly. 
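Because only pixels inside the supplied mask are modified, the multi-region results above amount to running the single-region model once per mask; a minimal sketch, where `estimate_params` and `apply_edits` are hypothetical stand-ins for the parameter-estimation network and the parametric edit operators:

```python
def edit_multiple_regions(image, masks_and_goals, estimate_params, apply_edits):
    """Iteratively re-edit one region at a time (illustrative sketch).

    masks_and_goals: list of (mask, goal) pairs, goal in {"amplify", "attenuate"}.
    Each pass composites the edited pixels back inside the current mask only,
    so earlier edits are preserved.
    """
    result = image
    for mask, goal in masks_and_goals:
        params = estimate_params(result, mask, goal)   # per-region edit parameters
        edited = apply_edits(result, params)           # global ops applied to the frame
        result = edited * mask + result * (1 - mask)   # keep changes inside the mask
    return result
```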
### Limitations The global edits (applying the same edits to every pixel inside a mask) used in both our method and Gazeshift [17] require an accurate mask of the target region. As shown in Fig. 11, mask imperfections can cause unsmooth transitions around the boundaries. In these cases, pixel-wise optimization approaches like Deepsal [1] and MEC [16] do not suffer from heavy artifacts due to mask imperfections. ## 6 Conclusion and Future work We describe a method to edit images using conventional image editing operators to attenuate or amplify the attention captured by a target region while preserving image realism. Realism is achieved by introducing an explicit and separate realism network that is pre-trained to distinguish edited images. This strategy to achieve realism is distinct from prevailing approaches, including adversarial training schemes, as it introduces an additional form of weak supervision: manually specified ranges of parameter values that correspond to realistic and unrealistic ("fake") edits. Training with this realism critic makes it possible to estimate saliency-modulating image edits that are significantly more realistic and robust. Together with our millisecond-level inference time, our approach offers a practical and deployable application of saliency guided image editing. Figure 11: The effect of non-smooth mask boundaries. Left, an input image has a mask with a sharp edge. Center, our method and Gazeshift [17] produce strong boundary artifacts around the mask region (see inset). Right, MEC [16] and Deepsal [1] do not exhibit this problem because they operate in a pixel-wise manner. Figure 10: Given an input image and masks to attenuate and amplify (left), Gazeshift, when used iteratively on each object, suffers from color artifacts (center top, faces, bowl and watermelons). Ours produces a notably more realistic and effective result (right). Because of the contradictory objectives of edits applied to background and foreground, Gazeshift fails to generalize to multiple regions, and omitting the background edits (center bottom) reduces the effectiveness of the edits. Image credit: @tssonbrand Figure 9: Visualizing diversity and optimality of edit parameters estimated by our method.
2306.15788
Evaluating GPT-3.5 and GPT-4 on Grammatical Error Correction for Brazilian Portuguese
We investigate the effectiveness of GPT-3.5 and GPT-4, two large language models, as Grammatical Error Correction (GEC) tools for Brazilian Portuguese and compare their performance against Microsoft Word and Google Docs. We introduce a GEC dataset for Brazilian Portuguese with four categories: Grammar, Spelling, Internet, and Fast typing. Our results show that while GPT-4 has higher recall than other methods, LLMs tend to have lower precision, leading to overcorrection. This study demonstrates the potential of LLMs as practical GEC tools for Brazilian Portuguese and encourages further exploration of LLMs for non-English languages and other educational settings.
Maria Carolina Penteado, Fábio Perez
2023-06-27T20:37:54Z
http://arxiv.org/abs/2306.15788v2
# Evaluating GPT-3.5 and GPT-4 on Grammatical Error Correction for Brazilian Portuguese ###### Abstract We investigate the effectiveness of GPT-3.5 and GPT-4, two large language models, as Grammatical Error Correction (GEC) tools for Brazilian Portuguese and compare their performance against Microsoft Word and Google Docs. We introduce a GEC dataset for Brazilian Portuguese with four categories: Grammar, Spelling, Internet, and Fast typing. Our results show that while GPT-4 has higher recall than other methods, LLMs tend to have lower precision, leading to overcorrection. This study demonstrates the potential of LLMs as practical GEC tools for Brazilian Portuguese and encourages further exploration of LLMs for non-English languages and other educational settings. ## 1 Introduction Large language models (LLMs) have revolutionized the field of natural language processing by enabling computers to process and generate human-like language (Kasneci et al., 2023). LLMs have the potential to be particularly useful for Grammatical Error Correction (GEC) (Wu et al., 2023; Bryant et al., 2022) and can be a valuable educational tool to enhance students' writing skills by providing real-time feedback and corrections. Traditional GEC methods usually rely on pre-defined rules to identify and correct errors. While these methods can effectively detect simple misspellings, they may struggle to correct more complex grammatical errors. In contrast, LLMs can model language from large amounts of text data, which could lead to more natural and contextually appropriate corrections. By analyzing the context and meaning of a sentence, LLMs may identify errors that traditional methods may miss and provide more nuanced corrections. Although large language models (LLMs) have gained widespread attention for their performance in English language applications, recent studies have shown that they can produce good results for other languages. While the amount of data available for training LLMs in languages other than English is often more limited, the success of these models in tasks such as translation, language modeling, and sentiment analysis demonstrates their potential for improving language processing across a range of different languages. In this work, we take an initial step in investigating the effectiveness of GPT-3.5 and GPT-4 (OpenAI, 2023), two LLMs created by OpenAI, as a GEC tool for Brazilian Portuguese. Our main contributions are the following: 1. We compare GPT-3.5 and GPT-4 against Microsoft Word and Google Docs and show that LLMs can be a powerful tool for GEC. 2. We crafted a GEC dataset for Brazilian Portuguese, including four categories: _Grammar_, _Spelling_, _Internet_, and _Fast typing_. 3. We quantitatively and qualitatively evaluated LLMs as a GEC tool for Brazilian Portuguese. ## 2 Related Work Nunes et al. (2023) explored the use of GPT-3.5 and GPT-4 to answer questions for the _Exame Nacional do Ensino Médio_ (ENEM), an entrance examination used by many Brazilian universities. They tested different prompt strategies, including using Chain-of-Thought (CoT) to generate explanations for answers, and found that GPT-4 with CoT was the best-performing approach, achieving an accuracy of 87% on the 2022 exam. Wu et al. (2023) evaluated the performance of different models for GEC, including Grammarly, GECToR, and ChatGPT (the authors did not specify whether they used GPT-3.5 or GPT-4), and found that automatic evaluation methods result in worse numbers for ChatGPT than other GEC methods.
In contrast, human evaluation showed that ChatGPT produces fewer under-corrections or miscorrections and more overcorrections, indicating not only the potential of LLMs for GEC but also the limitation of automatic metrics to evaluate GEC tools. Fang et al. (2023) investigated GPT-3.5's potential for GEC using zero- and few-shot chain-of-thought settings. The model was evaluated in English, German, and Chinese, showcasing its multilingual capabilities. The study found that GPT-3.5 exhibited strong error detection and generated fluent sentences but led to over-corrections. Despite their outstanding performance on many tasks, LLMs may not be the silver bullet for NLP in multi-lingual settings. Lai et al. (2023) evaluated ChatGPT on various NLP tasks and languages, showing that it performs significantly worse than state-of-the-art supervised models for most tasks in different languages, including English. Their work does not include GEC, and Portuguese is only evaluated for relation extraction. The shortage of academic research on LLMs for multilingual settings, especially for Brazilian Portuguese, highlights the need for further engagement in this field. Our work aims to fill this gap by exploring the potential of GPT-3.5 and GPT-4 as GEC tools for Brazilian Portuguese. ## 3 Methodology ### Dataset We created the dataset (Table 1) by having native Brazilian Portuguese speakers manually write multiple sentences and dividing them into four categories: grammar, orthography, mistypes, and internet language. All categories list incorrect sentences and their correct pairs. The categories are described as follows: * **Grammar** -- 34 sets of three (totaling 102) phrases containing two words or expressions that are commonly swapped due to their similarity. * **Spelling** -- 100 sentences with spelling, punctuation, or accentuation errors. * **Fast typing** -- 40 mistyped (e.g., when typing too fast) sentences. * **Internet language** -- 40 sentences containing slang, abbreviations, and neologisms often used in virtual communication. We find it important to acknowledge that the dataset may reflect the biases of the human curators and may not fully encompass the complexities and variations present in real-world data. However, the limited availability of corpora specifically designed for GEC in Brazilian Portuguese compelled us to create our dataset, which, despite its potential limitations, represents a starting point in the task. The dataset is available in the supplementary material. ### Experiments We compared GPT-3.5 and GPT-4, two LLMs, against the spelling and grammar error correction features on Google Docs and Microsoft Word, two widely-used text editors. For Google Docs (docs.google.com), we first set the language on _File \(\rightarrow\) Language \(\rightarrow\) Portuguese (Brasil)_. Then we selected _Tools \(\rightarrow\) Spelling and grammar \(\rightarrow\) Spelling and grammar check_. Finally, for every error, we clicked on _Accept_. We used the online version of Microsoft Word (onedrive.live.com). First, we set the language on _Set Proofing Language \(\rightarrow\) Current Document \(\rightarrow\) Portuguese (Brazil)_. Then, we opened the _Corrections_ tab and selected all errors under _Spelling and Grammar_. For each error, we selected the first suggestion. We repeated the process until Word stopped finding errors. For GPT-3.5 and GPT-4, we used ChatGPT (chat.openai.com) with the prompt shown in Table 2. 
We shuffled the phrases and ensured the same pair of correct and incorrect phrases did not appear in the same prompt. Instead of running phrases individually, we ran 20 to 26 simultaneous phrases in one prompt, depending on the category. We used the ChatGPT interface and not the OpenAI API since we did not have access to the GPT-4 API at the time of the experiments. We did not focus on optimizing the prompt as our goal is to evaluate the usefulness of LLMs for GEC in Brazilian Portuguese without requiring deep LLMs knowledge. We believe more careful prompt engineering may improve the results. ## 4 Results CoNLL2014 (Ng et al., 2014) employs an evaluation method in which GEC tools are evaluated by all edits they made on phrases against gold-standard edits. Instead, we evaluate GEC tools by comparing the modified phrases against the gold-standard ones. For the _Grammar_ and _Spelling_ categories, we also ran GEC tools on phrases without grammatical errors to evaluate false positives. We calculated four metrics: * **Precision** -- From the phrases modified by the GEC tool, how many were successfully corrected? * **Recall** -- From the ungrammatical phrases, how many were successfully corrected by the GEC tool? * \(F_{0.5}\)**Score** -- A metric that combines both precision and recall, but emphasizes precision twice as much as recall. It is commonly used in GEC studies (Ng et al., 2014). * **True Negative Rate (TNR)** -- From the grammatical phrases, how many were successfully not modified by the GEC tool? We evaluated _Grammar_ and _Spelling_ using the four metrics and _Internet_ and _Fast typing_ using recall. Table 3 shows the results for all experiments. We define true/false positive/negative as follows (see Table A1 for examples): * **True Positive (TP)** -- incorrect phrase is corrected by the GEC tool. * **False Positive (FP)** -- correct phrase is wrongly modified by the GEC tool. * **True Negative (TN)** -- correct phrase is not modified by the GEC tool. * **False Negative (FN)** -- incorrect phrase is not corrected by the GEC tool. ## 5 Discussion Results (Table 3) for _Grammar_ and _Spelling_ show that GPT-3.5 and GPT-4 have superior recall and worse precision than Microsoft Word and Google Docs. These results agree with those by Wu et al. (2023) and Fang et al. (2023) and suggest that while GPT models are very effective at identifying errors, they tend to make more corrections than necessary, potentially altering the meaning or style of the text. The lower TNR values also confirms that LLMs tend to modify correct phrases. One possible explanation for the higher recall of LLMs is their ability to model language from large amounts of text data, allowing them to capture a wide range of language patterns and contextual nuances. This makes them effective at detecting complex grammatical errors, but their open-ended nature can lead to overcorrection by generating multiple possible corrections without clearly picking the most appropriate one. Furthermore, LLMs may have lower precision because they often prioritize fluency and coherence over grammatical accuracy, leading to unnecessary changes to the text, increasing false positives. In contrast, rule-based methods prioritize grammatical accuracy and make changes only when necessary. Although strongly impacted by the lower precision, GPT-4 shows a higher \(F_{0.5}\) score than any other methods for both _Grammar_ and _Spelling_. 
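The four metrics above follow directly from the TP/FP/TN/FN counts just defined; a minimal sketch of one natural reading of those definitions (the counts themselves would come from comparing each tool's output against the gold-standard phrases):

```python
def gec_metrics(tp, fp, tn, fn, beta=0.5):
    """Precision, recall, F_beta (beta = 0.5 weights precision twice as
    much as recall), and true negative rate, as reported in Table 3."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    b2 = beta ** 2
    f_beta = ((1 + b2) * precision * recall / (b2 * precision + recall)
              if (precision + recall) else 0.0)
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    return {"precision": precision, "recall": recall,
            f"F{beta}": f_beta, "TNR": tnr}
```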
GPT-3.5, however, has lower \(F_{0.5}\) scores than Google Docs and Microsoft Word, indicating that GPT-4 is a clear improvement over GPT-3.5 as a GEC tool for Brazilian Portuguese. Finally, GPT-3.5 and GPT-4 perform much better than Microsoft Word and Google Docs for the _Internet_ and _Fast typing_ categories. Traditional methods struggle with these tasks as they are strongly context-dependent, while LLMs thrive due to being trained on vast amounts of text. This demonstrates the capabilities of LLMs as a GEC tool for non-traditional GEC scenarios. \begin{table} \begin{tabular}{l c l l l} \hline \hline & **Cor.** & **Inc.** & **Corr. Example** & **Incorr. Example** \\ \hline Grammar & 102 & 102 & Vocé nunca mais falou com & Vocé nunca mais falou com a \\ & & & agente & \\ \hline Spelling & 100 & 100 & A analize do documentoserá & A analise do documentoserá \\ & & & feita por umadvogado & feita por umadvogado \\ \hline Fast typing & - & 40 & elejá quebru todos occopos & Elejá quebru todos oscopos \\ & & & nvoos que comépri esse mésd & novos que comprei esse més \\ \hline Internet & - & 40 & ná d p escutar, n sei o pq & Não dá para escutar, não sei o \\ & & & porque & \\ \hline \hline \end{tabular} \end{table} Table 1: Description of the developed dataset, divided into four categories: _Grammar_, _Spelling_, _Fast typing_, and _Internet_. The table shows the total correct and incorrect phrases per category and example phrases from the dataset. _Grammar_ and _Spelling_ include only one error per phrase, while _Fast typing_ and _Internet_ include multiple. \begin{table} \begin{tabular}{l l} \hline \hline **Prompt** \\ Corrijá os erros grammaticais das \\ seguinte s frases em Portugès brasileiro. \\ Não aletre o sognificado das frases, \\ apenas as corrijá. Não aletre frases \\ gramaticalmente corretas, apenas escreva \\ [Correta] após a frase. \\ \hline **Prompt** \\ Prompt** \\ Fix the grammatical errors in the \\ following Brazilian Portuguese sentences. \\ Do not change the meaning of the \\ sentences, just fix them. Do not change \\ gramatically correct sentences, just \\ write [Correct] after the sentence. \\ \hline \hline \end{tabular} \end{table} Table 2: Prompt used for GPT-3.5 and GPT-4 and its English translation as reference. We prompted both models to add _[Correta]_ if the phrase is correct to avoid them appending long texts saying the phrase is correct. We removed any _[Correta]_ occurrence before evaluating the models. We also performed a qualitative analysis by checking each correction provided by GPT-3.5 and GPT-4. We identified four explicit behaviors. See Table A2 for examples of phrases for each behavior. The first behavior (over-correction) considers extra edits that lead to grammatically correct sentences without meaning changes (e.g., add/remove commas, convert commas into semicolons, and upper vs. lower case). GPT-3.5 delivered 54 (out of 484) sentences with such behavior vs. six from GPT-4. The second behavior (omission) refers to models failing to detect errors and occurred 22 and 23 times on GPT-3.5 and GPT-4, respectively. The third behavior (grammatical miscorrection) includes changes that adhere to grammatical rules but modify the sentence's meaning (e.g., removing/adding/substituting words and inverting the order of excerpts). GPT-3.5 corrections fell in this category 41 times vs. 13 times for GPT-4. Finally, the fourth behavior (ungrammatical miscorrection) is similar to the previous one but leads to ungrammatical sentences. 
GPT-3.5 and GPT-4 produced 3 and 1 outputs in this category, respectively. ### Limitations and Challenges of LLMs as GEC tools While large language models (LLMs) have shown considerable promise for Grammatical Error Correction (GEC), limitations and challenges must be considered when using these models for GEC tasks. Open-endednessLLMs are open-ended and stochastic by nature. Unlike rule-based models, LLMs generate text based on patterns learned from training data. This can make it difficult to constrain the model, resulting in the replacement of grammatically correct words with other words that may occur more frequently in a given context (Bryant et al., 2022). Another unpredictability of LLMs is their tendency to produce "hallucinations" - outputs that are not necessarily true or based on the input data (OpenAI, 2023). This can result in the generation of incorrect or irrelevant corrections. Prompt engineeringLLMs' performance rely on the used prompts (Brown et al., 2020), where LLM-based GEC tools might need prompt engineering to achieve high-quality outputs. The effectiveness of a prompt may vary significantly depending on the task, and determining an optimal prompt may require extensive experimentation. Hardware constraintsThe large-scale nature of LLMs requires powerful hardware, which can be a barrier for many users and institutions. This can make LLMs less accessible and cost-effective for GEC tasks, particularly for those with limited resources or budget constraints. To interact with LLMs that cannot run on consumer hardware, one must send requests to third-party servers, requiring an internet connection and posing a privacy risk. Biases and malicious usesLLMs may contain biases and inaccuracies, posing a challenge in ensuring that corrections do not inadvertently perpetuate harmful stereotypes or misinformation (Blodgett et al., 2020; Nadeem et al., 2020; Garrido-Munoz et al., 2021). LLMs may also suffer from malicious attacks intent to misleading the model (Perez and Ribeiro, 2022; Greshake et al., 2023). ## 6 Conclusion Our study demonstrates the potential of LLMs as effective GEC tools for Brazilian Portuguese. We hope this work encourages further exploration of the impact of LLMs on Brazilian Portuguese and other non-English languages and spurs interest in developing and refining LLMs for diverse linguistic contexts. As a suggestion for future works, we believe that curating larger and better datasets that capture real-world data (e.g., by collecting grammatical errors \begin{table} \begin{tabular}{l l c c c c} \hline \hline & & **MS Word** & **Google Docs** & **GPT-3.5** & **GPT-4** \\ \hline **Internet** & Recall & 12.5\% & 5.0\% & 78.3\(\pm\)1.3\% & **89.3\(\pm\)1.3\%** \\ \hline **Fast typing** & Recall & 27.5\% & 40.0\% & 85.0\(\pm\)0.0\% & **90.0\(\pm\)1.3\%** \\ \hline **Grammar** & Precision & 89.1\% & **97.4\%** & 67.5\(\pm\)0.2\% & 86.8\(\pm\)0.7\% \\ & Recall & 40.2\% & 36.3\% & 63.7\(\pm\)1.7\% & **75.5\(\pm\)1.7\%** \\ & \(F_{0.5}\) & 71.7\% & 72.8\% & 66.7\(\pm\)0.5\% & **84.3\(\pm\)1\%** \\ & TNR & 95.1\% & **99.0\%** & 69.3\(\pm\)0.6\% & 88.5\(\pm\)0.6\% \\ \hline & Precision & 94.9\% & **100\%** & 79.7\(\pm\)1.7\% & 99.3\(\pm\)0.6\% \\ **Spelling** & Recall & 74.0\% & 66.0\% & 85\(\pm\)3.5\% & **92.0\(\pm\)6.1\%** \\ & \(F_{0.5}\) & 89.8\% & 90.7\% & 80.7\(\pm\)2\% & **97.7\(\pm\)1.8\%** \\ & TNR & 96.0\% & **100\%** & 78.3\(\pm\)1.5\% & 99.3\(\pm\)0.6\% \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation results for all experiments. 
Since the results are not deterministic, values for GPT-3.5 and GPT-4 represent the average and standard deviation for three runs. made in real scenarios) could strengthen the field. Moreover, we encourage researchers to continue investigating the potential of LLMs in educational settings (see Appendix B).
2303.07552
Correlators of double scaled SYK at one-loop
In this paper, we study one-loop contributions in the double-scaling limit of the SYK model from the chord diagrams and Liouville type effective action. We compute and clarify the meaning of each component consisting of the one-loop corrections for the two- and time-ordered four-point functions of light operators. We also reproduce the exact expression of the out-of-time-ordered four-point function at arbitrary temperatures within the one-loop level, which were previously computed from different methods.
Kazumi Okuyama, Kenta Suzuki
2023-03-14T00:43:52Z
http://arxiv.org/abs/2303.07552v1
# Correlators of double scaled SYK at one-loop ###### Abstract In this paper, we study one-loop contributions in the double-scaling limit of the SYK model from the chord diagrams and Liouville type effective action. We compute and clarify the meaning of each component consisting of the one-loop corrections for the two- and time-ordered four-point functions of light operators. We also reproduce the exact expression of the out-of-time-ordered four-point function at arbitrary temperatures within the one-loop level, which were previously computed from different methods. ## 1 Introduction The large \(N\) dynamics of the Sachdev-Ye-Kitaev (SYK) model [1; 2; 3; 4; 5] represents a particularly useful laboratory to understand the origins of the AdS/CFT correspondence. In the low temperature limit, the crucial role is played by the Schwarzian mode which is responsible for the breaking of the emergent IR reparametrization symmetry. This Schwarzian mode triggers the maximal chaos behaviour [6] in the low temperature. This mode also builds a connection with the Jackiw-Teitelboim (JT) gravity through the near-AdS\({}_{2}\)/near-CFT\({}_{1}\) correspondence [7; 8; 9; 10], where breaking of the diffeomorphism in AdS\({}_{2}\) also leads to an emergence of the Schwarzian mode. Although the SYK model led us to a great deal of understanding of the AdS/CFT correspondence, most of the previous works on this model were limited to low temperature (near conformal) limit, and one might wish to go beyond this limit. The large \(p\) limit [5; 11; 12; 13] and the double-scaling limit [14; 15; 16; 17; 18; 11] of the SYK model give one possibility in this direction. Previous studies on the large \(p\) SYK model showed a transition from the maximal chaos bound to non-maximal chaos behaviour [5], effective Liouville type action [11] and corrections on top of that action [13]. The large \(p\) limit also allows us to investigate modifications away from the conformal two-point function [12] as well as out-of-time-order four-point functions [19; 20; 21]. The double-scaling limit of the SYK model studied in [14; 15; 16; 17; 22; 23] uses a particular method called chord diagrams, which we review in section 2, and leads to an exact results for the partition function and two- and four-point functions. However, the bulk gravitational dual of the DSSYK model is not well understood, if it exists, and it is desirable to investigate further in this direction. Some recent works toward this direction includes [24; 18; 25]. In order to contribute for this investigation, we study one-loop corrections in the DSSYK model in this paper. The rest of the paper is organized as follows. In section 2, we give a brief review of the DSSYK model with two formalisms. One is based on the chord diagrams and the other is formulated by a Liouville type effective action. In section 3, we study one-loop corrections for the two- and uncrossed four-point functions in the DSSYK model, starting from the results obtained by the chord diagram method. Some of these one-loop corrections were partially studied in [18] as well, but we explain the meaning of each components consisting of the one-loop contributions. In section 4, we study one-loop corrections for the two- and uncrossed four-point functions starting from the Liouville type effective action, and reproduce the results found in section 3. 
In section 5, we continue our study of the Liouville theory for the out-of-time-ordered four-point functions and we reproduce the results obtained in [19; 20; 21] by different methods. In section 6, we compare our results obtained in the previous section with the known results from the low temperature Schwarzian theory. Finally, we conclude in section 7 with some discussion of future directions. In appendix A, we present an alternative derivation of the one-loop determinant based on a change of variables which diagonalizes the Hessian. In appendix B, we summarize useful summation formulae used in the main text. In appendix C, we discuss zero temperature factorization of the four-point function into a pair of two-point functions in the DSSYK model. ## 2 Review of double scaled SYK In this section, we give a brief review of the double-scaled SYK model. The Sachdev-Ye-Kitaev model [1; 2; 3] is a quantum mechanical many body system with all-to-all \(p\)-body interactions on fermionic \(N\) sites (\(N\gg 1\)), represented by the Hamiltonian \[H\,=\,\mathrm{i}^{\frac{p}{2}}\sum_{1\leq i_{1}<\cdots<i_{p}\leq N}J_{i_{1}i_{ 2}\cdots i_{p}}\,\psi_{i_{1}}\,\psi_{i_{2}}\,\cdots\psi_{i_{p}}\,, \tag{1}\] where \(\psi_{i}\) are Majorana fermions, which satisfy \(\{\psi_{i},\psi_{j}\}=2\delta_{ij}\). The coupling constant \(J_{i_{1}i_{2}\cdots i_{p}}\) is random with a Gaussian distribution \[\left\langle J_{i_{1}i_{2}\cdots i_{p}}\right\rangle_{J}\,=\,0\,,\qquad\left \langle J_{i_{1}i_{2}\cdots i_{p}}^{2}\right\rangle_{J}\,=\,\binom{N}{p}^{-1} \quad\text{(no sum for indices)}\,. \tag{2}\] he double-scaled SYK (DSSYK) model is defined by taking the double scaling limit \[N,\,p\,\to\,\infty\quad\text{with}\quad\lambda\,:=\,\frac{2p^{2}}{N}\quad\text{ fixed}\,. \tag{3}\] There are two ways to study the DSSYK. The one is by the chord diagrams [14; 15] which leads to complicated but exact results. The other is by the \(G\Sigma\) formalism [11] which leads to a Liouville type action well suited for small \(\lambda\) perturbations. In this paper, we study both formalisms. The chord diagram method consider, for example, the disorder averaged partition function (but the same method also works for correlation functions as well) \[\big{\langle}Z\big{\rangle}_{J}\,:=\,\big{\langle}\operatorname{Tr}e^{-\beta H }\big{\rangle}_{J}\,, \tag{4}\] and expands \(e^{-\beta H}\) to rewrite in terms of summation over the moments \(m_{k}\) \[\big{\langle}Z\big{\rangle}_{J}\,=\,\sum_{k=0}^{\infty}\frac{(-\beta)^{k}}{k! }\,m_{k}\,,\qquad m_{k}\,:=\,\big{\langle}\operatorname{Tr}\big{(}H^{k}\big{)} \big{\rangle}_{J}\,. \tag{5}\] The evaluation of the moments is reduced to a product between a trace over fermions and disorder average over the coupling constants as \[m_{k}\,=\,\mathrm{i}^{\frac{kp}{2}}\,\big{\langle}J_{I_{1}}\cdots J_{I_{k}} \big{\rangle}_{J}\operatorname{Tr}(\psi_{I_{1}}\cdot\psi_{I_{k}})\,, \tag{6}\] where the capital index \(I\) represents a set of \(p\) indices \(i_{1}\cdots i_{p}\) and \(\psi_{I}=\psi_{i_{1}}\cdots\psi_{i_{p}}\). Evaluation of the disorder average over the coupling constants by (2) leads to contraction of the indices. Then the trace over the fermions is represented by the chord diagrams (for example see Figure 1 for \(k=8\) chord diagram.) 
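In refs. [14; 15], each pair of intersecting chords contributes, on average over the random couplings, a factor \(q=e^{-\lambda}\), so that (normalizing the trace such that \(m_{0}=1\)) the nonvanishing moments reduce to \(m_{2k}=\sum_{\text{pairings}}q^{\#\text{crossings}}\). A small brute-force sketch of this combinatorial sum (the \(q^{\#\text{crossings}}\) weighting is quoted from those references; the code is purely illustrative):

```python
from itertools import combinations
from sympy import symbols, expand

q = symbols('q')

def pairings(points):
    """All perfect matchings (chord diagrams) of an even list of points."""
    if not points:
        yield []
        return
    a = points[0]
    for i in range(1, len(points)):
        b = points[i]
        rest = points[1:i] + points[i + 1:]
        for rest_pairs in pairings(rest):
            yield [(a, b)] + rest_pairs

def crossings(pairs):
    """Number of intersecting chord pairs for points ordered on a circle."""
    n = 0
    for (a, b), (c, d) in combinations(pairs, 2):
        # chords (a,b) and (c,d) with a<b, c<d cross iff exactly one of c, d
        # lies strictly between a and b
        n += (a < c < b) != (a < d < b)
    return n

def moment(n_legs):
    """m_n: sum of q^(#crossings) over chord diagrams with n_legs insertions."""
    return expand(sum(q**crossings(p) for p in pairings(list(range(n_legs)))))

print(moment(2))   # 1
print(moment(4))   # q + 2
print(moment(6))   # q**3 + 3*q**2 + 6*q + 5
```

Resumming these \(q\)-deformed moments is what produces the integral representation of the disorder averaged partition function given below.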
After resummining over the moments, the disorder averaged partition function is found as [14; 15] \[\big{\langle}Z\big{\rangle}_{J}\,=\,\int_{0}^{\pi}\frac{d\theta}{2\pi}\,\mu( \theta)e^{-\beta E(\theta)}\,, \tag{7}\] Figure 1: An example of a chord diagram for \(k=8\). The black circle represents a trace over the fermions. Each orange dot represents an insertion of a Hamiltonian and each blue line denotes a contraction of capital index \(I\). where \[E(\theta)\,=\,-\frac{2\cos\theta}{\sqrt{1-q}}\,,\qquad q\,:=\,e^{- \lambda}\,, \tag{8}\] and \[\mu(\theta)\,=\,(q;q)_{\infty}\,(e^{2{\rm i}\theta};q)_{\infty}\,(e^ {-2{\rm i}\theta};q)_{\infty}\,. \tag{9}\] Here the \(q\)-Pochhammer symbol is defined by \[(a;q)_{n}\,=\,\prod_{k=0}^{n-1}(1-aq^{k})\,. \tag{10}\] Similarly, using the chord diagram method, the two-point function of operators with dimension \(\Delta\) \[\widetilde{G}_{2}(\beta_{1},\beta_{2})\,=\,\int_{0}^{\pi}\prod_{k= 1,2}\frac{d\theta_{k}}{2\pi}\mu(\theta_{k})e^{-\beta_{k}E(\theta_{k})}\frac{( e^{-2\Delta};q)_{\infty}}{(e^{-\Delta+{\rm i}(\pm\theta_{1}\pm\theta_{2})};q)_{ \infty}}\,, \tag{11}\] the uncrossed four-point function of pairs of operators with dimension \(\Delta_{1}\) and \(\Delta_{2}\) \[\widetilde{G}_{4}(\beta_{1},\beta_{2},\beta_{3},\beta_{4})\,=\, \int\prod_{i=1}^{3}\left(\frac{d\theta_{i}}{2\pi}\mu(\theta_{i})e^{-\beta_{i} E(\theta_{i})}\right)\frac{(e^{-2\Delta_{1}};q)_{\infty}}{(e^{-\Delta_{1}+{\rm i }(\pm\theta_{1}\pm\theta_{3})};q)_{\infty}}\frac{(e^{-2\Delta_{2}};q)_{\infty }}{(e^{-\Delta_{2}+{\rm i}(\pm\theta_{2}\pm\theta_{3})};q)_{\infty}}\,, \tag{12}\] are obtained. By \((e^{-\Delta+{\rm i}(\pm\theta_{1}\pm\theta_{2})};q)_{\infty}\), we mean to take a product of all four possible combination of the signs. The crossed four-point function is also obtained by the chord diagram method in [15], but we do not write the result explicitly here since we don't use it. We study small \(\lambda\) expansions of these results obtained from chord diagrams in section 3. The other \(G\Sigma\) formalism, we introduce bi-local fields \(G(\tau_{1},\tau_{2})\) and \(\Sigma(\tau_{1},\tau_{2})\) as in the usual Hubbard-Stratonovich transformation to integrate out the original fermions. Then, the disorder averaged partition function is written as \[\big{\langle}Z\big{\rangle}_{J}\,=\,\int{\cal D}G{\cal D}\Sigma\,e ^{-S}\,, \tag{13}\] where \[S\,=\,-\,\frac{N}{2}\log\det(\partial_{\tau}-\Sigma)\,+\,\frac{ N}{2}\int d\tau_{1}d\tau_{2}\left[\Sigma G-\frac{1}{2p^{2}}(2G)^{p}\right]\,. \tag{14}\] In order to take the double scaling limit (3), we set \[G(\tau_{1},\tau_{2})\,=\,\frac{{\rm sgn}(\tau_{12})}{2}\left(1+ \frac{g(\tau_{1},\tau_{2})}{p}\right)\,,\qquad\Sigma(\tau_{1},\tau_{2})\,=\, \frac{\sigma(\tau_{1},\tau_{2})}{p}\,, \tag{15}\] where \(\tau_{12}:=\tau_{1}-\tau_{2}\). Substituting this expression into the action, one can integrate out the \(\sigma\) field in the leading order of large \(p\). Hence, in the double scaling limit, we find the action given by \[S\,=\,\frac{1}{2\lambda}\int d\tau_{1}d\tau_{2}\left[\,\frac{1} {4}\,\partial_{1}g(\tau_{1},\tau_{2})\partial_{2}g(\tau_{1},\tau_{2})\,-\,e^{ g(\tau_{1},\tau_{2})}\right]\,. \tag{16}\] We study this small \(\lambda\) Liouville theory in section 4 and 5. Saddle point computation of correlators The small \(\lambda\) regime of DSSYK corresponds to the semi-classical bulk gravitational theory. In [18] the small \(\lambda\) expansion of the matter correlators of DSSYK was computed up to the one-loop order. 
In this section, we will generalize the analysis in [18] and compute the one-loop correction to the uncrossed four-point function. In order to take a well-defined small \(\lambda\) limit, we have to rescale the inverse temperature as \(\beta\to\beta/\sqrt{\lambda}\). Equivalently, we can rescale \(E(\theta)\) in (8) as \[E(\theta)=-\frac{2\cos\theta}{\sqrt{\lambda(1-q)}}, \tag{11}\] with \(\beta\) intact. We will use this convention throughout the rest of this section. ### Partition function Let us first consider the small \(\lambda\) expansion of the partition function. As discussed in [18], this expansion is obtained from the saddle point approximation of the \(\theta\)-integral. To do that, we need the small \(\lambda\) expansion of the \(q\)-Pochhammer symbol \[(a;q)_{\infty}=\exp\left[-\sum_{n=1}^{\infty}\frac{a^{n}}{n(1-q^{n})}\right]= \exp\left[-\sum_{g=0}^{\infty}\frac{\lambda^{2g-1}B_{2g}}{(2g)!}\text{Li}_{2 -2g}(a)+\frac{1}{2}\log(1-a)\right], \tag{12}\] where \(B_{2g}\) denotes the Bernoulli number and \(\text{Li}_{n}(z)\) is the polylogarithm. The small \(\lambda\) expansion of \((q;q)_{\infty}\) is also obtained by using its relation to the Dedekind \(\eta\)-function \[(q;q)_{\infty}=q^{-\frac{1}{24}}\eta(q)\approx\sqrt{\frac{2\pi}{\lambda}}e^{ \frac{\lambda}{24}-\frac{\pi^{2}}{6\lambda}}, \tag{13}\] where in the last step we used the S-transformation of the \(\eta\)-function. Using the relation \[\text{Li}_{2-2g}(e^{2\text{i}\theta})+\text{Li}_{2-2g}(e^{-2\text{i}\theta})= \begin{cases}2\left(\theta-\frac{\pi}{2}\right)^{2}-\frac{\pi^{2}}{6},&(g=0),\\ -1,&(g=1),\\ 0,&(g\geq 2),\end{cases} \tag{14}\] the measure factor \(\mu(\theta)\) is expanded as \[\mu(\theta)=(q;q)_{\infty}(e^{\pm 2\text{i}\theta};q)_{\infty}\approx\sqrt{ \frac{2\pi}{\lambda}}\exp\left[\frac{\lambda}{8}-\frac{2}{\lambda}\left( \theta-\frac{\pi}{2}\right)^{2}+\log(2\sin\theta)\right]. \tag{15}\] Note that there is no corrections to \(\mu(\theta)\) higher than \(\mathcal{O}(\lambda^{2})\).1 Footnote 1: The same conclusion can be obtained by using the expression of \(\mu(\theta)\) in terms of the Jacobi theta-function \(\vartheta_{1}(z,\tau)\) \[\mu(\theta)=2q^{-\frac{1}{8}}\sin\theta\,\vartheta_{1}\left(2\theta,\frac{ \text{i}\lambda}{2\pi}\right). \tag{16}\] where \[\begin{split} F&=2\left(\theta-\frac{\pi}{2}\right)^{2}-2 \beta\cos\theta,\\ h&=\log(2\sin\theta)+\frac{1}{2}\beta\cos\theta. \end{split} \tag{23}\] In the small \(\lambda\) limit, the \(\theta\)-integral in (21) is evaluated by the saddle point approximation. The saddle point \(\theta=\theta_{*}\) is determined from the saddle point equation \(\partial_{\theta}F=0\) as \[\theta_{*}=\frac{\pi}{2}-u, \tag{24}\] where \(u\) is related to \(\beta\) as \[\beta=\frac{2u}{\cos u}. \tag{25}\] The saddle point value of \(F\) is \[F_{*}=F(\theta_{*})=2(u^{2}-2u\tan u). \tag{26}\] Note that our \(u\) and \(v\) in [5] are related by \[u=\frac{\pi v}{2}. \tag{27}\] One can systematically improve the approximation by expanding the integral around the saddle point \[\theta=\theta_{*}+\sqrt{\lambda}\varepsilon. \tag{28}\] Expanding \(F\) up to the quadratic order in \(\varepsilon\), we find \[\frac{1}{\lambda}(F-F_{*})=2(1+u\tan u)\varepsilon^{2}+\mathcal{O}( \varepsilon^{3}). \tag{29}\] Then, by performing the Gaussian integral over \(\varepsilon\), we find the one-loop correction to the partition function [18] \[Z(\beta)\approx\frac{\cos u}{\sqrt{1+u\tan u}}\exp\left[-\frac{2}{\lambda} \Big{(}u^{2}-2u\tan u\Big{)}+u\tan u\right]. 
\tag{30}\] ### Two-point function The two-point function is given by [15] \[\widetilde{G}_{2}=\int_{0}^{\pi}\prod_{k=1,2}\frac{d\theta_{k}}{2\pi}\mu( \theta_{k})e^{-\beta_{k}E(\theta_{k})}\frac{(e^{-2\Delta};q)_{\infty}}{(e^{- \Delta+\mathrm{i}(\pm\theta_{1}\pm\theta_{2})};q)_{\infty}}. \tag{31}\] We have put tilde to indicate that the two-point function is not normalized by the partition function. We define the normalized \(n\)-point function by \[G_{n}=\frac{\widetilde{G}_{n}}{Z}. \tag{32}\] \(\beta_{1,2}\) in (31) are given by \[\beta_{1}=\tau_{12},\quad\beta_{2}=\beta-\tau_{12}, \tag{33}\] with \(\tau_{ij}=\tau_{i}-\tau_{j}\). In the small \(\lambda\) limit, (3.16) is written as \[\widetilde{G}_{2}=\frac{2\pi}{\lambda}\int\prod_{k=1,2}\frac{d\theta_{k}}{2\pi} e^{-\lambda^{-1}F+h+\mathcal{O}(\lambda)}, \tag{3.19}\] where \(F\) and \(h\) are given by \[\begin{split} F&=\sum_{k=1,2}\bigg{[}2\left(\theta _{k}-\frac{\pi}{2}\right)^{2}-2\beta_{k}\cos\theta_{k}\bigg{]}+\text{Li}_{2}(e^ {-2\Delta})-\text{Li}_{2}(e^{-\Delta+\text{i}(\pm\theta_{1}\pm\theta_{2})}),\\ h&=\sum_{k=1,2}\bigg{[}\frac{1}{2}\beta_{k}\cos \theta_{k}+\log(2\sin\theta_{k})\bigg{]}+\frac{1}{2}\log(1-e^{-2\Delta})- \frac{1}{2}\log(1-e^{-\Delta+\text{i}(\pm\theta_{1}\pm\theta_{2})}).\end{split} \tag{3.20}\] Note that the last term of \(F\) and \(h\) are written as \[\begin{split}-\text{Li}_{2}(e^{-\Delta+\text{i}(\pm\theta_{1} \pm\theta_{2})})&=-4\sum_{n=1}^{\infty}\frac{e^{-\Delta n}}{n^{2} }\cos(n\theta_{1})\cos(n\theta_{2}),\\ -\frac{1}{2}\log(1-e^{-\Delta+\text{i}(\pm\theta_{1}\pm\theta_ {2})})&=-\frac{1}{2}\log\Bigl{[}1-2e^{-\Delta}\cos(\theta_{1}+ \theta_{2})+e^{-2\Delta}\Bigr{]}\\ &-\frac{1}{2}\log\Bigl{[}1-2e^{-\Delta}\cos(\theta_{1}-\theta_{2 })+e^{-2\Delta}\Bigr{]}.\end{split} \tag{3.21}\] We would like to evaluate \(\widetilde{G}_{2}\) by the saddle point approximation. The saddle point equation reads \[\begin{split}\frac{1}{2}\frac{\partial F}{\partial\theta_{1}}& =2\theta_{1}-\pi+\beta_{1}\sin\theta_{1}+2\sum_{n=1}^{\infty} \frac{e^{-\Delta n}}{n}\sin n\theta_{1}\cos n\theta_{2}=0,\\ \frac{1}{2}\frac{\partial F}{\partial\theta_{2}}& =2\theta_{2}-\pi+\beta_{2}\sin\theta_{2}+2\sum_{n=1}^{\infty} \frac{e^{-\Delta n}}{n}\cos n\theta_{1}\sin n\theta_{2}=0,\end{split} \tag{3.22}\] where the last terms in (3.22) are also written as \[\begin{split} 2\sum_{n=1}^{\infty}\frac{e^{-\Delta n}}{n}\sin n \theta_{1}\cos n\theta_{2}&=\arctan\left(\frac{\sin(\theta_{1}+ \theta_{2})}{e^{\Delta}-\cos(\theta_{1}+\theta_{2})}\right)+\arctan\left( \frac{\sin(\theta_{1}-\theta_{2})}{e^{\Delta}-\cos(\theta_{1}-\theta_{2})} \right),\\ 2\sum_{n=1}^{\infty}\frac{e^{-\Delta n}}{n}\cos n\theta_{1} \sin n\theta_{2}&=\arctan\left(\frac{\sin(\theta_{1}+\theta_{2}) }{e^{\Delta}-\cos(\theta_{1}+\theta_{2})}\right)-\arctan\left(\frac{\sin( \theta_{1}-\theta_{2})}{e^{\Delta}-\cos(\theta_{1}-\theta_{2})}\right).\end{split} \tag{3.23}\] As discussed in [18], we can solve the saddle point equation (3.22) order by order in the small \(\Delta\) expansion. To this end, it is convenient to set \[\beta_{1}=\frac{u-\phi}{\cos u},\quad\beta_{2}=\frac{u+\phi}{\cos u}. \tag{3.24}\] From (3.18), one can see that \[\beta=\beta_{1}+\beta_{2}=\frac{2u}{\cos u},\quad\phi=\left(1-\frac{2\tau_{12 }}{\beta}\right)u. \tag{3.25}\] The saddle point value of \(\theta_{k}\) (\(k=1,2\)) is expanded as \[\theta_{k}=\frac{\pi}{2}-u+a_{k}\Delta+b_{k}\Delta^{2}+\mathcal{O}(\Delta^{3}). 
\tag{3.26}\] As discussed in [18], the leading term of (3.26) is the same as the saddle point for the partition function (3.9). At the \(\mathcal{O}(\Delta^{0})\) of saddle point equation, we find \[a_{1}-a_{2}=\tan\phi. \tag{3.27}\] At the next order \(\mathcal{O}(\Delta)\) of saddle-point equation, we find \[\begin{split} a_{1}+a_{2}&=\frac{(1+\phi\tan\phi )\tan u}{1+u\tan u},\\ b_{1}-b_{2}&=\frac{1}{2\cos^{2}\phi}\left[-(1+u \tan u)\tan\phi+\frac{\phi\tan^{2}u(1+\phi\tan\phi)}{1+u\tan u}\right].\end{split} \tag{3.28}\] Let us compute the saddle point value of \(F\). We are interested in the regime where \(\lambda\) and \(\Delta\) are of the same order \[\Delta\sim\mathcal{O}(\lambda),\quad\frac{\Delta}{\lambda}=\text{finite}. \tag{3.29}\] Then, in order to evaluate \(\widetilde{G}_{2}\) up to \(\mathcal{O}(\lambda)\), we have to compute the saddle point value of \(F\) up to \(\mathcal{O}(\Delta^{2})\) since \[\frac{\Delta^{2}}{\lambda}\sim\mathcal{O}(\lambda). \tag{3.30}\] We expand the saddle point value of \(F\) as \[-F_{*}=f_{0}+f_{1}\Delta+\frac{1}{2}I\Delta^{2}+\mathcal{O}(\Delta^{3}). \tag{3.31}\] One can compute \(f_{1}\) and \(I\) by using the following relation \[\begin{split}\frac{dF_{*}}{d\Delta}&=\frac{\partial F _{*}}{\partial\Delta}+\sum_{k=1,2}\frac{\partial\theta_{k}}{\partial\Delta} \frac{\partial F_{*}}{\partial\theta_{k}}\\ &=\frac{\partial F_{*}}{\partial\Delta},\end{split} \tag{3.32}\] where in the last equality we used the saddle point equation (3.22). From (3.20) we find \[\frac{dF_{*}}{d\Delta}=\log\frac{\cosh^{2}\Delta}{\left[\cosh\Delta-\cos( \theta_{1}+\theta_{2})\right]\left[\cosh\Delta-\cos(\theta_{1}-\theta_{2}) \right]}. \tag{3.33}\] Plugging the saddle point value (3.26) into (3.33) and expanding in \(\Delta\), we find that \(f_{1}\) and \(I\) in (3.31) are given by \[f_{1}=\log\frac{\cos^{2}u}{\cos^{2}\phi},\quad I=\frac{f(\phi)f(-\phi)}{1+u \tan u}, \tag{3.34}\] where \(f(x)\) is defined by \[f(x)=(1+u\tan u)\tan x-(1+x\tan x)\tan u. \tag{3.35}\] The first term \(f_{0}\) in (3.31) is equal to the leading order free energy (3.11) and it is canceled when we normalize by the partition function (3.17). At the one-loop level, we have to perform the Gaussian integral around the saddle point and evaluate the one-loop determinant \[\frac{1}{\sqrt{\det F_{ij}}}, \tag{3.36}\] where \(F_{ij}=\frac{\partial^{2}F}{\partial\theta_{i}\partial\theta_{j}}\) is the Hessian at the saddle point. This computation was already carried out in [18] and we will not repeat it here.2 Finally, we find the normalized two-point function (3.17) at the one-loop order Footnote 2: As we discuss in appendix A, the Hessian can be diagonalized by a change of variables. \[\begin{split} G_{2}&=\left(\frac{\cos^{2}u}{\cos^ {2}\phi}\right)^{\frac{\Delta}{\lambda}}\left(1+\frac{\Delta^{2}}{2\lambda}I+ \Delta\mathcal{A}\right)\\ &\approx e^{\frac{\Delta^{2}}{2\lambda}I}\left[\frac{\cos^{2}u}{ \cos^{2}\phi}(1+\lambda\mathcal{A})\right]^{\frac{\Delta}{\lambda}},\end{split} \tag{3.37}\] where \(\mathcal{A}\) is given by \[\mathcal{A}=\frac{1}{4(1+u\tan u)}\Bigg{[}-\frac{(1+u\tan u)^{2}}{\cos^{2} \phi}+\frac{(1+\phi\tan\phi)^{2}}{\cos^{2}u}+\phi^{2}(\tan^{2}u-\tan^{2}\phi )-\frac{1+\phi\tan\phi}{1+u\tan u}+1\Bigg{]}. \tag{3.38}\] We have checked that our \(\mathcal{A}\) agrees with \(G_{1}\) in [18]. One can easily see that \[I|_{\phi=\pm u}=\mathcal{A}|_{\phi=\pm u}=0. 
\tag{3.39}\] Our key observation is that \(\mathcal{A}\) in (3.38) satisfies a simple relation \[\left(\partial_{\phi}^{2}-\frac{2}{\cos^{2}\phi}\right)\mathcal{A}=\frac{1}{ \cos^{2}\phi}I. \tag{3.40}\] In section 4, we will see that this relation naturally follows from the computation in the Liouville theory. We note in passing that \(f(x)\) in (3.35) satisfies \[f(u)=0,\qquad\left(\partial_{x}^{2}-\frac{2}{\cos^{2}x}\right)f(x)=0\,, \tag{3.41}\] as this is the zero-mode wave function of the Liouville theory as we explain in the next section. ### Uncrossed four-point function Next, let us consider the small \(\lambda\) expansion of the uncrossed four-point function \[\begin{split}\widetilde{G}_{4}&=\int\prod_{i=1}^{3 }\frac{d\theta_{i}}{2\pi}\mu(\theta_{i})e^{-\beta_{i}E(\theta_{i})}\frac{(e^ {-2\Delta_{1}};q)_{\infty}}{(e^{-\Delta_{1}+\mathrm{i}(\pm\theta_{1}\pm\theta _{3})};q)_{\infty}}\frac{(e^{-2\Delta_{2}};q)_{\infty}}{(e^{-\Delta_{2}+ \mathrm{i}(\pm\theta_{2}\pm\theta_{3})};q)_{\infty}}\\ &=\int\prod_{i=1}^{3}\frac{d\theta_{i}}{2\pi}e^{-\lambda^{-1}F+h+ \mathcal{O}(\lambda)},\end{split} \tag{3.42}\] where \[\begin{split} F&=\sum_{i=1}^{3}\Biggl{[}2\left(\theta_{i}- \frac{\pi}{2}\right)^{2}-2\beta_{i}\cos\theta_{i}\Biggr{]}+\text{Li}_{2}(e^{-2 \Delta_{1}})+\text{Li}_{2}(e^{-2\Delta_{2}})\\ &\qquad-\text{Li}_{2}(e^{-\Delta_{1}+\text{i}(\pm\theta_{1}\pm \theta_{3})})-\text{Li}_{2}(e^{-\Delta_{2}+\text{i}(\pm\theta_{2}\pm\theta_{3} )}),\\ h&=\sum_{i=1}^{3}\Biggl{[}\frac{1}{2}\beta_{i}\cos \theta_{i}+\log(2\sin\theta_{i})\Biggr{]}+\frac{1}{2}\log(1-e^{-2\Delta_{1}}) +\frac{1}{2}\log(1-e^{-2\Delta_{2}})\\ &\qquad-\frac{1}{2}\log(1-e^{-\Delta_{1}+\text{i}(\pm\theta_{1} \pm\theta_{3})})-\frac{1}{2}\log(1-e^{-\Delta_{2}+\text{i}(\pm\theta_{2}\pm \theta_{3})}).\end{split} \tag{43}\] The saddle point equation reads \[\begin{split}\frac{1}{2}\frac{\partial F}{\partial\theta_{1}}& =2\theta_{1}-\pi+\beta_{1}\sin\theta_{1}+\arcsin\left(\frac{\sin (\theta_{1}+\theta_{3})}{e^{\Delta_{1}}-\cos(\theta_{1}+\theta_{3})}\right)+ \arcsin\left(\frac{\sin(\theta_{1}-\theta_{3})}{e^{\Delta_{1}}-\cos(\theta_{1 }-\theta_{3})}\right),\\ \frac{1}{2}\frac{\partial F}{\partial\theta_{2}}&=2 \theta_{2}-\pi+\beta_{2}\sin\theta_{2}+\arcsin\left(\frac{\sin(\theta_{2}+ \theta_{3})}{e^{\Delta_{2}}-\cos(\theta_{2}+\theta_{3})}\right)+\arcsin\left( \frac{\sin(\theta_{2}-\theta_{3})}{e^{\Delta_{2}}-\cos(\theta_{2}-\theta_{3}) }\right),\\ \frac{1}{2}\frac{\partial F}{\partial\theta_{3}}&=2 \theta_{3}-\pi+\beta_{3}\sin\theta_{3}+\arcsin\left(\frac{\sin(\theta_{1}+ \theta_{3})}{e^{\Delta_{1}}-\cos(\theta_{1}+\theta_{3})}\right)-\arcsin\left( \frac{\sin(\theta_{1}-\theta_{3})}{e^{\Delta_{1}}-\cos(\theta_{1}-\theta_{3}) }\right)\\ &\qquad+\arcsin\left(\frac{\sin(\theta_{2}+\theta_{3})}{e^{\Delta _{2}}-\cos(\theta_{2}+\theta_{3})}\right)-\arcsin\left(\frac{\sin(\theta_{2}- \theta_{3})}{e^{\Delta_{2}}-\cos(\theta_{2}-\theta_{3})}\right).\end{split} \tag{44}\] As in the previous subsection, one can solve this saddle point equation order by order in the small \(\Delta_{i}\) expansion. To this end, it is convenient to parameterize \(\beta_{i}\) (\(i=1,2,3\)) as \[\beta_{1}=\frac{u-\phi_{1}}{\cos u},\quad\beta_{2}=\frac{u-\phi_{2}}{\cos u}, \quad\beta_{3}=\frac{\phi_{1}+\phi_{2}}{\cos u}. 
\tag{45}\] Then the saddle point solution is expanded as \[\begin{split}\theta_{1}&=\frac{\pi}{2}-u+\frac{a_ {1}+\tan\phi_{1}}{2}\Delta_{1}+\frac{a_{2}+\tan\phi_{2}}{2}\Delta_{2}+b_{1} \Delta_{1}^{2}+b_{2}\Delta_{1}\Delta_{2},\\ \theta_{2}&=\frac{\pi}{2}-u+\frac{a_{1}+\tan\phi_{1} }{2}\Delta_{1}+\frac{a_{2}+\tan\phi_{2}}{2}\Delta_{2}+c_{1}\Delta_{2}^{2}+c_{ 2}\Delta_{1}\Delta_{2},\\ \theta_{3}&=\frac{\pi}{2}-u+\frac{a_{1}-\tan\phi_{1} }{2}\Delta_{1}+\frac{a_{2}-\tan\phi_{2}}{2}\Delta_{2},\end{split} \tag{46}\] where \[\begin{split} a_{1}&=\frac{(1+\phi_{1}\tan\phi_{1} )\tan u}{1+u\tan u},\\ b_{1}&=\frac{1}{2\cos^{2}\phi_{1}}\left[-(1+u\tan u )\tan\phi_{1}+\frac{\phi_{1}(1+\phi_{1}\tan\phi_{1})\tan^{2}u}{1+u\tan u} \right],\\ b_{2}&=\frac{1}{2\cos^{2}\phi_{1}}\left[(1+u\tan u )\tan\phi_{2}+\frac{\phi_{1}(1+\phi_{2}\tan\phi_{2})\tan^{2}u}{1+u\tan u}-(1+ \phi_{2}\tan\phi_{2}+\phi_{1}\tan\phi_{2})\tan u\right],\\ a_{2}&=a_{1}\Big{|}_{\phi_{1}\leftrightarrow\phi_{2 }},\quad c_{1}=b_{1}\Big{|}_{\phi_{1}\leftrightarrow\phi_{2}},\quad c_{2}=b_{2} \Big{|}_{\phi_{1}\leftrightarrow\phi_{2}}.\end{split} \tag{47}\] We also expand the saddle point value of \(F\) up to \(\mathcal{O}(\Delta^{2})\) \[-F_{*}=f_{0}+\sum_{i=1,2}f_{i}\Delta_{i}+\frac{1}{2}\sum_{i,j=1,2}I_{ij}\Delta_{i }\Delta_{j}. \tag{3.48}\] Here \(f_{0}\) is the leading order free energy (3.11) and \(f_{i}=\log\frac{\cos^{2}u}{\cos^{2}\phi_{i}}\) (\(i=1,2\)). We find that the diagonal part of \(I_{ij}\) is equal to the corresponding \(I\) in the two-point function (3.34) \[I_{ii}=I\big{|}_{\phi=\phi_{i}}=\frac{f(\phi_{i})f(-\phi_{i})}{1+u\tan u}, \quad(i=1,2), \tag{3.49}\] and the off-diagonal part \(I_{12}\) is given by \[I_{12}=\frac{f(\phi_{1})f(\phi_{2})}{1+u\tan u}, \tag{3.50}\] where \(f(x)\) is defined in (3.35). The computation of the one-loop determinant \(\det F_{ij}\) is almost parallel to the two-point function. After some algebra, we find the uncrossed four-point function at the one-loop level \[G_{4}=e^{\frac{1}{2\lambda}I_{ij}\Delta_{i}\Delta_{j}}\prod_{i=1,2}\left[\frac {\cos^{2}u}{\cos^{2}\phi_{i}}(1+\lambda\mathcal{A}_{i})\right]^{\frac{\Delta_ {i}}{\lambda}}, \tag{3.51}\] where \(\mathcal{A}_{i}\) is the same as the one-loop correction \(\mathcal{A}\) (3.38) appeared in the two-point function \[\mathcal{A}_{i}=\mathcal{A}|_{\phi=\phi_{i}}. \tag{3.52}\] Our result (3.51) is a generalization of the one-loop computation of the two-point function in [18]. We should stress that the factor \(\frac{1}{2\lambda}I_{ij}\Delta_{i}\Delta_{j}\) was not considered in [18], but it should be included in the scaling regime (3.30). #### 3.3.1 Relation to the energy fluctuation If we normalize the four-point function (3.51) by the two-point function (3.37) at the one-loop level, we find \[\frac{G_{4}}{G_{2}(\phi_{1})G_{2}(\phi_{2})}=e^{\frac{\Delta_{1}\Delta_{2}}{ \lambda}I_{12}}. \tag{3.53}\] This relation suggests that \(I_{12}\) can be thought of as the interaction term of the two particles in the bulk spacetime corresponding to the boundary operators with dimension \(\Delta_{1}\) and \(\Delta_{2}\). As discussed in [5], this interaction can be understood from the coupling of the matter operator to the energy fluctuation \(\widehat{H}=H-\langle H\rangle\). Let us repeat the argument in [5]. The saddle-point value of the energy is \[E=-\frac{2\cos\theta_{*}}{\lambda}=-\frac{2\sin u}{\lambda}, \tag{3.54}\] where we used (3.9). Thus the energy fluctuation is related to \(\delta u\) by \[\delta E=-\frac{2\cos u}{\lambda}\delta u. 
\tag{3.55}\] Using the relation \[\phi=\left(1-\frac{2\tau}{\beta}\right)u, \tag{3.56}\] the fluctuation of \(\phi\) under the variation of \(\beta\) is written as \[\begin{split}\delta\phi&=\frac{2\tau u}{\beta}\frac{\delta\beta}{\beta}+\phi\frac{\delta u}{u}\\ &=(1+u\tan u-\phi\tan u)\delta u.\end{split} \tag{3.57}\] Here we have used \[\frac{\delta\beta}{\beta}=\delta\log\beta=\delta\log\frac{2u}{\cos u}=(1+u\tan u)\frac{\delta u}{u}. \tag{3.58}\] Then the change of the two-point function is \[\begin{split}\delta\log G_{2}&=\delta\log\left(\frac{\cos^{2}u}{\cos^{2}\phi}\right)^{\frac{\Delta}{\lambda}}\\ &=\frac{2\Delta}{\lambda}(\tan u\delta u-\tan\phi\delta\phi)\\ &=-\frac{2\Delta}{\lambda}f(\phi)\delta u,\end{split} \tag{3.59}\] where \(f(x)\) is defined in (3.35). From (3.55), the variance of \(\delta u\) is estimated as \[\begin{split}\langle(\delta u)^{2}\rangle&=\frac{\lambda^{2}}{4\cos^{2}u}\langle(\delta E)^{2}\rangle\\ &=\frac{\lambda^{2}}{4\cos^{2}u}\partial_{\beta}^{2}\log Z.\end{split} \tag{3.60}\] Plugging the leading order free energy \[\log Z=\frac{2}{\lambda}(-u^{2}+2u\tan u) \tag{3.61}\] into (3.60), we find \[\langle(\delta u)^{2}\rangle=\frac{\lambda}{4}\frac{1}{1+u\tan u}. \tag{3.62}\] Finally, combining (3.59) and (3.62) we find \[\langle\delta\log G_{2}(\phi_{1})\delta\log G_{2}(\phi_{2})\rangle=\frac{\Delta_{1}\Delta_{2}}{\lambda}\frac{f(\phi_{1})f(\phi_{2})}{1+u\tan u}=\frac{\Delta_{1}\Delta_{2}}{\lambda}I_{12}. \tag{3.63}\] This precisely matches the interaction we found in (3.53). This result implies that the off-diagonal part \(I_{12}\) represents a total energy exchange of the external operators. Note that \(\delta u\) corresponds to \(\sqrt{\lambda}\varepsilon\) in (3.14) and the variance \(\langle(\delta u)^{2}\rangle\) in (3.62) agrees with the one obtained from the quadratic action for \(\varepsilon\) in (3.14). ## 4 One-loop correction from Liouville theory In this section, we will show that the one-loop correction to the two- and four-point functions obtained in the previous section can be reproduced from the Liouville theory. As shown in [11], the double scaling limit of the \(G\Sigma\) action of the SYK model reduces to the Liouville action for the bi-local field \(g(\tau_{1},\tau_{2})\) \[S=\frac{1}{8\lambda}\int d\tau d\nu\left[-\frac{1}{2}(\partial_{\tau}g)^{2}+\frac{1}{2}(\partial_{\nu}g)^{2}-2e^{g}\right], \tag{4.1}\] where \[\tau=\tau_{1}-\tau_{2},\quad\nu=\tau_{1}+\tau_{2}. \tag{4.2}\] Introducing the coordinates \(x,y\) by \[x=u-\tau\cos u,\quad y=\nu\cos u, \tag{4.3}\] the Liouville action is written as \[S=\frac{1}{8\lambda}\int_{-u}^{u}dx\int_{0}^{4u}dy\left[-\frac{1}{2}(\partial_{x}g)^{2}+\frac{1}{2}(\partial_{y}g)^{2}-\frac{2}{\cos^{2}u}e^{g}\right]. \tag{4.4}\] Note that \(x\) corresponds to \(\phi\) in (3.25). We assumed that \(\tau_{1}\) and \(\tau_{2}\) are an ordered pair of points on the thermal circle \(S^{1}_{\beta}\) \[0<\tau_{2}<\tau_{1}<\beta. \tag{4.5}\] Then the ranges of \(\tau,\nu\) and \(x,y\) are related by \[0<\tau<\beta,\ 0<\nu<2\beta\quad\Rightarrow\quad-u<x<u,\ 0<y<4u, \tag{4.6}\] where \(\beta\) and \(u\) are related by (3.10). The equation of motion following from the action (4.4) is \[\partial_{x}^{2}g-\partial_{y}^{2}g-\frac{2}{\cos^{2}u}e^{g}=0. \tag{4.7}\] One can easily see that \[g_{\rm cl}(x)\,=\,\log\left(\frac{\cos^{2}u}{\cos^{2}x}\right)\,, \tag{4.8}\] is a "static" (i.e. independent of \(y\)) classical solution of the equation of motion (4.7), with boundary conditions \(g(x=\pm u)=0\).
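As a quick sanity check, the solution (4.8) can be verified directly with a few lines of sympy — a minimal sketch, using only the equation of motion (4.7) restricted to \(y\)-independent configurations:

```python
# Minimal sympy sketch: g_cl(x) = log(cos^2 u / cos^2 x) solves the static
# Liouville equation of motion (4.7) and obeys the boundary conditions g(+-u) = 0.
import sympy as sp

x, u = sp.symbols('x u', real=True)
g_cl = sp.log(sp.cos(u)**2 / sp.cos(x)**2)

# for a y-independent field, (4.7) reduces to  d_x^2 g - (2/cos^2 u) e^g = 0
eom = sp.diff(g_cl, x, 2) - 2 / sp.cos(u)**2 * sp.exp(g_cl)

print(sp.simplify(eom))                     # -> 0
print(g_cl.subs(x, u), g_cl.subs(x, -u))    # -> 0 0   (boundary conditions)
```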
Let us consider the expansion of the Liouville action (4.4) around the classical solution (4.8) \[g(x,y)=g_{\rm cl}(x)+\sqrt{\lambda}\varepsilon(x,y). \tag{4.9}\] The classical action is given by \[\begin{split} S_{\rm cl}&=\frac{1}{8\lambda}\int_{- u}^{u}dx\int_{0}^{4u}dy\left[-\frac{1}{2}(\partial_{x}g_{\rm cl})^{2}-\frac{2}{ \cos^{2}u}e^{g_{\rm cl}}\right]\\ &=\frac{2}{\lambda}(u^{2}-2u\tan u),\end{split} \tag{4.10}\] which agrees with the leading order free energy in (3.11). The quadratic part of the action of \(\varepsilon\) becomes \[S_{2}=\frac{1}{8}\int dxdy\left[-\frac{1}{2}(\partial_{x}\varepsilon)^{2}+\frac{ 1}{2}(\partial_{y}\varepsilon)^{2}-\frac{2}{\cos^{2}x}\frac{\varepsilon^{2}}{ 2}\right]. \tag{4.11}\] Then the propagator of \(\varepsilon\) is defined by \[\frac{1}{8}\left(\partial_{x}^{2}-\partial_{y}^{2}-\frac{2}{\cos^{2}x}\right) \langle\varepsilon(x,y)\varepsilon(x^{\prime},y^{\prime})\rangle=\delta(x-x^ {\prime})\widehat{\delta}(y-y^{\prime}), \tag{4.12}\] where \(\widehat{\delta}(y-y^{\prime})\) is the periodically extended \(\delta\)-function \[\widehat{\delta}(y-y^{\prime})=\sum_{m\in\mathbb{Z}}\delta(y-y^{\prime}+4um)= \sum_{n\in\mathbb{Z}}\frac{e^{\mathrm{i}\widetilde{n}(y-y^{\prime})}}{4u}, \tag{4.13}\] with \(\widetilde{n}=n/v\) (see (3.12) for the relation between \(u\) and \(v\)). Note that the translation symmetry of \(x\)-coordinate is spontaneously broken by choosing the classical background \(g_{\mathrm{cl}}\), but the \(y\)-coordinate remains periodic with periodicity \(4u=2\pi v\). The propagator is also written as \[\langle\varepsilon(x,y)\varepsilon(x^{\prime},y^{\prime})\rangle=\frac{2}{u} \sum_{n\in\mathbb{Z}}D_{n}(x,x^{\prime})e^{\mathrm{i}\widetilde{n}(y-y^{ \prime})}, \tag{4.14}\] where \(D_{n}(x,x^{\prime})\) satisfies \[\left[\partial_{x}^{2}-\frac{2}{\cos^{2}x}+\widetilde{n}^{2}\right]D_{n}(x,x ^{\prime})=\delta(x-x^{\prime}). \tag{4.15}\] One can solve (4.15) under the boundary condition \[D_{n}(u,x^{\prime})=D_{n}(-u,x^{\prime})=0,\qquad D_{n}(x^{\prime},x)=D_{n}(x, x^{\prime})=D_{n}(-x,-x^{\prime}). \tag{4.16}\] As is well-known, the solution of (4.15) can be constructed from the two independent solutions of the homogeneous equation \[\left[\partial_{x}^{2}-\frac{2}{\cos^{2}x}+\widetilde{n}^{2}\right]f_{n}(x)=0, \tag{4.17}\] whose explicit form is easily obtained as \[f_{n}^{(1)}=\cos(\widetilde{n}x)\tan x-\widetilde{n}\sin(\widetilde{n}x), \quad f_{n}^{(2)}=\sin(\widetilde{n}x)\tan x+\widetilde{n}\cos(\widetilde{n}x). \tag{4.18}\] We can take a linear combination of \(f_{n}^{(1)}(x)\) and \(f_{n}^{(2)}(x)\) so that \(f_{n}(x)\) vanishes at \(x=u\) \[f_{n}(x)=f_{n}^{(1)}(x)f_{n}^{(2)}(u)-f_{n}^{(1)}(u)f_{n}^{(2)}(x). \tag{4.19}\] Then the propagator for \(n\neq 0\) is given by \[D_{n}(x,x^{\prime})=\frac{\theta(x-x^{\prime})f_{n}(x)f_{n}(-x^{\prime})+ \theta(x^{\prime}-x)f_{n}(-x)f_{n}(x^{\prime})}{\{f_{n}(x),f_{n}(-x)\}}, \tag{4.20}\] where the denominator is the Wronskian \[\begin{split}\{f_{n}(x),f_{n}(-x)\}&=\partial_{x}f_{n} (x)f_{n}(-x)-f_{n}(x)\partial_{x}f_{n}(-x)\\ &=2(-1)^{n}\widetilde{n}^{2}(\widetilde{n}^{2}-1)\tan u.\end{split} \tag{4.21}\] For the zero mode, we have3 Footnote 3: The zero-mode propagator \(D_{0}(x,x^{\prime})\) has been considered in [12]. \[D_{0}(x,x^{\prime})=-\frac{\theta(x-x^{\prime})f(x)f(-x^{\prime})+\theta(x^{ \prime}-x)f(-x)f(x^{\prime})}{2\tan u(1+u\tan u)}, \tag{4.22}\] where \(f(x)\) is defined in (3.35). 
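As a consistency check of these mode functions, one can verify with sympy (a minimal sketch) that \(f_{n}^{(1,2)}\) in (4.18) solve the homogeneous equation (4.17) for generic \(\widetilde{n}\), and that the zero-mode wave function \(f(x)\) of (3.35) solves the \(\widetilde{n}=0\) equation and vanishes at \(x=u\), in accordance with (3.41):

```python
# Minimal sympy sketch: the explicit solutions (4.18) satisfy the mode
# equation (4.17), and f(x) of (3.35) is the zero mode obeying (3.41).
import sympy as sp

x, u, n = sp.symbols('x u n', real=True)   # here n stands for n-tilde

def L(F, nsq):
    """Differential operator of (4.15)/(4.17): d_x^2 - 2/cos^2 x + nsq."""
    return sp.diff(F, x, 2) - 2 / sp.cos(x)**2 * F + nsq * F

f1 = sp.cos(n * x) * sp.tan(x) - n * sp.sin(n * x)                        # f_n^(1)
f2 = sp.sin(n * x) * sp.tan(x) + n * sp.cos(n * x)                        # f_n^(2)
f0 = (1 + u * sp.tan(u)) * sp.tan(x) - (1 + x * sp.tan(x)) * sp.tan(u)    # f(x), eq. (3.35)

print(sp.simplify(L(f1, n**2)))      # -> 0
print(sp.simplify(L(f2, n**2)))      # -> 0
print(sp.simplify(L(f0, 0)))         # -> 0   (zero-mode equation, cf. (3.41))
print(sp.simplify(f0.subs(x, u)))    # -> 0   (boundary condition f(u) = 0)
```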
Note that the zero-mode wavefunction \(f(x)\) is formally related to the non-zero mode \(f_{n}(x)\) as \[f(x)=\lim_{n\to 0}\frac{1}{\widetilde{n}}f_{n}(x). \tag{4.23}\] Plugging \(D_{0}(x,x^{\prime})\) (4.22) and \(D_{n}(x,x^{\prime})\) (4.20) into (4.14), we find \[\langle\varepsilon(x,y)\varepsilon(x^{\prime},y^{\prime})\rangle=\frac{1}{u \tan u}\left[-\frac{f(x)f(-x^{\prime})}{1+u\tan u}+\sum_{|n|\geq 1}\frac{(-1)^{n}f _{n}(x)f_{n}(-x^{\prime})}{\widetilde{n}^{2}(\widetilde{n}^{2}-1)}e^{i \widetilde{n}(y-y^{\prime})}\right]. \tag{4.24}\] Here we assumed \(x>x^{\prime}\). The sum over \(n\) can be performed using the formula in (B.1) and we find \[\sum_{|n|\geq 1}\frac{(-1)^{n}f_{n}(x)f_{n}(-x^{\prime})}{\widetilde{n}^{2}( \widetilde{n}^{2}-1)}e^{i\widetilde{n}(y-y^{\prime})}=f(x)f(-x^{\prime}). \tag{4.25}\] Finally, the propagator becomes \[\langle\varepsilon(x,y)\varepsilon(x^{\prime},y^{\prime})\rangle=\frac{f(x)f( -x^{\prime})}{1+u\tan u},\quad(x>x^{\prime}). \tag{4.26}\] Note that this propagator is \(y\)-independent; the \(y\)-independence of the time-ordered four-point function was also mentioned in [5]. Note also that (4.26) is finite at the coincident point \((x,y)=(x^{\prime},y^{\prime})\) and hence there is no need of the normal ordering to define the bi-local operator \(e^{\frac{\Delta}{\lambda}g(x,y)}\) at the perturbative level. ### One-loop correction of two-point function Let us compute the one-loop correction to the two-point function from the Liouville theory. The two-point function is defined by \[G_{2}=\left\langle e^{\frac{\Delta}{\lambda}g(\phi,y_{0})}\right\rangle=\frac {1}{Z}\int\mathcal{D}g\,e^{-S}e^{\frac{\Delta}{\lambda}g(\phi,y_{0})}, \tag{4.27}\] where \(S\) is the Liouville action (4.4). \(y_{0}\) is some reference point but the result is independent of \(y_{0}\) as we will see below. To compute the one-loop correction, we expand the action around the classical solution (4.8) as \[S-S_{\rm cl}=\sum_{n=2}^{\infty}S_{n}=S_{2}+S_{\rm int} \tag{4.28}\] where \[S_{2} =\frac{1}{16}\int dxdy\,\varepsilon(x,y)K\varepsilon(x,y),\qquad K= \partial_{x}^{2}-\partial_{y}^{2}-\frac{2}{\cos^{2}x}, \tag{4.29}\] \[S_{n} =-\frac{\lambda^{\frac{n}{2}-1}}{4}\int dxdy\frac{1}{\cos^{2}x} \frac{\varepsilon(x,y)^{n}}{n!},\quad(n\geq 3),\] \[S_{\text{int}} =\sum_{n\geq 3}S_{n}.\] Then the two-point function is written as \[G_{2}=e^{\frac{\Delta}{\lambda}g_{\text{cl}}(\phi)}\Big{\langle}e^{\frac{ \Delta}{\sqrt{\lambda}}\varepsilon(\phi,y_{0})-S_{\text{int}}}\Big{\rangle}_{S _{2}}=\left(\frac{\cos^{2}u}{\cos^{2}\phi}\right)^{\frac{\Delta}{\lambda}} \Big{\langle}e^{\frac{\Delta}{\sqrt{\lambda}}\varepsilon(\phi,y_{0})-S_{\text {int}}}\Big{\rangle}_{S_{2}} \tag{4.30}\] where \(\langle\cdots\rangle_{S_{2}}\) is defined by \[\langle\cdots\rangle_{S_{2}}=\int\mathcal{D}\varepsilon(\cdots)e^{-S_{2}}. 
\tag{4.31}\] At the one-loop level, we find \[\Big{\langle}e^{\frac{\Delta}{\sqrt{\lambda}}\varepsilon(\phi,y _{0})-S_{\text{int}}}\Big{\rangle}_{S_{2}} =\left\langle\Bigg{[}1+\frac{\Delta}{\sqrt{\lambda}}\varepsilon( \phi,y_{0})+\frac{\Delta^{2}}{2\lambda}\varepsilon(\phi,y_{0})^{2}+\cdots \Bigg{]}\Bigg{[}1-S_{3}+\cdots\Bigg{]}\right\rangle_{S_{2}} \tag{4.32}\] \[=1+\frac{\Delta^{2}}{2\lambda}\langle\varepsilon(\phi,y_{0})^{2 }\rangle+\frac{\Delta}{4}\left\langle\varepsilon(\phi,y_{0})\int dxdy\frac{1} {\cos^{2}x}\frac{\varepsilon(x,y)^{3}}{3!}\right\rangle_{S_{2}}+\cdots\] The second term reproduces \(I\) in (3.34) \[I=\langle\varepsilon(\phi,y_{0})^{2}\rangle=\frac{f(\phi)f(-\phi)}{1+u\tan u}, \tag{4.33}\] and the last term corresponds to \(\mathcal{A}\) in (3.38) \[\mathcal{A} =\frac{1}{4}\left\langle\varepsilon(\phi,y_{0})\int dxdy\frac{1} {\cos^{2}x}\frac{\varepsilon(x,y)^{3}}{3!}\right\rangle_{S_{2}} \tag{4.34}\] \[=\frac{1}{8}\int dxdy\frac{1}{\cos^{2}x}\langle\varepsilon(\phi, y_{0})\varepsilon(x,y)\rangle\langle\varepsilon(x,y)^{2}\rangle.\] From the expression of the propagator (4.26), one can show that \(\mathcal{A}\) is independent of \(y_{0}\). Using this property, one can show that \(\mathcal{A}\) in (4.34) satisfies the same relation (3.40) as we found for the one-loop correction in the previous section \[\left(\partial_{\phi}^{2}-\frac{2}{\cos^{2}\phi}\right)\mathcal{A} =\left(\partial_{\phi}^{2}-\partial_{y_{0}^{2}}-\frac{2}{\cos^{2} \phi}\right)\mathcal{A} \tag{4.35}\] \[=\int dxdy\frac{1}{\cos^{2}x}\delta(x-\phi)\widehat{\delta}(y-y_ {0})\langle\varepsilon(x,y)^{2}\rangle\] \[=\frac{1}{\cos^{2}\phi}\langle\varepsilon(\phi,y_{0})^{2}\rangle\] \[=\frac{1}{\cos^{2}\phi}I.\] This indeed reproduces (3.40). ### Uncrossed four-point function Next, let us consider the uncrossed four-point function. In the Liouville language, the uncrossed four-point function is given by \[G_{4}(\phi_{1},\phi_{2})=\left\langle e^{\frac{\Delta_{1}}{\lambda}g(\phi_{1},y_ {1})}e^{\frac{\Delta_{2}}{\lambda}g(-\phi_{2},y_{2})}\right\rangle. \tag{4.36}\] We have changed the sign of \(\phi\) for one of the bi-local operator \(e^{\frac{\Delta}{\lambda}g}\). Note that \(\tau\) and \(\phi\) are related by \[\phi(\tau)=\left(1-\frac{2\tau}{\beta}\right)u,\quad\phi(\beta-\tau)=-\phi( \tau), \tag{4.37}\] and thus the sign flip of \(\phi\) corresponds to \(\tau\to\beta-\tau\). The necessity of the sign flip \(\phi_{2}\to-\phi_{2}\) in (4.36) is understood from the following picture (4.38) Namely, \(\phi\) of the bi-local operator is defined with respect to the Hartle-Hawking state \(|0\rangle\)[16, 17] at the bottom of the figure in (4.38) and we have to choose \[\phi_{1}=\phi(\tau_{12}),\quad\phi_{2}=\phi(\beta-\tau_{34})=-\phi(\tau_{34}). 
\tag{4.39}\] One can easily generalize the perturbative computation in the previous subsection to the four-point function \[\begin{split} G_{4}&=e^{\frac{\Delta_{1}}{\lambda}g_{\text{cl}}(\phi_{1})}e^{\frac{\Delta_{2}}{\lambda}g_{\text{cl}}(-\phi_{2})}\left\langle e^{\frac{\Delta_{1}}{\sqrt{\lambda}}\varepsilon(\phi_{1},y_{1})+\frac{\Delta_{2}}{\sqrt{\lambda}}\varepsilon(-\phi_{2},y_{2})-S_{\text{int}}}\right\rangle_{S_{2}}\\ &=\prod_{i=1,2}\left(\frac{\cos^{2}u}{\cos^{2}\phi_{i}}\right)^{\frac{\Delta_{i}}{\lambda}}\left\langle 1+\frac{\Delta_{1}^{2}}{2\lambda}\varepsilon(\phi_{1},y_{1})^{2}+\frac{\Delta_{2}^{2}}{2\lambda}\varepsilon(-\phi_{2},y_{2})^{2}+\frac{\Delta_{1}\Delta_{2}}{\lambda}\varepsilon(\phi_{1},y_{1})\varepsilon(-\phi_{2},y_{2})\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\left(\frac{\Delta_{1}}{\sqrt{\lambda}}\varepsilon(\phi_{1},y_{1})+\frac{\Delta_{2}}{\sqrt{\lambda}}\varepsilon(-\phi_{2},y_{2})\right)S_{3}+\mathcal{O}(\lambda^{2})\right\rangle_{S_{2}}\\ &=e^{\frac{1}{2\lambda}I_{ij}\Delta_{i}\Delta_{j}}\prod_{i=1,2}\left[\frac{\cos^{2}u}{\cos^{2}\phi_{i}}(1+\lambda\mathcal{A}_{i})\right]^{\frac{\Delta_{i}}{\lambda}}.\end{split} \tag{4.40}\] Using the explicit form of the propagator of \(\varepsilon(x,y)\) in (4.26), one can see that this computation reproduces the result for the four-point function obtained in the previous section. For instance, from (4.40) one can read off \(I_{12}\) as \[I_{12}=\langle\varepsilon(\phi_{1},y_{1})\varepsilon(-\phi_{2},y_{2})\rangle=\frac{f(\phi_{1})f(\phi_{2})}{1+u\tan u}, \tag{4.41}\] which reproduces \(I_{12}\) in (3.50) obtained from the saddle point analysis. One can also show that \(I_{ii},{\cal A}_{i}\) (\(i=1,2\)) are reproduced from (4.40) in a similar manner. ## 5 Out-of-time-order correlators In this section, we study the direct evaluation of the summation over \(n\) in (4.24) for the out-of-time-ordered case: \(\tau_{1}>\tau_{3}>\tau_{2}>\tau_{4}\). Let us first consider a special case with \(\tau_{3}=\pi\) and \(\tau_{4}=0\) as in [5], where we use \(\beta=2\pi\) units. This corresponds to \(x^{\prime}=0\) and \(y^{\prime}=u\), as well as \(\pi<\tau_{1}<2\pi\) and \(0<\tau_{2}<\pi\). In this case, one of the wave functions reduces to \[f_{n}(0)\,=\,-\widetilde{n}\,f_{n}^{(1)}(u)\,, \tag{5.1}\] so that the Fourier series of the non-zero modes is rewritten as \[\sum_{|n|\geq 1}D_{n}(x\,,x^{\prime})\,e^{{\rm i}\widetilde{n}(y-y^{\prime})}\,=\,\frac{\cot u}{2}\sum_{|n|\geq 1}\frac{(-1)^{n+1}}{\widetilde{n}(\widetilde{n}^{2}-1)}\,f_{n}(x)f_{n}^{(1)}(u)\,e^{{\rm i}\widetilde{n}y}\,e^{-\frac{{\rm i}n\pi}{2}}\,. \tag{5.2}\] The summation over \(n\) can be explicitly performed by using the formulae (B.2) - (B.4) as \[\sum_{|n|\geq 1}D_{n}(x\,,x^{\prime})\,e^{{\rm i}\widetilde{n}(y-y^{\prime})}\,=\,\frac{1}{2}\left[\,-\tan x+(1+x\tan x)\tan u\,-\,u\,\frac{\cos(2u-y)}{\cos u\cos x}\right]\,. \tag{5.3}\] Finally combining with the zero mode contribution (4.22), the out-of-time-ordered two-point function is found as \[\langle\varepsilon(x,y)\varepsilon(0,u)\rangle\,=\,\frac{\tan^{2}u(1+x\tan x)}{1+u\tan u}\,-\,\frac{\cos(2u-y)}{\cos u\cos x}\,. \tag{5.4}\] This agrees with the results found in [19; 20]. One can also check that the low temperature limit of this result agrees with the one found in [5] (see section 6).
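The pieces entering this result can also be cross-checked against each other. The following sympy sketch confirms that the zero mode (4.22) at \(x^{\prime}=0\), combined with the resummed non-zero modes (5.3) and the overall factor \(2/u\) of the mode expansion (4.14), reproduces (5.4):

```python
# Minimal sympy sketch: (2/u) * [ D_0(x, 0) + (5.3) ] equals (5.4).
import sympy as sp

x, y, u = sp.symbols('x y u', positive=True)

def f(z):
    # zero-mode wave function, eq. (3.35)
    return (1 + u * sp.tan(u)) * sp.tan(z) - (1 + z * sp.tan(z)) * sp.tan(u)

# zero-mode propagator (4.22) at x' = 0 (taking x > 0); note f(0) = -tan(u)
D0 = -f(x) * f(0) / (2 * sp.tan(u) * (1 + u * sp.tan(u)))

# resummed non-zero modes, eq. (5.3)
nonzero = sp.Rational(1, 2) * (-sp.tan(x) + (1 + x * sp.tan(x)) * sp.tan(u)
                               - u * sp.cos(2 * u - y) / (sp.cos(u) * sp.cos(x)))

lhs = (2 / u) * (D0 + nonzero)
rhs = (sp.tan(u)**2 * (1 + x * sp.tan(x)) / (1 + u * sp.tan(u))
       - sp.cos(2 * u - y) / (sp.cos(u) * sp.cos(x)))          # eq. (5.4)

print(sp.simplify(lhs - rhs))   # -> 0
```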
In order to obtain the Lyapunov exponent, we set \(x=0\) and \(y=-{\rm i}2\pi vt/\beta\), which gives \[\sum_{|n|\geq 1}D_{n}(x\,,x^{\prime})\,e^{{\rm i}\widetilde{n}(y-y^{\prime})}\,=\,-\,\frac{u\cos u}{2}\,\cosh\left(\frac{2\pi vt}{\beta}\right)\,+\,\cdots\,. \tag{5.5}\] From this we find \[\lambda_{L}\,=\,\frac{2\pi v}{\beta}\,, \tag{5.6}\] which agrees with the Lyapunov exponent found in [5]. Next, let us try to reproduce the \(x^{\prime}\)-dependence as well. For this purpose, we keep \(x^{\prime}\) general (but assume close to \(0\)) and set \(y^{\prime}=u\). We also assume \(\pi<\tau_{1}<2\pi\) and \(0<\tau_{2}<\pi\) and use the formulae (B.2) - (B.4). This leads to \[\begin{split}\sum_{|n|\geq 1}D_{n}(x\,,x^{\prime})\,e^{{\rm i}\widetilde{n}(y-y^{\prime})}\,&=\,\frac{1}{2}\bigg{[}\tan u\,f^{(2)}(x)f^{(2)}(x^{\prime})\,+\,\tan x^{\prime}f^{(2)}(x)\,-\,\tan x\,f^{(2)}(x^{\prime})\\ &\qquad-\,\frac{(1+u\tan u)\tan x\tan x^{\prime}}{\tan u}\,+\,u(y-2u)\tan u\tan x\tan x^{\prime}\,-\,\frac{u\cos(2u-y)}{\cos u\cos x\cos x^{\prime}}\bigg{]}\,,\end{split} \tag{5.7}\] where \(f^{(2)}(x)=1+x\tan x\). Finally combining with the zero mode contribution (4.22), the out-of-time-ordered two-point function is found as \[\langle\varepsilon(x,y)\varepsilon(x^{\prime},y^{\prime})\rangle\,=\,\frac{\tan^{2}u}{1+u\tan u}\,f^{(2)}(x)f^{(2)}(x^{\prime})\,-\,\frac{\cos(2u-y)}{\cos u\cos x\cos x^{\prime}}\,+\,(y-2u)\tan u\tan x\tan x^{\prime}\,. \tag{5.8}\] This result completely agrees with the results found in [19; 20]. ## 6 Low temperature limit In this section, we will give low temperature expressions of our results obtained above, and compare with previous works [5]. Let us start from the two-point function. For the function \(\mathcal{A}\) defined in (3.38), using \[u\,=\,\frac{\pi}{2}\,-\,\frac{\pi}{\beta}\,,\qquad\phi\,=\,\frac{\pi}{2}\,-\,\frac{\pi\tau_{12}}{\beta}\,, \tag{6.1}\] the low temperature limit of \(\mathcal{A}\) is found as \[\mathcal{A}\,=\,\frac{\beta}{2\pi^{2}}\left[1\,-\,\frac{\frac{\pi^{2}\tau_{12}}{\beta}(1-\frac{\tau_{12}}{\beta})}{\sin^{2}(\frac{\pi\tau_{12}}{\beta})}\,+\,\frac{\pi(1-\frac{2\tau_{12}}{\beta})}{\tan\frac{\pi\tau_{12}}{\beta}}\right]\,. \tag{6.2}\] Up to an overall coefficient, this agrees with \(\langle\mathcal{C}(u_{1},u_{2})\rangle\) computed in Schwarzian theory in the next subsection 6.1. The low temperature limit of the function \(f(x)\) defined in (3.35) is given by \[f(x)\,=\,-\,\frac{\beta}{\pi}\left(1\,-\,\frac{\pi\tau_{12}}{\beta}\cot\frac{\pi\tau_{12}}{\beta}\right)\,. \tag{6.3}\] This guarantees that the four-point function in the low temperature limit agrees with the low temperature result found in [5]. This also shows that the low temperature limit of the function \(I\) defined in (3.34) agrees with \(\langle\mathcal{B}(u_{1},u_{2})\mathcal{B}(u_{3},u_{4})\rangle\) computed in Schwarzian theory, up to an overall coefficient. In appendix C, we will also show that the zero-temperature factorization holds at arbitrary \(\lambda\) in the DSSYK model.
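These low temperature expressions are straightforward to confirm numerically. The snippet below is a minimal sketch: it solves \(\beta=2u/\cos u\) for \(u\), builds \(\phi\) from (3.25), and compares \(\mathcal{A}\) of (3.38) and \(f(\phi)\) of (3.35) at large \(\beta\) (with \(\pi\tau_{12}/\beta\) held fixed) against (6.2) and (6.3); the parameter values are arbitrary choices for illustration:

```python
# Minimal numerical sketch of the low temperature limits (6.2) and (6.3).
import numpy as np
from scipy.optimize import brentq

def A_exact(u, phi):
    """One-loop correction A, eq. (3.38)."""
    U, P = 1 + u * np.tan(u), 1 + phi * np.tan(phi)
    return (-(U**2) / np.cos(phi)**2 + P**2 / np.cos(u)**2
            + phi**2 * (np.tan(u)**2 - np.tan(phi)**2) - P / U + 1) / (4 * U)

def f_exact(u, x):
    """Zero-mode wave function f(x), eq. (3.35)."""
    return (1 + u * np.tan(u)) * np.tan(x) - (1 + x * np.tan(x)) * np.tan(u)

beta, theta = 1.0e6, 0.7                      # theta = pi*tau_12/beta is held fixed
u = brentq(lambda v: 2 * v / np.cos(v) - beta, 1.0, np.pi / 2 - 1e-12)   # beta = 2u/cos(u)
tau12 = theta * beta / np.pi
phi = (1 - 2 * tau12 / beta) * u              # eq. (3.25)

A_low = beta / (2 * np.pi**2) * (1 - theta * (np.pi - theta) / np.sin(theta)**2
                                 + (np.pi - 2 * theta) / np.tan(theta))   # eq. (6.2)
f_low = -beta / np.pi * (1 - theta / np.tan(theta))                       # eq. (6.3)

print(A_exact(u, phi) / A_low, f_exact(u, phi) / f_low)   # -> both ratios tend to 1
```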
### One-loop correction from Schwarzian mode In this subsection, we study the one-loop correction of two-point functions in Schwarzian theory. Here we follow the notation of [9] and in particular \(u_{i}=2\pi\tau_{i}/\beta\). The reparametrization symmetry of the two-point function transforms \[G^{(2)}_{\Delta}(\tau_{1},\tau_{2})\,=\,\frac{1}{|\tau_{12}|^{2\Delta}}\,\Rightarrow\,\left(\frac{f^{\prime}(u_{1})f^{\prime}(u_{2})}{(f(u_{1})-f(u_{2}))^{2}}\right)^{\Delta}\,, \tag{6.4}\] where \(\tau_{ij}:=\tau_{i}-\tau_{j}\). Parametrizing \(f(u)=\tan((u+\varepsilon(u))/2)\) and expanding up to \(\varepsilon^{2}\) order, we find \[G^{(2)}_{\Delta}(\tau_{1},\tau_{2})\,=\,\frac{1}{(2\sin\frac{u_{12}}{2})^{2\Delta}}\left[1\,+\,\Delta\,\mathcal{B}(u_{1},u_{2})\,+\,\frac{\Delta^{2}}{2}\,\mathcal{B}(u_{1},u_{2})^{2}\,+\,\Delta\,\mathcal{C}(u_{1},u_{2})\,+\,\cdots\right]\,, \tag{100}\] where \[\mathcal{B}(u_{1},u_{2})\,=\,\varepsilon^{\prime}(u_{1})\,+\,\varepsilon^{\prime}(u_{2})\,-\,\frac{\varepsilon(u_{1})-\varepsilon(u_{2})}{\tan\frac{u_{12}}{2}}\,, \tag{101}\] \[\mathcal{C}(u_{1},u_{2})\,=\,-\,\left(\frac{\varepsilon^{\prime}(u_{1})^{2}+\varepsilon^{\prime}(u_{2})^{2}}{2}\right)\,+\,\frac{\left(\varepsilon(u_{1})-\varepsilon(u_{2})\right)^{2}}{4\sin^{2}(\frac{u_{12}}{2})}\,. \tag{102}\] Now we would like to evaluate the expectation value of the RHS of (100) using the Schwarzian mode propagator \[\left\langle\varepsilon(u_{1})\varepsilon(u_{2})\right\rangle\,=\,\frac{1}{2\pi C}\left[-\frac{(|u_{12}|-\pi)^{2}}{2}\,+\,(|u_{12}|-\pi)\sin|u_{12}|+a+b\cos u_{12}\right]\,, \tag{103}\] where \(C\) is the Schwarzian coupling and \(a\) and \(b\) are gauge parameters which should not appear in any physical quantities. Since the one-point function of the Schwarzian mode vanishes, we have \[\left\langle\mathcal{B}(u_{1},u_{2})\right\rangle\,=\,0\,. \tag{104}\] The two-point function of \(\mathcal{B}\) is evaluated in [9] as \[\left\langle\mathcal{B}(u_{1},u_{2})\mathcal{B}(u_{3},u_{4})\right\rangle\,=\,\frac{1}{2\pi C}\left(-2+\frac{u_{12}}{\tan\frac{u_{12}}{2}}\right)\left(-2+\frac{u_{34}}{\tan\frac{u_{34}}{2}}\right)\,. \tag{105}\] Finally we can also evaluate the expectation value of \(\mathcal{C}\) as \[\left\langle\mathcal{C}(u_{1},u_{2})\right\rangle\,=\,\frac{1}{2\pi C}\left[1\,+\,\frac{u_{12}(u_{12}-2\pi)}{4\sin^{2}(\frac{u_{12}}{2})}\,+\,\frac{(\pi-u_{12})}{\tan\frac{u_{12}}{2}}\right]\,. \tag{106}\] ## 7 Conclusions and outlook In this paper, we have studied the one-loop correction to the correlators of DSSYK from two approaches: the saddle point approximation of the exact result obtained from the chord diagrams, and the perturbative computation in the Liouville theory. We found that the relation (114) obeyed by the one-loop correction \(\mathcal{A}\) naturally follows from the computation in the Liouville theory. In particular, \(\mathcal{A}\) and \(I_{ij}\) are closely related to the propagator of the fluctuation \(\varepsilon(x,y)\) around the classical solution \(g_{\rm cl}\) in the Liouville theory. We also found that the out-of-time-order propagator \(\langle\varepsilon(x,y)\varepsilon(x^{\prime},y^{\prime})\rangle\) in the Liouville theory correctly reproduces the known result of OTOC in the literature [5; 19; 20]. We have also seen that the low temperature limit of the one-loop correction \(\mathcal{A}\) is reproduced from the corresponding computation in the Schwarzian theory. There are many interesting open questions.
The Liouville field \(g(\tau_{1},\tau_{2})\) can be thought of as a quantum analogue of the bulk geodesic length between the two points \(\tau_{1},\tau_{2}\) on the boundary. The classical solution \(g_{\rm cl}\) corresponds to the geodesic length in the semi-classical bulk geometry and \(\varepsilon(x,y)\) represents its quantum fluctuation. It would be interesting to "decode" the bulk quantum geometry defined by the Liouville field \(g(\tau_{1},\tau_{2})\) along the lines of [18]. Our analysis was restricted to the small \(\lambda,\Delta\) regime. It would be interesting to generalize our analysis to the finite \(\lambda,\Delta\) case. When \(\Delta\) becomes large, the corresponding matter operator is called the "heavy operator". It is expected that the insertion of heavy operator strongly back-reacts to the bulk geometry and the spacetime is pinched in the limit \(\Delta\to\infty\)[22]. It would be interesting to understand the bulk gravitational interpretation of this phenomenon. It is also important to understand the symmetry underlying the DSSYK. In particular, it would be interesting to understand the quantum group symmetry of DSSYK and its bulk interpretation.4 At finite \(\lambda\), it is suggested that the bulk spacetime is discretized [16] or becomes non-commutative [24]. It is very interesting to understand the bulk dual of DSSYK better. We leave these issues as interesting future problems. Footnote 4: It is curious that the \(q\)-oscillator representation of the transfer matrix of DSSYK in [15] also appears in a statistical mechanical problem known as the Asymmetric Simple Exclusion Process (ASEP) [26]. In this context, the quantum group symmetry naturally arises after mapping the problem of ASEP to the matrix product states of XXZ spin chain [27]. ###### Acknowledgements. This work was supported in part by JSPS Grant-in-Aid for Transformative Research Areas (A) "Extreme Universe" 21H05187. KO was also supported by JSPS KAKENHI Grant 22K03594. ## Appendix A Alternative derivation of one-loop determinant In this appendix, we consider the diagonalization of the Hessian \(F_{ij}\) in the two-point function. By the change of integration variables \((\theta_{1},\theta_{2})\to(\theta,x)\) \[\theta_{1}=\frac{\pi}{2}-\theta+\frac{1}{2}\Delta\tan x,\quad\theta_{2} =\frac{\pi}{2}-\theta-\frac{1}{2}\Delta\tan x, \tag{100}\] the two-point function is written as \[\widetilde{G}_{2}\sim\int\frac{d\theta dx}{\cos^{2}x}e^{-\frac{1}{\lambda}F+ h+\mathcal{O}(\lambda)} \tag{101}\] where \[\begin{split} F&=2\theta^{2}-\frac{4u}{\cos u}\sin \theta+2\Delta\left(\log\frac{\cos x}{\cos\theta}+x\tan x-\phi\frac{\cos \theta}{\cos u}\tan x\right)\\ &\quad+\frac{\Delta^{2}}{2}\left(1+\frac{u\sin\theta}{\cos u} \right)\tan^{2}x+\mathcal{O}(\Delta^{3}).\end{split} \tag{102}\] The saddle point solution is given by \[\theta_{*}=u+\Delta a+\mathcal{O}(\Delta^{2}),\quad x_{*}=\phi+\Delta b-\Delta a \phi\tan u+\mathcal{O}(\Delta^{2}), \tag{100}\] with \[a=-\frac{1+\phi\tan\phi}{2(1+u\tan u)},\quad b=-\frac{1}{2}(1+u\tan u)\tan\phi. \tag{101}\] We expand the integral around the saddle point as \[\theta=\theta_{*}+\sqrt{\lambda}\varepsilon,\quad x=x_{*}+\sqrt{\lambda} \Big{(}\Delta^{-\frac{1}{2}}s-\varepsilon\phi\tan u\Big{)}, \tag{102}\] where \(\varepsilon\) and \(s\) parameterize the fluctuation around the saddle point. 
Then, up to the quadratic order in the fluctuations \(\varepsilon,s\), we find \[\widetilde{G}_{2}\sim\frac{e^{-\frac{1}{\lambda}F_{*}+h_{*}}}{\cos^{2}x_{*}} \int d\varepsilon dse^{-F_{2}} \tag{103}\] where \[F_{2}=\sec^{2}\phi s^{2}+2(1+u\tan u)\varepsilon^{2}+\Delta(A\varepsilon^{2}+ Bs^{2})+\mathcal{O}(\Delta^{2}), \tag{104}\] with \[A =2au-\phi^{2}\tan^{2}\phi\tan^{2}u-\big{(}\phi^{2}-1\big{)}\tan^{ 2}u+\phi\tan\phi+1, \tag{105}\] \[B =\frac{1}{2}\sec^{2}\phi\left(\tan\phi(8b-4a\phi\tan u)+3\tan^{2} \phi(u\tan u+1)+u\tan u+1\right).\] As we can see from (104), the fluctuations \(\varepsilon\) and \(s\) have no mixing at the quadratic order. Finally, at the one-loop level we find \[\widetilde{G}_{2}\sim\frac{e^{-\frac{1}{\lambda}F_{*}+h_{*}}}{\cos^{2}x_{*}} \frac{1-\Delta A\langle\varepsilon^{2}\rangle-\Delta B\langle s^{2}\rangle}{ \sqrt{\sec^{2}\phi(1+u\tan u)}}, \tag{106}\] where \(\langle\varepsilon^{2}\rangle\) and \(\langle s^{2}\rangle\) are determined by the quadratic action (104) \[\langle\varepsilon^{2}\rangle=\frac{1}{4(1+u\tan u)},\quad\langle s^{2} \rangle=\frac{1}{2}\cos^{2}\phi. \tag{107}\] One can check that (106) correctly reproduces the one-loop correction \(\mathcal{A}\) in (3.38). Summation formula We find the summation formula (\(|\theta|<\pi v\)) \[\sum_{|n|\geq 1}\frac{(-1)^{n}\cos(\widetilde{m}\theta)}{\widetilde{n} ^{2}(\widetilde{n}^{2}-1)} =-\frac{\theta^{2}}{2}+\frac{2u^{2}}{3}+1-\frac{2u}{\sin 2u}\cos\theta, \tag{103}\] \[\sum_{|n|\geq 1}\frac{(-1)^{n}\sin(\widetilde{m}\theta)}{\widetilde{n} (\widetilde{n}^{2}-1)} =\theta-\frac{2u}{\sin 2u}\sin\theta,\] \[\sum_{|n|\geq 1}\frac{(-1)^{n}\cos(\widetilde{m}\theta)}{\widetilde{n} ^{2}-1} =1-\frac{2u}{\sin 2u}\cos\theta,\] \[\sum_{|n|\geq 1}\frac{(-1)^{n}\widetilde{n}\sin(\widetilde{m} \theta)}{\widetilde{n}^{2}-1} =-\frac{2u}{\sin 2u}\sin\theta,\] \[\sum_{|n|\geq 1}\frac{(-1)^{n}\widetilde{n}^{2}\cos(\widetilde{m} \theta)}{\widetilde{n}^{2}-1} =-\frac{2u}{\sin 2u}\cos\theta,\] where \(\widetilde{n}=n/v\). For the out of time ordered correlator we discussed in section 5, we also need \[\sum_{m=1}^{\infty}\frac{\cos(2m\theta)}{4m^{2}-v^{2}} = \left\{\begin{aligned} &\frac{1}{2v^{2}}\,-\, \frac{\pi^{2}\cos(v\theta-u)}{8u\sin u}\,,&&(0<\theta<\pi)\\ &\frac{1}{2v^{2}}\,-\,\frac{\pi^{2}\cos(v\theta-3u)}{8u\sin u}\,,&& (\pi<\theta<2\pi)\end{aligned}\right. \tag{104}\] \[\sum_{m=1}^{\infty}\frac{\cos(2m\theta)}{m^{2}(4m^{2}-v^{2})} = \left\{\begin{aligned} &\frac{1}{v^{2}}\left(\frac{2}{v^{2}}\,-\, \frac{\pi^{2}\cos(v\theta-u)}{2u\sin u}\,-\,\theta^{2}+\pi\theta-\frac{\pi^{2} }{6}\right)\,,&&(0<\theta<\pi)\\ &\frac{1}{v^{2}}\left(\frac{2}{v^{2}}\,-\,\frac{\pi^{2}\cos(v \theta-3u)}{2u\sin u}\,-\,\theta^{2}+3\pi\theta-\frac{13}{6}\pi^{2}\right)\,,&& (\pi<\theta<2\pi)\end{aligned}\right.\] (105) \[\sum_{m=1}^{\infty}\frac{\cos((2m-1)\theta)}{(2m-1)^{2}-v^{2}} = \left\{\begin{aligned} &-\frac{\pi^{2}\sin(v\theta-u)}{8u\cos u}\,,&& (0<\theta<\pi)\\ &\frac{\pi^{2}\sin(v\theta-3u)}{8u\cos u}\,,&& (\pi<\theta<2\pi)\end{aligned}\right. \tag{106}\] ## Appendix C Zero-temperature factorization In Schwarzian theory, it is known that in zero temperature limit, the uncrossed four-point function factorizes into a product of two-point functions [28]. In this subsection, we will show that this zero-temperature factorization is also true for the DSSYK for any value of \(q\) (or \(\lambda\)). 
For this purpose, we first define finite temperature correlators by \[G_{2}^{\beta}(\tau) := \widetilde{G}_{2}(\tau,\beta-\tau)\,, \tag{107}\] \[G_{4}^{\beta}(\tau_{1},\tau_{2},\tau_{3},\tau_{4}) := \widetilde{G}_{4}(\tau_{12},\tau_{23},\tau_{34},\tau_{41}+\beta)\,, \tag{108}\] where the RHS' are defined in (11) and (12). Each \(\tau_{i}\) represents the matter operator insertion time. We also shift the energy as \[E(\theta)\,\Rightarrow\,\frac{2}{\sqrt{1-q}}\,(1-\cos\theta)\,. \tag{124}\] This sets the ground state energy \(E(0)=0\). Let us first study the zero-temperature limit of the two-point function: \[G_{2}^{\beta}(\tau)\,=\,\int_{0}^{\pi}\frac{d\theta_{1}}{2\pi}\frac{d\theta_{2} }{2\pi}\,\mu(\theta_{1})\mu(\theta_{2})e^{-\tau E(\theta_{1})}e^{(\tau-\beta)E (\theta_{2})}\,\frac{(e^{-2\Delta};q)_{\infty}}{(e^{-\Delta+i(\pm\theta_{1} \pm\theta_{2})};q)_{\infty}}\,. \tag{125}\] Due to the Boltzmann factor \(e^{-\beta E(\theta_{2})}\), the contribution to the \(\theta_{2}\) integral is localized to the ground state, i.e. \(\theta_{2}\to 0\). In this limit, we have \[\mu(\theta)\,=\,-4(q;q)_{\infty}^{3}\,\sin^{2}\theta\,+\,\cdots\,. \tag{126}\] Therefore, the zero-temperature two-point function is given by \[G_{2}^{\infty}(\tau)\,=\,-\frac{2\sqrt{1-q}}{\beta}\,(q;q)_{ \infty}^{3}\,e^{-\frac{2\beta}{\sqrt{1-q}}}I_{1}\left(\frac{2\beta}{\sqrt{1-q} }\right)\int_{0}^{\pi}\frac{d\theta}{2\pi}\,\mu(\theta)e^{-\tau E(\theta)}\, \frac{(e^{-2\Delta};q)_{\infty}}{(e^{-\Delta\pm i\theta};q)_{\infty}^{2}}\,. \tag{127}\] Before studying the four-point function, let us here consider the late time behavior of this zero-temperature two-point function. Evaluating the late time behavior by the same method as zero-temperature limit discussed above, we find \[\lim_{\tau\to\inf}G_{2}^{\infty}(\tau)\,=\,\frac{4(1-q)}{\beta \tau}\,e^{-\frac{2(\beta+\tau)}{\sqrt{1-q}}}\,(q;q)_{\infty}^{6}\frac{(e^{-2 \Delta};q)_{\infty}}{(e^{-\Delta};q)_{\infty}^{4}}\,I_{1}\left(\frac{2\beta}{ \sqrt{1-q}}\right)I_{1}\left(\frac{2\tau}{\sqrt{1-q}}\right)\,. \tag{128}\] Since \[\lim_{\tau\to\infty}I_{1}\left(\frac{2\tau}{\sqrt{1-q}}\right)\,= \,\frac{(1-q)^{\frac{1}{4}}}{\sqrt{4\pi\tau}}\,e^{\frac{2\tau}{\sqrt{1-q}}}\, +\,\cdots\,, \tag{129}\] The late time behavior of the zero-temperature two-point function is \(G_{\Delta}^{\infty}(\tau)\propto\tau^{-3/2}\), which agrees with the late time behavior in Schwarzian theory. Now we study zero-temperature limit of the four-point function: \[G_{4}^{\beta}\,=\,\int_{0}^{\pi}\prod_{i=1,3}\left(\frac{d\theta _{i}}{2\pi}\,\mu(\theta_{i})e^{-(\tau_{i}-\tau_{i+1})E(\theta_{i})}(e^{-2 \Delta_{i}};q)_{\infty}\right)\] \[\qquad\times\int_{0}^{\pi}\frac{d\theta_{2}}{2\pi}\mu(\theta_{2} )e^{-(\tau_{23}+\tau_{41}+\beta)E(\theta_{2})}\prod_{j=1}^{2}\frac{1}{(e^{- \Delta+i(\pm\theta_{j}\pm\theta_{j+1})};q)_{\infty}}\,, \tag{130}\] where \(\Delta_{3}=\Delta_{2}\). Again, zero-temperature limit \(\beta\to\infty\) localizes \(\theta_{2}\to 0\). Therefore, we find \[G_{4}^{\beta}\,=\,-\frac{\beta}{2\sqrt{1-q}}\,(q;q)_{\infty}^{- 3}\,e^{\frac{2\beta}{\sqrt{1-q}}}\left(I_{1}\left(\frac{2\beta}{\sqrt{1-q}} \right)\right)^{-1}G_{2}^{\infty}(\tau_{12})\,G_{2}^{\infty}(\tau_{34})\,. \tag{131}\] We note that as in Schwarzian theory, the zero-temperature four-point function can be factorized only in this \(s\)-channel, but not in the \(t\) or \(u\)-channels, which is obvious from the matter operator contractions.
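The late time behavior quoted around (128)–(129) is easy to see numerically. The following minimal sketch (with an arbitrary illustrative value of \(q\)) checks the asymptotics of the Bessel function entering (129) and the resulting \(\tau^{-3/2}\) falloff:

```python
# Minimal numerical sketch of the late time asymptotics (129) and the
# tau^{-3/2} behavior of the zero-temperature two-point function (128).
import numpy as np
from scipy.special import ive   # exponentially scaled Bessel: ive(1, z) = I_1(z) e^{-z}

q = 0.3                          # illustrative value
for tau in (10.0, 100.0, 1000.0):
    z = 2 * tau / np.sqrt(1 - q)
    asym = (1 - q)**0.25 / np.sqrt(4 * np.pi * tau)         # RHS of (129), times e^{-z}
    print(tau, ive(1, z) / asym, np.sqrt(tau) * ive(1, z))   # ratio -> 1; last column -> const,
                                                             # so I_1(z) e^{-z} / tau ~ tau^{-3/2}
```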
2302.08548
Connectomes and Properties of Quantum Entanglement
Topological quantum field theories (TQFT) encode properties of quantum states in the topological features of abstract manifolds. One can use the topological avatars of quantum states to develop intuition about different concepts and phenomena of quantum mechanics. In this paper we focus on the class of simplest topologies provided by a specific TQFT and investigate what the corresponding states teach us about entanglement. These ``planar connectome" states are defined by graphs of simplest topology for a given adjacency matrix. In the case of bipartite systems the connectomes classify different types of entanglement matching the classification of stochastic local operations and classical communication (SLOCC). The topological realization makes explicit the nature of entanglement as a resource and makes apparent a number of its properties, including monogamy and characteristic inequalities for the entanglement entropy. It also provides tools and hints to engineer new measures of entanglement and other applications. Here the approach is used to construct purely topological versions of the dense coding and quantum teleportation protocols, giving diagrammatic interpretation of the role of entanglement in quantum computation and communication. Finally, the topological concepts of entanglement and quantum teleportation are employed in a simple model of information retrieval from a causally disconnected region, similar to the interior of an evaporating black hole.
Dmitry Melnikov
2023-02-16T19:54:17Z
http://arxiv.org/abs/2302.08548v1
# Connectomes and Properties of Quantum Entanglement ###### Abstract Topological quantum field theories (TQFT) encode properties of quantum states in the topological features of abstract manifolds. One can use the topological avatars of quantum states to develop intuition about different concepts and phenomena of quantum mechanics. In this paper we focus on the class of simplest topologies provided by a specific TQFT and investigate what the corresponding states teach us about entanglement. These "planar connectome" states are defined by graphs of simplest topology for a given adjacency matrix. In the case of bipartite systems the connectomes classify different types of entanglement matching the classification of stochastic local operations and classical communication (SLOCC). The topological realization makes explicit the nature of entanglement as a resource and makes apparent a number of its properties, including monogamy and characteristic inequalities for the entanglement entropy. It also provides tools and hints to engineer new measures of entanglement and other applications. Here the approach is used to construct purely topological versions of the dense coding and quantum teleportation protocols, giving diagrammatic interpretation of the role of entanglement in quantum computation and communication. Finally, the topological concepts of entanglement and quantum teleportation are employed in a simple model of information retrieval from a causally disconnected region, similar to the interior of an evaporating black hole. ## 1 Introduction Topological Quantum Field Theories (TQFT) are instances of ordinary, often finite-dimensional, quantum mechanical systems [1, 2]. A special feature of these instances is the possibility to cast quantum states and quantum operations as topological spaces, which realize the defining properties of Hilbert spaces. In other words, in TQFT quantum states and quantum operations (operators) can be visualized by drawing diagrams of topological spaces and manipulating them. Diagrammatic representation of linear operations is a very natural way to visualize matrix multiplication, widely used in different areas of physics, mathematics and computer science, including tensor networks and computer algorithms. However, in most cases such representations are symbolic and do not literally reflect the physical processes in the manipulated systems. In TQFT, on the contrary, the diagrams can be understood literally, as quantum evolution of the system's elements, such as particles, qubits, or general media composing the systems. In this paper we will work with a topological version of tensor networks based on the axiomatic definition of TQFT [2]. Similar constructions are well-known under the name of topological quantum computation [3, 4, 5, 6], but the focus of this paper will be in different aspects of the construction. In particular, we will discuss what quantum entanglement is from the point of view of topology and show how topology highlights some of its fundamental properties. In this sense the work continues some previous discussion in the literature (e.g [7, 8, 9, 10, 11, 12] and earlier works by Kauffman et al. cited in [13]) inspired by the question of the connection between topological and quantum entanglement raised explicitly in [14]. We will consider a specific example of a TQFT, that is Chern-Simons theory with gauge group \(SU(2)\) in spaces, which are locally \(\mathbb{R}^{3}\), with boundaries carved as disjoint 2-spheres. 
We will allow the 2-spheres to have punctures - point-like defects that have to be extended in the three-dimensional bulk as one-dimensional lines. These are the Wilson lines from the point of view of the Chern-Simons theory. Three-dimensional manifolds with boundaries and Wilson lines represent states and more general tensors of the quantum theory, with boundary \(S^{2}\) corresponding to elementary subsystems, that is Hilbert spaces of individual "particles" [15]. The key observation in this construction is that spheres connected by a sufficient number of Wilson lines correspond to entangled subsystems [11]. So, entanglement corresponds to wiring of the bulk space with Wilson lines. We refer to such wirings as "connectomes", borrowing the term from neuroscience. Obviously the same space can be wired in different ways, so the first question is what kind of entanglement different wirings describe. Here we focus on the choice of wirings with the simplest topology. The essential information that such wirings should contain is what is connected to what. Such objects can be characterized by classes labeled by the adjacency matrices of graphs, whose nodes are associated with the subsystems and edges - with the Wilson lines. The classes contain infinite number of elements and we would like to consider only the simplest representatives, which do not have non-trivial knotting and tangling of the Wilson lines, the planar (trivial) connectomes. In this work we will mostly focus on these simplest connectome quantum states and use them to illustrate different features and applications of quantum entanglement. We will find that such states have some distinctive features. The entanglement entropy for such states share the properties with the holographic states1: the entropy of a single subsystem is given by a discrete version of the minimal area law, while for many subsystems, the entropy satisfies a number of inequalities beyond subadditivity, which are also satisfied by the holographic states. For bipartite entanglement the planar connectome states are equivalent to the classes of states in the classification provided by the action of stochastic local operations and classical communication (SLOCC) [16, 17]. For multipartite entanglement they are similar to either full multipartite GHZ states or to the embeddings of lower rank GHZ states. Footnote 1: That is states that are expected to have a dual gravity description. The main advantage of the topological approach to description of entanglement is that it makes the properties of entanglement very intuitive: the states are entangled if the topological spaces are properly connected; shared Wilson lines is the entanglement resource shared between the parties; impossibility of sharing this resource with several parties simultaneously is the monogamy of entanglement. One of the goals of this work is to show that the topological interpretation can motivate new tools for study and applications of entanglement. In this work we use the topology argument to construct a new measure of multipartite entanglement. We also find that basic quantum algorithms, such as dense coding and quantum teleportation have purely topological interpretation, which makes visual the role of different aspects of entanglement in quantum computation and communication. 
As another application of the topological method we reflect on the recent progress in the understanding of black holes and propose a toy model that contains the salient features of an evaporating black hole in the context of the information paradox [18]. In this model we build upon our experience with the planar connectome states, which are supposedly similar to the holographic ones. The way in which the Hawking radiation gets entangled with the interior and later destroys the entanglement, allowing the information to escape, is analogous to the topological realization of quantum teleportation [19]. The paper is organized as follows. In section 2 we give a short introduction in TQFT and describe a specific realization of quantum mechanics (qubits and entanglement) in Chern-Simons theory with boundaries. In section 3 we introduce the connectome states and review the classification of bipartite entanglement via connectomes. In section 4 we briefly discuss connectomes in the multipartite situation. Section 5 contains the main discussion of entanglement properties and applications. In section 5.1 we discuss the topological expression for the entanglement entropy and introduce a new measure, which detects multipartite entanglement. In section 5.2 we discuss inequalities for the entanglement entropy focusing on the planar connectome states. In section 5.3 we propose topological cartoons of the dense coding and quantum teleportation protocols. Finally, section 5.4 describes a model of unitary evaporation in a topological interpretation of the black hole information paradox. Some of the results discussed in this paper were previously reported in [17] and [19]. This paper makes a more detailed discussion of those results and offers a number of new ones. For example, the discussion of a new entanglement measure, the inequalities for the entanglement entropy and the dense coding protocol are the main new results. Moreover, this paper is written for a more general audience and is expected to be self-contained. ## 2 Topological quantum field theory ### Definitions Following the axiomatic definition of TQFT [2] we will consider an \(n\) dimensional orientable hypersurface \(\Sigma\) as a Hilbert space \({\cal H}_{\Sigma}\). Then any \(n+1\)-dimensional topological space \({\cal M}\), such that \(\Sigma\) is a boundary of \({\cal M}\), that is \(\Sigma=\partial{\cal M}\), corresponds to a vector in \({\cal H}_{\Sigma}\), as illustrated by the following diagram, (1) Note that spaces homeomorphic to each other, that is continuously deformable into each other, preserving the topology, define equivalent states. We can also consider an \(n+1\)-dimensional space \({\cal O}\) with boundaries \(\partial{\cal O}=\Sigma\cup\overline{\Sigma}\) as an evolution of the Hilbert space \({\cal H}_{\Sigma}\), that is an operator acting on \({\cal H}_{\Sigma}\). Here \(\Sigma\) and \(\overline{\Sigma}\) differ by choice of orientation, so one may think of \(\overline{\Sigma}\) as representing the dual vector space \({\cal H}_{\overline{\Sigma}}={\cal H}_{\Sigma}^{*}\). More generally, space \({\cal O}\) with boundary \(\partial{\cal O}=\Sigma_{1}\cup\Sigma_{2}\) can be viewed as a linear operator that acts on the respective Hilbert spaces. Such space is also called a cobordism of \(\Sigma_{1}\) and \(\Sigma_{2}\). Application of an operator on a state is realized by gluing boundary \(\Sigma\) of the state with boundary \(\overline{\Sigma}\) of the operator, as the following diagram shows. 
(2) The result of this operation is obviously another state in \({\cal H}_{\Sigma}\). Similarly, composition of operators is a concatenation of the latter. For completeness of the axiomatic definition we need to spell out a few other properties. It should be obvious that gluing together two homeomorphic spaces-states should correspond to the square of the norm of the state. More generally, gluing two inequivalent spaces should produce a diagram of the internal product in the Hilbert space, (3) The result of the gluing is a space without boundary. Spaces without boundary thus correspond to zero-dimensional Hilbert spaces or \(\mathbb{C}\)-numbers. We also need to define the diagrammatic analog of the identity operator. The latter should represent a trivial evolution of the Hilbert space, which is achieved by gluing a featureless cylinder \(\Sigma\otimes I\) (\(I\) being an interval) or any space homeomorphic to it. Finally the last property that plays the major role in this paper is the consequence of separability of topological spaces. Namely, if \(\Sigma\) consists of disconnected components \(\Sigma=\Sigma_{1}\cup\Sigma_{2}\) (\(\Sigma_{1}\cap\Sigma_{2}=\emptyset\)) then it corresponds to a direct product of Hilbert spaces \(\mathcal{H}_{\Sigma}=\mathcal{H}_{\Sigma_{1}}\otimes\mathcal{H}_{\Sigma_{2}}\). The map of topological spaces to linear spaces described above is realized by path integrals of metric-independent quantum field theories, with the primary example provided by the three-dimension Chern-Simons theory [15]. This example motivated the general definition of TQFT [1]. In particular, states in quantum Chern-Simons theory are path integrals of the on manifolds \(\mathcal{M}\) with prescribed boundary conditions for the fields on boundary \(\Sigma\). ### Topological qubits Now let us describe a specific realization of TQFT axioms and a particular class of Hilbert spaces. In other words, we describe the qubits and qudits that we will be working with in this paper. We will consider spaces \(\Sigma\) that are disjoint unions of two-spheres \(S^{2}\) and assume the underlying CFT to be Chern-Simons theory with gauge group \(SU(2)\) and coupling constant (level) \(k\). The main information that we will need here about this theory is the dimension of the Hilbert spaces on the spheres and a few rules of assigning matrix operators to topological diagrams, or equivalently computing scalar products of states. These will be introduced in due course. It turns out that in the Chern-Simons theory, the Hilbert space of a featureless two-sphere is one-dimensional [15]. In order to have a non-trivial Hilbert space one has to consider spheres with punctures. From the point of view of the Chern-Simons theory punctures are special points of the Chern-Simons field and can be viewed as external particles coupled to the field. In the \(SU(2)\) theory these particles are characterized by a spin \(j\) (to be precise, by "integrable" representations \(R\) of Kac-Moody algebra \(su(2)_{k}\)). In order to consistently put \(n\) particles with spins \(j_{1}\), \(j_{2}\), \(\ldots j_{n}\) on a sphere these particles should form a spin singlet (tensor product of irreps \(R_{i}\) should contain a trivial irrep). The dimension of the Hilbert space of \(S^{2}\) is defined by the number of possible ways of forming a singlet from \(n\) spins (number of trivial irreps in the tensor product of \(R_{i}\)) with the following caveat. 
In \(su(2)_{k}\) Kac-Moody algebra there is only a finite number of integrable representations labeled by spin \(0\leq j\leq k/2\). As a result, the dimension of the Hilbert space may be less than the number of singlets formed by the tensor product of the puncture spins, unless \(k/2\) is larger than any of the spins that can appear in the tensor products. In the rest of the paper we will assume \(k\) to be large, so that the dimension is precisely the number of singlets in the tensor product. Obviously a sphere cannot have a single puncture, unless it corresponds to a particle of spin zero. The latter case is equivalent to absence of any punctures. For two punctures, in order to have a Hilbert space, the associated particles must have equal spins. There is only one way to form a singlet of two spins, so the dimension of the Hilbert space is at most one, as for the zero-puncture case. We will be mostly interested in the situation of the punctures-particles with spin \(1/2\). Three particles of spin half cannot form a singlet. The minimal non-trivial example of a Hilbert space comes with four punctures of spin \(1/2\). There are two ways of forming a singlet from four spin-half particles, by pairwise forming singlets or triplets, so \(S^{2}\) with four spin-half punctures correspond to a Hilbert space of dimension two - the qubit. In a three-dimensional bulk space \(\mathcal{M}\) punctures are extended to one-dimensional defects. In the Chern-Simons language these defects are trajectories of the boundary particles, called Wilson lines. Wilson lines cannot end in the bulk: they either end on the same two-sphere or on two different spheres included in \(\Sigma\). An example of a two-qubit state with different options for the Wilson lines is shown in figure 1 (left). Besides, the bulk of \(\mathcal{M}\) can contain closed Wilson lines - Wilson loops. The next question is choice of a basis in the Hilbert space. We start by choosing two inequivalent extensions of the punctures in the bulk, as shown by figure 1 (right). In the following sections, we will not draw the spheres explicitly, but will rather group the punctures assuming that each group belongs to the surface of a two-sphere. 
We will not draw the three-dimensional spaces either, assuming they have the topology of \(S^{3}\), so the pair of states shown in figure 1 (right) will simply be denoted \(|e_{1}\rangle\) and \(|e_{2}\rangle\) (4), the two planar ways of connecting the four punctures pairwise by non-intersecting arcs. [The diagrammatic rules displayed next in the source are not recoverable here: the loop factorization rule, by which a closed loop is removed at the price of a factor \(d\), the relations (6) and (7) connecting \(d\) and the variable \(A\) to the level \(k\), and the skein relation (8) resolving a crossing into planar diagrams.] This relation tells us that any diagram with a crossing can be replaced by a linear combination of diagrams with no crossings. The Jones polynomial acts linearly on such linear combinations of diagrams. With the above rules one can now construct an orthonormal basis for the qubit, \[|0\rangle\ =\ \frac{1}{d}|e_{1}\rangle\,,\qquad|1\rangle\ =\ \frac{1}{\sqrt{d^{2}-1}}\left(|e_{2}\rangle-\frac{1}{d}|e_{1}\rangle\right)\,. \tag{9}\] Note that for \(k=1\) the parameter \(d^{2}-1\) vanishes, which means that \(|e_{1}\rangle\) and \(|e_{2}\rangle\) are linearly dependent; this is a consequence of the fact that the dimension of the Hilbert space depends non-trivially on \(k\). It is useful to note that loop factorization and skein relations apply not only to calculations in \(S^{3}\), but also to any three-dimensional manifold with boundaries. This will be useful in the discussion of many-particle states and operators.
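As a quick consistency check of the normalization in (9), one can verify the orthonormality of \(|0\rangle\) and \(|1\rangle\) symbolically. The only input is the loop-factorization rule applied to glued diagrams, which we take here as the assumption \(\langle e_{1}|e_{1}\rangle=\langle e_{2}|e_{2}\rangle=d^{2}\) (two closed loops) and \(\langle e_{1}|e_{2}\rangle=d\) (a single loop); a minimal sympy sketch:

```python
import sympy as sp

d = sp.symbols('d', positive=True)

# Assumed Gram matrix of the non-orthonormal basis {|e1>, |e2>}:
# gluing a basis diagram to its own mirror image closes two loops (factor d^2),
# gluing |e1> to |e2> closes a single loop (factor d).
G = sp.Matrix([[d**2, d],
               [d,    d**2]])

# Coefficients of |0> and |1> in the {|e1>, |e2>} basis, eq. (9)
zero = sp.Matrix([1 / d, 0])
one = sp.Matrix([-1 / (d * sp.sqrt(d**2 - 1)), 1 / sp.sqrt(d**2 - 1)])

def inner(u, v):
    """Inner product <u|v> evaluated with the Gram matrix G."""
    return sp.simplify((u.T * G * v)[0, 0])

print(inner(zero, zero), inner(one, one), inner(zero, one))  # 1 1 0
```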
The rules for three-manifolds with a different global structure follow automatically from the above, since any manifold can be constructed by gluing pieces of three-spheres along \(S^{2}\) boundaries. To construct a basis in the case of an arbitrary even number of punctures one needs to consider all diagrams that connect the points without intersecting lines. For \(2n\) points, any such diagram can be mapped to an element of the Temperley-Lieb algebra \(TL_{n}\). The dimension of the Temperley-Lieb algebra is given by the Catalan numbers, so the dimension of the Hilbert space of a two-sphere with \(2n\) punctures in \(SU(2)\) Chern-Simons theory with sufficiently large level \(k\) is given by \[\dim{\cal H}_{2n}\ =\ C_{n}\ =\ \frac{(2n)!}{(n+1)!n!}\,,\qquad k>n-1\,. \tag{10}\]

### Quantum entanglement

In this paper we discuss properties of quantum entanglement as seen through the topological description of quantum mechanics. The map from topological to linear spaces offers a very natural interpretation of entanglement: separability of spaces implies separability of wavefunctions. We can illustrate this by the following heuristic diagrams, \[\mbox{(diagram)}\quad\longrightarrow\quad\mbox{separable}\,,\qquad\qquad\mbox{(diagram)}\quad\longrightarrow\quad\mbox{entangled}\,. \tag{11}\] Here \(\Sigma_{A}\) and \(\Sigma_{B}\) may represent a pair of two-spheres. In that case the first diagram can be a pair of three-balls, and the second a space between two concentric \(S^{2}\), but in principle the diagrams are supposed to illustrate the most general situation.2 Footnote 2: Another interesting example is the case of \(\Sigma=T^{2}\), a two-dimensional torus. Entanglement for states with linked \(T^{2}\) boundaries (knot complement states) was originally discussed in [8]. One can formally prove the correspondence described by diagrams (11), for example by computing the von Neumann entropies [11], but there are some subtleties. The second diagram in (11) is in general an entangled state, but there are situations in which it is actually separable. One such situation happens when either \(\Sigma_{A}\) or \(\Sigma_{B}\) corresponds to a Hilbert space of dimension one. Therefore, for entanglement one needs non-trivial \(\Sigma\). Moreover, if we understand the diagram as an evolution \(\Sigma[t]\) with \(\Sigma[0]=\Sigma_{A}\) and \(\Sigma[1]=\Sigma_{B}\), at no intermediate \(0\leq t\leq 1\) can the dimension of \({\cal H}_{\Sigma[t]}\) become trivial; otherwise the diagram corresponds to a separable state. For two-spheres this means that we will always need Wilson lines to support entanglement. For Wilson lines in the representation \(j=1/2\), we will need at least four lines crossing any section of the three-dimensional space \({\cal M}\) that breaks it into a disconnected pair of three-manifolds. Therefore, we have the following refinement of the simple classification (11), \[\mbox{(diagram)}\quad\longrightarrow\quad\mbox{separable}\,,\qquad\qquad\mbox{(diagram)}\quad\longrightarrow\quad\mbox{entangled}\,. \tag{12}\] Since the Wilson lines are fundamental for creating entanglement between 2-spheres, in this paper we will focus on the Wilson-line wiring of three-dimensional spaces.
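For quick reference, the counting in (10) is straightforward to tabulate numerically; the helper below (the function name is ours) simply evaluates the Catalan number and enforces the stated range of validity \(k>n-1\):

```python
from math import comb

def dim_H(n_punctures: int, k: int) -> int:
    """Dimension of the S^2 Hilbert space with 2n spin-1/2 punctures,
    in the large-level regime of eq. (10): dim = Catalan number C_n."""
    if n_punctures % 2:
        raise ValueError("the number of spin-1/2 punctures must be even")
    n = n_punctures // 2
    if k <= n - 1:
        raise ValueError("eq. (10) assumes k > n - 1")
    return comb(2 * n, n) // (n + 1)

print([dim_H(2 * n, k=10) for n in range(1, 6)])  # [1, 2, 5, 14, 42]
```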
As already mentioned, we will draw neither the spheres on which the lines end nor the three-dimensional spaces, essentially reducing the study to \(S^{3}\) topologies. Classification (12) will be cast as \[\mbox{(diagram)}\quad\longrightarrow\quad\mbox{separable}\,,\qquad\qquad\mbox{(diagram)}\quad\longrightarrow\quad\mbox{entangled}\,. \tag{13}\] In fact, non-trivial global topologies can be replaced by linear combinations of \(S^{3}\) topologies with additional Wilson loops via the "surgery operation" [15]. This means, as we will demonstrate below, that such topologies correspond to weaker entanglement, as compared to simply connected 3D topologies. We have already seen that subtleties of quantum Chern-Simons theories can make the images of some non-homeomorphic topological spaces linearly dependent. Therefore one has to keep in mind that the TQFT map is not always faithful, or more generally, that only a finite set of quantum states is available for the topological description for some integer values of the Chern-Simons coupling constant \(k\). A specific version of this problem is known as the non-universality of quantum computation with Ising anyons [6]. The problem can be avoided if \(k\) is taken sufficiently large, or more generally if one makes an analytic continuation to generic values of \(k\) or \(q\). We will assume either of these loopholes in the remaining discussion.

## 3 Connectome classification of bipartite entanglement

### Connectomes

Quantum entanglement is studied by Quantum Resource Theory, which views it as a resource for quantum computation. States entangled in different ways are suitable for different quantum tasks, so the classification of different types of entanglement is an important problem. The topological picture discussed in this paper provides an intuitive classification of entanglement in terms of topology: the wiring of Wilson lines and, more generally, the connectivity of topological spaces. This classification, although discrete, is still too detailed, and one is typically interested in a coarser one, with a finite number of classes. One very well known classification in Quantum Resource Theory is by stochastic local operations and classical communication (SLOCC) [16]. This classification identifies quantum states that differ by the action of local invertible operators (not necessarily unitary ones), and produces a finite number of classes for bipartite entanglement. Specifically, if a quantum system is split into subsystems \(A\) and \(B\), such that \(\mathcal{H}_{A}\simeq\mathbb{C}^{m}\) and \(\mathcal{H}_{B}\simeq\mathbb{C}^{n}\), with \(m\geq n\), then there are \(n\) possible classes that cannot be mixed by the action of invertible operators on \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\). This result can be obtained by applying the Schmidt decomposition, which has at most \(n\) independent terms; their number, the Schmidt rank, equals the rank of the reduced density matrix and is invariant under the action of local invertible operators. For a pair of qubits this gives just two classes: separable states and entangled states. Let us introduce a simple class of topological diagrams that characterize different types of entanglement and also produce a finite number of classes, equivalent to the SLOCC ones for bipartite entanglement. We will assume that each sphere represents a party. So, for bipartite entanglement we will consider different wirings of a pair of two-spheres.
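Numerically, the SLOCC class of a bipartite pure state is read off from its Schmidt rank, i.e. the number of non-zero singular values of the reshaped coefficient matrix. A small numpy illustration (not part of the topological construction itself; the function name is ours):

```python
import numpy as np

def schmidt_rank(psi, dim_A, dim_B, tol=1e-12):
    """Schmidt rank of a bipartite pure state = rank of its reduced density matrix;
    it labels the SLOCC class of the state."""
    svals = np.linalg.svd(np.reshape(psi, (dim_A, dim_B)), compute_uv=False)
    return int(np.sum(svals > tol))

# Two-qubit examples: a product state has rank 1, a Bell state has rank 2
print(schmidt_rank(np.array([1, 0, 0, 0]), 2, 2))               # 1 (separable)
print(schmidt_rank(np.array([1, 0, 0, 1]) / np.sqrt(2), 2, 2))  # 2 (entangled)
```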
We expect that different ways of wiring the spheres with Wilson lines should produce different ways of entangling quantum states. We can label different choices of wiring by graphs, which only distinguish what is connected to what, but ignore the specific 3D topology of the connections. That is, the information encoded by the graph is the adjacency matrix \(A_{ij}\) - the number of connections between sphere \(i\) and sphere \(j\), including self-connections. Graphs split the infinite number of wiring options into a finite number of classes. Let us focus on the simplest class representatives, given by planar graphs, which can be drawn on a plane without line intersections. For a pair of spheres planar graphs include representatives of all the classes. We can also identify graphs that are connected by local permutations. Such permutations should not affect entanglement. For two qubits, that is a pair of two-spheres with four punctures each, our wiring prescription leads to three inequivalent options, (14) considered up to local permutation of punctures. The absence of either local or non-local braiding makes the diagrams equivalent to elements of the Temperley-Lieb algebra \(TL_{4}\). We will generally refer to representatives of the adjacency matrix classes as _connectomes_, specifying whether they are planar or not whenever necessary. One can note that the second diagram in (14) possesses a section crossed by only two lines, so it in fact represents a separable state. Indeed, using basis (9) one can show that the first and the second diagram correspond to the same normalized wavefunction. Therefore there are two inequivalent connectome diagrams describing the entanglement of two qubits: the separable state and the entangled state. Moreover, the diagram of the entangled state describes the case of maximal entanglement. As expected, the collection of parallel lines is equivalent to the identity operator, which in the present case describes both the wavefunction and the reduced density matrix. To confirm this, expand in basis (9), (15) One can also note that the equivalence between the reduced density matrix and the wavefunction is true for all the connectome states. Let us now extend the connectome classification to pairs of qudits.

### Bipartite entanglement

To systematically construct qudit Hilbert spaces, that is \({\cal H}=\mathbb{C}^{n}\) with \(n>2\), we will make use of the fact that the tensor product of representations of spin \(j_{1}\) and \(j_{2}\) expands into the sum of \(j_{1}+j_{2}-|j_{1}-j_{2}|+1\) representations with spin varying between \(|j_{1}-j_{2}|\) and \(j_{1}+j_{2}\). Since we would like to work with \(S^{2}\) boundaries with \(j=1/2\) punctures, representations of any spin \(j_{1}\) and \(j_{2}\) can be obtained by fusing several \(j=1/2\) representations. In other words, for spin \(j_{1}\) (\(j_{2}\)), we will consider groups of \(2j_{1}\) (\(2j_{2}\)) regular punctures and project out all the irrelevant irreps \(j<j_{1}\) (\(j<j_{2}\)) that appear in the tensor product of \(2j_{1}\) (\(2j_{2}\)) fundamental irreps. In our case the projection can be organized through the Jones-Wenzl symmetrizers (projectors) [21, 22], which we will now introduce. The Jones-Wenzl projectors are elements of the Temperley-Lieb algebra, which can be defined recursively as follows.
For \(TL_{n+2}\) the projector on the space of spin \(n/2+1\) can be obtained from the projector on the space of spin \((n+1)/2\) of \(TL_{n+1}\) through relation [13] (16) Here the thick line with label \(n\) substitutes \(n\) ordinary lines. Parameters \(\Delta_{n}\) are also defined recursively, \[\Delta_{-1}\ =\ 0\,,\qquad\Delta_{0}\ =\ 1\,,\qquad\Delta_{n+1}\ =\ d\Delta_{n}-\Delta_{n-1}\,. \tag{17}\] For example, for \(TL_{2}\) the following is the projector on the subspace of \(j=1\) in the tensor product of two spins \(j=1/2\), \[\mbox{(diagram not recoverable from the source)}\] [Several subsequent diagrammatic equations and their accompanying text are not recoverable from the source.] [...] the projectors and permutations of outputs within each projector one should conclude that the following three diagrams make a complete set of inequivalent wirings of two qutrits, \[\mbox{(diagrams not recoverable from the source)}\] [...] lines from only one \((n-1)\) projector. Then there are obviously \(n\) inequivalent diagrams, which correspond to \(n\) SLOCC classes. We can illustrate this with an example of entanglement classes of a qutrit and a qudit with \(m=5\), \[\begin{array}{c}n=3\\ m=5\end{array}\ :\qquad\mbox{(diagrams not recoverable from the source)}\] [...] produced an equivalent classification. In this section we briefly discuss the properties of more complex connectomes, which contain crossing and tangling of the lines.
We would like to distinguish local and non-local tangling caused by the crossings. Local tangling operations result in a local change of basis and by themselves are not important for entanglement. Non-local tangling is the result of a non-local permutation (exchange) of punctures. The following diagrams give an example of a local and a non-local tangling, (30) The local tangling can be undone by a permutation of a pair of points on either side of the diagram. For the non-local one, one can apply skein relation (8) to express the diagram as a linear combination of a fully connected and a fully disconnected diagram (see the example below). In a more general setup this would imply a linear combination of states with a stronger and a weaker entanglement. This means, in particular, that such a diagram cannot represent maximal entanglement, and the correlations will be even weaker for a more complex tangling. Without further elaborating on this point we consider an example [17] of an entangled two-qubit state with two pairs of punctures connected through a non-local tangle (30). Using skein relations we reduce this state to a linear combination of connectome diagrams, (31) Using basis (9) this state can be cast in an algebraic form, (32) In terms of the SLOCC classification, the diagram is an example of an operation that is invertible, but not unitary. It can be applied locally to the maximally entangled state to produce a more generic state of the Bell class. Although it is an invertible operation, there is no obvious diagrammatic presentation for a local inverse, which is reminiscent of the fact that the maximally entangled state can be obtained from a generic Bell class state only with a finite probability of success, while the inverse transformation can be performed with certainty [23]. We can investigate the entanglement entropy of this state and of a family of similar states with increasing complexity of the tangle. Let us consider the following family (33) In this family, every state is equivalent to the reduced density matrix of the previous state, which allows computing the entropies recursively. Specifically, one has \[|\Psi_{0}\rangle\ =\ s_{0}|00\rangle+c_{0}|11\rangle\,,\quad s_{0}\ =\ (A^{4}+A^{-4})^{2}\,,\quad c_{0}\ =\ (1-A^{-4})^{2}\,, \tag{34}\] \[|\Psi_{\ell+1}\rangle\ =\ \frac{|s_{\ell}|^{2}}{|s_{\ell}|^{2}+|c_{\ell}|^{2}}|00\rangle+\frac{|c_{\ell}|^{2}}{|s_{\ell}|^{2}+|c_{\ell}|^{2}}|11\rangle\,,\quad\ell\ =\ 0,1,2\ldots\,, \tag{35}\] \[S_{\ell}\ =\ -\frac{|s_{\ell}|^{2}}{|s_{\ell}|^{2}+|c_{\ell}|^{2}}\log\frac{|s_{\ell}|^{2}}{|s_{\ell}|^{2}+|c_{\ell}|^{2}}-\frac{|c_{\ell}|^{2}}{|s_{\ell}|^{2}+|c_{\ell}|^{2}}\log\frac{|c_{\ell}|^{2}}{|s_{\ell}|^{2}+|c_{\ell}|^{2}}\,. \tag{36}\] The plots of the entropies for \(\ell=0,1,2\), where by \(\ell=0\) we understand the first diagram in (33), as a function of the topological phase parameter \(\theta=-i\log A\), which is related to the coupling constant through (6) and (7), are shown in figure 2. The values \(\theta=\pm\pi/12\) and \(\theta=\pi/4\) modulo \(\pi/2\) correspond to special TQFTs, in which the above states are equivalent to the maximally entangled one (\(s_{\ell}=c_{\ell}\)). For general values of \(\theta\) the entropy drops with \(\ell\). Note that the states in family (33) belong to the connectome class of the first diagram in (14), which describes a separable state. As compared to this state tangling increases entanglement, but the strongest entanglement is still found in the state with the least tangling.
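A direct numerical evaluation of (34)-(36) reproduces the behaviour shown in figure 2: at the special values of \(\theta\) all entropies equal \(\log 2\), while for generic \(\theta\) the entropy decreases as the chain gets longer. A minimal sketch (the function name and the sample values of \(\theta\) are ours):

```python
import numpy as np

def chain_entropies(theta, n_levels=3):
    """Entanglement entropies S_0, ..., S_{n_levels-1} of the chain states (33),
    computed from the recursion (34)-(36) with A = exp(i*theta)."""
    A = np.exp(1j * theta)
    s, c = (A**4 + A**-4)**2, (1 - A**-4)**2
    entropies = []
    for _ in range(n_levels):
        p = abs(s)**2 / (abs(s)**2 + abs(c)**2)
        q = 1.0 - p
        entropies.append(-sum(x * np.log(x) for x in (p, q) if x > 0))
        s, c = p, q   # eq. (35): the next coefficients are the previous probabilities
    return entropies

print(chain_entropies(np.pi / 4))  # special point: all entries equal log(2)
print(chain_entropies(0.2))        # generic theta: entropies decrease with chain length
```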
This counterintuitive feature is better explained with the use of the skein relation. By the latter, every subchain is a linear combination of the two diagrams on the right hand side of (8). However, the expansion of a long chain produces only one connected element, while the number of disconnected elements grows with the length of the chain, giving a higher weight to the disconnected elements. The effect of non-local tangling is similar to that of other types of defects, such as "holes". By holes we mean non-trivial global 3D defects, which generalize the holes of Riemann surfaces to the case of 3-manifolds. Holes also tend to weaken entanglement as compared to simply connected spaces: heuristically, they can be thought of as ruptures of the space tying the quantum parties together. In fact they can be reduced to a linear combination of tangles by the operation known as surgery. By a surgery one can close the 3D hole in the topology at the expense of inserting additional Wilson loops, as illustrated by figure 3.

Figure 2: Entanglement entropy of the family of chain states (33) as a function of \(\theta=-i\log A\) for \(\ell=0,1,2\). Apart from a few special values of \(\theta\) the entropy of these states is lower than the maximal value for a pair of qubits. With increasing complexity of the tangle the entanglement entropy decreases, with the single chain lock (\(\ell=0\), blue) showing the highest entropy in the family and the others showing a monotonous decrease with the number of chain segments.

## 4 Connectome states in multipartite entanglement

It would be interesting to extend the connectome classification to multipartite entanglement. For bipartite entanglement the use of planar connectomes (the connectomes with no line crossings) was successful: we showed how they describe the SLOCC entanglement classes. Moreover, the construction suggests that such planar connectomes correspond to maximal entanglement in each SLOCC class. In particular, the introduction of tangling reduces the amount of entanglement. The first problem in the case of multipartite entanglement is that connectomes for a high number of parties are in general non-planar, so one has to refine the definition of the simplest representatives of each class. This can be done by requiring that a representative is "simple" if connecting the endpoints of any line in the given topology results in a trivial loop. For a triplet of qubits one can still use planarity as the criterion. However, in this case the classification diverges from the SLOCC one [16]. For three qubits one can draw seven inequivalent planar diagrams up to local permutation of points and permutation of parties. (37) As explained above, parties connected by only two fundamental Wilson lines are not entangled, so from the seven options only three are independent. Thus, the first four diagrams correspond to a fully separable state, the following pair to the biseparable generalization of the Bell type, and only the last diagram to genuinely tripartite entanglement. To determine the SLOCC class of the last state we can expand it in the basis (9), \[\mbox{(diagram)}\ =\ |000\rangle+\frac{1}{\sqrt{d^{2}-1}}|111\rangle\,. \tag{38}\] This is a representative of the GHZ class in the SLOCC classification. One property of the GHZ class, which is usually cited, is the separability of the result of the measurement of any of the three qubits.
This property can be seen in the diagram if one glues the state \(|0\rangle\) of (9) to either of the ends of the diagram. The result of such an operation will be the second diagram of (14), which is a separable state. Yet this analysis misses the W class of SLOCC. An obvious solution is to consider other connectomes, allowing for non-trivial tangling in the diagrams. This was attempted in [17] in particular, where a special tangled state was found to be at least numerically close to a W state for a specially tuned \(k\), but no diagram that is generically of W type was found. Unlike the GHZ states, W states are of measure zero in the space of all three-qubit states, so a possibility remains that the connectome classes can only describe such states approximately, although with any desired precision. More generally, trivial connectomes rather describe different generalizations of GHZ states, either their versions for an arbitrary number of parties, or embeddings of the latter in partially separable situations. For four qubits one can still work with planar connectome diagrams, and the same approach gives six inequivalent sets, of which two correspond to non-biseparable entanglement. The latter classes can be illustrated by the following diagrams \[= |0000\rangle+\frac{1}{d^{2}-1}|1111\rangle\,, \tag{39}\] \[= \frac{1}{d}|0000\rangle+\frac{1}{d}\left(|1100\rangle+|0011\rangle\right)-\frac{1}{d(d^{2}-1)}|1111\rangle\,. \tag{40}\] Here the first class is the four-qubit analog of the GHZ state. The second state differs from the GHZ one in that its parties share entanglement with all the remaining parties, while in the GHZ state entanglement is shared only with the two parties in the form of a chain. Breaking this chain at any segment, that is making a measurement of either of the qubits, produces a product state. The above two classes of four-partite entanglement can be compared with the nine non-biseparable families of SLOCC in the classification found by [25]. The classification of [25] is not finite, since most of the families are defined using additional complex parameters that can vary continuously. It turns out that states (39) and (40) both belong to the same family, dubbed \(G_{abcd}\), for different choices of the parameters. Similarly to the GHZ states in the tripartite entanglement, states of the \(G_{abcd}\) family exhibit the strongest entanglement characteristics. This implies in particular that such states are dense and can be used to approximate all other forms of entanglement, which makes them an important resource for quantum computation.

Figure 3: Surgery allows to express non-trivial 3D topologies as linear combinations of simpler ones, in which 3D defects are replaced by 1D defects (Wilson lines). It is not possible to draw a picture of a 3D space with global 3D defects, for example \(S^{2}\times S^{1}\), so the diagrams are heuristic depictions. Here the defect is shown as a pair of light gray spheres whose surfaces must be identified, so that lines can disappear at one sphere and reappear from the other one. Blue spheres are the boundaries. After the surgery, the defect is replaced by an additional Wilson loop in representation \(R\). Coefficients \(S^{R_{1}R_{2}}\) of the surgery operation are determined by the modular \(S\) transformation in the basis labeled by irreducible representations [15, 24].
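The often-cited GHZ property mentioned above is easy to check numerically for any state of the form \(a|000\rangle+b|111\rangle\), such as (38): projecting any single qubit onto the computational basis leaves a Schmidt-rank-one, i.e. separable, two-qubit state. A small sketch (the numerical value standing in for \(1/\sqrt{d^{2}-1}\) is arbitrary; the same check applies to the four-qubit analog (39)):

```python
import numpy as np

def schmidt_rank(psi, dim_A, dim_B, tol=1e-12):
    svals = np.linalg.svd(np.reshape(psi, (dim_A, dim_B)), compute_uv=False)
    return int(np.sum(svals > tol))

c = 0.37                                  # arbitrary stand-in for 1/sqrt(d^2 - 1) in (38)
ghz = np.zeros(8); ghz[0], ghz[7] = 1.0, c
ghz /= np.linalg.norm(ghz)

# Measure the first qubit in the computational basis; the post-measurement
# state of the remaining two qubits is separable for either outcome.
for outcome in (0, 1):
    rest = ghz.reshape(2, 4)[outcome]
    print(outcome, schmidt_rank(rest / np.linalg.norm(rest), 2, 2))  # 0 1  and  1 1
```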
## 5 Applications

Connectome states and, more generally, the topological picture of quantum entanglement discussed in this paper are particularly suitable for discussing general properties and applications of entanglement. In this section we will discuss a few examples.

### Measures of entanglement

The most common measure of entanglement is the von Neumann entropy. Let us briefly review a general result [11] for the entropy of an entangled pair, which illustrates the convenience of the topological approach (to the author's knowledge this idea was originally explored in [26]; for an interesting realization on link complement states see [8]). Previously we have introduced a natural TQFT presentation for separable, \[|\Psi_{1}\rangle\ = \tag{41}\] and entangled, \[|\Psi_{2}\rangle\ = \tag{42}\] states. The von Neumann entropies of these two states can be conveniently computed using the replica trick. First, one defines the reduced density matrices, say for subsystem \(A\), by duplicating the diagrams representing the states and gluing the two copies along the boundary \(\Sigma_{B}\). The result can be cast as the following diagrams, \[\rho_{1}(A)\ =\ \left[\ \mbox{(diagram)}\ \right]^{-1}\mbox{(diagram)}\,,\quad\mbox{or}\quad\rho_{2}(A)\ =\ \left[\ \mbox{(diagram)}\ \right]^{-1}\mbox{(diagram)}\,. \tag{43}\] Here the normalization factors in the square brackets have been added to ensure that the matrices have unit trace. In the replica trick one needs to compute \(\operatorname{tr}\rho^{n}(A)\), analytically continue the result in \(n\) and then compute \[S_{\mathrm{E}}(A)\ =\ -\lim_{n\to 1}\frac{d}{dn}\operatorname{tr}\rho^{n}(A)\,. \tag{44}\] Stacking replicas of the diagrams in (43) on top of themselves produces equivalent diagrams, so, unless the interior of the spaces has some features, the \(n\) dependence can only occur in the normalization factors. In fact, in the first case \(\operatorname{tr}\rho_{1}^{n}(A)=1\), and features of the interior are unimportant, so that the entropy vanishes, as expected for a separable state. In the second case one arrives at a formal result (45) To get an actual number one needs to specify some further information about both the boundary \(\Sigma_{A}\) and the topological features of the bulk of the above diagram. In the naive case of \(\Sigma_{A}\simeq S^{2}\) and no features in the bulk, the above diagram equals unity and there is no entanglement between \(A\) and \(B\). This result was anticipated in the previous sections, since a trivial \(S^{2}\) corresponds to a trivial Hilbert space. If \(\Sigma_{A}\) is a torus \(T^{2}\), or a higher genus Riemann surface, the Hilbert spaces are non-trivial and their dimensions non-trivially depend on \(k\) [8, 15, 26]. In the absence of internal features, the diagram in (45) counts the dimension of the Hilbert space, \(\dim\mathcal{H}_{\Sigma_{A}}\). Spheres with punctures also correspond to non-trivial Hilbert spaces. For \(2n\) punctures, generalizations of states (15) with no extra features in the bulk have entropy \[S_{\mathrm{E}}(A)\ =\ \log\dim\mathcal{H}_{\Sigma_{A}}\ \xrightarrow[k>n-1]{}\log C_{n}\enspace. \tag{46}\] These are maximally entangled states with reduced density matrices of maximal rank.
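As a purely numerical cross-check of (44)-(46), one can compute the entanglement entropy of a bipartite state directly from its Schmidt coefficients and verify that a maximally entangled state of two systems of dimension \(C_{n}\) has entropy \(\log C_{n}\). The snippet below is only such an illustration; it does not implement the diagrammatic replica computation itself:

```python
from math import comb, log
import numpy as np

def entanglement_entropy(psi, dim_A, dim_B):
    """Von Neumann entropy of subsystem A for a bipartite pure state psi."""
    svals = np.linalg.svd(np.reshape(psi, (dim_A, dim_B)), compute_uv=False)
    p = svals**2
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

n = 3
C_n = comb(2 * n, n) // (n + 1)              # dimension of the 2n-puncture Hilbert space, eq. (10)
psi = np.eye(C_n).flatten() / np.sqrt(C_n)   # maximally entangled state of two C_n-dimensional systems
print(entanglement_entropy(psi, C_n, C_n), log(C_n))   # both equal log 5, cf. eq. (46)
```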
Similar results apply to arbitrary planar connectome states with two 2-sphere boundaries, like (14) and (29), since they have the property that their reduced density matrices (and their powers) are proportional to the states themselves, and the only problem is to correctly determine the normalization factor. For such states the entropy will be determined by the number of lines that connect \(\Sigma_{A}\) to \(\Sigma_{B}\). This property can be compared with the holographic formula for the entanglement entropy [27], which states that the entropy is computed by the area of the minimum area surface in the bulk separating \(\Sigma_{A}\) and \(\Sigma_{B}\). Indeed, there is a close connection between the formulations of AdS/CFT and TQFT, and the latter can be seen as a fundamental version of the former, where the bulk space only has topological rather than geometric structure. The notion of the area is then replaced by the discrete counting of Wilson lines (flux units) piercing the space, with approximately \(\log 2\) entropy per unit of flux. If the states have additional bulk features, like the tangling in (31), the entropy counting is more involved. Equation (45) is not valid in general, but the steps of the replica procedure are well defined and allow computing the entropy at least in principle. We still expect the area law to control the entropy in this case, in particular by providing an upper bound. The topological approach gives a very convenient way of understanding the entanglement entropy. The property of the entropy being defined by the part of the space (section) that has the minimum number of piercing Wilson lines suggests a way to construct generalizations of this measure to the case of multiple parties. For example, for an indicator of genuine tripartite entanglement between parties \(A\), \(B\) and \(C\) one can consider the following. Let us schematically denote a tripartite state as (47) It does not necessarily stand for genuinely tripartite entanglement and can be separable.
Consider the following object acting in \({\cal H}_{A}\times{\cal H}_{A}\), \[\hat{L}\ =\ \mbox{(diagram not recoverable from the source)}\] [The diagrammatic construction that follows in the source, leading to the tripartite entanglement measure (50) and the Hermitian ladder operator (55), is not recoverable here.] In figure 4 we show the correlation of the ladder \(\tau_{3}\) (55) with the measure of tripartite entanglement known as the 3-tangle [28]. The plot shows the following properties: \(\tau_{3}\) is bounded from above by \(\sim 1.44\); the W state almost maximizes \(\tau_{3}\), while the GHZ state, which maximizes the 3-tangle, has only half of the maximum value of \(\tau_{3}\); states with low \(\tau_{3}\) tend to have a low 3-tangle; and although the 3-tangle appears bounded as \(\tau_{3}\) approaches zero, its value on some states may still be considerable. The discussed measure of tripartite entanglement can be generalized to a higher number of parties. Moreover, it is straightforward to extend the topological interpretation to the Renyi entropies and other related measures of entanglement.

### Basic properties of entanglement

In quantum computation entanglement is viewed as a resource shared between parts of a quantum system. The topological picture provides an intuitive interpretation of this nature of entanglement. The resource consists of the strings connecting points distributed between the parts. One obvious property of entanglement in this picture is monogamy.
For example, if a pair of qudits (with the same \(d\)) is maximally entangled, they must share all the strings with each other and cannot be entangled with any other qudit. Let us compare two situations described by the following diagrams, (57) In both cases \(A\) and \(B\) share the maximal possible number of connections, but in the second case part \(C\) is topologically entangled with the pair \(AB\). As a consequence, in the second case \(C\) has some amount of quantum entanglement with \(A\) and \(B\). But in this case the entanglement between \(A\) and \(B\) is not maximal. One can show, using skein relations (8), for example, that the second state is a linear combination of a state with maximally entangled \(AB\) and other states with less entanglement in the same pair. As another example of the properties of quantum entanglement in the topological realization we will discuss the subadditivity of the von Neumann entropy. The regular subadditivity is the inequality for the entropy of two subsystems \[S(\rho_{AB})\ \leq\ S(\rho_{A})+S(\rho_{B})\,, \tag{58}\] where \(\rho_{\cal S}\) are reduced density matrices of the subsystems \({\cal S}=A,B,AB\), respectively. To compare this with the topological picture, let us use the fact that the dimension of the Hilbert space of a sphere with \(n\) punctures scales approximately as \(4^{n/2}\) if one assumes \(k>n\), that is, the von Neumann entropy of maximally entangled pairs is approximately linear in the number of lines that connect them. In other words, the number of lines connecting two systems is a measure of the entropy of entanglement shared by them. Let us denote as \(C\equiv\overline{AB}\) the complement of \(AB\) (\(C=\emptyset\) if \(AB\) describes a pure state, otherwise the state is mixed). If \(N_{A}\) and \(N_{B}\) are the numbers of lines that emanate from \(A\) and \(B\), respectively, \(\ell_{AB}\) is the number of lines connecting \(A\) and \(B\), and \(N_{\overline{AB}}\) is the number of lines connecting both \(A\) and \(B\) to \(C\), then the following inequality holds, \[N_{A}+N_{B}\ =\ N_{\overline{AB}}+2\ell_{AB}\ \geq\ N_{\overline{AB}}\,, \tag{59}\] which is the "connectome" version of (58). To be precise, we do not include in \(N_{A}\) and \(N_{B}\) lines that have both endpoints belonging to the same subsystem. For planar connectome states it is straightforward to promote this relation to a statement about the entropies. As we discussed in the previous section, for such states the entanglement entropy of any subsystem \({\cal S}\) is given by the logarithm of the dimension of the Hilbert space \({\cal H}_{\rm min}({\cal S})\) of a surface that separates this subsystem from its complement and contains the minimum number of punctures. In our case such a surface is simply a 2-sphere in \({\mathbb{R}}^{3}\) that encircles \({\cal S}\). Inequality (59) then implies a statement about the dimensions, \[\dim{\cal H}_{\rm min}(A)\cdot\dim{\cal H}_{\rm min}(B)\ \geq\ \dim{\cal H}_{\rm min}(AB)\,, \tag{60}\] where \({\cal H}_{\rm min}\) is defined with respect to the mentioned "minimal" surface.

Figure 4: Correlation between the tripartite entanglement measure (50), defined with respect to the Hermitian ladder (55), and the 3-tangle [28] calculated for \(10^{4}\) three-qubit states with random real coefficients. The dots show the locations of the biseparable states (orange), the GHZ state (cyan) and the W state (blue).
In other words, \({\cal H}_{\rm min}(AB)\) in general contains fewer degrees of freedom than \({\cal H}_{AB}\), which is defined as the product \({\cal H}_{A}\otimes{\cal H}_{B}\). Then, for connectome states, inequality (58) follows from the above statement about the dimensions. Of course, the inequality must hold for arbitrary TQFT states, but we will not give a topological proof of this fact. A similar discussion applies to the case of the strong subadditivity property, which is defined for three subsystems and reads \[S(\rho_{ABC})+S(\rho_{B})\leq S(\rho_{AB})+S(\rho_{BC})\,. \tag{61}\] The connectome version of this inequality reads \[N_{\overline{ABC}}\ =\ N_{\overline{AB}}+N_{\overline{BC}}-2\ell_{AC}-N_{B}\leq N_{\overline{AB}}+N_{\overline{BC}}-N_{B}\,, \tag{62}\] where \(N_{\overline{ABC}}\) is the total number of lines that connect \(A\), \(B\) or \(C\) with the complement of their union, \(N_{\overline{AB}}\) and \(N_{\overline{BC}}\) count the number of lines connecting the respective unions with their complements, \(N_{B}\) is the number of lines emanating from \(B\), and \(\ell_{AC}\) is the number of lines connecting \(A\) and \(C\). Again, for planar connectome states (62) implies (61) through the statement about the dimensions of the Hilbert spaces. It is straightforward to generalize inequalities like (59) and (62) to a larger number of parties. For example, for a system with parts \(A\), \(B\), \(C\) and \(D\) one possibility would be \[N_{\overline{ABCD}}\ \leq\ N_{\overline{ABC}}+N_{\overline{BCD}}-N_{B}-N_{C} \tag{63}\] The corresponding relation for the entropies must be satisfied on the connectome states. We note again the similarity of the connectome states with quantum states in holographic models, which comes through the connection of entanglement entropy with minimal surfaces. The inequalities used above are based on a discrete version of the area counting, which also appears in the tensor network models of holography [29]. In the context of holographic theories, the inequalities for entanglement entropy and other measures were extensively studied, starting from [30]. In [31] a comprehensive study of the inequalities was performed, which is very reminiscent of the topological approach. Among other results, it was shown in that study that holographic states satisfy a set of other commonly known inequalities, such as the monogamy of the mutual information, the Zhang-Yeung inequality and the Ingleton inequality. Below we list these inequalities and their connectome versions. The monogamy of mutual information [32] can be cast in the following way, \[S(AB)+S(BC)+S(AC)\ \geq\ S(ABC)+S(A)+S(B)+S(C)\,. \tag{64}\] Constructing the connectome analog one finds that the connectomes actually saturate this inequality, \[N_{\overline{ABC}}\ =\ N_{\overline{AB}}+N_{\overline{BC}}+N_{\overline{AC}}-N_{A}-N_{B}-N_{C}\,. \tag{65}\] A simple way to see this is to write (64) in terms of the mutual information \(I(A:B)=S(A)+S(B)-S(AB)\), \[I(A:BC)\ \geq\ I(A:B)+I(A:C)\,. \tag{66}\] In the connectome version, the mutual information \(I(A:B)\sim 2\ell_{AB}\) is just twice the number of lines connecting \(A\) and \(B\). Then the number of lines connecting \(A\) with the union of \(B\) and \(C\) is simply \(\ell_{AB}+\ell_{AC}\). The Zhang-Yeung inequality [33] is \[2I(C:D)\ \leq\ I(A:B)+I(A:CD)+3I(C:D|A)+I(C:D|B)\,, \tag{67}\] where \(I(A:B|C)=S(AC)+S(BC)-S(ABC)-S(C)\).
The latter quantity also characterizes the correlations between \(A\) and \(B\), but in the presence of an additional subsystem \(C\). Its connectome version is just equivalent to the ordinary mutual information, \(I(A:B|C)=I(A:B)\). Hence the above inequality is satisfied trivially by the connectome states. The Ingleton inequality [34] reads \[I(A:B|C)+I(A:B|D)+I(C:D)\ \geq I(A:B)\,. \tag{68}\] It is also trivially satisfied by the connectomes due to the positivity of mutual information and the above interpretation, \(I(A:B)\sim 2\ell_{AB}\) and \(I(A:B|C)\sim 2\ell_{AB}\). In fact, connectome states must satisfy a stronger version of (68), \[I(A:B|C)+I(A:B|D)+2I(C:D)\ \geq 2I(A:B)\,. \tag{69}\] Finally, we can comment on other inequalities, derived in [31] for the first time. It turns out that the "cyclic entropy inequalities" for \(n\geq 2k+l\) subsystems, \[\sum_{i=1}^{n}S(A_{i}\cdots A_{i+l-1}|A_{i+l}\cdots A_{i+k+l-1})\ \geq\ S(A_{1}\cdots A_{n})\,, \tag{70}\] are satisfied by the connectome states. Here \(S(A|B)=S(AB)-S(B)\) is the conditional entropy, and the indices are defined modulo \(n\). For \(n=2\), \(l=2\) and \(k=0\) this is just the regular subadditivity, and for \(n=3\), \(k=l=1\) this is the monogamy of mutual information. As stated in [31], the most interesting case is \(l=1\) and \(n=2k+1\), because all other cases follow from this one and strong subadditivity. It is easy to check that the connectome states saturate the inequality, that is \[\sum_{i=1}^{2k+1}S(A_{i}|A_{i+1}\cdots A_{i+k})\ =\ S(A_{1}\cdots A_{2k+1})\,. \tag{71}\] A simple argument to demonstrate this is to note that a measure of the conditional entropy \(S(A|B)\) is the difference between the number of lines connecting \(A\) to the complement of \(AB\) and the number of lines connecting \(A\) to \(B\). In the sum over all the subsystems \(A_{i}\), each of them contributes \(\ell_{A_{i}A_{j}}\) twice: first when \(A_{j}\) is in a subsystem with \(A_{i}\), and second when it is in a complement. Hence, the only contribution to the left hand side is the number of lines connecting \(A_{1}\cdots A_{n}\) to its complement, which is precisely the right hand side. Overall, the comparison with the study of holographic states [31] shows that connectomes are a subclass of holographic states, which likely correspond to simply-connected topologies.

### Basic quantum algorithms

#### 5.3.1 Dense coding

Let us consider a few examples of quantum algorithms as seen by the topological approach. We will work out two examples of communication algorithms based on pre-sharing entanglement between the parties. The first example is superdense (or simply dense) coding [36]. In the simplest version of dense coding Alice and Bob share a pair of entangled qubits. Alice can use her qubit to code a pair of classical bits and send the result to Bob via a quantum communication channel. By performing operations and measurements on his and Alice's qubits, Bob can recover the classical bits of Alice without classically communicating with her. This algorithm is an example of a quantum cryptography protocol, in which the security of the information is protected by the distribution of entanglement between the parties. We will explain the dense coding algorithm directly through its topological version. The necessary ingredients include a maximally entangled pair and a measurement basis. From section 3.1 we know that a maximally entangled pair can be represented by diagrams like (15).
For the measurement basis it is more convenient to choose the non-orthonormal basis (4). To code the classical bits Alice can use the braiding gates. The dense coding protocol is shown in figure 5. On the left of the figure a table shows the gates that Alice needs to apply to code the classical bit pairs 00, 01, 10 and 11, so that Bob would measure the states \(|00\rangle\), \(|01\rangle\), \(|10\rangle\) and \(|11\rangle\) respectively, here coded by basis (4), with unit probability. These gates must be substituted for the gray cylinder in the circuit on the right half of the figure. The essence of the protocol is in the existence of two topologically separated subcircuits (blue and black in the figure). At different stages these subcircuits are shared between the parties (qubits) or separated. The fact that a planar connectome state is used simplifies the protocol: one should not worry about additional tangling of the subcircuits.

Figure 5: The topological version of the dense coding protocol. Charlie (pink background) prepares a pair of entangled qubits and passes them to Alice (light blue) and Bob (lime). Alice codes a pair of classical bits by applying one of the four operations shown on the left in the place indicated by the gray cylinder. After that Bob receives Alice's qubit. He applies one local transformation on his original qubit and a non-local disentangling transformation on the pair of qubits [35], separating black and blue lines. In the final state the two unentangled qubits encode the pair of Alice's classical bits.

#### 5.3.2 Quantum teleportation Quantum teleportation is another basic quantum protocol in which Alice has an unknown quantum state, which she would like to pass to Bob [37]. As in the case of dense coding, Alice and Bob pre-share a pair of entangled qubits. In order to transfer the unknown qubit, Alice first entangles it with her half of the shared pair and then measures both qubits. The result of the measurement is communicated classically to Bob, who can recover the unknown qubit on his half of the shared pair after applying transformations defined by the received classical information. The topological version of the teleportation protocol is shown in figure 6. Again we use (4) as the measurement basis. In her manipulations Alice can use the same two-qubit entangling gate as Bob used in the dense coding protocol, cf. figure 5. Figure 6 shows which transformations Bob needs to apply on his qubit for each one of the four possible outputs of Alice's measurement. In fact, they are the same transformations as the ones Alice would use in the dense coding, up to the one-qubit transformation used by Bob in the same protocol.

Figure 6: The topological version of the quantum teleportation protocol [19]. Alice (lime background) possesses a qubit in an unspecified state (black cup). She also receives a qubit from an entangled pair (pink). Brown and olive caps denote projections of the results of Alice's operations on a two-qubit basis in the table on the right. To retrieve Alice's black qubit, Bob (blue) needs to apply an operation denoted by the gray cylinder, according to the outcome of Alice's measurement, as instructed by the table.
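The logic of both protocols can also be reproduced with ordinary state-vector algebra. The short Python sketch below is only an illustration: it uses the standard Pauli operators, a computational-basis Bell pair and a CNOT-Hadamard change of basis in place of the braiding gates and the non-orthonormal basis (4), so it mirrors the flow of classical and quantum information but not the topological realization described above.

```python
import numpy as np

# Standard single-qubit operators; they stand in for the braiding gates of the
# topological construction, so this only reproduces the information flow.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)

def dense_coding(b1, b2):
    """Alice encodes two classical bits on her half of a shared Bell pair."""
    psi = np.kron(np.linalg.matrix_power(Z, b1) @ np.linalg.matrix_power(X, b2), I2) @ bell
    # Bob rotates the Bell basis back to the computational basis and measures.
    psi = np.kron(H, I2) @ CNOT @ psi
    return divmod(int(np.argmax(np.abs(psi) ** 2)), 2)          # recovered (b1, b2)

def teleport(alpha, beta, m1, m2):
    """Teleport alpha|0> + beta|1> through a Bell pair, for Alice's outcome (m1, m2)."""
    psi = np.kron(np.array([alpha, beta], dtype=complex), bell)  # qubits: (unknown, Alice, Bob)
    # Alice entangles the unknown qubit with her half and changes basis.
    psi = np.kron(np.kron(H, I2), I2) @ np.kron(CNOT, I2) @ psi
    bob = psi.reshape(2, 2, 2)[m1, m2, :]                        # collapse on Alice's outcome
    bob = bob / np.linalg.norm(bob)
    # Bob's correction Z^m1 X^m2 recovers the unknown state.
    return np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2) @ bob

assert all(dense_coding(b1, b2) == (b1, b2) for b1 in (0, 1) for b2 in (0, 1))
for m1 in (0, 1):
    for m2 in (0, 1):
        assert np.allclose(teleport(0.6, 0.8, m1, m2), [0.6, 0.8])
```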
### Model of unitary evaporation The quantum teleportation algorithm, especially in its topological version, highlights some properties of entanglement of interacting particles. Interactions modify the entanglement, which results in a transfer of some particle properties, which we generally refer to as information (about states or particles), from one particle state to a distant particle state. The information, however, is not accessible unless there is a classical channel that provides details about the interaction. Otherwise the information is encrypted. This mechanism of the transfer of properties is a possible way to explain how the information contained in a causally disconnected region, such as the interior of a black hole, can be retrieved in the presence of entanglement with that region [19]. In the case of a black hole, Hawking pairs are the source of the shared entanglement between the exterior and interior [18]. Hawking pairs are created by the gravitational field of the black hole, with one particle of the entangled pair falling inside the horizon and the other escaping the black hole and being collected by a distant observer [38]. Trajectories of such pairs are shown as solid straight lines in the causal diagram in the left panel of figure 7. Particles inside the black hole must interact. Creation of the Hawking pairs themselves is one of the results of these interactions. The other type of interaction expected inside the black hole is the one that guarantees maximal scrambling of the information in the interior [39]. These kinds of interactions can engage the quantum teleportation protocol, so that the information of the interacting quanta is transferred to a state of their entangled cousins outside the black hole. The latter situation is illustrated by the blue lines in figure 7 (left). The essential features of such processes can be illustrated by the following simple topological model, considered in [19]. Let us assume that system \(A\) is causally disconnected from system \(B\). We assume that \(A\) and \(B\) are separated by a horizon-like interface that on-shell particles can only cross in one direction, from \(B\) to \(A\). In the meantime, virtual particles can cross the interface in either direction. Let us use solid lines for the trajectories of the on-shell particles. Virtual particles, which are intermediate states in the interactions of the on-shell particles, will have no lines, but will rather appear as voids.

Figure 7: (Left) A causal diagram of an evaporating black hole. Solid straight lines show the trajectories of Hawking pairs. Interactions of the Hawking quanta in the interior result in the entanglement of the Hawking radiation with the internal modes of the black hole (dotted lines), but they can also result in the teleportation of the interior states to the exterior (blue lines). (Right) The evaporation of the black hole in the topological model. The blue lines show the initial scrambled state of the black hole. The dashed line shows the location of the horizon, which separates the interior (shaded) and exterior, and contracts as the evaporation progresses. The shape of this line is adapted to reflect the fact that the on-shell particles cross it in one direction, while the off-shell ones cross it in the opposite direction. Solid black lines show evolution histories of the black hole degrees of freedom, which include pair creation-annihilation processes prescribed by the S matrix (72). The void in the interior corresponds to the formation of the island, as explained in the text. Numbers on the sides count the number of degrees of freedom contained in the interior (\(N_{A}\)) and entangled with the Hawking radiation (\(N_{B}\)) at discrete steps of evolution.

A toy two-particle S matrix of a relevant interaction can be chosen to be the R matrix, (72) where the first diagram on the right is a trivial scattering and the second is the particle production via annihilation with a virtual particle in the intermediate state. This is of course the skein relation (8). Let us assume that particles in the interior interact via (72) close to the horizon in such a way that the virtual particle crosses the interface and a pair is created in the exterior. This is the evaporation of the black hole [38, 40]. A series of such evaporation events is shown in the right diagram of figure 7, where the horizon is shown as a dashed line. One of the created on-shell particles returns to the interior, crossing the interface in the allowed direction. As a result, entanglement is formed between the exterior and the interior in the sense discussed in this paper. By the same kind of interaction, the in-falling particle gets "scrambled" in the interior at the initial steps of the evaporation. In the topological setup the scrambling is modeled by entangling modes in the deep interior and the modes close to the horizon. In this way the out-going particle gets entangled with the particles deep in the interior. While this process continues, and more Hawking pairs are created, the entanglement of the escaped particles with the interior grows. Horizontal lines show discrete time steps in the diagram of figure 7 (right). At approximately half of the evaporation one has the external particles connected with all the remaining particles in the interior. At this moment the number of internal degrees of freedom of the black hole is equal to the number of entangled Hawking quanta (the black hole entropy and the entanglement entropy are the same). After that moment, the in-falling quanta cannot entangle with the particles in the interior without destroying their entanglement with the exterior, as a consequence of monogamy. From that moment the number of pairs connecting the interior and the exterior begins to fall, and no such pairs remain when the black hole evaporates completely. In this model the evaporation happens unitarily and no information paradox appears [18, 41]. In particular, one can measure the entanglement by counting the lines connecting particles in the interior and exterior, which, as we argued before, gives a measure of entanglement entropy. The entropy of the black hole is defined by the number of lines in the interior at a given moment of evaporation. The entropy of the Hawking radiation is counted by the number of lines connecting the exterior and interior. Clearly the latter cannot be bigger than the former. In terms of the example of figure 7 (right), this fact is reflected in the inequality \(N_{A}\geq N_{B}\). While \(N_{A}\) decreases with time, \(N_{B}\) may increase initially, but is eventually bounded by \(N_{A}\), providing a unitary example of the Page curve [42]. The notion of the "island" [43], which appears in the recent proposals of the solution of the information paradox [44, 45, 46], can be illustrated by the topological model. In figure 7 (right) the moment of the first engagement of quantum teleportation occurs after the third Hawking pair is created. A pair of entangled particles in the center is teleported to the exterior and the corresponding void is indicated by the unshaded part of the interior. The complement of the unshaded part at the given moment of time is the island.
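The \(N_{A}\), \(N_{B}\) bookkeeping can be reproduced with a few lines of Python. The sketch below is only a crude counting model and not the diagrammatic evolution of figure 7: it assumes an arbitrarily chosen initial number of internal lines, removes one internal degree of freedom per emission, and caps \(N_{B}\) by \(N_{A}\) to respect monogamy; the resulting sequence rises, peaks near half-evaporation and returns to zero, as in the Page curve.

```python
N0 = 12                       # initial number of internal lines (arbitrary illustrative choice)
N_A, N_B = N0, 0
history = [(0, N_A, N_B)]
for step in range(1, N0 + 1):
    N_A -= 1                  # one internal degree of freedom is used up per emitted quantum
    N_B = min(N_B + 1, N_A)   # the radiation gains one line, but monogamy enforces N_B <= N_A
    history.append((step, N_A, N_B))

for step, a, b in history:
    print(f"t={step:2d}  N_A={a:2d}  N_B={b:2d}")
# N_B rises, peaks at half-evaporation (the Page time) and falls back to zero together with N_A.
```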
## 6 Conclusions In this paper we have shown how topology can encode correlations in quantum systems. One of the messages that was conveyed is that topology can provide an intuitive understanding of quantum entanglement and its properties and can assist in developing new theoretical tools and applications. Let us summarize some possible further questions that can be addressed in the topological approach. In the study of the classification of entanglement we have found that planar representatives of connectome classes are only sufficient for the description of the SLOCC classes of bipartite entanglement. For multipartite entanglement the class of topologies must be extended. It remains to be verified, whether the connectome classes contain all the SLOCC entanglement classes or they can only approximate the latter. A basic version of this question is whether there is a topological representation of W entanglement of three qubits. We have argued that planar connectome states are those realizing maximal entanglement in each class of the bipartite entanglement. We have also demonstrated how tangling reduces entanglement. It would be interesting to perform a more systematic study of the effects of tangling or non-trivial 3D topology on entanglement. Such a study can shed additional light on the problem of classification. Topological realization gives an intuitive interpretation not only to quantum states, but more generally to correlations and measures of quantum entanglement. We used this advantage to introduce indicators of multipartite entanglement constructing analogs of reduced density matrices for the multipartite case. Although a more detailed study is necessary to show that such indicators are useful measures of entanglement, the topological approach seems to be a very simple tool for engineering specifically tailored measures. We have seen that certain properties of entanglement are especially transparent in the topological interpretation. Planar connectomes are particularly simple quantum states, which illustrate the properties of entanglement and its measures. We have noticed that planar connectome states have similarities with the holographic quantum states, which includes the "minimal area" formula for the entanglement entropy and different inequalities that the entanglement entropy satisfies on both classes of states. Yet, the connectome states seem to be a more restricted class, saturating some of the inequalities of the holographic states. Therefore, it would be interesting to understand how the connectomes can be generalized to match the properties of the holographic states. Presumably this can be done by allowing non-trivial topologies, but one should also remember that in most studies holographic states are described by classical geometries, so there is a tantalizing perspective that generic topologies can encode quantum geometries, that is states of quantum gravity. In this respect the present work touches upon some active areas of research, including tensor network models of quantum gravity and low dimensional gravities, which are themselves topological theories. A somewhat related problem is the generalization of the entropy formula for planar connectome states. We have argued that on this class counting Wilson lines gives a faithful measure of entanglement entropy. This formula should somehow be corrected on generic states - a relevant problem for the holographic states as well. 
A model of information retrieval from a causally disconnected region viewed here as a toy model of black hole evaporation was also motivated by recent discussion and progress in quantum gravity. The topological model makes the information transfer particularly manifest. The main question is whether such a mechanism can be consistently embedded in a theory with dynamical gravity. We can note that the specific interactions considered in the model were only chosen to make explicit the action of the teleportation protocol, but the protocol itself should work in a similar way for arbitrary particle interactions. Finally, we have shown that basic quantum algorithms have a very intuitive realization in the topological setup. The dense coding algorithm is based on the realization of shared entanglement as a physical resource (Wilson lines) that can be manipulated and redistributed between parties. The quantum teleportation is based on the topological equivalence: topological sets can be deformed violating locality. It would of course be interesting to know if such topological tricks can be played to construct yet unknown quantum algorithms. Acknowledgments.This work was supported by RSF grant No. 18-71-10073.
2304.13580
Introduction to inverse semigroups
This is an account of the theory of inverse semigroups, assuming only that the reader knows the basics of semigroup theory.
Mark V. Lawson
2023-04-26T14:14:45Z
http://arxiv.org/abs/2304.13580v2
# Introduction to Inverse Semigroups ###### Abstract. This is an account of the theory of inverse semigroups, assuming only that the reader knows the basics of semigroup theory. ## 1. Introduction: a little history Inverse semigroups1 were introduced in the 1950's by Wagner [115] in the Soviet Union2, by Preston [98] in the UK3, and by Ehresmann [20] in France4 as algebraic analogues of pseudogroups of transformations. Inverse semigroups can be viewed as generalizations of groups. Whereas group theory is based on the notion of a symmetry -- that is, a structure-preserving bijection -- inverse semigroup theory is based on the notion of a partial symmetry -- that is, a structure-preserving partial bijection. The passage from bijections to partial bijections introduces a host of new algebraic phenomena. Footnote 1: The early development of inverse semigroup theory ocurred at a time of acute tensions between East and West. See the book by Christopher Hollings [35] for an in-depth analysis. Footnote 2: The book [36] contains more information on Wagner and his approach to mathematics. See also Wagner’s obituary by Boris Schein [105]. Footnote 3: The connection between Preston’s introduction of what he termed ‘inverse semigroups’ and the theory of pseudogroups of transformations is the result of a conversation I had with Preston in Australia in the late 1980’s. Footnote 4: It is hard to single out a single paper of Ehresmann as the starting point for his approach to inverse semigroups — which was via a class of ordered groupoids — since he stayed faithful to his interest in pseudogroups as the way in which local structures were defined. In any event, Ehresmann’s papers in this area reflect what might be called ‘an evolving corpus of work’. See [21]. In writing this chapter, I have assumed the reader is familiar with the basics of semigroup theory such as could be gleaned from the first few chapters of Howie [37].5 Here are a few things you should know and which I take for granted. The multiplication in a semigroup will usually be denoted by \(\cdot\) or by concatenation. If \(S\) is any semigroup then \(S^{0}\) is the semigroup \(S\) with an adjoined zero, and \(S^{1}\) is the semigroup \(S\) with an adjoined identity. If \(A\) and \(B\) are any subsets of a semigroup \(S\) then \(AB\) is the set of all products where \(a\in A\) and \(b\in B\). The study of congruences in semigroups is unavoidable since there are not, in general, the analogues of normal subgroups as in groups or ideals as in rings. If \(\theta\colon S\to T\) is a homomorphism between semigroups then \(\ker(\theta)\) is the congruence that \(\theta\) induces on \(S\). On the other hand, every congruence \(\rho\) on a semigroup \(S\) has an associated _natural homomorphism_ from \(S\) to \(S/\rho\) which maps \(s\) to \(\rho(s)\). If both \(S\) and \(T\) are monoids then a _monoid homomorphism_ maps the identity of \(S\) to the identity of \(T\); if \(S\) and \(T\) are both semigroups with zero, then a homomorphism of such semigroups maps the zero to the zero. If the semigroups \(S\) and \(T\) are isomorphic then we write \(S\cong T\). A subset \(I\subseteq S\) is said to be a _right ideal_ if \(IS\subseteq I\) and a _left ideal_ if \(SI\subseteq I\). It is said to be an _ideal_ if \(SI\subseteq I\) and \(IS\subseteq I\). If \(a\in S\) then we write \(a\) instead of \(\{a\}\). 
The right ideal \(aS^{1}\) is called the _principal right ideal generated by \(a\)_, whereas the left ideal \(S^{1}a\) is called the _principal left ideal generated by \(a\)_. We call \(S^{1}aS^{1}\) the _principal ideal generated by \(a\)_. Our use of \(S^{1}\) is just a trick to ensure that \(a\) belongs to the ideal. Define \(a\,\mathcal{R}\,b\) if \(aS^{1}=bS^{1}\) and \(a\,\mathcal{L}\,b\) if \(S^{1}a=S^{1}b\). Recall that \(\mathcal{L}\circ\mathcal{R}=\mathcal{R}\circ\mathcal{L}\). Put \(\mathcal{D}=\mathcal{L}\circ\mathcal{R}\), another equivalence relation. Put \(\mathcal{H}=\mathcal{L}\cap\mathcal{R}\). Define \(a\,\mathcal{J}\,b\) if \(S^{1}aS^{1}=S^{1}bS^{1}\). These are the familiar _Green's relations_. As usual, if \(\mathcal{G}\) is one of Green's relations, then the \(\mathcal{G}\)-class that contains the element \(a\) is denoted by \(G_{a}\). Although ideals are useful in semigroup theory, the connection between ideals and congruences is weaker for semigroups than it is for rings. If \(\rho\) is a congruence on a semigroup with zero \(S\), then the set \(I=\rho(0)\) is an ideal of \(S\); however, examples show that the congruence is not determined by this ideal. Nevertheless, ideals can be used to construct some congruences on semigroups. Let \(I\) be an ideal in the semigroup \(S\). Define a relation \(\rho_{I}\) on \(S\) by: \[(s,t)\in\rho_{I}\Leftrightarrow\text{either }s,t\in I\text{ or }s=t.\] Then \(\rho_{I}\) is a congruence. The quotient semigroup \(S/\rho_{I}\) is isomorphic to the set \((S\setminus I)\cup\{0\}\) (we may assume that \(0\notin S\setminus I\)) equipped with the following product: if \(s,t\in S\setminus I\) then their product is \(st\) if \(st\in S\setminus I\), all other products are defined to be \(0\). Such quotients are called _Rees quotients_. There are currently two books entirely devoted to inverse semigroup theory: Petrich's [97] and mine [51]. Petrich's book is pretty comprehensive up to 1984 and is still a useful reference. Its only drawback is the poor index which makes finding particular topics a bit of a chore. My book is less ambitious; its goal is to motivate the study of inverse semigroups by concentrating on concrete examples and was completed in 1998. In writing this chapter, I have drawn mainly upon my own book [51] and some notes I wrote [63] for a workshop at the University of Ottawa in 2010, but I have taken the opportunity to radically rethink what I wrote there in the light of subsequent research. Any results below that are stated but not proved should be treated as exercises. ### Acknowledgements I would like to thank Victoria Gould, Bernard Rybolowicz and Aidan Sims for reading the first version of this chapter and giving me good notes. ## 2. Motivation: pseudogroups of transformations Good mathematics has to be based on good examples. For example, finite semigroup theory grew out of the theory of finite-state automata since a finite semigroup can be associated with every finite-state automaton. Similarly, inverse semigroup theory grew out of the theory of pseudogroups of transformations, which goes back to the works of Sophus Lie and Henri Cartan. If you have ever taken an introductory course in differential geometry, then you have been exposed to pseudogroups of transformations even if that phrase was never once uttered. So, there I shall begin. 
However, for those of you who have not taken such a course, I shall quickly move on to the collection of all partial bijections on a set, jettisoning any topology or differential structure, and this will take us to the core motivation for studying inverse semigroups. Let \(X\) be a nice space (for example, \(X\) should be Hausdorff but this will play no role in what follows). Let \(\mathbb{R}^{n}\) be the usual space of real \(n\)-vectors. We shall coordinatize \(X\) using \(\mathbb{R}^{n}\) but we shall only do so _locally_ not _globally_. Thus we assume that for each point \(x\in X\) there exists an open set \(x\in U\) and a homeomorphism \(\phi\colon U\to V\), called a _chart_, where \(V\) is an open subset of \(\mathbb{R}^{n}\). A chart gives one way of coordinatizing elements of \(U\) by means of elements of \(V\). However, it is quite possible that \(x\) lies in another chart. In other words, there could well exist another open subset of \(X\), let's say \(U^{\prime}\), which also contains \(x\) and for which there is a homeomorphism \(\psi\colon U^{\prime}\to V^{\prime}\) where \(V^{\prime}\) is also an open subset of \(\mathbb{R}^{n}\). This map gives another way of coordinatizing elements of \(U^{\prime}\) by means of elements of \(V^{\prime}\). However, for any element in \(y\in U\cap U^{\prime}\), there are now two possible co-ordinate representations: \(\phi(y)\) and \(\psi(y)\). How are they to be related? This depends on the nature of the map \(\psi\phi^{-1}\colon\phi(U\cap U^{\prime})\to\psi(U\cap U^{\prime})\), called the _change of coordinates map_, which connects them. This map is a homeomorphism from the open subset \(\phi(U\cap U^{\prime})\) of \(\mathbb{R}^{n}\) to the open subset \(\psi(U\cap U^{\prime})\) of \(\mathbb{R}^{n}\). If the map \(\psi\phi^{-1}\) is always assumed to be _smooth_ then we say that we have defined on \(X\) the structure of a _differential manifold_[106]. We now dispense with \(X\) and concentrate instead on the set \(\Gamma(\mathbb{R}^{n})\) of all smooth maps between the open subsets of \(\mathbb{R}^{n}\). This is just the set of all possible changes of co-ordinates and is our first example of a pseudogroup. Usually, it is the set \(X\) which is the centre of attention, but we have shifted focus to the set of allowable co-ordinate transformations; if we change the nature of those, then we change the nature of the structure we are defining on \(X\). More generally, let \(X\) be any topological space with topology \(\tau\). Denote the set of all homeomorphisms between open subsets of \(X\) by \(\mathcal{I}(X,\tau)\). If \(\tau\) is the usual topology on \(\mathbb{R}^{n}\) then \(\Gamma(\mathbb{R}^{n})\) is a subset of \(\mathcal{I}(\mathbb{R}^{n},\tau)\) closed under certain operations. The sets \(\mathcal{I}(X,\tau)\) are also examples of pseudogroups. The terminology 'pseudogroup' comes from the fact that, just like groups of transformations, the sets \(\mathcal{I}(X,\tau)\) are closed under the composition of partial functions -- don't forget the empty function! -- and are also closed under 'inverses' where we interpret inverses as inverses of partial functions. The set \(\Gamma(\mathbb{R}^{n})\) is likewise closed under composition of partial functions and said inverses. There is a further property that is needed to qualify as a pseudogroup. It is this. 
If \(\{\phi_{i}\colon i\in I\}\) is any subset of \(\mathcal{I}(X,\tau)\) such that \(\bigcup_{i\in I}\phi_{i}\) is a partial bijection then, in fact, \(\bigcup_{i\in I}\phi_{i}\in\mathcal{I}(X,\tau)\). This is a completeness property. We shall call semigroups of the form \(\mathcal{I}(X,\tau)\)_full pseudogroups of transformations_. For modern applications of pseudogroups, see [16, 80]. To make our lives easier, let's now take the non-empty set \(X\) with the discrete topology. We denote by \(\mathcal{I}(X)\) the set of all bijections between subsets of \(X\). If \(f\colon A\to B\), where \(A,B\subseteq X\), then we call \(A\) the _domain (of definition)_ of \(f\) and \(B\) the _range_ of \(f\). We write \(\operatorname{dom}(f)=A\) and \(\operatorname{ran}(f)=B\). The set \(\mathcal{I}(X)\) is closed under the composition of partial functions and so is a semigroup. It is, in fact, a monoid since the identity function defined on \(X\) belongs to \(\mathcal{I}(X)\). By analogy with the symmetric group, it is called the _symmetric inverse monoid_. What we mean by the term 'inverse monoid' will be explained in the next section. Clearly, the structure of the semigroup \(\mathcal{I}(X)\) depends only on the cardinality of \(X\). Thus, if \(X\) is a finite set with \(n\) elements, it makes sense to write \(\mathcal{I}_{n}\) instead of \(\mathcal{I}(X)\). There are a number of structures we could look at in the semigroups \(\mathcal{I}(X)\) but we begin with the following which turns out to be the right place to start. Let \(f\in\mathcal{I}(X)\). Consider the following two equations in \(\mathcal{I}(X)\): \[f=fgf\text{ and }g=gfg.\] These have the unique solution \(g=f^{-1}\). This result is the basis for the definition of an inverse semigroup in the next section. ## 3. Basic inverse semigroup theory A semigroup \(S\) is said to be _inverse_ if for each \(s\in S\) there exists a unique element, denoted by \(s^{-1}\), such that the following two equations are satisfied: \[s=ss^{-1}s\text{ and }s^{-1}=s^{-1}ss^{-1}.\] **Example 3.1**.: The semigroups \(\mathcal{I}(X)\) really are inverse monoids. If \(f\in\mathcal{I}(X)\) then \(f^{-1}\) is the unique solution of the equations \(f=fgf\) and \(g=gfg\) where \(g\) is the unknown. Observe that if \(s^{-1}\) is the inverse of \(s\), then \(s\) is the inverse of \(s^{-1}\). We have therefore proved the following. **Lemma 3.2**.: _Let \(S\) be an inverse semigroup and let \(s\in S\). Then \((s^{-1})^{-1}=s\)._ The elements \(s^{-1}s\) and \(ss^{-1}\) are idempotents, where an _idempotent_ in a semigroup is an element \(e\) such that \(e^{2}=e\). For example, \(s^{-1}s\) is an idempotent because \[(s^{-1}s)^{2}=(s^{-1}s)(s^{-1}s)=(s^{-1}ss^{-1})s=s^{-1}s.\] If \(e\) is an idempotent in an inverse semigroup \(S\), then \(e=ee=eee\). We have therefore proved the following. **Lemma 3.3**.: _Let \(S\) be an inverse semigroup and let \(e\in S\) be an idempotent. Then \(e^{-1}=e\)._ It follows that every idempotent in an inverse semigroup is of the form \(s^{-1}s\) for some elements \(s\) or, equivalently, of the form \(ss^{-1}\), for some element \(s\). Idempotents play an important role in inverse semigroup theory. For that reason we introduce some special notation. If \(S\) is an inverse semigroup we denote its set of idempotents by \(\mathsf{E}(S)\). Two special idempotents are the _identity_ element, if it exists, usually denoted by \(1\), and the _zero_ element, if it exists, usually denoted by \(0\). 
An inverse semigroup with identity is called an _inverse monoid_ and an inverse semigroup with zero is called an _inverse semigroup with zero_.6 It is important to be aware right at the beginning that even if \(S\) is an inverse monoid, it is not true in general that \(s^{-1}s=1=ss^{-1}\). Footnote 6: There is no one term in English for a ‘semigroup with zero’. **Example 3.4**.: The idempotents in \(\mathcal{I}(X)\) are precisely the identity functions defined on the subsets of \(X\). Thus if \(A\subseteq X\) then a typical idempotent is \(1_{A}\) which is the identity function on the set \(A\). The identity is \(1_{X}\) and the zero is \(1_{\varnothing}\). The difference between monoids and semigroups is sometimes viewed as trivial. It isn't. For example, a commutative unital \(C^{*}\)-algebra is constructed from a _compact_ space whereas a commutative non-unital \(C^{*}\)-algebra is constructed from a _locally compact_ space. If \(S\) is any monoid then we can look at its _group of units_ which we denote by \(\mathsf{U}(S)\). If \(e\) is any idempotent in a semigroup \(S\) then \(eSe\) is a monoid. I call these _local monoids_ though you will find the misleading term 'local submonoid' in the literature. The group of units of \(eSe\) is denoted by \(H_{e}\). **Example 3.5**.: The units of \(\mathcal{I}(X)\) are the bijective functions and these form a group \(\mathcal{S}(X)\), called the _symmetric group_ on \(X\). **Lemma 3.6**.: _Let \(S\) be an inverse semigroup and let \(e\) be an idempotent in \(S\). Then \(eSe\) is an inverse monoid._ Proof.: It is clear that \(eSe\) is a monoid. We prove that it is inverse. Let \(a\in eSe\). Then \(a=aa^{-1}a\) and \(a^{-1}=a^{-1}aa^{-1}\) in \(S\). You can check that \(ea^{-1}e\) is also an inverse of \(a\) since \(ea=a=ae\). So, by uniqueness, we have that \(a^{-1}=ea^{-1}e\). We have therefore proved that if \(a\in eSe\) then \(a^{-1}\in eSe\). It follows that \(eSe\) is an inverse monoid. The idempotents \(s^{-1}s\) and \(ss^{-1}\) are so important, that we have some special notation for them: \[\mathbf{d}(s)=s^{-1}s\text{ and }\mathbf{r}(s)=ss^{-1}.\] Observe that \(a\mathbf{d}(a)=a\) and \(\mathbf{r}(a)a=a\). If the inverse semigroup has a zero then \(a=0\) if and only if \(\mathbf{d}(a)=0\) (respectively, \(\mathbf{r}(a)=0\).) You can easily check that in an inverse semigroup \(a\,\mathcal{R}\,b\) if and only if \(\mathbf{r}(a)=\mathbf{r}(b)\) and \(a\,\mathcal{L}\,b\) if and only if \(\mathbf{d}(a)=\mathbf{d}(b)\). For this reason, the explicit use of Green's relations in inverse semigroups is not very common. An _inverse subsemigroup_ of an inverse semigroup is a subsemigroup that is also closed under inverses. If \(S\) is an inverse subsemigroup of \(T\) and \(\mathsf{E}(S)=\mathsf{E}(T)\) we say that \(S\) is a _wide_7 inverse subsemigroup of \(T\). Footnote 7: You will often see the term ‘full’ used to mean ‘wide’. I prefer the term ‘wide’ because, as we shall see, it fits in well with groupoid theory, whereas ‘full’ has categorical connotations which are not what we want. **Example 3.7**.: The inverse semigroup \(\Gamma(\mathbb{R}^{n})\) is, in fact, a wide inverse submonoid of \(\mathcal{I}(\mathbb{R}^{n},\tau)\) where \(\tau\) is the usual topology on \(\mathbb{R}^{n}\). It is now time for our first example. We can, in fact, express this as a lemma. **Lemma 3.8**.: __ 1. _A group is an inverse semigroup having a unique idempotent._ 2. _An inverse semigroup having a unique idempotent is a group._ Proof.: (1) Let \(G\) be a group. 
Then for any element \(g\) in \(G\) we have that \(g^{-1}g=1=gg^{-1}\). Multiply the first equation on the left by \(g\) to get \(gg^{-1}g=g\) and multiply the second equation on the left by \(g^{-1}\) to get \(g^{-1}gg^{-1}=g^{-1}\). On the other hand, suppose that \(gxg=g\) and \(xgx=x\). Multiply the first equation by \(g^{-1}\) on the left and by \(g^{-1}\) on the right. This gives us \(x=g^{-1}\) which also satisfies the second equation. It follows that every group is an inverse semigroup. Let \(g\) be an idempotent in \(G\). Then \(gg=g\). Multiply the left-hand side of this equation by \(g^{-1}\) to get \(g=1\), and \(1\) is clearly an idempotent. Thus the only idempotent in a group is the identity. (2) Let \(S\) be an inverse semigroup with exactly one idempotent. Call this idempotent \(e\). For any element \(a\in S\), we already know that \(a^{-1}a\) and \(aa^{-1}\) are idempotents. It follows that \(a^{-1}a=e=aa^{-1}\) for any element \(a\in S\). On the other hand, \(e\) is the identity. To see why, observe that \(ea=(aa^{-1})a=a\). Similarly, \(ae=a\). We have therefore proved that \(S\) is a group with identity \(e\). The most important fact about inverse semigroups, and one that is not at all obvious from the definition, is that if \(e\) and \(f\) are any idempotents in an inverse semigroup \(S\) then \(ef=fe\). We say that the _idempotents commute_. For the benefit of ring theorists, observe that we are not saying that the idempotents are central. Inverse semigroups in which the idempotents are central are called _Clifford semigroups_. **Proposition 3.9**.: _Idempotents commute in an inverse semigroup._ Proof.: This is an example of a proof that is elementary but not easy. Let \(e\) and \(f\) be idempotents in the inverse semigroup \(S\). Put \(x=(ef)^{-1}\) We prove first that \(fxe\) is an idempotent: \[(fxe)^{2}=(fxe)(fxe)=f(x(ef)x)e=fxe.\] Next, we claim that \(fxe\) is the inverse of \(ef\): \[ef(fxe)ef=ef^{2}xe^{2}f=efxef=ef\] and \[(fxe)ef(fxe)=fxe^{2}f^{2}xe=(fxe)^{2}=fxe,\] by what we proved above. It follows that \[(ef)^{-1}=fxe.\] We have therefore proved that \((ef)^{-1}\) is an idempotent. Now, we apply Lemma 3.2 and Lemma 3.3 to deduce that \(ef\) is an idempotent. We have therefore proved that in an inverse semigroup the product of any two idempotents is itself an idempotent. It follows that both \(ef\) and \(fe\) are idempotents. Next, we show that \(fe\) is the inverse of \(ef\): \[(ef)(fe)(ef)=ef^{2}e^{2}f=efef=ef\] and \[(fe)(ef)(fe)=fe^{2}f^{2}e=fefe=fe.\] We have therefore proved that \((ef)^{-1}=fe\). But \(ef\) is an idempotent and so, by Lemma 3.3, it is its own inverse. We have therefore proved that \(ef=fe\), as required. **Example 3.10**.: In the symmetric inverse monoid, the product of the idempotents \(1_{A}\) and \(1_{B}\) is the idempotent \(1_{A\cap B}\). Because the intersection operation on two sets is commutative, we see why the idempotents commute in this special case. Because we know that idempotents commute, we can now prove the following: **Lemma 3.11**.: _Let \(S\) be an inverse semigroup. Then \((st)^{-1}=t^{-1}s^{-1}\)._ Proof.: We prove that \(t^{-1}s^{-1}\) is the inverse of \(st\). We have that \[st(t^{-1}s^{-1})st=s(tt^{-1})(s^{-1}s)t=s(s^{-1}s)(tt^{-1})t=st\] where we have used the fact that idempotents in an inverse semigroup commute. Similarly, \[t^{-1}s^{-1}(st)t^{-1}s^{-1}=t^{-1}s^{-1}.\] By virtue of the fact that inverses are unique, we deduce that \((st)^{-1}=t^{-1}s^{-1}\). 
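The statements proved so far are easy to confirm by brute force in a small symmetric inverse monoid. In the Python sketch below (written for this exposition) a partial bijection is stored as a dictionary, the product \(fg\) is the composition "apply \(g\), then \(f\)", and the inverse swaps keys and values. The script enumerates the 34 elements of \(\mathcal{I}_{3}\) and checks that the two defining equations have \(g=f^{-1}\) as their unique solution, that the idempotents are exactly the identity maps \(1_{A}\) and commute with \(1_{A}1_{B}=1_{A\cap B}\) (Proposition 3.9 and Example 3.10), and that \((st)^{-1}=t^{-1}s^{-1}\) (Lemma 3.11).

```python
from itertools import combinations, permutations

X = (0, 1, 2)

def partial_bijections(X):
    """All bijections between subsets of X, encoded as dictionaries."""
    for k in range(len(X) + 1):
        for dom in combinations(X, k):
            for ran in combinations(X, k):
                for image in permutations(ran):
                    yield dict(zip(dom, image))

compose = lambda f, g: {x: f[g[x]] for x in g if g[x] in f}   # the product fg: apply g, then f
inverse = lambda f: {v: k for k, v in f.items()}

I3 = list(partial_bijections(X))
assert len(I3) == 34                                          # 1 + 9 + 18 + 6 elements

# The equations f = fgf and g = gfg determine g uniquely, and the solution is f^{-1}.
for f in I3:
    solutions = [g for g in I3
                 if compose(compose(f, g), f) == f and compose(compose(g, f), g) == g]
    assert solutions == [inverse(f)]

# Idempotents are the partial identities 1_A; they commute, and 1_A 1_B = 1_{A n B}.
idempotents = [e for e in I3 if compose(e, e) == e]
assert all(e[x] == x for e in idempotents for x in e)
for e in idempotents:
    for f in idempotents:
        ef = compose(e, f)
        assert ef == compose(f, e) and set(ef) == set(e) & set(f)

# Lemma 3.11: (st)^{-1} = t^{-1} s^{-1}.
assert all(inverse(compose(s, t)) == compose(inverse(t), inverse(s)) for s in I3 for t in I3)
```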
We also see that the analogue of conjugation holds. We shall use the term 'conjugation' below. **Lemma 3.12**.: _If \(e\) is an idempotent of an inverse semigroup and \(s\) is any element of that inverse semigroup then \(ses^{-1}\) is an idempotent._ Proof.: This follows by a direct calculation \[(ses^{-1})^{2}=ses^{-1}ses^{-1}=se^{2}(s^{-1}s)s^{-1}=ses^{-1},\] where we have again used the fact that idempotents commute. Let \(e\) and \(f\) be idempotents in an inverse semigroup. Then we may define a relation on them by \(e\leq f\) precisely when \(e=ef\). Because idempotents commute, it is not hard to show that this defines a partial order on the set of idempotents. We can now characterize Clifford semigroups. **Lemma 3.13**.: _Let \(S\) be an inverse semigroup. Then \(S\) is a Clifford semigroup if and only if \(\mathbf{d}(s)=\mathbf{r}(s)\) for every \(s\in S\)._ Proof.: We use the partial order we have defined on the idempotents of an inverse semigroup. Let \(S\) be a Clifford semigroup and let \(s\in S\). Since the idempotents are central, \(s=s(s^{-1}s)=(s^{-1}s)s\). Multiply the last equality on the right by \(s^{-1}\) to deduce that \(\mathbf{r}(s)\leq\mathbf{d}(s)\). It now follows by symmetry that \(\mathbf{d}(s)=\mathbf{r}(s)\). Suppose now that \(\mathbf{d}(s)=\mathbf{r}(s)\) for all \(s\in S\). Let \(e\) be any idempotent and \(s\) arbitrary. We shall prove that \(se=es\). By assumption, \(\mathbf{d}(es)=\mathbf{r}(es)\). That is, \((es)^{-1}es=es(es)^{-1}\). We now use Lemma 3.11 and Lemma 3.3 and the fact that idempotents commute to get \(s^{-1}es=ss^{-1}e\). Because of our assumption, \(ss^{-1}=s^{-1}s\). Thus \(s^{-1}es=s^{-1}se\). Multiplying on the left by \(s\), and using the fact that idempotents commute and our assumption, gives \(es=se\), as required. An inverse semigroup \(S\) is said to be a _union of groups_ if \(S=\bigcup_{e\in\mathbf{E}(S)}H_{e}\). By Lemma 3.13, we may now deduce the following. **Lemma 3.14**.: _Let \(S\) be an inverse semigroup. Then \(S\) is a Clifford semigroup if and only if it is a union of groups._ Proof.: Let \(S\) be a Clifford semigroup and let \(s\in S\). Put \(e=\mathbf{d}(s)=\mathbf{r}(s)\). Then \(s\in eSe\). But, in fact, \(s\) belongs to the group of units of \(eSe\), which is \(H_{e}\). Thus \(s\in H_{e}\) and so we have proved that every Clifford semigroup is a union of groups. Conversely, suppose that \(S\) is an inverse semigroup which is a union of groups. Then for each \(s\in S\), we have that \(s\in H_{e}\) for some idempotent \(e\). We have that \(\mathbf{d}(s)=e=\mathbf{r}(s)\). It follows that \(S\) is Clifford. I have included Lemma 3.14 because group theorists sometimes tend to see inverse semigroups as unions of groups. These are, in fact, a very special class of inverse semigroups. Clifford semigroups can be described explicitly as 'strong semilattices of groups' or, what amounts to the same thing, as presheaves of groups over meet semilattices. This applies, in particular, to abelian inverse semigroups. For strong semilattices of groups, see [37, Chapter IV, Section 2]. We can obtain another characterization of Clifford semigroups which is analogous to part of [30, part (c) of Theorem 3.2], but at the price of an extra assumption on the semilattice of idempotents. We say that a meet semilattice is \(0\)_-disjunctive_ if for all \(0\neq f<e\) there exists \(0\neq g\leq e\) such that \(fg=0\). Thus, semilattices which are \(0\)-disjunctive have a weak notion of complement.
A non-zero element \(a\) of an inverse semigroup is said to be an _infinitesimal_ if \(a^{2}=0\). **Lemma 3.15**.: _Let \(S\) be an inverse semigroup, the semilattice of idempotents of which is \(0\)-disjunctive. Then \(S\) is a Clifford semigroup if and only if \(S\) contains no infinitesimals._ Proof.: Suppose that \(S\) is a Clifford semigroup. Let \(a\) be an infinitesimal. Then \(a^{2}=0\) thus \(aa=0\). It follows that \(\mathbf{d}(a)\mathbf{r}(a)=0\). But, in a Clifford semigroup \(\mathbf{d}(a)=\mathbf{r}(a)\) by Lemma 3.13. It follows that \(\mathbf{d}(a)=0\) and so \(a=0\) which is a contradiction. Thus there are no infinitesimals. To prove the converse, suppose that there are no infinitesimals. Let \(a\in S\) be arbitrary and non-zero. We shall prove that \(\mathbf{d}(a)=\mathbf{r}(a)\). Suppose not. Put \(e=\mathbf{d}(a)\mathbf{r}(a)\). If \(e=0\) then \(a^{-1}aaa^{-1}=0\) and so \(a^{2}=0\). But this means that \(a\) is an infinitesimal. It follows that \(e\neq 0\). There are now two cases. Suppose, first, that \(e=\mathbf{d}(a)\). Then \(\mathbf{d}(a)<\mathbf{r}(a)\). By assumption, there exists \(0<f\leq\mathbf{r}(a)\) such that \(f\mathbf{d}(a)=0\). Put \(b=fa\). If \(b=0\) then it is easy to see that \(f=0\), which is a contradiction. It follows that \(b\neq 0\). You can easily check that \(b\) is an infinitesimal. This is a contradiction. We can now deal with the second case where \(e<\mathbf{d}(a)\). By assumption, there is a non-zero idempotent \(f\) such that \(ef=0\) and \(f\leq\mathbf{d}(a)\). Put \(b=af\). If \(b=0\) then \((af)^{-1}af=0\) which implies that \(f=0\). It follows that \(b\neq 0\). However, \(b^{2}=afaf=a\mathbf{d}(a)f\mathbf{r}(a)af=aefaf=0\). But this contradicts the assumption that there are no infinitesimals. It follows that \(\mathbf{d}(a)=\mathbf{r}(a)\) for all \(a\in S\). Thus by Lemma 3.13, we have shown that \(S\) is a Clifford semigroup. Although the idempotents in an inverse semigroup are not in general central, we do have the following result, which tells us how idempotents 'pass through' non-idempotent elements. **Lemma 3.16**.: _Let \(S\) be an inverse semigroup in which \(e\) is an idempotent and \(s\) is any element._ 1. \(es=sf\) _for some idempotent_ \(f\)_._ 2. \(se=gs\) _for some idempotent_ \(g\)_._ Proof.: We prove only (1) since the proof of (2) is similar. We have that \(es=ess^{-1}s=e(ss^{-1})s\). Now we use the fact that idempotents commute to get \(es=(ss^{-1})es\). But this is equal to \(s(s^{-1}es)\) and now we use Lemma 3.12 to deduce that \(f=s^{-1}es\) is an idempotent. We have therefore proved that \(es=sf\). We have described one extreme example of inverse semigroup. We now describe another. By a _meet semilattice_, we mean a partially ordered set \((P,\leq)\) with the property that every pair of elements \(a,b\in P\) has a _greatest lower bound_, denoted by \(a\wedge b\). Thus, \(a\wedge b\leq a,b\) and if \(c\leq a,b\) then \(c\leq a\wedge b\). We call \(a\wedge b\) the _meet_ of \(a\) and \(b\).8 The proofs of the following are immediate from the definitions. Footnote 8: We shall usually denote the meet by \(\cdot\) or concatenation. **Lemma 3.17**.: _Let \((P,\leq)\) be a meet semilattice and let \(a,b,c\in P\). Then_ 1. \(a\wedge(b\wedge c)=(a\wedge b)\wedge c\)_._ 2. \(a\wedge b=b\wedge a\)_._ 3. \(a\wedge a=a\)__ A semigroup is called a _band_ if every element is an idempotent. Observe that we have proved in Lemma 3.17 that if \((P,\leq)\) is a meet semilattice then \((P,\wedge)\) is a commutative band. 
**Lemma 3.18**.: _Let \(S\) be a commutative band. Define \(a\leq b\) by \(a=ab\). Then \((S,\leq)\) is a partially ordered set and, in fact, a meet semilattice in which \(a\wedge b=ab\)._ Proof.: We first check that \(\leq\) is a partial order. By assumption, each element of \(S\) is an idempotent. It follows that \(a\leq a\). Suppose that \(a\leq b\) and \(b\leq a\). Then \(a=ab\) and \(b=ba\). But the semigroup is commutative and so \(a=b\). Finally, suppose that \(a\leq b\) and \(b\leq c\). Then \(a=ab\) and \(b=bc\). It follows that \(a=abc=ac\). Thus \(a\leq c\). We have verified that \((S,\leq)\) is a partially ordered set. We now show that \(a\wedge b=ab\). Observe first that \((ab)a=a^{2}b=ab\), using commutativity and the fact that every element is idempotent. Also, \((ab)b=ab^{2}=ab\). We have shown that \(ab\leq a,b\). Suppose now that \(c\leq a,b\). Then \(c=ca=cb\), and so \(c(ab)=cb=c\). This shows that \(c\leq ab\) and thus proves that \(ab=a\wedge b\). As a result of Lemma 3.17 and Lemma 3.18, we can think about meet semilattices in two equivalent ways: either as partially ordered sets in which each pair of elements has a greatest lower bound or as commutative bands. Observe that if \(S\) is any inverse semigroup then, since the idempotents commute, \(\mathsf{E}(S)\) is a subsemigroup of \(S\). It follows that \(\mathsf{E}(S)\) is always a commutative band and so it becomes a meet semilattice when we define \(e\wedge f=ef\). For this reason, it is usual to refer to \(\mathsf{E}(S)\) as the _semilattice of idempotents_ of \(S\). We now have the following companion to Lemma 3.8. **Lemma 3.19**.: 1. _A meet semilattice is an inverse semigroup with respect to the meet operation in which every element is an idempotent._ 2. _An inverse semigroup in which every element is an idempotent is a meet semilattice._ Proof.: The proof of (1) is immediate by Lemma 3.17. (2) Suppose that \(S\) is an inverse semigroup in which every element is an idempotent. Then \(S=\mathsf{E}(S)\). The result now follows by Lemma 3.18, since \(\mathsf{E}(S)\) is a commutative band. Our next result is often useful in showing that a semigroup is or is not inverse. We need a definition first. A semigroup \(S\) is said to be _regular_ if for each \(a\in S\) there exists an element \(b\) such that \(a=aba\) and \(b=bab\). The element \(b\) is called _an inverse_ of \(a\). Observe that both \(ab\) and \(ba\) are idempotents. In showing that a semigroup is regular, it is enough to check that for each element \(a\) there is an element \(x\) such that \(a=axa\). The reason is that \(b=xax\) is then an inverse of \(a\). Inverse semigroups are the regular semigroups in which every element has a unique inverse. **Proposition 3.20**.: _A regular semigroup is inverse if and only if its idempotents commute._ Proof.: Let \(S\) be a regular semigroup in which the idempotents commute and let \(u\) and \(v\) be inverses of \(x\). Then \[u=uxu=u(xvx)u=(ux)(vx)u,\] where both \(ux\) and \(vx\) are idempotents. Thus, since idempotents commute, we have that \[u=(vx)(ux)u=vxu=(vxv)xu=v(xv)(xu).\] Again, \(xv\) and \(xu\) are idempotents and so \[u=v(xu)(xv)=v(xux)v=vxv=v.\] Hence \(u=v\). The converse follows by Proposition 3.9. **Example 3.21**.: As an example of Proposition 3.20 in action, observe that the full transformation monoid \(\mathcal{T}(X)\), of all functions from \(X\) to itself where \(X\) has at least two elements, is regular but not inverse. We show first that it is regular. Let \(f\in\mathcal{T}(X)\).
There are two cases. Suppose first that \(f\) is surjective; for each \(y\in X\) choose \(x_{y}\in f^{-1}(y)\). Define \(g\colon X\to X\) by \(g(y)=x_{y}\). That is, \(g\) maps an element to one of its pre-images under \(f\). We calculate \(fgf\). Let \(x\in X\) and \(y=f(x)\). Then \[(fgf)(x)=(fg)(f(x))=f(g(y))=f(x_{y})=y.\] It follows that \(f=fgf\). Suppose now that \(f\) is not surjective. Choose any element \(x_{0}\in X\). For each element \(y\) in the image of \(f\) choose \(x_{y}\in f^{-1}(y)\). Now define \(g\colon X\to X\) as follows: if \(y\) is in the image of \(f\), then define \(g(y)=x_{y}\), and if \(y\) is not in the image of \(f\), then define \(g(y)=x_{0}\). A similar calculation to the one above shows that \(f=fgf\). We have therefore shown that the full transformation monoid is regular. Choose two distinct elements from \(X\) that we call \(x_{1}\) and \(x_{2}\). Define two functions \(c_{1},c_{2}\colon X\to X\) where \(c_{1}\) is the constant function to \(x_{1}\) and \(c_{2}\) is the constant function to \(x_{2}\). The functions \(c_{1}\) and \(c_{2}\) are distinct and both are idempotents. But \(c_{1}c_{2}=c_{1}\) whereas \(c_{2}c_{1}=c_{2}\). We have shown that the idempotents do not commute. Thus \(\mathcal{T}(X)\) is regular but not inverse. We now consider homomorphisms between inverse semigroups. Our first result is actually slightly more general. **Lemma 3.22**.: _Let \(S\) be an inverse semigroup and let \(T\) be any semigroup. Suppose that \(\theta\colon S\to T\) is a semigroup homomorphism. Then \(\theta(S)\), the image of \(S\), is an inverse semigroup._ Proof.: We shall use Proposition 3.20 to prove that \(\theta(S)\) is inverse. We show first that it is regular. Let \(\theta(s)\in\theta(S)\). Then, since \(s\in S\) inverse, there is an element \(s^{-1}\in S\) such that \(s=ss^{-1}s\) and \(s^{-1}=s^{-1}ss^{-1}\). It follows that \(\theta(s)=\theta(s)\theta(s^{-1})\theta(s)\) and \(\theta(s^{-1})=\theta(s^{-1})\theta(s)\theta(s^{-1})\). This shows that \(\theta(S)\) is regular. We now prove that the idempotents in \(\theta(S)\) commute. To begin with, we have to show that idempotents in the image come from idempotents in the domain.9 Let \(\theta(s)\) be an idempotent in \(\theta(S)\). We shall prove that there is an idempotent \(e\) in \(S\) such that \(\theta(e)=\theta(s)\). Consider the element \(e=ss^{-2}s=(ss^{-1})(s^{-1}s)\). It is the product of two idempotents in \(S\) and so it is an idempotent. In particular, \(e\) is an idempotent in \(S\). We calculate \(\theta(e)\). We have that Footnote 9: This is an instance of what is often termed _Lallement’s Lemma_. \[\theta(e)=\theta(ss^{-2}s)=\theta(s)\theta(s^{-2})\theta(s).\] But \(\theta(s)=\theta(s)^{2}=\theta(s^{2})\). We therefore have that \[\theta(e)=\theta(s^{2}s^{-2}s^{2})=\theta(s^{2})=\theta(s)^{2}=\theta(s).\] Now, let \(\theta(s)\) and \(\theta(t)\) be idempotents in \(\theta(S)\). By our result above, there are idempotents \(e\) and \(f\) in \(S\) such that \(\theta(e)=\theta(s)\) and \(\theta(f)=\theta(t)\). It follows that \(\theta(ef)=\theta(s)\theta(t)\). But idempotents in inverse semigroups commute and so \(ef=fe\). We have therefore shown that \(\theta(s)\theta(t)=\theta(t)\theta(s)\). Thus, the idempotents in \(\theta(S)\) commute. We have therefore proved that the image of \(\theta\) is an inverse semigroup by Proposition 3.20. _Homomorphisms of inverse semigroups_ are just semigroup homomorphisms. _Isomorphisms of inverse semigroups_ are just semigroup isomorphisms. 
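Before moving on to homomorphisms, Example 3.21 can be confirmed by exhaustive computation. The short Python sketch below (an illustration only) enumerates the 27 elements of the full transformation monoid \(\mathcal{T}_{3}\), verifies that every element has an inverse in the sense of regularity, and exhibits the two non-commuting constant idempotents used in the argument above.

```python
from itertools import product

X = (0, 1, 2)
T3 = [dict(zip(X, values)) for values in product(X, repeat=3)]   # all maps X -> X
compose = lambda f, g: {x: f[g[x]] for x in X}                   # the product fg: apply g, then f

# Regularity: every f admits some g with f = fgf (then b = gfg is an inverse of f).
assert all(any(compose(compose(f, g), f) == f for g in T3) for f in T3)

# The constant maps c1 and c2 are idempotents that do not commute, so T_3 is not inverse.
c1, c2 = {x: 0 for x in X}, {x: 1 for x in X}
assert compose(c1, c1) == c1 and compose(c2, c2) == c2
assert compose(c1, c2) == c1 and compose(c2, c1) == c2           # c1 c2 = c1, c2 c1 = c2
```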
**Lemma 3.23**.: _Let \(\theta\colon S\to T\) be a homomorphism between inverse semigroups. Then:_ 1. \(\theta(s^{-1})=\theta(s)^{-1}\) _for all_ \(s\in S\)_._ 2. _If_ \(e\) _is an idempotent then_ \(\theta(e)\) _is an idempotent._ 3. _If_ \(\theta(s)\) _is an idempotent then there is an idempotent_ \(e\) _in_ \(S\) _such that_ \(\theta(s)=\theta(e)\)_._ 4. \(\theta(S)\) _is an inverse subsemigroup of_ \(T\)_._ 5. _If_ \(U\) _is an inverse subsemigroup of_ \(T\) _then_ \(\theta^{-1}(U)\) _is an inverse subsemigroup of_ \(S\)_._ Proof.: (1) Clearly, \(\theta(s)\theta(s^{-1})\theta(s)=\theta(s)\) and \(\theta(s^{-1})\theta(s)\theta(s^{-1})=\theta(s^{-1})\). Thus, by uniqueness of inverses, we have that \(\theta(s^{-1})=\theta(s)^{-1}\). (2) \(\theta(e)^{2}=\theta(e)\theta(e)=\theta(e^{2})=\theta(e)\). (3) If \(\theta(s)^{2}=\theta(s)\), then \(\theta(s^{-1}s)=\theta(s^{-1})\theta(s)=\theta(s)^{-1}\theta(s)=\theta(s)^{2}=\theta(s)\), where we have used the fact that every idempotent in an inverse semigroup is its own inverse. (4) This is immediate by Lemma 3.22. (5) Straightforward. Let \(\theta\colon S\to T\) be a homomorphism of inverse semigroups. Then the restriction map, \((\theta\mid\mathsf{E}(S))\colon\mathsf{E}(S)\to\mathsf{E}(T)\), is well-defined since homomorphisms map idempotents to idempotents by part (2) of Lemma 3.23. The homomorphism \(\theta\) is said to be _idempotent-separating_ if \((\theta\mid\mathsf{E}(S))\) is actually injective. We say that a congruence is idempotent-separating if its associated natural homomorphism is idempotent-separating. We say that \(S\) is an _idempotent-separating cover_ of \(T\) if \(\theta\) is both surjective and idempotent-separating. It remains to be shown that inverse semigroups really do encode partial bijections. Given an inverse semigroup \(S\), our goal is therefore to construct an injective homomorphism \(\lambda\colon S\to\mathcal{I}(X)\) for some set \(X\). We shall do just this by generalizing the familiar Cayley's Theorem from group theory. As a starting point, we follow the group theory and take as our underlying set \(X\) the set \(S\) itself. We now have to associate an element of \(\mathcal{I}(S)\) with each element \(a\in S\). This is delivered by the following lemma. **Lemma 3.24**.: _Let \(S\) be an inverse semigroup and let \(a\in S\). Then \(\lambda_{a}\colon\mathbf{d}(a)S\to\mathbf{r}(a)S\) defined by \(\lambda_{a}(x)=ax\) is a well-defined partial bijection._ Proof.: This is well-defined because \(aS=aa^{-1}S\) as the following set inclusions show \[aS=aa^{-1}aS\subseteq aa^{-1}S\subseteq aS.\] Also \(\lambda_{a^{-1}}\colon\mathbf{r}(a)S\to\mathbf{d}(a)S\), \(\lambda_{a^{-1}}\lambda_{a}\) is the identity on \(\mathbf{d}(a)S\), and \(\lambda_{a}\lambda_{a^{-1}}\) is the identity on \(\mathbf{r}(a)S\). Thus \(\lambda_{a}\) is a bijection and \(\lambda_{a}^{-1}=\lambda_{a^{-1}}\).
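Lemma 3.24 can be made quite concrete. The Python sketch below (an illustration using the same dictionary encoding of partial bijections as the earlier snippets) builds the maps \(\lambda_{a}\) for the seven elements of \(\mathcal{I}(\{0,1\})\), labelling elements of \(S\) by their position in a list, and checks that \(\lambda_{a}^{-1}=\lambda_{a^{-1}}\) and that \(\lambda_{a}\lambda_{b}=\lambda_{ab}\), which is the calculation behind the representation theorem that follows.

```python
from itertools import combinations, permutations

X = (0, 1)

def partial_bijections(X):
    for k in range(len(X) + 1):
        for dom in combinations(X, k):
            for ran in combinations(X, k):
                for image in permutations(ran):
                    yield dict(zip(dom, image))

S = list(partial_bijections(X))                                # the 7 elements of I({0,1})
compose = lambda f, g: {x: f[g[x]] for x in g if g[x] in f}    # the product fg: apply g, then f
inverse = lambda f: {v: k for k, v in f.items()}
idx = S.index                                                  # label elements by list position

def lam(a):
    """The partial bijection lambda_a : d(a)S -> r(a)S, x |-> ax, written on labels."""
    d_a = compose(inverse(a), a)                               # d(a) = a^{-1} a
    domain = {idx(compose(d_a, s)) for s in S}                 # the right ideal d(a)S
    return {i: idx(compose(a, S[i])) for i in sorted(domain)}

for a in S:
    la = lam(a)
    assert lam(inverse(a)) == {v: k for k, v in la.items()}    # lambda_{a^{-1}} = (lambda_a)^{-1}
    for b in S:
        lb = lam(b)
        product_ab = {x: la[lb[x]] for x in lb if lb[x] in la} # the product lambda_a lambda_b
        assert product_ab == lam(compose(a, b))                # ... equals lambda_{ab}
```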
If \(e\) and \(f\) are any idempotents of an inverse semigroup \(S\) then \[eS\cap fS=efS.\] Thus \[\operatorname{dom}(\lambda_{a})\cap\operatorname{im}(\lambda_{b})=a^{-1}aS \cap bb^{-1}S=a^{-1}abb^{-1}S.\] Hence \[\operatorname{dom}(\lambda_{a}\lambda_{b})=\lambda_{b}^{-1}(a^{-1}abb^{-1}S)= b^{-1}a^{-1}aS=b^{-1}a^{-1}abS\] where we use the following subset inclusions \[b^{-1}a^{-1}aS=b^{-1}bb^{-1}a^{-1}aS=b^{-1}a^{-1}abb^{-1}S\subseteq b^{-1}a^{-1} abS\subseteq b^{-1}a^{-1}aS.\] Thus \(\operatorname{dom}(\lambda_{a}\lambda_{b})=\operatorname{dom}(\lambda_{ab})\). It is immediate from the definitions that \(\lambda_{a}\lambda_{b}\) and \(\lambda_{ab}\) have the same effect on elements, and so \(\lambda\) is a homomorphism. It remains to prove that \(\lambda\) is injective. Suppose that \(\lambda_{a}=\lambda_{b}\). Then \(a=ba^{-1}a\) and \(b=ab^{-1}b\). Observe that \(ab^{-1}b=(ba^{-1}a)b^{-1}b=b(b^{-1}b)a^{-1}a=ba^{-1}a=a\), where we have used the fact that idempotents commute. It follows that \(a=b\). We conclude this section by stating a deeper result that we shall not prove. Let \(S\) be a _finite_ semigroup. Then for each \(s\in S\) there exists some power \(s^{n}\) which is an idempotent. Let \(\theta\colon S\to T\) be a homomorphism between finite semigroups. Suppose that \(\theta(s)\) is an idempotent in \(T\). Then for some natural number \(n\geq 1\), the element \(s^{n}\) is also an idempotent. But \(\theta(s^{n})=\theta(s)^{n}=\theta(s)\). Thus, for finite semigroups, if \(\theta(s)\) is an idempotent then there is an idempotent \(e\) such that \(\theta(e)=\theta(s)\). If \(S\) is an inverse semigroup then every subsemigroup (notice: I did not say _inverse_ subsemigroup on purpose) has commuting idempotents. We say that a semigroup \(T\)_divides_ a semigroup \(S\) if there is a subsemigroup \(S^{\prime}\) of \(S\) such that \(T\) is a homomorphic image of \(S^{\prime}\). Suppose now that \(S\) is a finite inverse semigroup. Then every subsemigroup \(S^{\prime}\) of \(S\) is a finite semigroup that has commuting idempotents. Thus every homomorphic image of \(S^{\prime}\) is a finite semigroup with commuting idempotents. We have therefore proved the following: **Lemma 3.26**.: _Every semigroup that divides a finite inverse semigroup has commuting idempotents._ The following theorem was proved by Chris Ash [3] and is deep. It is the converse of the above lemma. **Theorem 3.27** (Ash).: _Every finite semigroup with commuting idempotents divides a finite inverse semigroup._ This is a beautiful result, and a stunning piece of mathematics. It uses finite combinatorics, in the guise of Ramsey Theory, as part of the proof. See [49] for the connections between combinatorics and semigroup theory. ## 4. The natural partial order Partial bijections can be compared with each other: we can say that one partial bijection is the restriction of another. This leads to a partial order on the set of partial bijections of a set. It might be thought that this partial order has to be imposed on an inverse semigroup but, remarkably, this partial order can be defined in purely algebraic terms. It is therefore called the natural partial order. It will follow that every inverse semigroup is, in fact, a partially ordered semigroup with respect to this order. On an inverse semigroup, define \(s\leq t\) iff \(s=ts^{-1}s\). This looks one-sided because the idempotent appears on the right-hand side. It is here that we invoke Lemma 3.16. 
**Lemma 4.1**.: _In an inverse semigroup, the following are equivalent:_ 1. \(s\leq t\)_._ 2. \(s=te\) _for some idempotent_ \(e\)_._ 3. \(s=ft\) _for some idempotent_ \(f\)_._ 4. \(s=ss^{-1}t\)_._ Proof.: (1)\(\Rightarrow\)(2). This is immediate. (2)\(\Rightarrow\)(3). This is immediate by Lemma 3.16. (3)\(\Rightarrow\)(4). Suppose that \(s=ft\). Then \(fs=s\) and so \(fss^{-1}=ss^{-1}\). It follows that \(s=ss^{-1}t\). (4)\(\Rightarrow\)(1). Suppose that \(s=ss^{-1}t\). By Lemma 3.16, we know that \(s=ti\) for some idempotent \(i\). Observe that \(si=s\) and so \(s^{-1}si=s^{-1}s\). It readily follows that \(s=ts^{-1}s\) giving \(s\leq t\). Let \(S\) be a semigroup equipped with a partial order \(\leq\). We say that \(S\) is a _partially ordered semigroup_ if \(a\leq b\) and \(c\leq d\) imply that \(ac\leq bd\). We may now establish the main properties of the relation we have defined. **Lemma 4.2**.: _Let \(S\) be an inverse semigroup._ 1. _The relation_ \(\leq\) _is a partial order._ 2. _If_ \(s\leq t\) _then_ \(s^{-1}\leq t^{-1}\)_._ 3. _The semigroup_ \(S\) _is partially ordered with respect to_ \(\leq\)_._ 4. _If_ \(e\) _and_ \(f\) _are idempotents then_ \(e\leq f\) _if and only if_ \(e=ef=fe\)_._ 5. \(s\leq e\)_, where_ \(e\) _is an idempotent, implies that_ \(s\) _is an idempotent._ 6. _Let_ \(T\) _be an inverse semigroup and let_ \(\theta\colon S\to T\) _be a homomorphism. Then_ \(a\leq b\) _in_ \(S\) _implies that_ \(\theta(a)\leq\theta(b)\) _in_ \(T\)_._ Proof.: (1) Observe that \(a\leq a\) since \(a=a\mathbf{d}(a)\). Suppose that \(a\leq b\) and \(b\leq a\). Then \(a=b\mathbf{d}(a)\) and \(b=a\mathbf{d}(b)\). Then \(b\mathbf{d}(a)=b\) from which it follows that \(a=b\). Suppose that \(a\leq b\) and \(b\leq c\). Using Lemma 4.1, it is easy to show that \(a\leq c\). (2) This is immediate using the definition and Lemma 4.1. (3) This is immediate using the definition, Lemma 4.1 and Lemma 3.16. (4) Straightforward. (5) This is immediate from the definition and the fact that the product of idempotents is an idempotent. (6) This is immediate from the fact that the natural partial order is algebraically defined, Lemma 4.1 and the fact that idempotents are mapped to idempotents by homomorphisms. Part (1) of Lemma 4.2 leads us to dub \(\leq\) the _natural partial order_ on \(S\). If a partial order is studied in relation to an inverse semigroup, then it is always this one. Part (2) of Lemma 4.2 needs to be highlighted since readers familiar with lattice-ordered groups might have been expecting something different. Part (4) of Lemma 4.2 tells us that when the natural partial order is restricted to the semilattice of idempotents we get back the usual ordering on the idempotents that we have already defined. **Example 4.3**.: The natural partial order in the symmetric inverse monoids \(\mathcal{I}(X)\) is precisely the usual restriction order on partial functions. Observe that by the Wagner-Preston representation theorem Theorem 3.25, we have that \(a\leq b\) if and only if \(\lambda_{a}\subseteq\lambda_{b}\). The natural partial order is used to define a class of inverse monoids. We say that an inverse monoid \(S\) is _factorizable_ if for each element \(s\in S\) there exists a unit \(g\) such that \(s\leq g\), **Example 4.4**.: The symmetric inverse monoids \(\mathcal{I}(X)\) are factorizable if \(X\) is finite. Let \(f\in\mathcal{I}(X)\). 
Then not only do \(\operatorname{dom}(f)\) and \(\operatorname{ran}(f)\) have the same cardinality but so too do the sets \(X\setminus\operatorname{dom}(f)\) and \(X\setminus\operatorname{ran}(f)\). Choose any bijection \(g\) from \(X\setminus\operatorname{dom}(f)\) to \(X\setminus\operatorname{ran}(f)\). Then \(f\cup g\) is a bijection and so is an element of the group of units of \(\mathcal{I}(X)\). However, there are many choices for \(g\) and so there are many units that extend \(f\). It is precisely this lack of uniqueness in the unit that implies that we cannot reduce inverse monoid theory to group theory. Our next result tells us that the partial order encodes how far from being a group an inverse semigroup is. **Lemma 4.5**.: _An inverse semigroup is a group if and only if the natural partial order is the equality relation._ Proof.: Let \(S\) be an inverse semigroup in which the natural partial order is equality. If \(e\) and \(f\) are any idempotents then \(ef\leq e,f\) and so \(e=f\). It follows that there is exactly one idempotent. We deduce that \(S\) is a group by Lemma 3.8. The proof of the converse is immediate. The natural partial order is very important in studying inverse semigroups. For this reason, it is appropriate here to introduce some terminology and notation from the theory of partially ordered sets. In any partially ordered set \((X,\leq)\), a subset \(Y\subseteq X\) is said to be an _order ideal_ if \(x\leq y\in Y\) implies that \(x\in Y\). More generally, if \(Y\) is any subset of \(X\) then define \[Y^{\downarrow}=\{x\in X\colon x\leq y\text{ for some }y\in Y\}.\] This is the _order ideal generated by \(Y\)_. If \(y\in X\) then we denote \(\{y\}^{\downarrow}\) by \(y^{\downarrow}\) and call it the _principal order ideal generated by \(y\)_. If \(Y\) is any subset of \(X\), define \[Y^{\uparrow}=\{x\in X\colon x\geq y\text{ for some }y\in Y\}.\] If \(Y=\{y\}\) we denote \(\{y\}^{\uparrow}\) by \(y^{\uparrow}\). If \(P\) and \(Q\) are partially ordered sets then a function \(\theta\colon P\to Q\) is said to be _isotone_ if \(x\leq y\) in \(P\) implies that \(\theta(x)\leq\theta(y)\) in \(Q\). An _order-isomorphism_ between two partially ordered sets is a bijective isotone function whose inverse is also isotone. Part (5) of Lemma 4.2 tells us that the semilattice of idempotents of an inverse semigroup \(S\) is an order ideal in \(S\) with respect to the natural partial order. Part (6) of Lemma 4.2 tells us that homomorphisms between inverse semigroups are isotone. Idempotents and non-idempotents are closely related in an inverse semigroup. **Lemma 4.6**.: _Let \(S\) be an inverse semigroup and let \(a\in S\). Then there is an order-isomorphism between the set \(a^{\downarrow}\) and the set \(\mathbf{d}(a)^{\downarrow}\) (and, likewise, with the set \(\mathbf{r}(a)^{\downarrow}\))._ Proof.: Define a map from \(a^{\downarrow}\) to \(\mathbf{d}(a)^{\downarrow}\) by \(b\mapsto\mathbf{d}(b)\). This is well-defined since if \(b\leq a\) then \(\mathbf{d}(b)\leq\mathbf{d}(a)\). It is isotone since if \(b_{2}\leq b_{1}\leq a\) then \(\mathbf{d}(b_{2})\leq\mathbf{d}(b_{1})\leq\mathbf{d}(a)\). From the definition of the natural partial order, it is immediate that this map is injective. We define a map from \(\mathbf{d}(a)^{\downarrow}\) to \(a^{\downarrow}\) by \(e\mapsto ae\). It is routine to check that this is well-defined and isotone. These two maps are mutually inverse. It follows that the partially ordered sets are order-isomorphic.
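To see the natural partial order in a concrete setting, here is a minimal computational sketch (our own, not part of the formal development; helper names such as `leq_algebraic` are invented). It models partial bijections of a finite set as Python dictionaries and checks, on a small example, that the algebraic condition \(s=ts^{-1}s\) of Lemma 4.1 agrees with the restriction order of Example 4.3.

```python
# Partial bijections of a finite set, modelled as dictionaries {x: s(x)};
# the product st is composition: (st)(x) = s(t(x)).

def compose(s, t):
    return {x: s[t[x]] for x in t if t[x] in s}

def inverse(s):
    return {v: k for k, v in s.items()}

def leq_algebraic(s, t):
    """s <= t in the sense of Lemma 4.1: s = t(s^{-1}s)."""
    return s == compose(t, compose(inverse(s), s))

def leq_restriction(s, t):
    """s <= t as partial functions: t extends s (Example 4.3)."""
    return all(x in t and t[x] == s[x] for x in s)

t = {1: 2, 2: 3, 3: 1}   # a partial bijection of {1, 2, 3, 4}
s = {1: 2, 2: 3}         # a restriction of t
u = {1: 3}               # not a restriction of t

assert leq_algebraic(s, t) and leq_restriction(s, t)
assert not leq_algebraic(u, t) and not leq_restriction(u, t)
```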
Looking below an idempotent we see only idempotents, but what happens if we look up? The answer is that we don't necessarily see only idempotents. The symmetric inverse monoid is an example. This leads us to the following definition. An inverse semigroup \(S\) is said to be _\(E\)-unitary_ if \(e\leq s\), where \(e\) is an idempotent, implies that \(s\) is an idempotent. An inverse semigroup with zero \(S\) is said to be \(E^{*}\)_-unitary10_ if \(0\neq e\leq s\) where \(e\) is an idempotent implies that \(s\) is an idempotent. Footnote 10: The term \(0\)-\(E\)-_unitary_ is also used. **Lemma 4.7**.: _Let \(S\) be an inverse semigroup with zero. Then it is \(E\)-unitary if and only if it is a meet semilattice._ Proof.: Suppose that \(S\) is \(E\)-unitary. If \(a\in S\) then \(0\leq a\). But \(0\) is an idempotent, and so \(a\) is an idempotent. The proof of the converse is immediate. The above lemma explains why we have made two definitions above, depending on whether the inverse semigroup does not or does have a zero. This bifurcation between inverse semigroups-without-zero and inverse semigroups-with-zero permeates the subject. There was a time when the study of \(E\)-unitary inverse semigroups was the centre of attention. The two papers by Don McAlister [84, 85] are probably the most significant in that they describe the structure of \(E\)-unitary inverse semigroups in terms of simpler building blocks and relate them to arbitrary inverse semigroups. The theory of \(E^{*}\)-unitary inverse semigroups has also been pursued. There are both interesting examples of such inverse semigroups and the analogue of McAlister's theory can be developed in the case of the so-called _strongly \(E^{*}\)-unitary inverse semigroups_. See [11, 52, 53], for example. The following lemma tells us that in an inverse semigroup, there is a relationship between two elements that have a common upper bound. **Lemma 4.8**.: _Let \(S\) be an inverse semigroup and suppose that \(a,b\leq c\). Then \(a^{-1}b\) and \(ab^{-1}\) are idempotents._ Proof.: By part (2) of Lemma 4.2, we have that \(a^{-1},b^{-1}\leq c^{-1}\). By part (3) of Lemma 4.2, we have that \(a^{-1}b\leq c^{-1}c\) and \(ab^{-1}\leq cc^{-1}\). It follows by part (5) of Lemma 4.2, that both \(a^{-1}b\) and \(ab^{-1}\) are idempotents. Lemma 4.8 leads us to the following definition. Let \(a,b\in S\), where \(S\) is an inverse semigroup. Define \(a\sim b\) if and only if \(a^{-1}b,ab^{-1}\in\mathsf{E}(S)\). This is called the _compatibility relation_. If \(a\sim b\) and if their least upper bound exists, we denote it by \(a\lor b\) and call it the _join_ of \(a\) and \(b\). A subset of an inverse semigroup is said to be _compatible_ if the elements are pairwise compatible. For example, for each element \(a\in S\) in an inverse semigroup, the set \(a^{\downarrow}\) is compatible. In an inverse semigroup with zero, there is a refinement of the compatibility relation which is important. In such a semigroup, a pair of idempotents \(e\) and \(f\) is said to be _orthogonal_, denoted by \(e\perp f\), if and only if \(ef=0\). We define an arbitrary pair of elements \(a\) and \(b\) to be _orthogonal_, denoted by \(a\perp b\), precisely when \(\mathbf{d}(a)\perp\mathbf{d}(b)\) and \(\mathbf{r}(a)\perp\mathbf{r}(b)\). You can easily check that \(a\perp b\) precisely when \(a^{-1}b=0=ab^{-1}\). We call \(\perp\) the _orthogonality relation_. Observe that \(a\perp b\) implies that \(a\sim b\). If an orthogonal subset has a least upper bound then it is said to have an _orthogonal join_.
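Example 4.9 below describes both relations concretely in a symmetric inverse monoid. As a hedged illustration (our own code; the helper names are invented), the following sketch checks that description against the algebraic definitions of \(\sim\) and \(\perp\) on a few explicit partial bijections.

```python
# Partial bijections as dictionaries; the product is composition (st)(x) = s(t(x)).

def compose(s, t):
    return {x: s[t[x]] for x in t if t[x] in s}

def inverse(s):
    return {v: k for k, v in s.items()}

def is_idempotent(s):
    return compose(s, s) == s

def compatible(a, b):
    """a ~ b : both a^{-1}b and ab^{-1} are idempotents."""
    return is_idempotent(compose(inverse(a), b)) and is_idempotent(compose(a, inverse(b)))

def orthogonal(a, b):
    """a _|_ b : a^{-1}b = 0 = ab^{-1}, where 0 is the empty map."""
    return compose(inverse(a), b) == {} and compose(a, inverse(b)) == {}

def union_is_partial_bijection(a, b):
    overlap_ok = all(a[x] == b[x] for x in a.keys() & b.keys())
    u = {**a, **b}
    return overlap_ok and len(set(u.values())) == len(u)

a = {1: 5, 2: 6}
b = {2: 6, 3: 7}   # agrees with a on the common domain, and the union stays injective
c = {1: 6}         # disagrees with a at 1
d = {8: 9}         # domain and range disjoint from those of a

assert compatible(a, b) and union_is_partial_bijection(a, b)
assert not compatible(a, c) and not union_is_partial_bijection(a, c)
assert orthogonal(a, d) and not orthogonal(a, b)
```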
**Example 4.9**.: Let \(f,g\in\mathcal{I}(Y)\). Then \(f\sim g\) if and only if \(f\cup g\) is a partial bijection, and \(f\perp g\) if and only if the domain of \(f\) is disjoint from the domain of \(g\) and the range of \(f\) is disjoint from the range of \(g\). **Lemma 4.10**.: _Let \(S\) be an inverse semigroup with zero. If \(a\perp b\) and \(c\in S\) then \(ac\perp bc\) and \(ca\perp cb\)._ Proof.: We prove that \(ac\perp bc\); the proof of the other case is similar. It is routine to check that \(\mathbf{d}(ac)\mathbf{d}(bc)=0\). The result now follows by symmetry. The compatibility relation is reflexive and symmetric but not, in general, transitive, as the symmetric inverse monoid shows. However, we do have an exact criterion for when the compatibility relation is transitive. **Proposition 4.11**.: _The compatibility relation is transitive if and only if the semigroup is \(E\)-unitary._ Proof.: Suppose that \(\sim\) is transitive. Let \(e\leq s\), where \(e\) is an idempotent. Then \(se^{-1}\) is an idempotent because \(e=se=se^{-1}\), and \(s^{-1}e\) is an idempotent because \(s^{-1}e\leq s^{-1}s\). Thus \(s\sim e\). Clearly \(e\sim s^{-1}s\), and so, by our assumption that the compatibility relation is transitive, we have that \(s\sim s^{-1}s\). But \(s(s^{-1}s)^{-1}=s\), so that \(s\) is an idempotent. Conversely, suppose that \(S\) is \(E\)-unitary and that \(s\sim t\) and \(t\sim u\). Clearly \((s^{-1}t)(t^{-1}u)\) is an idempotent and \[(s^{-1}t)(t^{-1}u)=s^{-1}(tt^{-1})u\leq s^{-1}u.\] But \(S\) is \(E\)-unitary and so \(s^{-1}u\) is an idempotent. Similarly, \(su^{-1}\) is an idempotent. Hence \(s\sim u\). There is a connection between compatible elements and certain kinds of meets. We shall examine meets in greater generality later on in this section. **Lemma 4.12**.: _Let \(S\) be an inverse semigroup._ 1. \(s\sim t\) _if and only if_ \(s\wedge t\) _exists and_ \(\mathbf{d}(s\wedge t)=\mathbf{d}(s)\wedge\mathbf{d}(t)\) _and_ \(\mathbf{r}(s\wedge t)=\mathbf{r}(s)\wedge\mathbf{r}(t)\) 2. _If_ \(s\sim t\) _then_ \[s\wedge t=t\mathbf{d}(s)=s\mathbf{d}(t)=\mathbf{r}(s)t=\mathbf{r}(t)s.\] Proof.: (1) We prove first that \(st^{-1}\) is an idempotent if and only if the greatest lower bound \(s\wedge t\) of \(s\) and \(t\) exists and \((s\wedge t)^{-1}(s\wedge t)=s^{-1}st^{-1}t\). The full result then follows by the dual argument. Suppose that \(st^{-1}\) is an idempotent. Put \(z=st^{-1}t\). Then \(z\leq s\) and \(z\leq t\), since \(st^{-1}\) is an idempotent. Let \(w\leq s,t\). Then \(w^{-1}w\leq t^{-1}t\) and so \(w\leq st^{-1}t=z\). Hence \(z=s\wedge t\). Also \[z^{-1}z=(st^{-1}t)^{-1}(st^{-1}t)=t^{-1}ts^{-1}st^{-1}t=s^{-1}st^{-1}t.\] Conversely, suppose that \(s\wedge t\) exists and \((s\wedge t)^{-1}(s\wedge t)=s^{-1}st^{-1}t\). Put \(z=s\wedge t\). Then \(z=sz^{-1}z\) and \(z=tz^{-1}z\). Thus \(sz^{-1}z=tz^{-1}z\), and so \(st^{-1}t=ts^{-1}s\). Hence \(st^{-1}=ts^{-1}st^{-1}\), which is an idempotent. (2) We shall prove that \(s\wedge t=t\mathbf{d}(s)\) since the other equalities follow by symmetry. Observe that \(t\mathbf{d}(s)=ts^{-1}s\leq s,t\) since \(ts^{-1}=(st^{-1})^{-1}\) is an idempotent. Suppose that \(x\leq s,t\). Then \(x=xx^{-1}x\leq ts^{-1}s\). It follows that \(s\wedge t=t\mathbf{d}(s)\). The following result is useful since it enables us to deduce that two elements are equal from apparently weaker conditions. **Lemma 4.13**.: _Let \(S\) be an inverse semigroup. 
If \(a\sim b\) and \(\mathbf{d}(a)=\mathbf{d}(b)\) (respectively, \(\mathbf{r}(a)=\mathbf{r}(b)\)) then \(a=b\)._ Proof.: Suppose that \(a\sim b\) then by Lemma 4.12 the meet \(a\wedge b\) exists and \(\mathbf{d}(a\wedge b)=\mathbf{d}(a)\mathbf{d}(b)\). By assumption, \(\mathbf{d}(a)=\mathbf{d}(b)\). Thus \(a\wedge b\leq a\) and \(\mathbf{d}(a\wedge b)=\mathbf{d}(a)\). It follows that \(a\wedge b=a\). Similarly, \(a\wedge b=b\). We have therefore proved that \(a=b\). Inverse semigroups generalize groups, but we can also construct groups from inverse semigroups. The idea is this. Groups are abstract versions of groups of bijections, and bijections can be constructed by glueing together compatible sets of partial bijections. Thus, we could construct groups from inverse semigroups by glueing together suitable compatible subsets. We show first how to construct groups from arbitrary inverse semigroups. The motivation for how to do this comes from Lemma 4.5, which tells us that groups are those inverse semigroups in which the natural partial order is equality, and part (6) of Lemma 4.2, which tells us that homomorphisms between inverse semigroups are isotone. These two results lead us to make the following definition. On an inverse semigroup \(S\), with \(s,t\in S\), define the relation \(\sigma\) by \(s\,\sigma\,t\) if and only if there is an element \(u\) such that \(u\leq s,t\). **Theorem 4.14**.: _Let \(S\) be an inverse semigroup._ 1. \(\sigma\) _is a congruence on_ \(S\)_._ 2. \(S/\sigma\) _is a group._ 3. _If_ \(\rho\) _is any congruence on_ \(S\) _such that_ \(S/\rho\) _is a group then_ \(\sigma\subseteq\rho\)_._ Proof.: (1) We begin by showing that \(\sigma\) is an equivalence relation. Reflexivity and symmetry are immediate. To prove transitivity, let \((a,b),(b,c)\in\sigma\). Then there exist elements \(u,v\in S\) such that \(u\leq a,b\) and \(v\leq b,c\). Thus \(u,v\leq b\). The set \(b^{\downarrow}\) is a compatible subset and so \(u\wedge v\) exists by Lemma 4.12. But \(u\wedge v\leq a,c\) and so \((a,c)\in\sigma\). The fact that \(\sigma\) is a congruence follows from the fact that the natural partial order is compatible with the multiplication. (2) Clearly, all idempotents are contained in a single \(\sigma\)-class (possibly with non-idempotent elements). Consequently, \(S/\sigma\) is an inverse semigroup with a single idempotent. Thus \(S/\sigma\) is a group by Lemma 4.5. (3) Let \(\rho\) be any congruence such that \(S/\rho\) is a group. Let \((a,b)\in\sigma\). Then \(z\leq a,b\), for some \(z\), by definition. Hence \(\rho(z)\leq\rho(a),\rho(b)\) since homomorphisms between inverse semigroups are isotone. But \(S/\rho\) is a group and so its natural partial order is equality. Hence \(\rho(a)=\rho(b)\). In the light of Theorem 4.14, it is natural to call the congruence \(\sigma\) the _minimum group congruence_ and the group \(S/\sigma\) the _maximum group image_ of \(S\). A homomorphism \(\theta\colon S\to T\) is said to be _idempotent-pure_ if \(\theta(a)\) an idempotent implies that \(a\) is an idempotent. **Lemma 4.15**.: _Let \(S\) be an inverse semigroup._ 1. \(\sim\,\subseteq\sigma\)_._ 2. _The congruence_ \(\rho\) _is idempotent-pure if and only if_ \(\rho\,\subseteq\,\sim\)_._ Proof.: (1) Suppose that \(a\sim b\). Then \(a\wedge b\) exists by Lemma 4.12. It follows that \(a\,\sigma\,b\). (2) Suppose that \(\rho\) is idempotent-pure and that \(a\,\rho\,b\). Then \(ab^{-1}\,\rho\,bb^{-1}\). But \(\rho\) is idempotent-pure and so \(ab^{-1}\) is an idempotent. 
Similarly, \(a^{-1}b\) is an idempotent. We have proved that \(a\sim b\). Conversely, suppose that \(\rho\) is a congruence such that \(\rho\,\subseteq\,\sim\) and \(a\,\rho\,e\), where \(e\) is an idempotent. Observe that \(a^{-1}\,\rho\,e\). Thus \(aa^{-1}\,\rho\,e\). It follows that \(aa^{-1}\,\rho\,a\). Thus \((aa^{-1})a\) is an idempotent and so \(a\) is an idempotent. Inverse semigroups and their homomorphisms form a category (of structures). The category of groups and their homomorphisms is a subcategory. The properties of the minimum group congruence lead naturally to the following result on the category of inverse semigroups. **Proposition 4.16**.: _The category of groups is a reflective subcategory of the category of inverse semigroups._ Proof.: Let \(S\) be an inverse semigroup and let \(\sigma^{\natural}\colon S\to S/\sigma\) be the associated natural homomorphism. Let \(\theta\colon S\to G\) be a homomorphism to a group \(G\). Then \(\ker(\theta)\) is a group congruence on \(S\) and so \(\sigma\subseteq\ker(\theta)\) by Theorem 4.14. Thus by standard semigroup theory, there is a unique homomorphism \(\theta^{*}\) from \(S/\sigma\) to \(G\) such that \(\theta=\theta^{*}\sigma^{\natural}\). It follows by standard category theory, such as [78, Chapter IV, Section 3], that there is a functor from the category of inverse semigroups to the category of groups which takes each inverse semigroup \(S\) to \(S/\sigma\) and if \(\theta\colon S\to T\) is a homomorphism of inverse semigroups then the function \(\psi\colon S/\sigma\to T/\sigma\) defined by \(\psi(\sigma(s))=\sigma(\theta(s))\) is the corresponding group homomorphism (this can be checked directly). There is another characterization of \(E\)-unitary inverse semigroups which is interesting in this context. **Lemma 4.17**.: _Let \(S\) be an inverse semigroup. Then the following conditions are equivalent:_ 1. \(S\) _is_ \(E\)_-unitary._ 2. \(\sim\,=\sigma\)_._ 3. \(\sigma\) _is idempotent-pure._ 4. \(\sigma(e)=\mathsf{E}(S)\) _for any idempotent_ \(e\)_._ Proof.: (1)\(\Rightarrow\)(2). By part (1) of Lemma 4.15, the compatibility relation is contained in \(\sigma\). Let \((a,b)\in\sigma\). Then \(z\leq a,b\) for some \(z\). It follows that \(z^{-1}z\leq a^{-1}b\) and \(zz^{-1}\leq ab^{-1}\). But \(S\) is \(E\)-unitary and so \(a^{-1}b\) and \(ab^{-1}\) are both idempotents. Hence \(a\sim b\). (2)\(\Rightarrow\)(3). By part (2) of Lemma 4.15, a congruence is idempotent pure precisely when it is contained in the compatibility relation. (3) \(\Rightarrow\) (4). This is immediate from the definition of an idempotent-pure congruence. (4) \(\Rightarrow\) (1) Suppose that \(e\leq a\) where \(e\) is an idempotent. Then \((e,a)\in\sigma\). But by our assumption, the element \(a\) is an idempotent. For inverse semigroups with zero the minimum group congruence is not very interesting since the group degenerates to the trivial group. The following example shows one concrete way to deal with this issue. **Example 4.18**.: Let \(G\) be a group and let \(S\) be the inverse monoid of all isomorphisms between the subgroups of \(G\). The semilattice of idempotents is isomorphic to the partially ordered set of subgroups of \(G\). The trivial group is a subgroup of every group. It follows that the isomorphism from the trivial subgroup to itself is the zero of \(S\). Suppose now that \(G\) is infinite. Consider the inverse subsemigroup \(\Omega(G)\) of \(S\) which consists of all the isomorphisms between the subgroups of \(G\) of finite index. 
The group \(\operatorname{Comm}(G)=\Omega(G)/\sigma\) is called the _abstract commensurator_ of \(G\)[7]. See also [90]. The elements of \(\operatorname{Comm}(G)\) are 'hidden symmetries' to use the terminology of Farb and Weinberger [23]. The above example shows that we may form groups from inverse semigroups by looking at the 'large' elements of the inverse semigroup (and so excluding the zero). Here is another example. Let \(S\) be an inverse semigroup. A non-zero idempotent \(e\) of an inverse semigroup \(S\) is said to be _essential_ if \(ef\neq 0\) for all non-zero idempotents \(f\) of \(S\). An element \(s\) is said to be _essential_ if both \(s^{-1}s\) and \(ss^{-1}\) are essential. Essential elements were first defined in [9] and applied to construct groups in [55, 56]. Denote by \(S^{e}\) the set of all essential elements of \(S\). We regard the elements of \(S^{e}\) as being 'large' elements of \(S\). We call \(S^{e}\) the _essential part of \(S\)._ **Lemma 4.19**.: _Let \(S\) be an inverse semigroup. Then, if non-empty, \(S^{e}\) is an inverse subsemigroup of \(S\)._ Proof.: It is clear that \(S^{e}\) is closed under inverse. Let \(a,b\in S^{e}\). We prove that \(\mathbf{d}(ab)\) is essential; the proof that \(\mathbf{r}(ab)\) is essential is similar. Observe, first, that \(b^{-1}a^{-1}ab\neq 0\) since both \(bb^{-1}\) and \(a^{-1}a\) are essential. Let \(e\) be a non-zero idempotent. We calculate \(\mathbf{d}(ab)e\). Observe that \(beb^{-1}\neq 0\) and is an idempotent, since it is the conjugate of an idempotent. Thus \(a^{-1}abeb^{-1}\neq 0\). It follows that \(a^{-1}abe\neq 0\) and so \(b^{-1}a^{-1}abe\neq 0\). We expect the groups \(S^{e}/\sigma\) to be interesting, which indeed they are if we choose \(S\) carefully. See [56, 71, 73] for applications of the essential part of an inverse semigroup in constructing groups. We have seen that there is a precondition that must be satisfied in order that a pair of elements have a join. The same is not true for meets. If every pair of elements of an inverse semigroup has meets we say that it is a _meet-semigroup_. Such semigroups were first studied by Leech [75]. There is an alternative way to characterize inverse meet-semigroups which is often useful. **Lemma 4.20**.: _Let \(S\) be an inverse semigroup._ 1. \(S\) _has all binary meets if and only if for each element_ \(a\in S\) _there is an idempotent, denoted by_ \(\phi(a)\)_, such that_ \(\phi(a)\leq a\) _and if_ \(e\leq a\)_, where_ \(e\) _is an idempotent, then_ \(e\leq\phi(a)\)_. Thus,_ \(\phi(a)\) _is the largest idempotent below_ \(a\) 2. _The map_ \(\phi\colon S\to\mathsf{E}(S)\)_, defined in part (_1_) above, is an order-preserving idempotent function with_ \(\mathsf{E}(S)\) _as its fixed-point set such that_ \(\phi(ae)=\phi(a)e\) _and_ \(\phi(ea)=e\phi(a)\) _for all_ \(e\in\mathsf{E}(S)\)_._ Proof.: (1) Suppose first that \(S\) has all binary meets. For each \(a\in S\) define \(\phi(a)=a\wedge\mathbf{d}(a)\). Clearly, \(\phi(a)\) is an idempotent (because it is beneath an idempotent). Let \(e\) be an idempotent such that \(e\leq a\). Then \(e\leq\mathbf{d}(a)\) and so \(e\leq\phi(a)\). We have therefore proved that if all binary meet exist, then the function \(\phi\) exists. Conversely, suppose that such a function \(\phi\) exists. Let \(a,b\in S\) and consider the element \(\phi(ab^{-1})b\). Clearly, \(\phi(ab^{-1})b\leq b\). But, by definition, \(\phi(ab^{-1})\leq ab^{-1}\) and so \(\phi(ab^{-1})b\leq ab^{-1}b\leq a\). Let \(c\leq a,b\). 
Then \(cc^{-1}\leq ab^{-1}\) and so \(cc^{-1}\leq\phi(ab^{-1})\). Now \(c\leq b\) and so \(c=(cc^{-1})c\leq\phi(ab^{-1})b\). It follows that \(a\wedge b=\phi(ab^{-1})b\). (2) It is easy to prove that \(\phi\) is order-preserving. We prove that \(\phi(ae)=\phi(a)e\) for all \(e\in\mathsf{E}(S)\). By definition \(\phi(a)\leq a\). Thus \(\phi(a)e\leq ae\). But \(\phi(a)e\) is an idempotent. It follows that \(\phi(a)e\leq\phi(ae)\). We have that \(\phi(ae)\leq ae\leq a\). But \(\phi(ae)\) is an idempotent. It follows that \(\phi(ae)\leq\phi(a)\) and so \(\phi(ae)e\leq\phi(a)e\). But \(\phi(ae)\leq ae\), and so \(\phi(ae)e=\phi(ae)\). Thus \(\phi(ae)\leq\phi(a)e\). We have therefore proved that \(\phi(ae)=\phi(a)e\). The function \(\phi\) is called a _fixed-point operator_. **Example 4.21**.: We can deduce right away that the symmetric inverse monoids are meet-monoids because if \(f\in\mathcal{I}(X)\) then we can define the idempotent \(1_{A}\) where \(A\) is the set of all points that \(f\) fixes (and if \(f\) does not fix any points then \(A=\varnothing\).) The \(E^{*}\)-unitary semigroups also enjoy a property that is more significant than it looks. The following was first noted by [76]. **Proposition 4.22**.: _An \(E^{*}\)-unitary inverse semigroup has meets of all pairs of elements._ Proof.: Let \(s\) and \(t\) be any pair of elements. Suppose that there exists a non-zero element \(u\) such that \(u\leq s,t\). Then \(uu^{-1}\leq st^{-1}\) and \(uu^{-1}\) is a non-zero idempotent. Thus \(st^{-1}\) is an idempotent. Similarly \(s^{-1}t\) is an idempotent. It follows that \(s\wedge t\) exists by Lemma 4.12. If the only element below \(s\) and \(t\) is \(0\) then \(s\wedge t=0\). To conclude this section, we state another deep result about finite inverse monoids. We need a definition first. An inverse monoid \(S\) is said to be \(F\)_-inverse_ if every \(\sigma\)-class contains a greatest element. **Lemma 4.23**.: _Every \(F\)-inverse monoid is \(E\)-unitary._ Proof.: Let \(S\) be an \(F\)-inverse monoid. Suppose that \(e\leq a\). Then \(\sigma(e)=\sigma(a)\). Each \(\sigma\)-class contains a maximum element. Since we are working in a monoid, \(\sigma(e)=\sigma(1)\). Let the maximum element in \(\sigma(1)\) be \(x\). Then \(1\leq x\). It follows that \(1=x1=x\). Thus the maximum element of \(\sigma(1)\) is \(1\) itself. Thus \(a\leq 1\) and so \(a\) is an idempotent. You can find out a lot more about \(F\)-inverse monoids, here [6]. In [5, Theorem 2.7], the following deep theorem is proved and the background to it explained. **Theorem 4.24**.: _Every finite inverse monoid has a finite \(F\)-inverse monoid cover._ The implications of this theorem for finite inverse monoid theory are explained in [50], but it is enough to say that this is another example of a beautiful piece of mathematics and is related to Ash's deep result Theorem 3.27. ## 5. Inverse semigroups as non-commutative lattices It is only a slight exaggeration to say that in the four decades after inverse semigroups were introduced the main focus of researchers was on the purely algebraic properties of inverse semigroups.11 Subsequently, the properties of the natural partial order have come to the fore. In this section, we shall study inverse semigroups with respect to this natural partial order. From this point of view, inverse semigroups can themselves be regarded as 'non-commutative meet semilattices'. 
However, we shall find it fruitful to regard certain inverse semigroups as being 'non-commutative lattices' of various kinds, always with the proviso that, because of Lemma 4.8, the join will not always be defined. Meets of idempotents will usually be denoted by concatenation. Specifically, we shall take as our points of departure the various classes of lattice (each with top \(1\) and bottom \(0\)): _frames_ are the complete infinitely distributive lattices; _distributive lattices_ have all binary meets and binary joins with binary meets distributing over binary joins and vice-versa; _Boolean algebras_ are those distributive lattices in which each element \(x\) has a _complement_ \(\bar{x}\) such that \(x\bar{x}=0\) and \(x\vee\bar{x}=1\). Observe that homomorphisms of distributive lattices automatically preserve complements. Boolean algebras are particularly important so it will be useful to have a way of defining them which involves only products and complements. For the standard axioms for a Boolean algebra see [61, Chapter 2]. The following lemma contains some axioms due to Frink [26]. If you have trouble proving that this really is a Boolean algebra, see [93]. **Lemma 5.1**.: _Consider the following structure \((B,\cdot,a\mapsto\bar{a},0)\) where \((B,\cdot)\) is a commutative band and \(ab=a\) if and only if \(a\bar{b}=0\). Define \(a+b=\overline{(\bar{a}\cdot\bar{b})}\). Then \((B,\cdot,+,0,\bar{0})\) is a Boolean algebra._ We call these _Frink's axioms for a Boolean algebra_. We begin with frames. The lattice of open sets of a topological space is an example of a frame. The theory of frames can be viewed as the theory of topological spaces in which the open sets, and not the points, are taken as primary. As well as being an interesting theory in its own right [38] with important applications, it is also a key ingredient in topos theory [79]. Johnstone discusses the origins of frame theory in his book on this subject [38, Chapter II]. One sentence is significant for the goals of this section. He writes on page 76: It was Ehresmann...and his student Benabou...who first took the decisive step in regarding complete Heyting algebras as 'generalized topological spaces'. However, Johnstone does not say _why_ Ehresmann was led to his frame-theoretic viewpoint of topological spaces. In fact, it was Ehresmann's paper [20], which we have cited above as being one of the origins of inverse semigroup theory, which led to the theory of frames. Ehresmann was interested in pseudogroups of transformations. Amongst those are the full transformation pseudogroups \(\mathcal{I}(X,\tau)\) of homeomorphisms between the open subsets of \(X\). The idempotents of such pseudogroups are the identity functions defined on the open subsets of \(X\). Of course, these form frames. More generally, we define a _pseudogroup_ to be an inverse semigroup in which all compatible subsets have joins and in which multiplication distributes over such joins. The semilattice of idempotents of a pseudogroup is a frame and so we can regard pseudogroups as being non-commutative generalizations of frames. Using this language, we can say that Schein [104] proved that associated with every inverse semigroup is a universal pseudogroup. Pseudogroups themselves are the subject of [102]. We can continue in this vein, but consider instead only finite joins. We say that an inverse monoid is _distributive_ if it has all finite joins and multiplication distributes over such joins.
The semilattice of idempotents of a distributive inverse monoid is a distributive lattice. We can therefore regard distributive inverse monoids as being non-commutative distributive lattices. Distributive inverse monoids were first studied in [42]. An inverse monoid is said to be _Boolean_ if it is distributive and its semilattice of idempotents is in fact a Boolean algebra. We can therefore regard Boolean inverse monoids as being non-commutative Boolean algebras. Boolean inverse monoids have shown themselves to be particularly interesting. The relationships between these various classes of inverse semigroup is described in [66]. Boolean inverse monoids were introduced in [59], and the fact that associated with every inverse semigroup is a universal Boolean inverse semigroup is proved in [62]. Associated with every Boolean inverse monoid is its _type monoid_: this is always an abelian refinement monoid [47]. The type monoid is then studied in great detail in [117]. The following table summarizes this whole approach to inverse semigroups: \begin{tabular}{|c||c|} \hline **Commutative** & **Non-commutative** \\ \hline \hline Meet semilattice & Inverse semigroup \\ \hline Frame & Pseudogroup \\ \hline Distributive lattice & Distributive inverse monoid \\ \hline Boolean algebra & Boolean inverse monoid \\ \hline \end{tabular} In the remainder of this section, we shall be particularly interested in Boolean inverse monoids, but we shall prove some results in greater generality. Our first results hold in any inverse semigroup **Lemma 5.2**.: _Let \(S\) be an inverse semigroup._ 1. _Here, we shall need the fact that the inverse semigroup has a zero. Suppose that_ \(a,b\leq c\)_. If_ \(\mathbf{d}(a)\perp\mathbf{d}(b)\) _then_ \(\mathbf{r}(a)\perp\mathbf{r}(b)\)_._ 2. _If_ \(a\wedge b\) _exists then_ \(c(a\wedge b)=ca\wedge cb\) _and_ \((a\wedge b)c=ac\wedge bc\)_._ Proof.: (1) We are given that \(a=c\mathbf{d}(a)\) and \(b=c\mathbf{d}(b)\). Using these equations and the fact that idempotents commute, it is easy to check that \(\mathbf{r}(a)\mathbf{r}(b)=0\). (2) We are given that \(a\wedge b\) exists. We prove that \(ca\wedge cb\) exists and that \((a\wedge b)c=ac\wedge bc\). The proof of the other case is similar. Observe that \(c(a\wedge b)\leq ca,cb\). Now let \(y\leq ca,cb\). Then \(c^{-1}y\leq c^{-1}ca,c^{-1}cb\) and so \(c^{-1}y\leq a,b\). It follows that \(c^{-1}y\leq a\wedge b\). Thus \(cc^{-1}y\leq c(a\wedge b)\). But \(y\leq ca\) and so \(y=ca\mathbf{d}(y)\). It follows that \(cc^{-1}y=y\). Thus \(y\leq ca,cb\) implies that \(y\leq c(a\wedge b)\). It follows that \(c(a\wedge b)=ca\wedge cb\) We now specialize to distributive inverse monoids. Result (2) below is what remains of one of the distributive laws. **Lemma 5.3**.: _Let \(S\) be a distributive inverse monoid._ 1. _Suppose that_ \(a\sim b\)_. Then_ \(\mathbf{d}(a\lor b)=\mathbf{d}(a)\vee\mathbf{d}(b)\) _and_ \(\mathbf{r}(a\lor b)=\mathbf{r}(a)\vee\mathbf{r}(b)\)__ 2. _Suppose that both_ \(a\lor b\) _and_ \(c\wedge(a\lor b)\) _exist. Then both_ \(c\wedge a\) _and_ \(c\wedge b\) _exist, the join_ \((c\wedge a)\vee(c\wedge b)\) _exists and_ \(c\wedge(a\lor b)=(c\wedge a)\vee(c\wedge b)\)_._ Proof.: (1) We prove the first result; the proof of the second is similar. We calculate \((a\lor b)^{-1}(a\lor b)\). We first use the fact that \(a\mapsto a^{-1}\) is an order-isomorphism. It follows that \((a\lor b)^{-1}(a\lor b)=(a^{-1}\lor b^{-1})(a\lor b)\). 
Now multiply out to get \[a^{-1}a\lor a^{-1}b\lor b^{-1}a\lor b^{-1}b.\] But \(a\sim b\) and so both \(a^{-1}b\) and \(b^{-1}a\) are idempotents. Moreover, \(a^{-1}b\leq a^{-1}a\) and \(b^{-1}a\leq b^{-1}b\). The result now follows. (2) Let \(x\leq c,a\). Then \(x\leq c\wedge(a\lor b)\). It follows that \(x\mathbf{d}(a)\leq(c\wedge(a\lor b))\mathbf{d}(a)\). But \(x\leq a\) implies that \(x\mathbf{d}(a)=x\). It follows that \(x\leq(c\wedge(a\lor b))\mathbf{d}(a)\). On the other hand, \((c\wedge(a\lor b))\mathbf{d}(a)\leq c,a\). It follows that \[c\wedge a=(c\wedge(a\lor b))\mathbf{d}(a).\] Similarly, \[c\wedge b=(c\wedge(a\lor b))\mathbf{d}(b).\] Observe that since \(c\wedge a,c\wedge b\leq a\lor b\) it follows that \((c\wedge a)\sim(c\wedge b)\). Thus \((c\wedge a)\vee(c\wedge b)\) exists. Observe that \(c\wedge a,c\wedge b\leq c\) and \(c\wedge a,c\wedge b\leq a\lor b\). It follows that \((c\wedge a)\vee(c\wedge b)\leq c,a\lor b\). Thus, \[(c\wedge a)\vee(c\wedge b)\leq c\wedge(a\lor b).\] Let \(x\leq a\lor b,c\). Then \(x=(a\lor b)\mathbf{d}(x)\). From \(x\leq a\lor b\) we get that \(x\mathbf{d}(a)\leq(a\lor b)\mathbf{d}(a)\). Now, \((a\lor b)\mathbf{d}(a)=a\). It follows that \(x\mathbf{d}(a)\leq a\). But \(x\leq c\) and so \(x\mathbf{d}(a)\leq c\mathbf{d}(a)\). Thus we have proved that \(x\mathbf{d}(a)\leq a,c\mathbf{d}(a)\). We now apply Lemma 5.2 and deduce that \((a\wedge c)\mathbf{d}(a)=a\wedge c\mathbf{d}(a)\). Thus \(x\mathbf{d}(a)\leq a\wedge c\mathbf{d}(a)\). Similarly, \(x\mathbf{d}(b)\leq c\mathbf{d}(b)\wedge b\). It follows that \[x\mathbf{d}(a)\lor x\mathbf{d}(b)\leq(c\mathbf{d}(a)\wedge a)\vee(c\mathbf{d}(b)\wedge b)\] and so \[x\mathbf{d}(a\lor b)\leq(c\mathbf{d}(a)\wedge a)\vee(c\mathbf{d}(b)\wedge b).\] By the above, we deduce that \[x\leq(c\mathbf{d}(a)\wedge a)\vee(c\mathbf{d}(b)\wedge b).\] But \((c\mathbf{d}(a)\wedge a)\vee(c\mathbf{d}(b)\wedge b)\leq(c\wedge a)\vee(c\wedge b)\) and the result follows. By induction, the second result above can be generalized to any number of joins. We now specialize to Boolean inverse monoids. Let \(S\) be a Boolean inverse monoid. If \(y\leq x\), define \[x\setminus y=x\overline{\mathbf{d}(y)}.\] **Lemma 5.4**.: _Let \(S\) be a Boolean inverse monoid._ 1. \(\mathbf{d}(x\setminus y)=\mathbf{d}(x)\overline{\mathbf{d}(y)}\)_._ 2. _If_ \(y\leq x\) _then_ \(y\perp(x\setminus y)\) _and_ \(x=y\vee(x\setminus y)\)_._ 3. _Suppose that_ \(a\leq x\) _is such that_ \(a\perp y\) _and_ \(x=y\lor a\)_. Then_ \(a=x\setminus y\)_._ 4. \(\mathbf{r}(x\setminus y)=\mathbf{r}(x)\overline{\mathbf{r}(y)}\)_._ Proof.: (1) Straightforward from the definition. (2) By definition \((x\setminus y)\leq x\), and from part (1), we have that \(\mathbf{d}(x\setminus y)=\mathbf{d}(x)\overline{\mathbf{d}(y)}\). It follows that \(\mathbf{d}(y)\perp\mathbf{d}(x\setminus y)\). We now apply Lemma 5.2 to deduce that \(y\perp(x\setminus y)\). Clearly, \(y\vee(x\setminus y)\leq x\). But \(\mathbf{d}(y\vee(x\setminus y))=\mathbf{d}(y)\vee(\mathbf{d}(x)\overline{\mathbf{d}(y)})=\mathbf{d}(x)\). It follows that \(x=y\vee(x\setminus y)\). (3) Observe that \(\mathbf{d}(x)=\mathbf{d}(y)\vee\mathbf{d}(a)\) and \(\mathbf{d}(a)\perp\mathbf{d}(y)\). It follows that \(\mathbf{d}(a)=\mathbf{d}(x)\overline{\mathbf{d}(y)}\). But \(a=x\mathbf{d}(a)=x\overline{\mathbf{d}(y)}=x\setminus y\). (4) This follows by part (3) above. A _morphism_ between distributive inverse monoids is a homomorphism of monoids with zero that preserves binary joins.
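Before moving on, Lemma 5.4 can be tested concretely in a finite symmetric inverse monoid, where idempotents are identity maps on subsets of \(X\) and complements are taken relative to \(X\). The sketch below is our own (helper names such as `setminus` are invented) and simply checks part (2) of the lemma on one example.

```python
# The finite symmetric inverse monoid I(X) as a Boolean inverse monoid (a sketch).

X = {1, 2, 3, 4}

def compose(s, t):
    return {x: s[t[x]] for x in t if t[x] in s}

def inverse(s):
    return {v: k for k, v in s.items()}

def dom_idempotent(s):          # d(s) = s^{-1}s, the identity map on dom(s)
    return {x: x for x in s}

def complement(e):              # complement of an idempotent inside E(I(X))
    return {x: x for x in X if x not in e}

def setminus(x, y):             # x \ y = x . complement(d(y))
    return compose(x, complement(dom_idempotent(y)))

x = {1: 2, 2: 3, 3: 4}
y = {1: 2}                      # y <= x

diff = setminus(x, y)
assert diff == {2: 3, 3: 4}
assert compose(inverse(y), diff) == {} == compose(y, inverse(diff))   # y _|_ x\y
assert {**y, **diff} == x       # x = y v (x \ y), as in part (2) of Lemma 5.4
```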
In working with Boolean inverse monoids, it is often much easier to calculate with orthogonal joins than arbitrary ones. The following result provides circumstances under which we lose nothing by doing this. **Lemma 5.5**.: _Let \(S\) be an inverse semigroup._ 1. _We make the following assumptions about_ \(S\)_: the semilattice of idempotents of_ \(S\) _forms a Boolean algebra under the natural partial order; all finite orthogonal joins exist; multiplication distributes over finite orthogonal joins. Then_ \(S\) _is a Boolean inverse monoid._ 2. _Let_ \(S\) _and_ \(T\) _be Boolean inverse monoids. Let_ \(\theta\colon S\to T\) _be a homomorphism of monoids with zero that preserves orthogonal joins. Then_ \(\theta\) _preserves all binary compatible joins._ Proof.: (1) We have to prove that all binary joins exist and that multiplication distributes over such joins. Let \(e,f\in\mathsf{E}(S)\). Then using the distributive law, we have that \(e\bar{f}\lor f=e\lor f\). But \(e\bar{f}\) and \(f\) are orthogonal. It follows that \(a(e\lor f)=a(e\bar{f}\lor f)=ae\bar{f}\lor af\). Observe that \((ae\bar{f}\lor af){\bf d}(ae)=ae\). It follows that \(ae,af\leq ae\bar{f}\lor af\). On the other hand, if \(ae,af\leq x\) then \(ae\bar{f}\lor af\leq x\). We have therefore proved that \[a(e\lor f)=ae\lor af\] where \(e\) and \(f\) are any idempotents. Suppose, now, that \(a\sim b\). Then \(a\wedge b\) exists by Lemma 4.12. We have that \[(a\setminus a\wedge b)\lor b=a\overline{{\bf d}(a){\bf d}(b)}\lor b\] since, in this case, \({\bf d}(a\wedge b)={\bf d}(a){\bf d}(b)\) by Lemma 4.12. But \(\overline{{\bf d}(a){\bf d}(b)}=\overline{{\bf d}(a)}\vee\overline{{\bf d}(b)}\). We now use our result above to deduce that \[(a\setminus a\wedge b)\lor b=a\overline{{\bf d}(b)}\lor b,\] an orthogonal join. Observe that \((a\overline{{\bf d}(b)}\lor b){\bf d}(a)=a\), where we use the fact that \(b{\bf d}(a)=a{\bf d}(b)\) by Lemma 4.12, our result above, and a little Boolean algebra. We therefore have that \(a,b\leq a\overline{{\bf d}(b)}\lor b\). Suppose that \(a,b\leq x\). Then it is routine to check that \(a\overline{{\bf d}(b)}\lor b\leq x\). We have therefore proved that \(a\lor b\) exists and is equal to the orthogonal join \(a\overline{{\bf d}(b)}\lor b\). It remains to show that multiplication distributes over binary joins. Suppose that \(a\sim b\). It can be proved directly that \(ca\sim cb\). We prove that \[c(a\lor b)=ca\lor cb.\] We have that \[c(a\lor b)=c(a\overline{{\bf d}(b)}\lor b)=ca\overline{{\bf d}(b)}\lor cb.\] Thus \[ca\leq ca\overline{{\bf d}(b)}\lor cb.\] Now, \[(ca\overline{{\bf d}(b)}\lor cb){\bf d}(ca)=ca\overline{{\bf d}(b)}\lor cb{\bf d}(ca).\] We now use the fact that \(ca\sim cb\) and Lemma 4.12 to get \[ca\overline{{\bf d}(b)}\lor cb{\bf d}(ca)=ca\overline{{\bf d}(b)}\lor ca{\bf d}(cb)\leq ca.\] It follows that \[ca=(ca\overline{{\bf d}(b)}\lor cb){\bf d}(ca).\] We deduce that \(ca,cb\leq ca\overline{{\bf d}(b)}\lor cb\). Suppose that \(ca,cb\leq x\). Then it is routine to check that \(ca\overline{{\bf d}(b)}\lor cb\leq x\). Whence \(ca\lor cb=ca\overline{{\bf d}(b)}\lor cb\) and so \(c(a\lor b)=ca\lor cb\). (2) Suppose that \(a\sim b\). Then \(a\lor b=a\overline{{\bf d}(b)}\lor b\), which is an orthogonal join. By assumption, \[\theta(a\lor b)=\theta(a\overline{{\bf d}(b)})\vee\theta(b)=\theta(a)\theta(\overline{{\bf d}(b)})\vee\theta(b).\] The result now follows since \(\theta(\bar{e})=\overline{\theta(e)}\) where \(e\) is any idempotent.
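The key step in the proof of part (1) is that a compatible join can always be rewritten as an orthogonal one, \(a\vee b=a\overline{{\bf d}(b)}\vee b\). As a small hedged check (our own code and helper names), the following sketch verifies this identity for one pair of compatible partial bijections, where joins in \(\mathcal{I}(X)\) are simply unions.

```python
# Rewriting a compatible join as an orthogonal join: a v b = a.complement(d(b)) v b.

X = {1, 2, 3, 4, 5}

def compose(s, t):
    return {x: s[t[x]] for x in t if t[x] in s}

def dom_idempotent(s):
    return {x: x for x in s}

def complement(e):
    return {x: x for x in X if x not in e}

a = {1: 4, 2: 5, 3: 1}
b = {3: 1, 4: 2}                # compatible with a: they agree where their domains meet

a_part = compose(a, complement(dom_idempotent(b)))   # a . \overline{d(b)}

assert set(a_part).isdisjoint(b)                      # orthogonal domains
assert set(a_part.values()).isdisjoint(b.values())    # orthogonal ranges
assert {**a_part, **b} == {**a, **b}                  # the two joins agree
```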
For the remainder of this section, we shall focus on Boolean inverse monoids. As we shall prove in Theorem 5.10, they arise naturally as soon as we try to map inverse monoids to the multiplicative monoids of rings, such as in representation theory. Accordingly, we begin with some results about idempotents in rings. Recall that all our rings will be unital. If \(e\) and \(f\) are idempotents in a ring \(R\) then we say they are _orthogonal_ if \(ef=0\) denoted by \(e\perp f\). A finite set of idempotents is _orthogonal_ if each distinct pair of elements is orthogonal. A sum of a finite number of orthogonal idempotents will be called an _orthogonal sum_. More generally, a sum of orthogonal elements taken from an inverse submonoid will be called an _orthogonal sum_. **Lemma 5.6**.: _Let \(R\) be a unital ring._ 1. _If_ \(e\) _is an idempotent in_ \(R\)_, then_ \(1-e\) _is an idempotent in_ \(R\)_; in addition, the idempotents_ \(e\) _and_ \(1-e\) _are orthogonal._ 2. _If_ \(e\) _and_ \(f\) _are orthogonal idempotents then_ \(e+f\) _is an idempotent; this can be generalized to any finite sets of orthogonal idempotents._ 3. _If_ \(e\) _and_ \(f\) _are orthogonal idemptents then_ \[1-(e+f)=(1-e)(1-f).\] 4. _If_ \(e\) _and_ \(f\) _are commuting idempotents, then_ \[e+f-ef=f+e(1-f),\] _an orthogonal sum._ 5. _If_ \(e\) _and_ \(f\) _are commuting idempotents, then_ \[e\circ f=e+f-ef\] _is an idempotent._ 6. _If_ \(e\) _and_ \(f\) _are commuting idempotents then_ \[(1-e)(1-f)=1-e\circ f.\] 7. _If_ \(e,f,g\) _is a set of commuting idempotents, then_ \(e\) _commutes with_ \(f\circ g\) _and_ \[(e\circ f)\circ g=e\circ(f\circ g).\] 8. _If_ \(e_{1},\dots,e_{m}\) _are orthogonal idempotents, then_ \[1-\left(\sum_{i=1}^{m}e_{i}\right)=\prod_{i=1}^{m}(1-e_{i}).\] _._ 9. _Let_ \(e_{1},\ldots,e_{m}\) _be a set of commuting idempotents. Define_ \[[e_{1},\ldots,e_{m}]=(\ldots((e_{1}\circ e_{2})\circ e_{3})\ldots)\circ e_{m}.\] _Then_ \[\prod_{i=1}^{m}(1-e_{i})=1-[e_{1},\ldots,e_{m}].\] 10. _Let_ \(e_{1},\ldots,e_{m}\) _be a set of commuting idempotents. Then_ \[[e_{1},\ldots,e_{m}]\] _is defined to be_ \[e_{1}(1-e_{2})\ldots(1-e_{m})+e_{2}(1-e_{3})\ldots(1-e_{m})+\ldots+e_{m-1}(1- e_{m})+e_{m},\] _a sum of orthogonal idempotents._ Proof.: The proofs of (1)-(7) are straightforward. (8) By part (3) and induction. (9) By part (6) and induction. (10) By part (4) and induction. We begin with a lemma which is a special case of the result we want to prove. **Lemma 5.7**.: _Let \(E\subseteq R\), where \(R\) is a unital ring. Suppose that \(0,1\in E\) and \(E\) is a commutative band under the induced multiplication from \(R\). Then there is a subset \(E\subseteq B\subseteq R\) such that \(B\) is a Boolean algebra with meet given by the product in the ring._ Proof.: If \(E\) is any semilattice in \(R\), define \[E^{\prime}=\{e(1-e_{1})\ldots(1-e_{m})\colon e,e_{1},\ldots,e_{m}\in E\}\cup E.\] Then it is easy to check that this is a commutative band. If \(E\) is any semilattice in \(R\), define \(E^{\perp}\) to be the set of all sums of finite sets of orthogonal elements of \(E\). Then it is easy to check that this is a commutative band. Put \(B=(E^{\prime})^{\perp}\). This is a commutative band. To prove that \(B\) is a Boolean algebra, we use Lemma 5.1. We claim that if \(e\in B\) then \(1-e\in B\). An element of \(B\) is an orthogonal join of idempotents that belong to \(E^{\prime}\). 
Observe that if \(e_{1},\ldots,e_{m}\) are any orthogonal idempotents of \(B\) then \(1-(\sum_{i=1}^{m}e_{i})=\prod_{i=1}^{m}(1-e_{i})\) by part (8) of Lemma 5.6 where each \(e_{i}\in E^{\prime}\). If \(e\in E\) then \(1-e\in E^{\prime}\). Thus we need only concentrate on idempotents of the form \[e(1-f_{1})\ldots(1-f_{n})\in E^{\prime}\] where \(f_{1},\ldots,f_{n}\in E\). Then \[1-e(1-f_{1})\ldots(1-f_{n})=(1-e)+e[f_{1},\ldots,f_{n}],\] using part (9) of Lemma 5.6, which is a sum of orthogonal idempotents by part (10) of Lemma 5.6. We have therefore proved that \(B\) is closed under taking complements. Let \(e,f\in B\) and suppose that \(ef=e\). Then \(e(1-f)=e-ef=e-e=0\). On the other hand, suppose that \(e(1-f)=0\). Then \(e=ef\). We have that \(B\) satisfies Frink's axioms and so \(B\) is a Boolean algebra. To help us prove our main result, it is useful to have the following two lemmas. The first says that to check if an inverse monoid is Boolean it is enough to prove that we have orthogonal joins. **Lemma 5.8**.: _Let \(S\subseteq R\), where \(R\) is a unital ring. Suppose that \(0,1\in S\) and that \(S\) is an inverse monoid under the induced multiplication from \(R\). In addition, we suppose that \(S\) is closed under sums of pairs of orthogonal elements and, if \(e\in S\) is an idempotent, then \(1-e\in S\). Then \(S\) is a Boolean inverse monoid._ Proof.: It is immediate that \(\mathsf{E}(S)\) satisfies Frink's axioms and so is a Boolean algebra. Let \(e\) and \(f\) be orthogonal idempotents. Then \(e+f=e\lor f\); to see why observe that \((e+f)e=e\) and \((e+f)f=f\). Thus \(e,f\leq e+f\). On the other hand, suppose that \(e,f\leq i\). Then \((e+f)i=e+f\). Thus \(e+f\leq i\). It follows that \(e+f=e\lor f\). Let \(a,b\in S\) be arbitrary orthogonal elements. They are, in particular, compatible. We prove that \(a+b=a\lor b\) and the result then follows by Lemma 5.5. Observe that \[(a+b)\mathbf{d}(a)=a\mathbf{d}(a)+b\mathbf{d}(b)=a\mathbf{d}(a)+a\mathbf{d}(b )=a\mathbf{d}(a)=a\] where we have used Lemma 4.12. Thus \(a\leq a+b\). Similarly \(b\leq a+b\). Suppose that \(a,b\leq x\). Then \(a+b=x\mathbf{d}(a)+x\mathbf{d}(b)=x(\mathbf{d}(a)\vee\mathbf{d}(b))\). Thus \(a+b\leq x\). We have proved that \(a+b=a\lor b\) if \(a\) and \(b\) are orthogonal. Our second result deals with orthogonal sums. **Lemma 5.9**.: _Let \(S\subseteq R\), where \(R\) is a unital ring. Suppose that \(0,1\in S\) and that \(S\) is an inverse monoid under the induced multiplication from \(R\). Define \(S^{\perp}\) to be the sums of all finite orthogonal sets of \(S\). Then \(S^{\perp}\) is an inverse monoid and its set of idempotents is \(\mathsf{E}(S)^{\perp}\)._ Proof.: Let \(a,b\in S^{\perp}\). Then the fact that \(ab\in S^{\perp}\) follows by Lemma 4.10. Observe that if \(a=a_{1}+\ldots+a_{n}\), an orthogonal sum of elements of \(S\), then defining \(a^{-1}=a_{1}^{-1}+\ldots+a_{n}^{-1}\) we get that \(a=aa^{-1}a\). Thus \(S^{\perp}\) is a regular semigroup where, for example, \(aa^{-1}=a_{1}a_{1}^{-1}+\ldots+a_{n}a_{n}^{-1}\). Suppose that \(a=a_{1}+\ldots+a_{n}\) is an idempotent. Then \(a^{2}=a\) and so \(a^{2}a_{1}^{-1}=a_{1}a_{1}^{-1}\). It follows that \(a_{1}^{2}a_{1}^{-1}=a_{1}a_{1}^{-1}\). We deduce that \(a_{1}\) is an idempotent. In a similar way, we may deduce that each of \(a_{1},\dots,a_{n}\) is an idempotent and so \(a\) is a sum of orthogonal idempotents. We have therefore provded that \(\mathsf{E}(S^{\perp})=\mathsf{E}(S)^{\perp}\). 
It now follows that \(S^{\perp}\) is an inverse monoid since it is regular and the idempotents commute. We can now prove the more general result which shows us that Boolean inverse monoids arise naturally. **Theorem 5.10**.: _Let \(S\subseteq R\), where \(R\) is a unital ring. Suppose that \(0,1\in S\) and \(S\) is an inverse monoid under the induced multiplication from \(R\). Then there is a subset \(S\subseteq T\subseteq R\) such that \(T\) is a Boolean inverse monoid._ Proof.: Put \(E=\mathsf{E}(S)\). Define \(E^{\prime}\) as in the proof of Lemma 5.7. It is easy to check that \(E^{\prime}\) is closed under conjugation by elements of \(S\). Put \(S^{\prime}=SE^{\prime}\). Then \(S^{\prime}\) is a regular monoid and the set of idempotents of \(S^{\prime}\) is precisely the set \(E^{\prime}\). It follows by Proposition 3.9, that \(S^{\prime}\) is also an inverse monoid. Define \(T\) to be the sums of finite orthogonal subsets of \(S^{\prime}\). Then \(T\) is an inverse monoid whose set of idempotents is precisely the set \((\mathsf{E}(S)^{\prime})^{\perp}\), using the notation of the proof of Lemma 5.7. This is proved using Lemma 5.9. It now follows by Lemma 5.8 that \(T\) is a Boolean inverse monoid. ## 6. The underlying groupoid The product we have defined on the symmetric inverse monoid \(\mathcal{I}(X)\) is not the only one, nor perhaps even the most obvious one, that we might define. Given partial bijections \(f\) and \(g\), we could define \(fg\) only when the domain of \(f\) is equal to the range of \(g\). When we do this, we are regarding \(f\) and \(g\) as being functions rather than as partial functions. With respect to this'restricted product', \(\mathcal{I}(X)\) becomes a groupoid. What we have done for the special case of the symmetric inverse monoids, we can also do for arbitrary inverse semigroups, but first we review the basics of groupoid theory we shall need. Categories are usually regarded as categories of structures with their morphisms. They can, however, also be regarded as algebraic structures in their own right, no different from groups, rings and fields except that the binary operation is only partially defined. We shall define categories from this purely algebraic point of view. Let \(C\) be a set equipped with a partial binary operation which we shall denote by \(\cdot\) or by concatenation. If \(x,y\in C\) and the product \(x\cdot y\) is defined we write \(\exists x\cdot y\). An element \(e\in C\) is called an _identity_ if \(\exists e\cdot x\) implies \(e\cdot x=x\), and \(\exists x\cdot e\) implies \(x\cdot e=x\). The set of identities of \(C\) is denoted \(C_{o}\), where the subscript 'o' stands for 'object'. The pair \((C,\cdot)\) is said to be a _category_ if the following axioms hold: (C1): \(x\cdot(y\cdot z)\) exists if and only if \((x\cdot y)\cdot z\) exists, in which case they are equal. (C2): \(x\cdot(y\cdot z)\) exists if and only if \(x\cdot y\) and \(y\cdot z\) exist. (C3): For each \(x\in C\) there exist identities \(e\) and \(f\) such that \(\exists x\cdot e\) and \(\exists f\cdot x\). From axiom (C3), it follows that the identities \(e\) and \(f\) are uniquely determined by \(x\). We write \(e=\mathbf{d}(x)\) and \(f=\mathbf{r}(x)\), where \(\mathbf{d}(x)\) is the _domain_ identity and \(\mathbf{r}(x)\) is the _range_ identity.12 Observe that \(\exists x\cdot y\) if and only if \(\mathbf{d}(x)=\mathbf{r}(y)\). The elements of a category are called _arrows_. 
We say that the arrow \(x\)_starts at \(\mathbf{d}(x)\)_ and _ends at \(\mathbf{r}(x)\)_. If \(C\) is a category and \(e\) and \(f\) identities in \(C\) then we put Footnote 12: As you will see, there is no contradiction with the notation we introduced earlier. \[\hom(e,f)=\{x\in C\colon\ \mathbf{d}(x)=f\text{ and }\mathbf{r}(x)=e\},\] the set of _arrows from \(f\) to \(e\)_. Subsets of \(C\) of the form \(\hom(e,f)\) are called _hom-sets_. We also put \(\operatorname{end}(e)=\hom(e,e)\), the _local monoid at \(e\)_. We define _subcategories_ in the obvious way. Viewed in this light, a category is a monoid with many identities since the categories with exactly one identity are precisely the monoids. We shall not need arbitary categories but those that generalize groups. A category \(C\) is said to be a _groupoid_ if for each \(x\in C\) there is an element \(x^{-1}\) such that \(x^{-1}x=\mathbf{d}(x)\) and \(xx^{-1}=\mathbf{r}(x)\). The element \(x^{-1}\) is unique with these properties. In the case of groupoids, the local monoids are, in fact, _local groups_. It can happen that a groupoid consists entirely of its local units; in this case, we say that the groupoid is a _union of groups_. A groupoid with exactly one identity is a group. Two elements \(x\) and \(y\) of a groupoid are said to be _connected_ if there is an arrow starting at \(\mathbf{d}(x)\) and ending at \(\mathbf{d}(y)\). This defines an equivalence relation on the groupoid whose equivalence classes are called the _connected components_13 of the groupoid. A groupoid with one connected component is said to be _connected_. Every groupoid can be written as a disjoint union of connected groupoids. We say that a groupoid is _principal_ if for any identities \(e\) and \(f\) there is at most one arrow from \(e\) to \(f\). Footnote 13: The use of the word ‘connected’ here is unfortunate. It has nothing to do with topology. The groupoids here are discrete. **Lemma 6.1**.: _Let \(G\) be a groupoid. Then \(G\) is principal if and only if all its local groups are trivial._ Proof.: If a groupoid is principal, it is clear that the local groups are trivial. Suppose, now, that the local groups are trivial. We prove that the groupoid is principal. Suppose that \(x\) and \(y\) are arrows that start at \(e\) and end at \(f\). Then \(y^{-1}x\) begins and ends at \(e\). This means that, under our assumption, \(y^{-1}x=e\). We deduce that \(x=y\). We define _subgroupoids_ in the obvious way. **Example 6.2**.: Let \(X\) be a set. The set \(X\times X\) becomes a groupoid when we define \(\mathbf{d}(x,y)=(y,y)\), \(\mathbf{r}(x,y)=(x,x)\) and \((x,y)^{-1}=(y,x)\); define a partial product by \((x,y)(y,z)=(x,z)\). Now, let \(\sim\) be an equivalence relation on \(X\). Define \(G\) to consist of those ordered pairs \((x,y)\) where \(x\sim y\). It is easy to check that \(G\) is a subgroupoid of \(X\times X\). It is, in fact, a principal groupoid. If \(G\) is an arbitrary principal groupoid, then it defines an equivalence relation on the set \(X=G_{o}\). In fact, principal groupoids and equivalence relations are different ways of defining the same thing. At this point, category theorists should look away since we shall convert a category into a semigroup. If \(C\) is a category as we have defined it above, then we can convert it into a semigroup with zero by adjoining a zero and defining all undefined products to be zero. We denote the semigroup with zero that arises by \(C^{0}\). Now we can return to inverse semigroups. 
Let \(S\) be an arbitrary inverse semigroup. Define the _restricted product14_ of two elements \(s\) and \(t\) in \(S\) to be \(s\cdot t=st\) if \(s^{-1}s=tt^{-1}\) and undefined otherwise. The following result simply tells us that what we expect to happen actually does happen. Footnote 14: Sometimes referred to as the _trace product_. **Proposition 6.3**.: _Every inverse semigroup \(S\) is a groupoid with respect to its restricted product._ Proof.: We begin by showing that all idempotents of \(S\) are identities of \((S,\cdot)\). Let \(e\in S\) be an idempotent and suppose that \(e\cdot x\) is defined. Then \(e=xx^{-1}\) and \(e\cdot x=ex\). But \(ex=(xx^{-1})x=x\). Similarly, if \(x\cdot e\) is defined then it is equal to \(x\). We now check that the axioms (C1), (C2) and (C3) hold. Axiom (C1) holds: suppose that \(x\cdot(y\cdot z)\) is defined. Then \[x^{-1}x=(y\cdot z)(y\cdot z)^{-1}\text{ and }y^{-1}y=zz^{-1}.\] But \[(y\cdot z)(y\cdot z)^{-1}=yzz^{-1}y^{-1}=yy^{-1}.\] Hence \(x^{-1}x=yy^{-1}\), and so \(x\cdot y\) is defined. Also \((xy)^{-1}(xy)=y^{-1}y=zz^{-1}\). Thus \((x\cdot y)\cdot z\) is defined. It is clear that \(x\cdot(y\cdot z)\) is equal to \((x\cdot y)\cdot z\). A similar argument shows that if \((x\cdot y)\cdot z\) exists then \(x\cdot(y\cdot z)\) exists and they are equal. Axiom (C2) holds: suppose that \(x\cdot y\) and \(y\cdot z\) are defined. We show that \(x\cdot(y\cdot z)\) is defined. We have that \(x^{-1}x=yy^{-1}\) and \(y^{-1}y=zz^{-1}\). Now \[(yz)(yz)^{-1}=y(zz^{-1})y^{-1}=y(y^{-1}y)y^{-1}=yy^{-1}=x^{-1}x.\] Thus \(x\cdot(y\cdot z)\) is defined. The proof of the converse is straightforward. Axiom (C3) holds: for each element \(x\) we have that \(x\cdot(x^{-1}x)\) is defined, and we have seen that idempotents of \(S\) are identities. Thus we put \(\mathbf{d}(x)=x^{-1}x\). Similarly, we put \(xx^{-1}=\mathbf{r}(x)\). It is now clear that \((S,\cdot)\) is a category. The fact that it is a groupoid is immediate. We call \((S,\cdot)\) the _(underlying) groupoid_ of \(S\). We can use the underlying groupoid to reveal something of the structure of inverse semigroups. **Example 6.4**.: We can interpret Lemma 3.14 in terms of the structure of the underlying groupoid. An inverse semigroup is a Clifford semigroup if and only if its underlying groupoid is a union of groups. In the light of Proposition 6.3, it is now natural to picture an element \(a\) of an inverse semigroup as follows: \[\mathbf{d}(a)\stackrel{{ a}}{{\longrightarrow}}\mathbf{r}(a).\] where we now call \(\mathbf{d}(a)\) the _domain idempotent_ of \(a\) and \(\mathbf{r}(a)\) is the _range idempotent of \(a\)_. The underlying groupoid structure more or less does away with the need to deal directly with Green's relations. If \(e\) and \(f\) are idempotents we write \(e\,\mathcal{D}\,f\) to mean that there exists an element \(a\) such that \(\mathbf{d}(a)=e\) and \(\mathbf{r}(a)=f\). We write \(a\,\mathcal{D}\,b\) to mean that \(\mathbf{d}(a)\,\mathcal{D}\,\mathbf{d}(b)\). The relation \(\mathcal{D}\) really is Green's relation \(\mathcal{D}\); by the same token \(a\,\mathcal{L}\,b\) if and only if \(\mathbf{d}(a)=\mathbf{d}(b)\) and \(a\,\mathcal{R}\,b\) if and only if \(\mathbf{r}(a)=\mathbf{r}(b)\). We define \(a\,\mathcal{H}\,b\) if and only if \(a\,\mathcal{L}\,b\) and \(a\,\mathcal{R}\,b\); in other words, \(a\) and \(b\) belong to the same _hom-set_. An inverse semigroup is said to be _bisimple_ if its underlying groupoid consists of one connected component. 
An inverse semigroup with zero is said to be _\(0\)-bisimple_ if its underlying groupoid consists of two connected components. In passing from an inverse semigroup to its underlying groupoid, we do lose some information. As the following result shows, the information that is lost is encoded by the natural partial order. **Lemma 6.5**.: _Let \(S\) be an inverse semigroup. Then for any \(s,t\in S\) there exist elements \(s^{\prime}\leq s\) and \(t^{\prime}\leq t\) such that \(st=s^{\prime}\cdot t^{\prime}\) where the product on the right is the restricted product._ Proof.: Put \(e=\mathbf{d}(s)\mathbf{r}(t)\) and define \(s^{\prime}=se\) and \(t^{\prime}=et\). Observe that \(\mathbf{d}(s^{\prime})=e\) and \(\mathbf{r}(t^{\prime})=e\) and that \(st=s^{\prime}t^{\prime}\). It is possible to formalize the idea of a groupoid equipped with a partial order in such a way that the original semigroup product can be recaptured. This is the approach that Ehresmann took to studying inverse semigroups. See [51, Chapter] for exactly how this is done.15 Footnote 15: This groupoid, which is discrete, is quite different from the one that Paterson constructs [94]. We shall now deal with the missing \(\mathcal{J}\)-relation. First, we need a slight extension of Lemma 6.5. **Lemma 6.6**.: _Let \(S\) be an inverse semigroup. Then \(abc=a^{\prime}\cdot b^{\prime}\cdot c^{\prime}\), which is a restricted product where \(a^{\prime}\leq a\), \(b^{\prime}\leq b\) and \(c^{\prime}\leq c\)._ Proof.: Because of associativity, it doesn't matter how we bracket. We write \(abc=(ab)c\). Put \(d=ab\). Then \(dc=d^{\prime}\cdot c^{\prime}\). But \(d^{\prime}=ab\mathbf{r}(c)\). Thus \((a(b\mathbf{r}(c)))\cdot c^{\prime}\). Now write \(a(b\mathbf{r}(c))=a^{\prime}\cdot b^{\prime}\) where \(b^{\prime}\leq b\mathbf{r}(c)\leq b\). We have therefore written \(abc=a^{\prime}\cdot b^{\prime}\cdot c^{\prime}\) using the fact that the multiplication in a groupoid is associative. We can now describe the \(\mathcal{J}\)-relation on inverse semigroups. **Lemma 6.7**.: _Let \(S\) be an inverse semigroup. Then \(SaS\subseteq SbS\) if and only if \(a\,\mathcal{D}\,b^{\prime}\leq b\) for some element \(b^{\prime}\in S\)._ Proof.: Suppose, first, that \(SaS\subseteq SbS\). Then \(a=xby\) for some \(x,y\in S\). By Lemma 6.6, we may write \(a=x^{\prime}\cdot b^{\prime}\cdot y^{\prime}\). You can check that \(a\,\mathcal{D}\,b^{\prime}\). But \(b^{\prime}\leq b\). We have therefore proved one direction. To prove the converse, suppose that \(a\,\mathcal{D}\,b^{\prime}\leq b\) for some element \(b^{\prime}\in S\). Let \(\mathbf{d}(a)\stackrel{{ x}}{{\longrightarrow}}\mathbf{d}(b^{ \prime})\). Put \(y=b^{\prime}\cdot x\cdot a^{-1}\). Then \(a=y^{-1}\cdot b^{\prime}\cdot x\). Similarly, \(b^{\prime}=y\cdot a\cdot x^{-1}\) It follows that \(SaS=Sb^{\prime}S\). Clearly, \(Sb^{\prime}S\subseteq SbS\). We have therefore shown that \(SaS\subseteq SbS\). We can use the theory we have developed to generalize Lemma 3.8. **Lemma 6.8**.: _Let \(S\) be an inverse semigroup with zero. Then \(S\) is isomorphic to a groupoid with a zero adjoined if and only if the natural partial order is equality on the set \(S\setminus\{0\}\)._ Proof.: We prove one direction only. Let \(S\) be an inverse semigroup with zero such that the natural partial order is equality on the set \(S\setminus\{0\}\). Let \(G\) be the underlying groupoid of \(S\) with the component \(\{0\}\) removed. There is an obvious bijection between \(S\) and \(G^{0}\). 
We prove that this is a homomorphism. Let \(a,b\in S\) be any non-zero elements. Then \(ab=a^{\prime}\cdot b^{\prime}\) where \(a^{\prime}\leq a\) and \(b^{\prime}\leq b\) by Lemma 6.5. As a result of our assumption on the natural partial order, either \(a^{\prime}=a\) and \(b^{\prime}=b\), in which case the product is a groupoid product, or at least one of \(a^{\prime}\) and \(b^{\prime}\) is equal to zero -- in which case \(ab=0\). Thus a product in \(S\) is either equal to zero or a restricted product but not both. In addition to the underlying groupoid, we may sometimes be able to associate another, smaller, groupoid to an inverse semigroup with zero. Let \(S\) be an inverse semigroup with zero. An element \(s\in S\) is said to be an _atom_ if \(t\leq s\) implies that \(t=0\) or \(t=s\). **Lemma 6.9**.: _Let \(S\) be an inverse semigroup._ 1. _If_ \(x\) _is an atom then_ \(x^{-1}\) _is an atom._ 2. _If_ \(x\) _is an atom then both_ \(\mathbf{d}(x)\) _and_ \(\mathbf{r}(x)\) _are atoms_ 3. _Suppose that_ \(\mathbf{d}(x)\) _is an atom then_ \(x\) _is an atom; similarly, if_ \(\mathbf{r}(x)\) _is an atom then_ \(x\) _is an atom._ 4. _If_ \(x\) _and_ \(y\) _are atoms and the restricted product_ \(x\cdot y\) _is defined then_ \(x\cdot y\) _is an atom._ 5. _If_ \(x\) _and_ \(y\) _are distinct compatible atoms then_ \(x\perp y\)_._ Proof.: (1) Immediate from the properties of the inverse. (2) We prove that \(\mathbf{d}(x)\) is an atom; the proof that \(\mathbf{r}(x)\) is an atom is similar. Suppose that \(e\leq\mathbf{d}(x)\). Then \(xe\leq x\). It follows that either \(xe=0\) or \(xe=x\). Suppose, first, that \(xe=0\). Then \(e\mathbf{d}(x)=0\) and so \(e=0\). Alternatively, if \(xe=x\) then \(e\mathbf{d}(x)=\mathbf{d}(x)\) and so \(e=\mathbf{d}(x)\). This shows that \(\mathbf{d}(x)\) is an atom. (3) We prove that if \(\mathbf{d}(x)\) is an atom then \(x\) is an atom; the proof of the other statement is analogous. Suppose that \(y\leq x\). Then \(\mathbf{d}(y)\leq\mathbf{d}(x)\). Since \(\mathbf{d}(x)\) is an atom either \(\mathbf{d}(y)=0\) or \(\mathbf{d}(y)=\mathbf{d}(x)\). It follows that either \(y=0\) or \(y=x\). We have therefore proved that \(x\) is an atom. (4) We are given that \(x\) and \(y\) are atoms and that \(x\cdot y\) exists. Observe first that \(x\cdot y\neq 0\). Let \(z\leq xy\). Then \(z=x(y\mathbf{d}(z))\). Now, \(y\mathbf{d}(z)=0\) or \(y\mathbf{d}(z)=y\), since \(y\) is an atom. If the former then \(z=0\) and if the latter then \(z=xy\). We have therefore proved that \(x\cdot y\) is an atom. (5) Let \(x\) be an atom. Then by part (2) above it follows that \(\mathbf{d}(x)\) is an atom. Similarly, \(\mathbf{d}(y)\) is an atom. If the product \(\mathbf{d}(x)\mathbf{d}(y)\) is non-zero then in fact \(\mathbf{d}(x)=\mathbf{d}(y)\). But \(x\sim y\) and so \(x=y\) by Lemma 4.13, which contradicts our assumption that \(x\) and \(y\) are distinct. It follows that \(\mathbf{d}(x)\perp\mathbf{d}(y)\). A similar argument shows that \(\mathbf{r}(x)\perp\mathbf{r}(y)\) from which it follows that \(x\perp y\). It follows by Lemma 6.9, that the set of atoms of \(S\), if non-empty, forms a groupoid, which we shall call the _atomic groupoid_ of \(S\) and denote by \(\mathsf{A}(S)\). **Example 6.10**.: The finite symmetric inverse monoid \(\mathcal{I}(X)\) has an interesting atomic groupoid. It consists of those partial bijections the domains of which contain exactly one element of \(X\). This groupoid is isomorphic to the groupoid \(X\times X\) defined in Example 6.2. 
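A quick computational sketch of Example 6.10 (an illustration only, reusing the dictionary encoding of partial bijections from the earlier sketch): the atoms of \(\mathcal{I}(X)\) for \(|X|=3\), found directly from the natural partial order, are exactly the maps with a one-element domain, there are \(|X|^{2}\) of them, and the restricted product of two atoms is again an atom, as part (4) of Lemma 6.9 requires.

```python
# A minimal sketch of Lemma 6.9 / Example 6.10 on I(X), |X| = 3 (illustration only).
from itertools import combinations, permutations

X = (0, 1, 2)

def all_partial_bijections(X):
    maps = []
    for k in range(len(X) + 1):
        for dom in combinations(X, k):
            for img in combinations(X, k):
                for perm in permutations(img):
                    maps.append(dict(zip(dom, perm)))
    return maps

def leq(s, t):        # natural partial order: s is a restriction of t
    return all(x in t and t[x] == s[x] for x in s)

S = all_partial_bijections(X)
atoms = [s for s in S if s and not any(leq(t, s) for t in S if t and t != s)]

assert all(len(a) == 1 for a in atoms) and len(atoms) == len(X) ** 2

# identify the atom {x: y} with the arrow (y, x) of the pair groupoid X x X;
# the restricted product of atoms then matches (z, y)(y, x) = (z, x)
for a in atoms:
    for b in atoms:
        (xa, ya), (xb, yb) = next(iter(a.items())), next(iter(b.items()))
        if xa == yb:                        # d(a) = r(b): the product a . b is defined
            assert {xb: ya} in atoms        # and it is again an atom
print(len(atoms), "atoms, matching |X x X| =", len(X) ** 2)
```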
We shall describe all finite Boolean inverse monoids in terms of groupoids. The reader will recall that the finite Boolean algebras are isomorphic to the powerset Boolean algebras defined on the finite set of atoms. We shall replace finite sets by finite groupoids. We need some definitions first. If \(A\) and \(B\) are subsets of a category \(C\) then \(AB\) is the set of all products \(ab\) where \(a\in A\), \(b\in B\) and \(\mathbf{d}(a)=\mathbf{r}(b)\). If \(A\) is a subset of a groupoid then \(A^{-1}\) is the set of all \(a^{-1}\) where \(a\in A\). We first show how to construct finite Boolean inverse monoids from finite groupoids. If \(G\) is a groupoid then a subset \(A\subseteq G\) is said to be a _local bisection_ if both \(AA^{-1}\) and \(A^{-1}A\) consist entirely of identities. You can check that a subset \(A\subseteq G\) is a local bisection if and only if, whenever \(a,b\in A\) and \(\mathbf{d}(a)=\mathbf{d}(b)\) (respectively, \(\mathbf{r}(a)=\mathbf{r}(b)\)), we have \(a=b\). **Proposition 6.11**.: _Let \(G\) be a finite groupoid. Then \(\mathsf{K}(G)\), the set of all local bisections of \(G\) under subset multiplication, is a finite Boolean inverse monoid, the set of atoms of which forms a groupoid isomorphic to \(G\)._ Proof.: You can check that the product of two local bisections is a local bisection. If \(A\) is a local bisection, then so is \(A^{-1}=\{a^{-1}\colon a\in A\}\), and \(A=AA^{-1}A\). The idempotents are just the subsets of \(G_{o}\) and the product of two idempotents is just the intersection of these two sets. It follows that \(\mathsf{K}(G)\) is an inverse semigroup since it is a regular semigroup with commuting idempotents. It is a monoid with identity \(G_{o}\) and has a zero \(\varnothing\). It has a Boolean algebra of idempotents. Observe that \(A\leq B\) if and only if \(A\subseteq B\). You can check that \(A\sim B\) if and only if \(A\cup B\) is a local bisection. It is now easy to check that \(\mathsf{K}(G)\) is a Boolean inverse monoid. The atoms are the singleton sets \(\{g\}\) and form a groupoid isomorphic to \(G\). We shall now go in the opposite direction. Our first result is an immediate consequence of finiteness. **Lemma 6.12**.: _Let \(S\) be a finite Boolean inverse monoid. Then each non-zero element is above an atom._ We now connect elements with the atoms beneath them. Let \(a\in S\). Define \(\theta(a)=a^{\downarrow}\cap\mathsf{A}(S)\). Observe that \(\theta(0)=\varnothing\). **Lemma 6.13**.: _Let \(S\) be a finite Boolean inverse monoid. For each \(a\in S\), the set \(\theta(a)\) is a local bisection of the groupoid \(\mathsf{A}(S)\)._ Proof.: If \(a=0\) then \(\theta(a)=\varnothing\). If \(a\neq 0\) then it is above at least one atom by Lemma 6.12 and so \(\theta(a)\) is non-empty. Let \(x,y\in\theta(a)\) be such that \(\mathbf{d}(x)=\mathbf{d}(y)\). Then \(x=y\). Dually, if \(\mathbf{r}(x)=\mathbf{r}(y)\) then \(x=y\). If \(S\) is a finite Boolean inverse monoid, then by Lemma 6.12 we may define a function \(\theta\colon S\to\mathsf{K}(\mathsf{A}(S))\) by \(\theta(0)=\varnothing\) and \(\theta(a)=a^{\downarrow}\cap\mathsf{A}(S)\). It is quite rare that we can say anything about the structure of finite semigroups belonging to some class. Thus the following result is a pleasant surprise. **Theorem 6.14** (The structure of finite Boolean inverse monoids).: _Let \(S\) be a finite Boolean inverse monoid. Then \(S\) is isomorphic to the Boolean inverse monoid \(\mathsf{K}(\mathsf{A}(S))\)._ Proof.: It remains to show that \(\theta\) (as defined above) is an isomorphism of semigroups. 
First, \(\theta\) is a homomorphism. Let \(x\) be an atom such that \(x\leq ab\). Then \(x=a(b\mathbf{d}(x))\). Thus by Lemma 6.5, we may write \(x=a^{\prime}\cdot b^{\prime}\) where \(a^{\prime}\leq a\) and \(b^{\prime}\leq b\). It is easy to check that \(a^{\prime}\) and \(b^{\prime}\) are themselves atoms. We have therefore proved that \(\theta(ab)\subseteq\theta(a)\theta(b)\). Conversely, let \(x\in\theta(a)\) and \(y\in\theta(b)\) be such that the restricted product \(x\cdot y\) is defined. Then \(x\cdot y=xy\leq ab\). But the restricted product of atoms is an atom and so we have proved the first claim. It remains to prove that \(\theta\) is a bijection. We show first that \(a=\bigvee\theta(a)\). Put \(b=\bigvee\theta(a)\). Then, clearly, \(b\leq a\). Suppose that \(b\neq a\). It is here that we use the Boolean structure. Then \(a\setminus b\neq 0\). It follows by Lemma 6.12 that \(a\setminus b\) is above an atom \(x\). But then \(x\leq a\) and so \(x\leq b\) (by definition). Thus \(x\leq b\) and \(x\leq a\setminus b\), which implies that \(x=0\). But atoms are non-zero. It follows that \(a=\bigvee\theta(a)\) and so \(\theta\) is an injection. Now let \(A\in\mathsf{K}(\mathsf{A}(S))\). We don't lose any generality by assuming that it is non-empty. Then \(A\) is a set of compatible elements. Put \(a=\bigvee A\). Clearly, \(A\subseteq\theta(a)\). Let \(x\) be an atom such that \(x\leq a\). Then \(x=\bigvee_{y\in A}(x\wedge y)\) by part (2) of Lemma 5.3. Remembering that \(x\) and each \(y\in A\) are atoms, it follows that \(x=y\) for some \(y\in A\). We have therefore proved that \(\theta(a)=A\), and so \(\theta\) is also a surjection. We now have a complete description of the finite Boolean inverse monoids: as a result of Proposition 6.11 and Theorem 6.14, they are precisely the inverse monoids of the form \(\mathsf{K}(G)\) where \(G\) is a finite groupoid. We can apply the theory we have developed to the representation theory of _arbitrary_ finite inverse monoids, although a more elementary account can be found in [111, Chapter 9]. We prove first that every finite inverse monoid \(S\) can be embedded into the Boolean inverse monoid constructed from its underlying groupoid. **Lemma 6.15**.: _Let \(S\) be a finite inverse monoid with underlying groupoid \(G\)._ 1. _Let_ \(a\in S\)_. Then_ \(a^{\downarrow}\) _is a local bisection of_ \(G\)_._ 2. _We have that_ \((ab)^{\downarrow}=a^{\downarrow}b^{\downarrow}\)_._ Proof.: (1) Let \(x,y\leq a\) be such that \(\mathbf{d}(x)=\mathbf{d}(y)\). Then it is immediate that \(x=y\). Similar reasoning shows that if \(x,y\leq a\) are such that \(\mathbf{r}(x)=\mathbf{r}(y)\) then \(x=y\). We have therefore shown that the set \(a^{\downarrow}\) is a local bisection of \(G\). (2) Suppose that \(x\leq a\) and \(y\leq b\). Then \(xy\leq ab\). On the other hand, if \(c\leq ab\) then we can write \(c=(\mathbf{r}(c)a)(b\mathbf{d}(c))\) and so \(c\in a^{\downarrow}b^{\downarrow}\). Let \(S\) be an arbitrary inverse monoid with underlying groupoid \(G\). An element \(a\) with the property that \(a^{\downarrow}=\{a\}\), a singleton set, will be said to be _at the bottom_. Define \(\beta\colon S\to\mathsf{K}(G)\) by \(\beta(a)=a^{\downarrow}\). Then by Lemma 6.15, \(\beta\) is an injective homomorphism of inverse semigroups. The elements in \(1^{\downarrow}\) are precisely the idempotents, which are the identities of \(G\); the set of identities of \(G\) is the monoid identity of \(\mathsf{K}(G)\). Thus the homomorphism is a monoid homomorphism. 
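The smallest interesting instance of Proposition 6.11 and Theorem 6.14 can be checked mechanically. The sketch below is an illustration only (the encoding of arrows as ordered pairs is an assumption of the sketch): for the pair groupoid \(G=X\times X\) with \(|X|=2\) it enumerates the local bisections and confirms that \(\mathsf{K}(G)\) is closed under subset multiplication and has \(7\) elements, the size of \(\mathcal{I}(X)\), as Theorem 6.14 predicts.

```python
# A minimal sketch of Proposition 6.11 / Theorem 6.14 for G = X x X, |X| = 2.
from itertools import chain, combinations

X = (0, 1)
G = [(x, y) for x in X for y in X]            # arrow (x, y) runs from (y, y) to (x, x)

def subsets(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def is_local_bisection(A):
    doms = [a[1] for a in A]                  # domain identities, one per arrow
    rans = [a[0] for a in A]                  # range identities, one per arrow
    return len(set(doms)) == len(doms) and len(set(rans)) == len(rans)

K = [frozenset(A) for A in subsets(G) if is_local_bisection(A)]

def product(A, B):                            # AB = {ab : a in A, b in B, d(a) = r(b)}
    return frozenset((a[0], b[1]) for a in A for b in B if a[1] == b[0])

assert all(product(A, B) in K for A in K for B in K)   # closed under multiplication
assert len(K) == 7                                      # |I(X)| for |X| = 2
print("K(G) has", len(K), "local bisections")
```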
The above result, on its own, doesn't take us very far because we already know that every finite inverse monoid can be embedded in a finite Boolean inverse monoid. The key point of this embedding is that the map \(\beta\colon S\to\mathsf{K}(G)\) has a special property described by the following proposition. **Proposition 6.16**.: _Let \(S\) be a finite inverse monoid with underlying groupoid \(G\) and let \(\alpha\colon S\to T\) be any monoid homomorphism to a Boolean inverse monoid \(T\). Then there is a unique morphism of Boolean inverse monoids \(\gamma\colon\mathsf{K}(G)\to T\) such that \(\gamma\beta=\alpha\)._ Proof.: Our first step is to define the function \(\gamma\colon\mathsf{K}(G)\to T\). Define \(\gamma(\varnothing)=0\). If \(a\) is any element of \(S\) then \(\{a\}\) is an element of \(\mathsf{K}(G)\). Let the set of elements strictly below \(a\) in \(S\) be \(\{a_{1},\ldots,a_{m}\}\). This set could well be empty if \(a\) is at the bottom -- this will not cause us any problems. Since \(a_{1},\ldots,a_{m}\leq a\), these elements are pairwise compatible. It follows that \(\alpha(a_{1}),\ldots,\alpha(a_{m})\) is a compatible subset of \(T\). Thus the join \(\alpha(a_{1})\vee\ldots\vee\alpha(a_{m})\) exists; if the set of elements strictly below \(a\) is empty then this join is just \(0\). It is clearly less than \(\alpha(a)\) and so we may form the element \(\alpha(a)\setminus(\alpha(a_{1})\vee\ldots\vee\alpha(a_{m}))\). On the basis of the above, define \[\gamma(\{a\})=\alpha(a)\setminus(\alpha(a_{1})\vee\ldots\vee\alpha(a_{m})).\] If \(A\) is a local bisection which is neither empty nor a singleton set, define \[\gamma(A)=\bigvee_{a\in A}\gamma(\{a\}).\] This makes sense, for if \(a,b\in A\), then \(\{a\},\{b\}\subseteq A\) and so \(\{a\}\sim\{b\}\) in \(\mathsf{K}(G)\). This completes the definition of \(\gamma\). We show that \(\gamma\) preserves binary joins. If \(A\sim B\) in \(\mathsf{K}(G)\) then \(A\lor B=A\cup B\). From the definition of \(\gamma\), it is clear that \(\gamma\) preserves binary joins. It maps the empty set to the zero by definition. We prove that \(\gamma(a^{\downarrow})=\alpha(a)\). This proves that we have a monoid homomorphism and that \(\gamma\beta=\alpha\). Define the _height_ of \(a\) in \(S\) to be the length of a chain of maximum length from \(a\). Those elements with height zero are precisely those which are at the bottom. If \(a\) has height zero then \[\gamma(a^{\downarrow})=\alpha(a).\] We assume that we have proved that \(\gamma(b^{\downarrow})=\alpha(b)\) for all elements \(b\) of height at most \(n\). Let \(a\) be an element of height \(n+1\). Let \(b_{1},\ldots,b_{m}\) be all the elements immediately below \(a\). Then, by the induction hypothesis, we have that \[\gamma(b_{i}^{\downarrow})=\alpha(b_{i}).\] Let the elements strictly less than \(a\) be \(a_{1},\ldots,a_{m}\). These include the elements \(b_{1},\ldots,b_{m}\), for example, so in general each element \(a_{i}\) is beneath one of the \(b_{j}\). It follows that we can write \[\alpha(a)\setminus(\alpha(a_{1})\vee\ldots\vee\alpha(a_{m}))\] as \[\alpha(a)\setminus(\alpha(b_{1}e_{1})\vee\ldots\vee\alpha(b_{n}e_{n}))\] where the idempotents \(\alpha(e_{j})\) gather together by means of a join all the idempotents that arise from showing that \(a_{i}\leq b_{j}\) for various \(i\). 
By definition \[\gamma(a^{\downarrow})=\alpha(a)\setminus(\alpha(a_{1})\vee\ldots\vee\alpha(a_{m}))\vee\gamma(\{a_{1}\})\vee\ldots\vee\gamma(\{a_{m}\})\] but we can write \[\gamma(a^{\downarrow})=\gamma(\{a\})\vee\gamma(\{a_{1},\ldots,a_{m}\}^{\downarrow}).\] But \[\{a_{1},\ldots,a_{m}\}^{\downarrow}=b_{1}^{\downarrow}\cup\ldots\cup b_{n}^{\downarrow}\] and so \[\gamma(\{a_{1},\ldots,a_{m}\}^{\downarrow})=\gamma(b_{1}^{\downarrow})\vee\ldots\vee\gamma(b_{n}^{\downarrow}).\] Thus \[\gamma(\{a_{1},\ldots,a_{m}\}^{\downarrow})=\alpha(b_{1})\vee\ldots\vee\alpha(b_{n})\] using the induction hypothesis. Thus \[\gamma(a^{\downarrow})=\alpha(a)\setminus(\alpha(a_{1})\vee\ldots\vee\alpha(a_{m}))\vee\alpha(b_{1})\vee\ldots\vee\alpha(b_{n}).\] By our argument above \[\gamma(a^{\downarrow})=\alpha(a)\setminus(\alpha(b_{1}e_{1})\vee\ldots\vee\alpha(b_{n}e_{n}))\vee\alpha(b_{1})\vee\ldots\vee\alpha(b_{n}).\] This is equal to \[\alpha(a)\setminus(\alpha(b_{1}e_{1})\vee\ldots\vee\alpha(b_{n}e_{n}))\vee(\alpha(b_{1}e_{1})\vee\ldots\vee\alpha(b_{n}e_{n}))\vee\alpha(b_{1})\vee\ldots\vee\alpha(b_{n})\] which is just \[\alpha(a)\vee\alpha(b_{1})\vee\ldots\vee\alpha(b_{n})\] which is equal to \(\alpha(a)\). We now prove uniqueness. Suppose that \(\gamma^{\prime}\colon\mathsf{K}(G)\to T\) is a morphism of Boolean inverse monoids such that \(\gamma^{\prime}\beta=\alpha\). We show that \(\gamma^{\prime}=\gamma\). It is immediate from our assumption on \(\gamma^{\prime}\) that for all elements \(a\) of height zero we have that \(\gamma(\{a\})=\gamma^{\prime}(\{a\})\). So, let \(a\in S\) be any element which does not have height zero. Let \(\{a_{1},\ldots,a_{m}\}\) be the set of all elements strictly less than \(a\). Then \[\{a_{1},\ldots,a_{m}\}=a_{1}^{\downarrow}\cup\ldots\cup a_{m}^{\downarrow}.\] It follows that \[\gamma^{\prime}(\{a_{1},\ldots,a_{m}\})=\gamma^{\prime}(a_{1}^{\downarrow})\vee\ldots\vee\gamma^{\prime}(a_{m}^{\downarrow}).\] But this is just equal to \[\gamma^{\prime}(\beta(a_{1}))\vee\ldots\vee\gamma^{\prime}(\beta(a_{m}))\] which is equal to \[\gamma(\beta(a_{1}))\vee\ldots\vee\gamma(\beta(a_{m}))\] by assumption. We have therefore proved that \[\gamma^{\prime}(\{a_{1},\ldots,a_{m}\})=\gamma(\{a_{1},\ldots,a_{m}\}).\] Observe that \(\{a\}=a^{\downarrow}\setminus\{a_{1},\ldots,a_{m}\}\). Thus \[\gamma^{\prime}(\{a\})=\gamma^{\prime}(a^{\downarrow})\setminus\gamma^{\prime}(\{a_{1},\ldots,a_{m}\}).\] But \[\gamma^{\prime}(a^{\downarrow})=\gamma(a^{\downarrow})\] by assumption, and \[\gamma^{\prime}(\{a_{1},\ldots,a_{m}\})=\gamma(\{a_{1},\ldots,a_{m}\})\] by what we proved above. It follows that \(\gamma^{\prime}(\{a\})=\gamma(\{a\})\) for all elements \(a\in S\). Now, let \(A\) be any non-empty local bisection. Then, since \(\gamma^{\prime}\) is a morphism of Boolean inverse monoids, we have that \(\gamma^{\prime}(A)=\bigvee_{a\in A}\gamma^{\prime}(\{a\})\). It follows that \(\gamma^{\prime}=\gamma\). We now apply our results to the study of the representation theory of finite inverse monoids. The starting point is to say what we mean by the representation theory of groupoids. Let \(G\) be a finite groupoid. Then we get a semigroup with zero \(G^{0}\) by adjoining a zero; in fact, this is a special kind of inverse semigroup by Lemma 6.8. We can therefore consider homomorphisms \(\theta\colon G^{0}\to R\) to the multiplicative monoid of the unital ring \(R\). Each identity \(e\in G\) gives rise to an idempotent \(\theta(e)\) in the ring \(R\). 
If \(e\) and \(f\) are distinct identities of the groupoid \(G\) then \(\theta(e)\) and \(\theta(f)\) are orthogonal. By assumption, the groupoid \(G\) is finite. Put \(f=\sum_{e\in G_{o}}\theta(e)\). Then \(f\) is an idempotent in \(R\). Consider the ring \(fRf=\{a\in R\colon faf=a\}\). This has identity \(f\). Let \(g\in G\) be an arbitary element of \(G\). Let \(e,e^{\prime}\) be the identities such that \(a=e^{\prime}ae\). Then \(\theta(a)=\theta(e^{\prime})\theta(a)\theta(e)\). But for any identity \(e\), we have that \(\theta(e)=f\theta(e)f\). It follows that \(\theta(a)\in fRf\). We therefore define a _representation_ of a finite groupoid \(G\) in a ring \(R\) to be a semigroup with zero homomorphism \(\theta\colon G^{0}\to R\) such that \(1=\sum_{e\in G_{o}}\theta(e)\). We shall also need the following definition. Let \(S\) be a Boolean inverse monoid. A monoid homomorphism \(\phi\colon S\to R\) to the multiplicative monoid of the ring \(R\) which maps zero to zero is said to be _additive_ if \(\phi(a\lor b)=\phi(a)+\phi(b)\), whenever \(a\perp b\). We refer the reader to part (2) of Lemma 5.5 for the rationale for this definition. The following theorem was proved in a different way in [111, Theorem 9.3]. **Theorem 6.17** (Representation theory of finite inverse monoids).: _Let \(S\) be a finite inverse monoid with underlying groupoid \(G\) and let \(R\) be a unital ring. Then there is a bijective correspondence between the set of representations of \(S\) in \(R\) and the set of representations of the finite groupoid \(G\) in \(R\)._ Proof.: Let \(\theta\colon S\to R\) be a monoid homomorphism to the multiplicative monoid of the ring \(R\). The image \(\theta(S)\) is an inverse submonoid of the multiplicative monoid of the ring \(R\) by Lemma 3.22 and so \(\theta(S)^{0}\), the inverse monoid \(\theta(S)\) with the zero of the ring \(R\) adjoined, is an inverse monoid with zero. Thus by Theorem 5.10, there is a Boolean inverse monoid \(T\) such that \(\theta(S)^{0}\subseteq T\subseteq R\). By Proposition 6.16, there is therefore a unique morphism of Boolean inverse monoids \(\phi\colon\mathsf{K}(G)\to T\) such that \(\phi\beta=\theta\). We want to regard \(\phi\) as a map from \(\mathsf{K}(G)\) to the ring \(R\). This is an additive homomorphism. We have proved that every homomorphism \(\theta\colon S\to R\) gives rise to an additive homomorphism \(\phi\colon\mathsf{K}(G)\to R\). On the other hand, given an additive homomorphism \(\phi\colon\mathsf{K}(G)\to R\), we can construct a monoid homomorphism \(\phi\beta\colon S\to R\) This leads to a bijective correspondence between representations of \(S\) in \(R\) and additive homomorphisms of \(\mathsf{K}(G)\) in \(R\). We described all finite Boolean inverse monoids in Theorem 6.14. In what follows, therefore, we may assume that \(S\) is a finite Boolean inverse monoid with atomic groupoid \(G\). Let \(\theta\colon S\to R\) be an additive homomorphism of \(S\). Then, by restriction, we get a semigroup homomorphism \(\theta^{\prime}\) from \(G^{0}\) to the ring \(R\). Since \(S\) is a finite Boolean inverse monoid, the identity of \(S\) is an orthogonal join of the atomic idempotents. It follows that the sum of the idempotents \(\theta^{\prime}(e)\), where \(e\in G_{o}\), is equal to the identity of \(R\). Thus, we have defined a representation of the groupoid \(G\). We now go in the opposite direction. 
Let \(S\) be a finite Boolean inverse monoid with atomic groupoid \(G\) and suppose that there a representation \(\theta^{\prime}\colon G^{0}\to R\). We shall now define a homomorphism \(\theta\) of \(S\) that extends \(\theta^{\prime}\). Let \(a\in S\). Define \(\theta\colon S\to R\) by \(\theta(a)=\theta^{\prime}(a_{1})+\ldots+\theta^{\prime}(a_{m})\) where \(a_{1},\ldots,a_{m}\leq a\) are all the atoms below \(a\); we can assume this is an orthogonal set by part (5) of Lemma 6.9 This is an additive homomorphism of \(S\). We therefore have a bijective correspondence between the additive homomorphisms of a Boolean inverse monoid and the representations of its atomic groupoid. If we put our two results together, then we have established a bijective correspondence between representations of an inverse monoid \(S\) and the representations of its underlying groupoid \(G\). ## 7. Fundamental inverse semigroups In Section 3, we introduced the Clifford semigroups. These are the inverse semigroups in which every element is central. Living inside every inverse semigroup is a Clifford semigroup. For every inverse semigroup \(S\), define \(\mathsf{Z}(\mathsf{E}(S))\), the _centralizer of the idempotents_, to be set of all elements of \(S\) which commute with every idempotent. Then \(\mathsf{Z}(\mathsf{E}(S))\) is a wide inverse subsemigroup of \(S\) which is Clifford. If \(\mathsf{Z}(\mathsf{E}(S))=\mathsf{E}(S)\) we say the inverse semigroup is _fundamental_. Fundamental inverse semigroups are important. In this section, we shall study them in more detail. To do this, we shall need the following. Define the relation \(\mu\) on an arbitrary inverse semigroup by \[(s,t)\in\mu\Leftrightarrow(\forall e\in\mathsf{E}(S))(ses^{-1}=tet^{-1}).\] It is routine to check that this is a congruence. **Lemma 7.1**.: _Let \(S\) be an inverse semigroup._ 1. _If_ \((s,t)\in\mu\) _then_ \(\mathbf{r}(s)=\mathbf{r}(t)\) _and_ \(\mathbf{d}(s)=\mathbf{d}(t)\)_. It follows that_ \(\mu\subseteq\mathcal{H}\) _._ 2. _In the definition, we may restrict to those idempotents in_ \(\mathbf{d}(s)^{\downarrow}\)__ 3. _If_ \((e,f)\in\mu\)_, where_ \(e\) _and_ \(f\) _are idempotents, then_ \(e=f\)_._ Proof.: (1) If \((s,t)\in\mu\) then \(\mathbf{r}(s)\leq\mathbf{r}(t)\). Symmetry now delivers the answer. To prove the second claim, observe that if \((s,t)\in\mu\) then \((s^{-1},t^{-1})\in\mu\). We can now use the first claim to prove the second claim. It follows that \(\mu\subseteq\mathcal{H}\). (2) We have used all idempotents in the definition of \(\mu\) but we may restrict to those idempotents in \(\mathbf{d}(s)^{\downarrow}\) simply by mulitplying by \(\mathbf{d}(s)\). (3) Immediate. By part (3) of Lemma 7.1, it follows that \(\mu\) is an _idempotent-separating_ congruence. In fact, we have the following. **Lemma 7.2**.: \(\mu\) _is the largest idempotent-separating congruence on \(S\)._ Proof.: Let \(\rho\) be any idempotent-separating congruence on \(S\) and let \((s,t)\in\rho\). Let \(e\) be any idempotent. Then \((ses^{-1},tet^{-1})\in\rho\) but \(\rho\) is idempotent-separating and so \(ses^{-1}=tet^{-1}\). It follows that \((s,t)\in\mu\). Thus we have shown that \(\rho\subseteq\mu\). We can now explain the connection between the congruence \(\mu\) and fundamental inverse semigroups. **Lemma 7.3**.: _Let \(S\) be an inverse semigroup. Then \(S\) is fundamental if and only if \(\mu\) is the equality relation_ Proof.: Suppose that \(S\) is fundamental. We prove that \(\mu\) is the equality relation. Let \((s,t)\in\mu\). 
Then \((st^{-1},tt^{-1})\in\mu\). Let \(e\) be any idempotent. Then \((st^{-1})e(st^{-1})^{-1}=tt^{-1}ett^{-1}\). It follows that \[st^{-1}e=e\mathbf{r}(t)st^{-1}=e\mathbf{r}(s)st^{-1}=est^{-1}.\] We have therefore proved that \(st^{-1}\) is central. By assumption it must be an idempotent. It follows that \(st^{-1}=tt^{-1}\). Thus \(st^{-1}t=t\) but \(\mathbf{d}(t)=\mathbf{d}(s)\) and so \(s=t\). To prove the connverse, suppose that \(\mu\) is the equality relation. Let \(s\) commute with every idempotent. Then \((s,ss^{-1})\in\mu\). Thus, by assumption, \(s=ss^{-1}\) and so \(s\) is an idempotent. We can easily construct fundamental inverse semigroups. **Lemma 7.4**.: _Let \(S\) be an inverse semigroup. Then \(S/\mu\) is fundamental._ Proof.: Suppose that \(\mu(s)\) and \(\mu(t)\) are \(\mu\)-related in \(S/\mu\). Every idempotent in \(S/\mu\) is of the form \(\mu(e)\) where \(e\in E(S)\). Thus \[\mu(s)\mu(e)\mu(s)^{-1}=\mu(t)\mu(e)\mu(t)^{-1}\] so that \(\mu(ses^{-1})=\mu(tet^{-1})\). But both \(ses^{-1}\) and \(tet^{-1}\) are idempotents, so that \(ses^{-1}=tet^{-1}\) for every \(e\in E(S)\). Thus \((s,t)\in\mu\). The symmetric inverse monoid is constructed from an arbitrary set. We now show how to construct an inverse semigroup from a meet semilattice. Let \((E,\leq)\) be a meet semilattice, and denote by \(T_{E}\) be the set of all order isomorphisms between the principal order ideals of \(E\). Clearly, \(T_{E}\) is a subset of \(\mathcal{I}(E)\). In fact we have the following. **Lemma 7.5**.: _The set \(T_{E}\) is an inverse subsemigroup of \(\mathcal{I}(E)\) whose semilattice of idempotents is isomorphic to E._ The semigroup \(T_{E}\) is called the _Munn semigroup_ of the semilattice \(E\). We can now construct inverse semigroups having specific semilattices of idempotents. **Theorem 7.6** (Munn representation theorem).: _Let \(S\) be an inverse semigroup. Then there is an idempotent-separating homomorphism \(\delta\colon S\to T_{\mathsf{E}(S)}\) whose image is a wide inverse subsemigroup of \(T_{\mathsf{E}(S)}\). The kernel of \(\delta\) is \(\mu\)._ Proof.: For each \(s\in S\) define the function \[\delta_{s}\colon\mathbf{d}(s)^{\downarrow}\to\mathbf{r}(s)^{\downarrow}\] by \(\delta_{s}(e)=ses^{-1}\). We first show that \(\delta_{s}\) is well-defined. Let \(e\leq s^{-1}s\). Then \(ss^{-1}\delta_{s}(e)=\delta_{s}(e)\), and so \(\delta_{s}(e)\leq ss^{-1}\). To show that \(\delta_{s}\) is isotone, let \(e\leq f\in(s^{-1}s)^{\downarrow}\). Then \[\delta_{s}(e)\delta_{s}(f)=ses^{-1}sfs^{-1}=sefs^{-1}=\delta_{s}(e).\] Thus \(\delta_{s}(e)\leq\delta_{s}(f)\). Consider now the function \(\delta_{s^{-1}}\colon(ss^{-1})^{\downarrow}\to(s^{-1}s)^{\downarrow}\). This is isotone by the argument above. For each \(e\in(s^{-1}s)^{\downarrow}\), we have that \[\delta_{s^{-1}}(\delta_{s}(e))=\delta_{s^{-1}}(ses^{-1})=s^{-1}ses^{-1}s=e.\] Similarly, \(\delta_{s}(\delta_{s^{-1}}(f))=f\) for each \(f\in(ss^{-1})^{\downarrow}\). Thus \(\delta_{s}\) and \(\delta_{s^{-1}}\) are mutually inverse, and so \(\delta_{s}\) is an order isomorphism. Define \(\delta\colon S\to T_{\mathsf{E}(S)}\) by \(\delta(s)=\delta_{s}\). To show that \(\delta\) is a homomorphism, we begin by calculating \(\operatorname{dom}(\delta_{s}\delta_{t})\) for any \(s,t\in S\). 
We have that \[\operatorname{dom}(\delta_{s}\delta_{t})=\delta_{t}^{-1}((s^{-1}s)^{\downarrow}\cap(tt^{-1})^{\downarrow})=\delta_{t}^{-1}((s^{-1}stt^{-1})^{\downarrow}).\] But \(\delta_{t}^{-1}=\delta_{t^{-1}}\) and so \[\operatorname{dom}(\delta_{s}\delta_{t})=((st)^{-1}st)^{\downarrow}=\operatorname{dom}(\delta_{st}).\] If \(e\in\operatorname{dom}\delta_{st}\) then \[\delta_{st}(e)=(st)e(st)^{-1}=s(tet^{-1})s^{-1}=\delta_{s}(\delta_{t}(e)).\] Hence \(\delta_{s}\delta_{t}=\delta_{st}\). To show that \(\delta\) is idempotent-separating, suppose that \(\delta(e)=\delta(f)\) where \(e\) and \(f\) are idempotents of \(S\). Then \(\operatorname{dom}\delta(e)=\operatorname{dom}\delta(f)\). Thus \(e=f\). The image of \(\delta\) is a wide inverse subsemigroup of \(T_{\mathsf{E}(S)}\) because every idempotent in \(T_{\mathsf{E}(S)}\) is of the form \(1_{[e]}\) for some \(e\in E(S)\), and \(\delta_{e}=1_{[e]}\). Suppose that \(\delta(s)=\delta(t)\). Then \((s,t)\in\mathcal{H}\). It is now immediate from the definition of \(\mu\) (using part (2) of Lemma 7.1) that \((s,t)\in\mu\). The Munn representation should be contrasted with the Wagner-Preston representation: the Wagner-Preston representation is injective, whereas the Munn representation need not be, since its kernel is \(\mu\). Fundamental inverse semigroups arise in the following way. **Theorem 7.7**.: _Let \(S\) be an inverse semigroup. Then \(S\) is fundamental if and only if \(S\) is isomorphic to a wide inverse subsemigroup of the Munn semigroup \(T_{\mathsf{E}(S)}\)._ Proof.: Let \(S\) be a fundamental inverse semigroup. By Theorem 7.6, there is a homomorphism \(\delta\colon S\to T_{\mathsf{E}(S)}\) such that \(\ker(\delta)=\mu\). Since \(S\) is fundamental, \(\mu\) is the equality congruence by Lemma 7.3, and so \(\delta\) is an injective homomorphism. Thus \(S\) is isomorphic to its image in \(T_{\mathsf{E}(S)}\), which is a wide inverse subsemigroup. Conversely, let \(S\) be a wide inverse subsemigroup of a Munn semigroup \(T_{E}\). Clearly, we can assume that \(E=\mathsf{E}(S)\). We calculate the maximum idempotent-separating congruence of \(S\). Let \(\alpha,\beta\in S\) and suppose that \((\alpha,\beta)\in\mu\) in \(S\). Then \(\operatorname{dom}(\alpha)=\operatorname{dom}(\beta)\). Let \(e\in\operatorname{dom}(\alpha)\). Then \(1_{[e]}\in S\), since \(S\) is a wide inverse subsemigroup of \(T_{\mathsf{E}(S)}\). By assumption \(\alpha 1_{[e]}\alpha^{-1}=\beta 1_{[e]}\beta^{-1}\). It is easy to check that \(1_{[\alpha(e)]}=\alpha 1_{[e]}\alpha^{-1}\) and \(1_{[\beta(e)]}=\beta 1_{[e]}\beta^{-1}\). Thus \(\alpha(e)=\beta(e)\). Hence \(\alpha=\beta\), and so \(S\) is fundamental. The following is a special case of an argument due to Wagner. **Example 7.8**.: Let \((X,\tau)\) be a \(T_{0}\)-space. We prove that \(\mathcal{I}(X,\tau)\) is fundamental. Let \(f\in\mathcal{I}(X,\tau)\) be a non-idempotent. Then there is an element \(x\in X\) such that \(f(x)\neq x\). Since \(X\) is \(T_{0}\) there is an open set \(U\) such that either \(f(x)\in U\) and \(x\notin U\) or \(f(x)\notin U\) and \(x\in U\). In either event, the elements \(f1_{U}\) and \(1_{U}f\) are not equal, where \(1_{U}\) is an idempotent. It follows that \(\mathcal{I}(X,\tau)\) is fundamental. As an example of fundamental inverse semigroups, we can easily construct the fundamental finite Boolean inverse monoids. **Proposition 7.9**.: _A finite Boolean inverse monoid is fundamental if and only if its groupoid of atoms is principal._ Proof.: Let \(S\) be fundamental. 
We shall prove that the groupoid of atoms is principal by proving that the local groups are trivial. Suppose that \(e\stackrel{a}{\longrightarrow}e\) where \(e\) is an atom and an idempotent. We shall prove that \(a\) is an idempotent. Note first that \(a\) is an atom, by part (3) of Lemma 6.9. Let \(f\) be any idempotent. Then \(fa\leq a\). It follows that \(fa=0\) or \(fa=a\). Suppose that \(fa=0\). Then \(fe=0\) and so \(af=0\). Thus \(fa=af\). Suppose now that \(fa=a\). Then \(fe=e\) and so \(af=a\). It follows again that \(fa=af\). We have therefore proved that \(a\) commutes with every idempotent, but under our assumption that the inverse semigroup is fundamental, we deduce that \(a=e\). Conversely, suppose that \(e\stackrel{a}{\longrightarrow}e\), where \(e\) is an atomic idempotent, implies that \(a=e\). We shall prove that our semigroup is fundamental. Let \(a\) commute with all idempotents. We prove that \(a\) is an idempotent. We can write \(a=\bigvee_{i=1}^{m}a_{i}\) where the \(a_{i}\) are atoms. We prove that \(\mathbf{d}(a_{i})=\mathbf{r}(a_{i})\) for all \(i\), from which the result follows. Since \(\mathbf{r}(a_{j})a=a\mathbf{r}(a_{j})\), by assumption, we have that \(a_{j}=\bigvee_{i=1}^{m}a_{i}\mathbf{r}(a_{j})\). But \(a_{i}\mathbf{r}(a_{j})\leq a_{j}\). Thus either \(a_{i}\mathbf{r}(a_{j})=0\) or \(a_{i}\mathbf{r}(a_{j})=a_{j}\) since \(a_{j}\) is an atom. But \(a_{i}\mathbf{r}(a_{j})\leq a_{i}\), so in the second case \(a_{i}\mathbf{r}(a_{j})=a_{i}\) and thus \(a_{i}=a_{j}\). In particular, at least one term of the join is non-zero, and so \(a_{j}\mathbf{r}(a_{j})=a_{j}\). Hence \(\mathbf{d}(a_{j})\leq\mathbf{r}(a_{j})\) and so \(\mathbf{d}(a_{j})=\mathbf{r}(a_{j})\), since both \(\mathbf{d}(a_{j})\) and \(\mathbf{r}(a_{j})\) are atoms. By assumption \(a_{j}\) is an idempotent. It follows that \(a\) is an idempotent. The above result can be used to obtain a more explicit description of the finite fundamental Boolean inverse monoids. **Theorem 7.10**.: _Let \(S\) be a finite Boolean inverse monoid. Then it is fundamental if and only if \(S\) is isomorphic to a finite direct product \(\mathcal{I}_{n_{1}}\times\ldots\times\mathcal{I}_{n_{r}}\) of finite symmetric inverse monoids._ Proof.: It can be checked that the product of two fundamental inverse semigroups is itself a fundamental inverse semigroup, and that the product of two Boolean inverse monoids is again a Boolean inverse monoid. So, one direction is easy to prove. Let, now, \(S\) be a fundamental finite Boolean inverse monoid. Then by Proposition 7.9, \(S\) is isomorphic to a Boolean inverse monoid of the form \(\mathsf{K}(G)\) where \(G\) is a principal groupoid. Now, \(G=\bigcup_{i=1}^{m}H_{i}\) is a finite disjoint union of connected principal groupoids. It can be checked that \(\mathsf{K}(G)\cong\mathsf{K}(H_{1})\times\ldots\times\mathsf{K}(H_{m})\). Let \(H\) be any finite connected principal groupoid with set of identities \(X\). Then \(\mathsf{K}(H)\cong\mathcal{I}(X)\). The result now follows. The above result can be specialized to characterize the finite symmetric inverse monoids. Let \(S\) be a Boolean inverse monoid. A semigroup ideal \(I\subseteq S\) is said to be an _additive ideal_ if \(a,b\in I\) and \(a\sim b\) implies that \(a\lor b\in I\). Clearly, both \(\{0\}\) and \(S\) itself are additive ideals. If these are the only ones we say that \(S\) is _\(0\)-simplifying_. We now have the following theorem which can be derived from Theorem 7.10 (or see [60]). 
**Theorem 7.11**.: _The finite fundamental \(0\)-simplifying Boolean inverse monoids are precisely the finite symmetric inverse monoids._ The above theorem suggests that the groups of units of fundamental \(0\)-simplifying Boolean inverse monoids should be regarded as generalizations of finite symmetric groups. ## 8. Congruence-free inverse semigroups with zero An inverse semigroup is said to be _congruence-free_ if its only congruences are equality and the universal congruence. In this section, we shall characterize those inverse semigroups with zero which are congruence-free.16 We begin by ruling out the existence of various kinds of congruence. An inverse semigroup with zero \(S\) is said to be _\(0\)-simple_ if the only ideals are \(\{0\}\) and \(S\). The following characterization uses Lemma 6.7 where we describe the \(\mathcal{J}\)-relation in terms of the \(\mathcal{D}\)-relation and the natural partial order. Footnote 16: Douglas Munn once remarked to me that this was one of the few instances where the theory for inverse semigroups with zero was easier than it was for the one without. **Lemma 8.1**.: _Let \(S\) be an inverse semigroup with zero. Then \(S\) is \(0\)-simple if and only if for any two non-zero idempotents \(e\) and \(f\) in \(S\) there exists an idempotent \(i\) such that \(e\,\mathcal{D}\,i\leq f\) and an idempotent \(j\) such that \(f\,\mathcal{D}\,j\leq e\)._ Proof.: Suppose first that \(S\) is \(0\)-simple. Let \(e\) and \(f\) be any two non-zero idempotents. Observe that \(e\in SeS\) and \(f\in SfS\). So, both \(SeS\) and \(SfS\) are not equal to \(\{0\}\). It follows that \(S=SeS=SfS\). Thus \(e\,\mathcal{J}\,f\). We now use Lemma 6.7 to deduce the result where we have used the fact that the idempotents form an order-ideal in an inverse semigroup. We now prove the converse. Let \(I\neq\{0\}\) be any non-zero ideal of \(S\). Suppose that \(I\neq S\). Let \(a\in S\setminus I\). Observe that \(\mathbf{d}(a)\in S\setminus I\) because if \(\mathbf{d}(a)\in I\) then \(a=a\mathbf{d}(a)\in I\), since \(I\) is an ideal. Let \(b\in I\) be any non-zero element. Then \(\mathbf{d}(b)\in I\) since \(I\) is an ideal. By assumption, \(\mathbf{d}(a)\) and \(\mathbf{d}(b)\) are nonzero idempotents. It follows by the assumption and Lemma 6.7 that \(S\mathbf{d}(a)S=S\mathbf{d}(b)S\). This implies that \(\mathbf{d}(a)\in I\) and so \(a\in I\), which is a contradiction. It follows that, in fact, \(I=S\). Let \(S\) be any inverse semigroup with zero. Define \[(s,t)\in\xi_{S}\Leftrightarrow(\forall a,b\in S)(asb=0\Leftrightarrow atb=0).\] It is left to the reader to check that this really is a congruence. We shall denote it by \(\xi\) when the semigroup it is defined on is clear. In the case where \(S\) is a meet-semilattice, the above definition simplifies somewhat. Let \(E\) be a meet-semilattice with zero, the operation of which is denoted by concatenation. Then \((e,f)\in\xi\) if and only if \((\forall g\in E)(eg=0\) if and only if \(fg=0)\). A congruence \(\rho\) is said to be \(0\)_-restricted_ if the \(\rho\)-class containing \(0\) is just \(0\). We now have the following characterization of the congruence \(\xi\). **Lemma 8.2**.: _Let \(S\) be an inverse semigroup with zero. The congruence \(\xi\) is the maximum \(0\)-restricted congruence on \(S\)._ Proof.: Observe, first, that \(\xi\) is \(0\)-restricted. Suppose that \((s,0)\in\xi\). By definition, for all \(a,b\in S\) we have that \(asb=0\) if and only if \(a0b=0\). 
However, \(a0b=0\) always holds, so \(asb=0\) for all \(a,b\in S\); if we put \(a=ss^{-1}\) and \(b=s^{-1}s\) then we deduce that \(s=0\). Now, let \(\rho\) be any \(0\)-restricted congruence on \(S\) and let \(s\,\rho\,t\). Suppose that \(asb=0\). Then \(asb\,\rho\,atb\). Since \(\rho\) is \(0\)-restricted, we have that \(atb=0\). Thus \(asb=0\) implies that \(atb=0\). By symmetry, we deduce that \(s\,\xi\,t\). Thus \(\rho\subseteq\xi\), as required. Our next result shows, amongst other things, the relationship between \(\mu\) and \(\xi\). **Lemma 8.3**.: _Let \(S\) be an inverse semigroup with zero._ 1. \(\mu\subseteq\xi\)_._ 2. _The congruence_ \(\xi\) _restricted to_ \(\mathsf{E}(S)\) _is precisely_ \(\xi_{\mathsf{E}(S)}\)_._ Proof.: (1) Let \(s\,\mu\,t\). We prove that \(s\,\xi\,t\). Suppose that \(asb=0\) for \(a,b\in S\). We shall prove that \(atb=0\). However, \(asb\,\mu\,atb\). By definition, for all idempotents \(e\), we have that \((asb)e(asb)^{-1}=(atb)e(atb)^{-1}\). Choose \(e=b^{-1}b\). It follows that \(\mathbf{r}(asb)=\mathbf{r}(atb)\). We are given that \(asb=0\) and so \(\mathbf{r}(asb)=0\). It follows that \(\mathbf{r}(atb)=0\) and so \(atb=0\). By symmetry, this shows that \(s\,\xi\,t\). (2) Let \(e\) and \(f\) be idempotents. To say that \((e,f)\in\xi\) means that for all \(a,b\in S\) we have that \(aeb=0\) if and only if \(afb=0\). It is clear that \((e,f)\in\xi_{\mathsf{E}(S)}\). Suppose, now, that \((e,f)\in\xi_{\mathsf{E}(S)}\). We prove that \((e,f)\in\xi\) in \(S\). Let \(aeb=0\). Then \(a^{-1}aebb^{-1}=0\). Thus \(a^{-1}abb^{-1}e=0\) and so, by assumption, \(a^{-1}abb^{-1}f=0\). Hence \(a^{-1}afbb^{-1}=0\) and so \(afb=0\). The reverse direction is proved similarly. We now link \(\xi\) being the equality relation on the meet-semilattice \(E\) with the property of \(E\) being \(0\)-disjunctive which we introduced just before Lemma 3.15. **Lemma 8.4**.: _Let \(E\) be a meet semilattice with zero. Then \(\xi\) is the equality relation on \(E\) if and only if \(E\) is \(0\)-disjunctive._ Proof.: Suppose first that \(\xi\) is the equality relation on \(E\). We prove that \(E\) is \(0\)-disjunctive. Suppose that \(0<f<e\) and assume that there does not exist any \(0\neq g\leq e\) such that \(fg=0\). Let \(i\in E\) be arbitrary. If \(ei=0\) then clearly \(fi=0\). Suppose that \(fi=0\). Then \(fi=fei=0\) and so \(f(ei)=0\). Clearly, \(ei\leq e\) and so by our assumption above, we must have \(ei=0\). It follows that \((e,f)\in\xi\). But this implies that \(e=f\), which is a contradiction. We assume that \(E\) is \(0\)-disjunctive and prove that \(\xi\) is the equality relation. Suppose that \((e,f)\in\xi\), where \(e\) and \(f\) are both non-zero. Then \(e\,\xi\,ef\) and so \(ef\neq 0\) since \(\xi\) is \(0\)-restricted. Suppose that \(ef\neq e\). Then \(0<ef<e\). Then, by assumption, there exists \(0\neq g\leq e\) such that \((ef)g=0\). But, clearly, \(g=eg\neq 0\). However, \(e\,\xi\,ef\) and so we have a contradiction. It follows that \(ef=e\). Similarly, \(ef=f\) and so \(e=f\), as required. We are nearly at our goal. **Lemma 8.5**.: _Let \(S\) be an inverse semigroup with zero. Then \(\xi\) is the equality relation if and only if \(\mathsf{E}(S)\) is \(0\)-disjunctive and \(S\) is fundamental._ Proof.: Suppose first that \(\xi\) is the equality relation. Then \(\mathsf{E}(S)\) is \(0\)-disjunctive by Lemma 8.3 and Lemma 8.4, and \(S\) is fundamental by part (1) of Lemma 8.3 together with Lemma 7.3. To prove the converse, suppose that \(\mathsf{E}(S)\) is \(0\)-disjunctive and \(S\) is fundamental. 
Then by Lemma 8.3 and Lemma 8.4, \(\xi\) restricted to \(\mathsf{E}(S)\) is the equality relation and so \(\xi\) is idempotent-separating. It follows by Lemma 7.2 that \(\xi\subseteq\mu\). But \(S\) is fundamental and so \(\mu\) is the equality relation thus \(\xi\) is the equality relation. We may now state the characterization of congruence-free inverse semigroups with zero. **Theorem 8.6** (Congruence-free inverse semigroups with zero).: _An inverse semigroup with zero \(S\) is congruence-free if and only if \(S\) is fundamental, \(0\)-simple and \(\mathsf{E}(S)\) is \(0\)-disjunctive._ Proof.: Suppose that \(S\) is congruence-free. Then \(\mu\) is equality, there are no non-trivial ideals and \(\xi\) is equality. Thus \(S\) is fundamental, \(0\)-simple and \(\mathsf{E}(S)\) is \(0\)-disjunctive by Lemma 8.4 To prove the converse, suppose that \(S\) is fundamental, \(0\)-simple and \(\mathsf{E}(S)\) is \(0\)-disjunctive. Let \(\rho\) be a congruence on \(S\) which is not the universal relation. Then \(\rho(0)\) is an ideal which is not \(S\). Thus this ideal must be equal to \(\{0\}\). It follows that \(\rho\) is a \(0\)-restricted congruence and so \(\rho\subseteq\xi\) by Lemma 8.2. Now we use the fact that the semigroup is fundamental together with Lemma 8.5, to deduce that \(\xi\) is the equality congruence and so \(\rho\) is the equality congruence. ## 9. Further reading and examples My interest in inverse semigroups developed as a result of John Fountain's undergraduate course in semigroups at the University of York, UK. It was deepened by attending the seminar at Oxford organized by Peter Cameron, Wilfrid Hodges, Angus Macintyre, and Peter Neumann, in which partial automorphisms and the work of Fraisse played a central role. The connections between Fraisse's work and inverse semigroups was clarified in [114] though based on the work of Benda to be found [8]. Partial automorphisms also play a significant role in Fraisse's approach to the study of relational structures [25]. The work of Ehresmann on pseudogroups [21] convinced me that inverse semigroups really were worth studying. I was, however, bugged by the question of whether there were 'natural' examples of inverse semigroups. But what makes an area of mathematics 'natural'? I can think of at least two answers to this question: the existence of deep problems and the proliferation of good examples. When I embarked on writing my book [51], I was primarily motivated by finding good examples. The work of Kellendonk [44, 45, 46] and Girard [29], as mediated by Peter Hines [34], provided such good examples. Since that time, it has become clear that partial bijections, and therefore inverse semigroups, arise naturally in many different parts of mathematics. There are also deep questions such as whether every finite inverse monoid has a finite \(F\)-inverse cover [5] or what can be said about finite semigroups with commuting idempotents [3]. In this section, I shall concentrate on sources of good examples, since these look like promising growth points for the theory. If there is one area of mathematics which makes essential use of inverse semigroups, it is in the theory of \(C^{*}\)-algebras. There are so many books and papers that are relevant here that I will only mention a few to get you started. The connection between inverse semigroups and \(C^{*}\)-algebras was first explored in [48] and [101]. A rather more recent book on this topic is [94] which will repay reading. There are many, many papers on this topic. 
A good place to start is [22] and [24] but it is worth checking out [99, page 45] which is on the special case of Cantor minimal systems. Let me, here, give one example of the theory of \(C^{*}\)-algebras leading to new semigroup theory. A monoid is said to be _finitely aligned_ if the intersection of any two principal right ideals is a finitely generated right ideal. This concept was first defined within the theory of \(C^{*}\)-algebras in [100] and made extensive use of in [107, 108]. However, it seems to have been first defined within semigroup theory in [31], quite early on, but was not picked up by workers in inverse semigroup theory; this is but one illustration of the importance of mathematicians in different fields talking to one another. See also the more recent [12]. It would be most naturally applied in [13]. The connection between inverse semigroups and \(C^{*}\)-algebras has come to be mediated by what are now termed _Steinberg algebras_; see [110, 112]. Even where no direct connection between inverse semigroups and \(C^{*}\)-algebras appears to exist, the theory of \(C^{*}\)-algebras has sometimes led to new developments in the theory of inverse semigroups. A good example is Steinberg's notion of strong Morita equivalence of inverse semigroups motivated by the corresponding theory for \(C^{*}\)-algebras [109]. The connections between inverse semigroups and quasi-crystals were first spelt out by Kellendonk [44, 45, 46]. The inverse semigroup perspective on the topological groupoids that Kellendonk constructs were first clarified by Lenz [76] and then developed in [67]. Aspects of this connection were described in [66]. This led to what we might term 'non-commutative Stone duality' which connects, in particular, Boolean inverse monoids with a class of etale groupoids. For a survey of non-commutative Stone duality, see [65]. I once asked the late Pieter Hofstra, an expert in topos theory, why I kept seeing inverse semigroups everyone. I don't know whether Pieter was being polite, but he explained that it was because etendues were everywhere, where an etendue is a particular kind of topos. This seems to me a particularly fruitful area of research. We refer the reader to [27, 28]. If the theory of inverse semigroups is to be more fully developed, then there will be a need to study invariants associated with inverse semigroups. Motivated by the Banach-Tarski Paradox [15] and the theory of \(K_{0}\)-groups [30], the type monoid was introduced in [47, 116] and its theory developed by Wehrung [117]. Type monoids are commutative refinement monoids and provide the first example of invariants of inverse semigroups. Another source of examples of invariants of inverse semigroups are cohomology theories. These have barely been developed; see the work of Lausch [74] and, particularly, Loganathan [77]. An application of the extension theory of inverse semigroups in the case of Von Neumann algebras can be found in [19]. MV-algebras form a natural class of invariants for the class of factorizable Boolean inverse monoids \(S\) in which \(S/\mathcal{J}\) is a lattice. MV-algebras are another generalization of Boolean algebras but it was proved in [70] that all countable MV-algebras arise in this way and this was generalized to all MV-algebras by Wehrung [117] using very different methods. Thus every MV-algebra arises from a particular kind of Boolean inverse monoid. 
The theory of MV-algebras goes back to Tarski's book [113], which is also noteworthy in that partial automorphisms of structures are singled out in [113, Theorem 11.6]. MV-algebras are studied in their own right in [14, 87, 88] but their applications to the study of certain kinds of Boolean inverse monoids has so far not been developed. I have indicated that certain kinds of groups can be constructed from inverse semigroups. See [81, 82] for how groups arise as so-called 'topological full groups' of certain kinds of groupoids; under non-commutative Stone duality, this means that the groups arise as the groups of units of certain kinds of Boolean inverse monoids. The Thompson groups form an important class of such groups. They, too, can be constructed from inverse monoids [41, 71, 73] generalizing some initial work in [9]. Groups with a different flavour arise from what Dehornoy calls 'geometry monoids'. This is explained in his book [17] but the explicit connection with inverse monoids is made in [54]. If I were to single out one class of inverse semigroups that justifies the field it would be the polycyclic inverse monoids introduced by Nivat and Perrot [92]. They were generalized by Perrot [95, 96]. As I pointed out in [57], Perrot's thesis contains the first definition of self-similar group actions [91] and was motivated entirely by inverse semigroup theory. My paper was subsequently generalized by Alistair Wallis [116]; see also [68, 69]. The polycyclic inverse monoids are intimately related to the classical Thompson groups [55, 56, 64]. The representation theory (by means of partial bijections) of the polycyclic inverse monoids is the subject of [39, 40, 58] and was motivated by [10] and by [43], which on the face of it have nothing to do with the polycyclic inverse monoids. The polycyclic inverse monoids arise from the free monoid. Analogous inverse monoids arise from free categories [4, 41]. There is a natural connection between such inverse semigroups and Leavitt path algebras [1]. This is part of the general topic of studying algebras of various kinds associated with inverse semigroups. This goes back to pioneering work of Douglas Munn [89] and it reaches its apotheosis in the applications of inverse semigroups to the theory of \(C^{*}\)-algebras mentioned above. This is a subject in its own right, but we mention in passing [18] and [72], which are relevant. To conclude, I have not said anything about the classical theory of inverse semigroups. Here, I would simply highlight the papers [2, 83, 86] with the suggestion that there is more to do here. I have also said nothing about free inverse semigroups. These are introduced in my book [51, Chapter 6] but a much more recent paper on their structure is [33]. In passing, I would like to mention that free inverse monoids can also be constructed using the doubly pointed pattern classes of Kellendonk applied to the graph of the free group regadred as a 'dendritic tiling' [45]. In fact, it was this example which convinced me that Kellendonk was on to something. The study of free inverse monoids leads naturally to the study of presentations of inverse semigroups. A recent paper on this topic is [32].
2309.02471
Investigating the classical problem of pursuit, in two modes
The pursuit problem is a historical issue of the application of mathematics in physics, which has been discussed for centuries since the time of Leonardo Da Vinci, and its applications are wide ranging from military and industrial to recreational, but its place of interest is nowhere but nature and inspiration from the way of migration of birds and hunting of archer fish. The pursuit problem involves one or more pursuers trying to catch a target that is moving in a certain direction. In this article, we delve into two modes of movement: movement on a straight line and movement on a curve. Our primary focus is on the latter. Within the context of movement on a straight line, we explore two methods and compare their respective results. Furthermore, we investigate the movement of two particles chasing each other and extend these findings to N particles that are chasing each other in pairs. By leveraging these two modes of movement, we present a novel relationship for two-particle and N-particle systems in pursuit. Lastly, we analyze the movement of moths around a lamp and evaluate their motion in relation to two-particle and N-particle systems in pursuit. The results of this analysis are carefully examined.
Amir Hossein Arshadi Kalameh, Kourosh Bayati Komitaki, Reza Sharifian, Mohammad Mahdi Eftekhari
2023-09-05T06:47:41Z
http://arxiv.org/abs/2309.02471v1
# Investigating the classical problem of pursuit, in two modes ###### Abstract The pursuit problem is a historical issue of the application of mathematics in physics, which has been discussed for centuries since the time of Leonardo Da Vinci, and its applications are wide ranging from military and industrial to recreational, but its place of interest is nowhere but nature and inspiration from the way of migration of birds and hunting of archer fish. The pursuit problem involves one or more pursuers trying to catch a target that is moving in a certain direction. In this article, we delve into two modes of movement: movement on a straight line and movement on a curve. Our primary focus is on the latter. Within the context of movement on a straight line, we explore two methods and compare their respective results. Furthermore, we investigate the movement of two particles chasing each other and extend these findings to N particles that are chasing each other in pairs. By leveraging these two modes of movement, we present a novel relationship for two-particle and N-particle systems in pursuit. Lastly, we analyze the movement of moths around a lamp and evaluate their motion in relation to two-particle and N-particle systems in pursuit. The results of this analysis are carefully examined. + Footnote †: preprint: APS/123-QED ## I Introduction ### History The pursuit problem was apparently first stated by Leonardo da Vinci. In this problem, the cat is chasing the mouse while the speed of both of them is constant and the mouse is moving in a straight line1.In 1732, a problem called (bugs), which was known as the dog, mouse, ant or beetles problem, was investigated by the French scientist Pierre Bouguer. These problems originate from the mathematics of pursuit curves2. In 1877, Edward Lucas posed the three dog problem. It is stated in this problem: The dogs are first placed at the three vertices of the equilateral triangle, then each one follows the other. What is the curve that describes their motion? Three years later, Henri Brocard showed that the dog's path is a logarithmic spiral and the dogs eventually meet at a common point, this common point is called Brocard point of a triangle [2]. For N bugs that are initially located on the vertices of regular N-polygon and follow each other at a constant speed, their trajectory is a logarithmic spiral and finally they all meet at the center of the polygon [3]. Footnote 1: The idea of the motion of a point is that the motion of a point is not a straight line. ### Applications It has been suggested in [4] that air transportation will reduce its emissions in the near future. One proposed method to achieve this is by using the upwash of the wake vortex generated by a leading aircraft. This concept is inspired by migratory birds; however, the main challenge is overcoming the difficulties associated with pursuing the leading aircraft, which can be resolved by implementing a generalized solution of the chase problem. Multiple unmanned aerial vehicles (UAVs) or mobile robots, as discussed in [5; 6], require a generalized solution of the chase problem, similar to that discussed in [4]. There is a wide range of applications for controlling multiple UAVs, such as exploration or surveillance, military or civilian scenarios, as noted in [5; 7]. Additionally,2 describes a new strategy, called the interception strategy, for American football players to catch the ball carrier. This strategy was commonly used when the sky was overcast shown in FIG [8]. 1. 
Houseflies, like migratory birds, tend to follow the leader and choose their path based on the angular velocity of the chasing fly [9; 7]. The cinematography of the chase problem was used to measure the path taken by teleost fish and archer fish when catching their food [10; 11; 12]. Furthermore, the initial angle and distance strongly influence the route selected by teleost fish and archer fish to avoid collisions or choose the best path, as noted in [13]. Lastly, cinematography is required to target a query when only a partial location is known, as well as to control the impact direction or limit the minimum radius of the curve (which maximizes acceleration) [7]. A framework for controlling the formation of a group of robots was developed, and one critical parameter for this framework is the leader-follower graph [14; 15; 16]. Figure 1: New strategy. ## II Straight line chasing problem For many years, the issue of chasing along a straight line has been of great importance and has been extensively studied. This problem involves examining two main types of straight lines: those that are parallel to the horizontal or vertical axis, and those that are angled with respect to these axes. The first type of straight-line path is a special case of the second, and both types are crucial to understanding this problem and finding potential solutions. To begin solving this problem, we can utilize the equation of the curve's slope for the first type of straight-line path (as shown in FIG. 2). This approach was presented in [7] and can be followed by proceeding with the rest of the necessary work. \[Vt-x=(H-y)\frac{v_{x}}{v_{y}} \tag{1}\] By adopting this approach, they were able to derive various equations, including the equation for the distance between the two objects in motion at a given moment, the equation for the vertical component of velocity, and the pursuit curve (Eq. (5) below is written for the equal-speed case \(V=v\)). Additionally, the equations can be used to determine the collision point. \[z=1-\frac{y}{H} \tag{2}\] \[v_{y}=\frac{2v}{z^{-\frac{V}{v}}+z^{\frac{V}{v}}} \tag{3}\] \[L=\frac{H}{2}\left(z^{1-\frac{V}{v}}+z^{1+\frac{V}{v}}\right) \tag{4}\] \[\frac{x}{H}=\left(\frac{z^{2}-1}{4}\right)-\ln(\sqrt{z}) \tag{5}\] As discussed in [7], the problem was also solved in [17] by defining the slope of the follower's path. However, the difference this time was that the slope was expressed as the derivative of y with respect to x. Using the method of [17], it is also possible to identify the collision location. Essentially, the fundamental concepts were the same in both methods, and the solution could be obtained. The two methods discussed above have been utilized to resolve the problem. Another approach is presented in [18]. In this method, rather than examining the slopes, location vectors are initially assigned to both movers. Since the pursuer is constantly moving towards the pursued, this yields the following equation: \[\vec{r}_{1}-\vec{r}_{2}=k(t)\vec{V} \tag{6}\] where: \[k(0)=\frac{L}{V} \tag{7}\] \[k(T)=0 \tag{8}\] By utilizing the equations mentioned above, it is possible to determine the arrival time of both movers, as well as the pursuit curve (which was also obtained in [17]). Additional information regarding this can be found in [18]. We will now explore the techniques for solving the second type of problem. In [1], one of the methods to resolve this problem is presented. The scenario involves a cat chasing a mouse that moves along a straight line, making an angle of \(\alpha\) with the horizon line. The cat moves at a speed of kv (where k > 1), while the mouse moves at a speed of v.
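Before working through this angled configuration, the first-type results above are easy to check numerically. The sketch below is only an illustration (the values of H, V and v, the time step, and the stopping point are assumptions, not taken from [7]): it integrates the pursuit kinematics of Eq. (1) directly and compares the resulting separation with Eq. (4).

```python
import numpy as np

# Minimal check of the first-type (horizontal-line) pursuit: the target runs
# along y = H with speed V, the pursuer starts at the origin with speed v and
# always heads at the target's current position. Illustrative values only.
H, V, v = 1.0, 0.5, 1.0
dt = 1e-4

pursuer = np.array([0.0, 0.0])
t = 0.0
while pursuer[1] < 0.9 * H:                     # stop partway up the chase
    target = np.array([V * t, H])
    r = target - pursuer
    pursuer += v * dt * r / np.linalg.norm(r)   # velocity always points at the target
    t += dt

separation = np.linalg.norm(np.array([V * t, H]) - pursuer)
z = 1.0 - pursuer[1] / H
L_eq4 = 0.5 * H * (z ** (1 - V / v) + z ** (1 + V / v))
print(f"separation: simulated = {separation:.4f}, Eq. (4) = {L_eq4:.4f}")
```

The two numbers agree to the accuracy of the time step, which is a quick sanity check of Eqs. (2)-(4). Returning to the angled case: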
M is the intersection point of the tangent line at point C with the line IT, creating the angle \(\beta\) (FIG. 3). Figure 2: First type. Figure 3: Second type. The solution to the problem can be summarized by the following three key equations: \[\frac{dx}{dt}=kv\cos\beta \tag{9}\] \[\frac{dy}{dt}=v \tag{10}\] \[\frac{dz}{dt}=v\cos\beta-kv \tag{11}\] As evident, the solution approach in this case is comparable to the previous two methods of the first type, except for the fact that the angle's impact needs to be taken into account since the path is not parallel to the horizon line. The problem is ultimately resolved by utilizing the aforementioned equations. For instance, the arrival point of the two mobiles can be determined as follows: \[X_{T}=y_{0}\frac{\cos\alpha(k+\sin\alpha)}{k^{2}-1} \tag{12}\] \[Y_{T}=y_{0}\left(1+\frac{k+\sin\alpha}{k^{2}-1}\sin\alpha\right) \tag{13}\] Moreover, this method derives the pursuit differential equation (refer to [1] for more information). Next, we will examine an alternative method introduced in [19]. The approaches outlined in [19] are considered general methods for resolving all pursuit problems, even if the motion functions are non-linear and intricate. The following equations govern the pursuer and pursued coordinate systems, where (x, y) represents the former and (X, Y) represents the latter: \[X=x+\lambda\dot{x} \tag{14}\] \[Y=y+\lambda\dot{y} \tag{15}\] \[\dot{x}^{2}+\dot{y}^{2}=c^{2}(\dot{X}^{2}+\dot{Y}^{2}) \tag{16}\] The equations are rewritten below, taking into account the angle \(\alpha\) formed with reference to the horizontal axis: \[x+\lambda\dot{x}=t\cos\alpha \tag{17}\] \[y+\lambda\dot{y}=y_{0}+t\sin\alpha \tag{18}\] \[\dot{x}^{2}+\dot{y}^{2}=c^{2} \tag{19}\] The collision position determined from these equations is in agreement with the collision location obtained by method [1]. It can be asserted that, among all these techniques, [19] is the most efficient, because it offers a general principle and can analyze more intricate movements with greater ease. ## III Rotational movement ### What is the problem? Consider a scenario where a fox with a speed of v is chasing a rabbit with the same speed. The rabbit moves along a path where its velocity makes an angle of \(\alpha\) with respect to the line connecting the two entities. If their initial distance is l, where and when will they meet? [20] ### Why this problem? The determination of the rabbit and fox's arrival time and position is not a primary focus of this article. The solution can be found in the same book [20] and in similar studies [18; 19; 21; 22]. What is significant in this context is the behavior and movement pattern of the hunter and prey in such systems. Furthermore, the study of this behavior in complex systems is of great importance [2; 23]. Most of the previous investigations [18; 22] have solely considered the convergence point of the two entities. ### Solution \[\dot{\theta}=\frac{d\theta}{dt}=\frac{v\sin\alpha}{L-v(1-\cos\alpha)t} \tag{20}\] \[\Delta v(\tilde{v})=v\cos d\theta-v=0 \tag{21}\] \[\lim_{\Delta t\to 0}\Delta v(\hat{\theta})=dv(\hat{\theta})=v\sin d\theta=v\,d\theta \tag{22}\] \[dv=\sqrt{dv(\hat{\theta})^{2}+dv(\tilde{v})^{2}}\;\overset{dv(\tilde{v})=0}{\Longrightarrow}\;dv=dv(\hat{\theta}) \tag{23}\] Using Eqs.
(20), (21), (22) and (23), we can then write: \[L^{\prime}=L-v(1-\cos\alpha)t \tag{24}\] \[d\theta=\left(\frac{v\sin\alpha}{L-v(1-\cos\alpha)t}\right)dt\;\overset{dv=v\,d\theta}{\Longrightarrow}\;dv=\left(\frac{v\sin\alpha}{L-v(1-\cos\alpha)t}\right)v\,dt \tag{25}\] Figure 4: Motion diagram of the fox and rabbit in the Cartesian coordinate system. As we see, the rabbit's velocity makes an angle of \(\alpha\) with respect to the line connecting the two entities (\(L^{\prime}\)). \[dv(\hat{x})=dv\cos\theta \tag{26}\] \[dv(\hat{y})=-dv\sin\theta \tag{27}\] \[\sin\theta=\frac{dx}{\sqrt{dx^{2}+dy^{2}}}\;\Longrightarrow\;\sin\theta=\frac{dx/dt}{\sqrt{dx^{2}+dy^{2}}/dt}=\frac{v(\hat{x})}{v} \tag{28}\] \[\cos\theta=\frac{dy}{\sqrt{dx^{2}+dy^{2}}}\;\Longrightarrow\;\cos\theta=\frac{dy/dt}{\sqrt{dx^{2}+dy^{2}}/dt}=\frac{v(\hat{y})}{v} \tag{29}\] By merging Eqs. (25), (26), (27), (28), and (29), it is feasible to formulate two coupled differential equations for the velocity in the two directions. \[dv(\hat{x})=\left(\frac{v\,v(\hat{y})\sin\alpha}{L-v(1-\cos\alpha)t}\right)dt \tag{30}\] \[dv(\hat{y})=-\left(\frac{v\,v(\hat{x})\sin\alpha}{L-v(1-\cos\alpha)t}\right)dt \tag{31}\] ## IV Result The approach we have employed thus far is somewhat similar to the method presented in [20] for determining the endpoint, and we can determine the end point without solving Eqs. (30) and (31). However, our primary focus is to examine the motion parameters, such as velocity and location, along the direction of motion. To achieve this, we can solve the coupled differential equations of the system using software tools like Mathematica. Solving with Mathematica, and abbreviating \(\Phi(t)\equiv\frac{\sin\alpha\,\ln\left[L+(-1+\cos\alpha)tV\right]}{-1+\cos\alpha}\) (where \(V\) denotes the common speed \(v\)), we obtain: \[\begin{split} v_{x}(t)&=C_{1}\cos\Phi(t)+C_{2}\sin\Phi(t)\\ v_{y}(t)&=C_{2}\cos\Phi(t)-C_{1}\sin\Phi(t)\end{split} \tag{32}\] Well, we can take the integral of these equations to obtain the coordinates of the motion over time: \[x(t)=B_{1}+\left(L+(-1+\cos\alpha)tV\right)\frac{\left((-1+\cos\alpha)C_{1}-\sin\alpha\,C_{2}\right)\cos\Phi(t)+\left(\sin\alpha\,C_{1}+(-1+\cos\alpha)C_{2}\right)\sin\Phi(t)}{(2-2\cos\alpha)V} \tag{34}\] \[y(t)=B_{2}+\left(L+(-1+\cos\alpha)tV\right)\frac{\left(\sin\alpha\,C_{1}+(-1+\cos\alpha)C_{2}\right)\cos\Phi(t)+\left((1-\cos\alpha)C_{1}+\sin\alpha\,C_{2}\right)\sin\Phi(t)}{(2-2\cos\alpha)V} \tag{35}\] Note that the coefficients \(C_{1}\), \(C_{2}\), \(B_{1}\) and \(B_{2}\) are established based on the initial conditions.
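As a cross-check of this solution, the coupled system (30)-(31) can also be integrated numerically. The sketch below is illustrative only: it uses SciPy, assumes the initial conditions \(v_{x}(0)=0\), \(v_{y}(0)=v\) adopted in the next paragraph, together with the same parameter values \(v=2\) m/s, \(L=20\) m, \(\alpha=\pi/3\), and compares the numerical result with the closed form \(v_{x}=v\sin\Delta\Phi\), \(v_{y}=v\cos\Delta\Phi\), \(\Delta\Phi(t)=\Phi(t)-\Phi(0)\), that follows from Eqs. (30)-(31) with those initial conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative check of Eqs. (30)-(31): both animals move at speed v, alpha is the
# fixed angle between the rabbit's velocity and the connecting line, L is the
# initial separation. Values follow the choice made later in the text.
v, L, alpha = 2.0, 20.0, np.pi / 3

def rhs(t, u):
    vx, vy = u
    Lp = L - v * (1 - np.cos(alpha)) * t        # current separation L'
    rate = v * np.sin(alpha) / Lp
    return [rate * vy, -rate * vx]

t_meet = L / (v * (1 - np.cos(alpha)))          # L' vanishes at this time
ts = np.linspace(0.0, 0.99 * t_meet, 400)
sol = solve_ivp(rhs, (ts[0], ts[-1]), [0.0, v], t_eval=ts, rtol=1e-10, atol=1e-12)

# Closed form: the velocity vector simply rotates by dphi(t) while keeping magnitude v.
dphi = -(np.cos(alpha / 2) / np.sin(alpha / 2)) * np.log(1 - v * (1 - np.cos(alpha)) * ts / L)
print(np.max(np.abs(sol.y[0] - v * np.sin(dphi))))        # ~0: matches the closed form
print(np.max(np.abs(sol.y[0]**2 + sol.y[1]**2 - v**2)))   # ~0: speed is conserved
```

The maximum deviations are at the level of the integrator tolerance, confirming that the motion is a pure rotation of the velocity vector at constant speed.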
We have flexibility in selecting the coordinates by rotating them, and we opt for the coordinates that satisfy the following initial conditions: \[v_{x}(0)=0,\quad v_{y}(0)=v,\quad x(0)=0,\quad y(0)=0\] With these conditions, \(C_{1}=-v\sin\Phi(0)\) and \(C_{2}=v\cos\Phi(0)\), so that \[v_{x}(t)=\frac{-v\cos\Phi(t)\sin\Phi(0)+v\cos\Phi(0)\sin\Phi(t)}{\cos^{2}\Phi(0)+\sin^{2}\Phi(0)} \tag{36}\] \[v_{y}(t)=\frac{v\cos\Phi(0)\cos\Phi(t)+v\sin\Phi(0)\sin\Phi(t)}{\cos^{2}\Phi(0)+\sin^{2}\Phi(0)} \tag{37}\] that is, \(v_{x}(t)=v\sin[\Phi(t)-\Phi(0)]\) and \(v_{y}(t)=v\cos[\Phi(t)-\Phi(0)]\): the velocity vector keeps its magnitude \(v\) and simply rotates. We set the parameters as follows (note that the specific choice does not impact the overall outcome of our movement analysis): v = 2 (m/s), L = 20 (m), \(\alpha=\pi/3\). To understand how the motion evolves as the two entities approach each other, we need to examine the system in a relative frame. In the relative frame, the length of the connecting line (the radius of gyration) decreases, leading to increasingly rapid changes of the angular velocity. At the point where the two moving entities meet, this length tends to zero, causing the rotational acceleration to tend towards infinity. Furthermore, the time required to complete a revolution tends to zero, resulting in an acceleration that diverges over time and velocity components that change sign more and more rapidly as the rotation period shrinks (the two moving entities rotate together). It is worth noting that we have formulated the equations of motion for the fox; the equations of motion for the rabbit are identical to those of the fox, except for the boundary conditions. In FIG. 7, we observe that the shape of the rabbit's movement path is similar to that of the fox, except that the initial conditions in the equations show that we have shifted the fox's diagram by the initial distance L and rotated it to the right by \(\alpha\). This indicates that we can convert the fox diagram into a rabbit diagram, as mentioned earlier. Another conclusion that we can draw from FIG. 7 is that the rabbit moves like a fox that is chasing a rabbit in the direction of its initial speed, while being located at a distance of L. If we continue this reasoning, we can assume that the rabbit behaves like a fox that is pursuing another rabbit. Thus, the system of two particles following each other behaves like a system of N particles following each other (FIG. 8). For situations where \(\alpha\) is a divisor of 360, our N-particle system becomes a closed system, in which the N-th body follows the first body because the motion is constrained along the connecting lines. The interesting thing is that this N-particle system behaves like a two-particle system. Therefore, we can use the resulting relationship for an N-particle system for a two-particle system. For a closed N-particle system, the symmetry guarantees that the configuration preserves its initial shape.
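This claim is easy to illustrate with a direct simulation. The sketch below (illustrative parameters only; N = 6 corresponds to \(\alpha=60^{\circ}\) as in FIG. 8) places N particles on a regular polygon and lets each one chase the next at speed v; the configuration indeed remains a regular polygon that shrinks toward the common center.

```python
import numpy as np

# Minimal sketch of the closed N-particle pursuit: N particles start on a regular
# polygon of circumradius R0 and each moves at speed v toward the next one.
# With alpha = 360/N degrees this is the closed system discussed above.
N, v, R0, dt, steps = 6, 1.0, 1.0, 1e-3, 500
angles = 2 * np.pi * np.arange(N) / N
pos = R0 * np.column_stack([np.cos(angles), np.sin(angles)])

for _ in range(steps):
    chase = np.roll(pos, -1, axis=0) - pos                     # vector to the chased particle
    pos += v * dt * chase / np.linalg.norm(chase, axis=1, keepdims=True)

print("side lengths:", np.round(np.linalg.norm(np.roll(pos, -1, axis=0) - pos, axis=1), 4))
print("distances to center:", np.round(np.linalg.norm(pos, axis=1), 4))
```

All side lengths stay equal and all distances to the center decrease at the same rate: the polygon shrinks and rotates but keeps its shape, consistent with the constant shrink rate derived next.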
This symmetry implies that we can write the equations of motion in polar coordinates about the center of this closed system. As the system is symmetrical, the point of convergence of the particles is certainly at its center. \[2\beta=\pi-\alpha\;\Longrightarrow\;\beta=\frac{\pi}{2}-\frac{\alpha}{2} \tag{38}\] \[2R\sin\frac{\alpha}{2}=l\;\Rightarrow\;R=\frac{l}{2\sin\frac{\alpha}{2}} \tag{39}\] \[v=\sqrt{(\dot{r})^{2}+(r\dot{\theta})^{2}} \tag{40}\] \[\dot{r}=-v\cos\beta=-v\sin\frac{\alpha}{2} \tag{41}\] \[r=R-v\sin\frac{\alpha}{2}\,t\;\Rightarrow\;r=\frac{l}{2\sin\frac{\alpha}{2}}-vt\sin\frac{\alpha}{2} \tag{42}\] Figure 7: Fox and rabbit movement path diagram in Cartesian coordinates. Figure 8: When \(\alpha=60^{\circ}\), the two-particle system with our initial conditions exhibits behavior that is similar to that of a six-particle system; additionally, this behavior displays the same symmetry as the opposite figure. Figure 9: In a generalized two-particle to N-particle system, the angle between the center of rotation and each vertex of the constructed N-polygon is equal to \(\beta\). This relationship holds regardless of the number of particles involved in the system, allowing for a generalized approach to the problem. Using Eqs. (38), (39), (40), (41) and (42), we can then write: \[v=\sqrt{\left(-v\sin\frac{\alpha}{2}\right)^{2}+\left(\left(\frac{l}{2\sin\frac{\alpha}{2}}-v\sin\frac{\alpha}{2}\,t\right)\dot{\theta}\right)^{2}} \tag{43}\] \[\dot{\theta}=\frac{v\cos\frac{\alpha}{2}}{\frac{l}{2\sin\frac{\alpha}{2}}-v\sin\frac{\alpha}{2}\,t} \tag{44}\] Solving this differential equation gives \[-\frac{\theta}{\cot\frac{\alpha}{2}}=\ln\left(\frac{l}{2\sin\frac{\alpha}{2}}-v\sin\frac{\alpha}{2}\,t\right)+c \tag{45}\] where c is a constant determined by the initial conditions. Using Eqs. (42) and (45), we can write: \[r(\theta)=r_{0}\,e^{-\frac{\theta}{\cot\frac{\alpha}{2}}} \tag{46}\] Eq. (46) is applicable to all the systems we have analyzed so far: it describes, in polar coordinates, the motion of both the two-particle system in pursuit and the N-particle systems in pursuit. Eq. (46) can also be observed in moths. In the past, when the night air was dark due to the absence of lamps and lighting in modern cities, moths used the moon for orientation and movement, keeping the angle between the direction of their movement and the distance vector between the moth and the moon constant. This was possible because the moon can be considered to be at an infinite distance [24]. However, with the increase in brightness in cities and the prevalence of lamps, the light from lamps became brighter than that of the moon, causing moths to choose lamps as their reference for movement. By approaching the lamp, the moth tries to keep the angle between the vector connecting it to the light source and its direction of movement constant, resulting in a rotational movement similar to that of the system of two particles in pursuit. Indeed, the motion of the two-particle system in pursuit and of the N-particle system in pursuit investigated above proceeds with a constant angle between the line connecting the particle to the center of rotation and the velocity vector. This movement is therefore similar to the movement of moths.
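The logarithmic-spiral law of Eq. (46) can also be verified directly from a simulation of the symmetric pursuit. The sketch below (again with assumed illustrative parameters) tracks one particle of the N = 6 system, computes its polar coordinates about the common center, and checks that \(\ln(r/r_{0})=-\theta\tan(\alpha/2)\), which is Eq. (46) with \(1/\cot(\alpha/2)=\tan(\alpha/2)\).

```python
import numpy as np

# Track one particle of the symmetric N-particle pursuit and compare its path
# about the common center with the logarithmic spiral of Eq. (46).
N, v, R0, dt, steps = 6, 1.0, 1.0, 1e-4, 15_000
alpha = 2 * np.pi / N
angles = 2 * np.pi * np.arange(N) / N
pos = R0 * np.column_stack([np.cos(angles), np.sin(angles)])

traj = []
for _ in range(steps):
    traj.append(pos[0].copy())
    chase = np.roll(pos, -1, axis=0) - pos
    pos += v * dt * chase / np.linalg.norm(chase, axis=1, keepdims=True)

traj = np.array(traj)
r = np.linalg.norm(traj, axis=1)
theta = np.unwrap(np.arctan2(traj[:, 1], traj[:, 0]))
theta -= theta[0]                              # swept angle measured from the start
residual = np.log(r / R0) + np.tan(alpha / 2) * theta
print("max deviation from Eq. (46):", np.max(np.abs(residual)))
```

The deviation stays at the level of the integration error, so the simulated trajectory indeed follows the logarithmic spiral, just as the moth's path around a lamp does when it keeps a fixed angle to the light source.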
2308.15224
Papeos: Augmenting Research Papers with Talk Videos
Research consumption has been traditionally limited to the reading of academic papers-a static, dense, and formally written format. Alternatively, pre-recorded conference presentation videos, which are more dynamic, concise, and colloquial, have recently become more widely available but potentially under-utilized. In this work, we explore the design space and benefits for combining academic papers and talk videos to leverage their complementary nature to provide a rich and fluid research consumption experience. Based on formative and co-design studies, we present Papeos, a novel reading and authoring interface that allow authors to augment their papers by segmenting and localizing talk videos alongside relevant paper passages with automatically generated suggestions. With Papeos, readers can visually skim a paper through clip thumbnails, and fluidly switch between consuming dense text in the paper or visual summaries in the video. In a comparative lab study (n=16), Papeos reduced mental load, scaffolded navigation, and facilitated more comprehensive reading of papers.
Tae Soo Kim, Matt Latzke, Jonathan Bragg, Amy X. Zhang, Joseph Chee Chang
2023-08-29T11:25:30Z
http://arxiv.org/abs/2308.15224v1
# Papeos: Augmenting Research Papers with Talk Videos+ ###### Abstract. Research consumption has been traditionally limited to the reading of academic papers--a static, dense, and formally written format. Alternatively, pre-recorded conference presentation videos, which are more dynamic, concise, and colloquial, have recently become more widely available but potentially under-utilized. In this work, we explore the design space and benefits for combining academic papers and talk videos to leverage their complementary nature to provide a rich and fluid research consumption experience. Based on formative and co-design studies, we present **Papeos**, a novel reading and authoring interface that allows authors to augment their **papers** by segmenting and localizing talk videos alongside relevant paper passages with automatically generated suggestions. With Papeos, readers can visually skim a paper through clip thumbnails, and fluidly switch between consuming dense text in the paper or visual summaries in the video. In a comparative lab study (n=16), Papeos reduced mental load, scaffolded navigation, and facilitated more comprehensive reading of papers. Interactive Documents; Reading Interfaces; Scientific Papers; Videos + Footnote †: Click here to open the Papeo version of this document: [https://papeo.apr/demo](https://papeo.apr/demo)
## 1. Introduction Pre-recorded talk videos accompanying research papers have become increasingly available across different fields in recent years. Prior work, such as in psychology and education, has found various benefits in a personal and multimedia communication style (i.e., videos and dialogues) over formal and technical text, including positive social context and experiences [38], lowered cognitive load and increased interest [57], and improved comprehension when multiple alternative explanations were available [2]. However, prior HCI research has also shown that carefully designed interfaces are crucial for users to consume multiple formats without being overwhelmed [25]. In this work, we build on prior theoretical and HCI research to explore the design space for combining research papers and talk videos into a cohesive reading experience by investigating the perspectives of both paper authors and readers. Talk videos differ from papers in format and content, and this can serve to address various challenges in research consumption. Specifically, while reading papers allows scholars to dig deep into all the details of a prior work, the process can be cognitively demanding as scholars must disentangle meaning from complex written explanations [8]. This process is further complicated as researchers may lack the background knowledge required to understand the explanations or due to variability in the quality of the writing [61; 62].
Even further, to keep pace with the rapidly expanding literature, researchers are increasingly pressured to skim papers, and they attempt to gain a high-level understanding from scattered fragments of writing [31; 54]. In contrast, a talk video may present visuals that can help illustrate complex explanations [17; 23; 68] and, due to their wider audience, focus less on specialized concepts or background knowledge while using simpler language [20; 69]. Furthermore, as talk videos typically do not contain all the details, they can present scholars with a concise and easy-to-understand overview of the corresponding papers [9; 49]. Despite the various ways in which talk videos can complement paper reading, these two formats remain largely disconnected. Readers have to choose between using either the talk video or the paper as their primary way to consume prior work, and cognitive costs to switch between the two formats could be prohibitively high. For example, if a scholar watching a talk video wants to find a specific implementation detail for a machine learning model that was omitted in the video, they must search through pages in the paper to find the corresponding passage. Similarly, when reading a paper about an interactive user interface, it can also be costly for a scholar to scrub through its talk video to search for a screencast of the system to see it in action. This disconnect prohibits readers from fluidly transitioning between papers and talk videos because context switching can be disruptive [11] and incurs significant cognitive load [6]. As a result, while the research community has recently made significant efforts in creating presentation talk videos and making them widely available even after conferences, researchers are unable to fully capitalize on their benefits. In a formative study with researchers (n=14), we investigated opportunities and challenges in consuming papers and videos together, and the design space for combining these two formats. Instead of augmenting one format with the other, our findings revealed that researchers alternated their focus between the paper and video to control the level of detail in which they consumed the paper. Additionally, researchers observed how linking video segments to relevant paper passages (e.g., paragraphs, figures) could facilitate navigation, as the video could act as a visual map for the paper. Finally, researchers were against replacing or overlaying content in one format with content from the other as this could obscure information and the effort they dedicated in authoring both formats. Based on these findings, we designed a novel paper reading experience, _Papeos_ (**paper** and **video**), that integrates segments of the talk videos as localized _video notes_ alongside corresponding sections of the paper. As a user scrolls through a Papeo, they can see color-coded _highlight bars_ in the paper that hint at meaningful passages that have been covered in the video and, next to the paper, correspondingly color-coded video notes with thumbnails of the relevant video segments. When the user struggles to understand a portion of the paper, they can click on the highlight bar or video note to play the segment and get a summarized, alternative explanation. Instead of scrolling through the paper, the user can also choose to focus on the video by navigating between video notes or "autolaying" through them. 
To avoid disturbing the user's watching, the system fixes the video note's position in the viewport and scrolls the paper to the relevant passage. To grant authors control on how Papeos are created for their papers and facilitate the creation process, we also present an authoring interface where authors can link their papers and talk videos with the help of AI suggestions. To evaluate Papeos, we conducted a within-subjects study (n=16) where participants read and wrote a summary for the systems section of three papers using only the paper, the paper and talk video, or a Papeo. Our study revealed that Papeos could help researchers understand papers and decrease their mental demand during reading. Additionally, through Papeos, each format became a guide for the other which facilitated participants' navigation in the two formats and encouraged them to interact with both formats more. As a consequence of the reduced cognitive demand and improved navigation support, participants composed summaries that more comprehensively covered details from the papers. In addition, we conducted a field deployment of Papeos during an HCI conference where we had over 250 unique visitors to our reading interface. This paper presents the following contributions: 1. A formative study using a design probe (Fig. 2) with 14 participants that revealed user needs and potential benefits of combining talk videos and research papers for readers. 2. Co-design sessions with 14 paper authors that focused on understanding how authors would like to combine their papers and talk videos, to explore the design space for combining scholarly papers with talk videos. 3. Papeos: A novel reading experience that augments research papers with margin notes that present segments from a talk video alongside relevant passages in the paper (Fig. 3). 4. A mixed-initiative authoring interface that facilitates the creation of Papeos through AI-based suggestions, to explore the costs and feasibility of creating Papeos (Fig. 6). 5. A within-subjects study with 16 participants that revealed how integrating talk videos into papers enables readers to leverage both formats for improved understanding and navigation. Related Work The goal of this work is to explore the design space for augmenting scientific paper reading with corresponding presentation talk videos. To better understand this space, we first review literature around these formats: tools that support general reading, scholarly reading, and knowledge consumption using videos. Finally, we also review prior techniques in other domains for linking between text documents and videos. ### Augmented Reading Interfaces The advent of computers has enabled the creation of reading environments that transcend the limitations of static print media and, instead, allow knowledge workers to interact with and explore text dynamically (Zhu et al., 2017; Zhu et al., 2017). Hypertext (Han et al., 2017) interconnected scattered text and documents, and this concept has been widely adopted in many reading tools today (e.g., Amazon Kindle's in-situ definitions (Brock et al., 2015), and Wikipedia's page previews (Zhu et al., 2018)). Expanding on hypertext, fluid documents (Han et al., 2017) and fluid links (Zhu et al., 2018) restructure documents to incorporate this linked content within the document, and various interfaces provide links between text and other document objects, such as tables (Zhu et al., 2018) or visualizations (Brock et al., 2015). 
To support active reading, various interfaces allow readers to annotate documents with multiple modalities, such as ink or voice (Zhu et al., 2017; Zhu et al., 2018), to manipulate the document's structure (Zhu et al., 2018), or ask questions and find answers during reading (Zhu et al., 2018; Zhu et al., 2018; Zhu et al., 2018). As documents are frequently dense in content, researchers have investigated how to scaffold navigation by providing overviews (Zhu et al., 2018; Zhu et al., 2018), highlighting or fading out content to direct readers' attention (Zhu et al., 2018; Zhu et al., 2018), or guiding readers based on the activity of other readers (Zhu et al., 2018; Zhu et al., 2018). Extending on this rich body of work, we investigate how to augment the dynamism of academic papers by leveraging and integrating existing talk videos. ### Tools for Reading Scientific Papers A variety of tools have been designed to address the challenges in reading papers (Zhu et al., 2018). As a crucial component of reading a paper is to contextualize it within the broader literature, CiteRead (Zhu et al., 2018) augments a paper with commentary from citing papers, CiteSee (Cite, 2018) contextualizes inline citations to a reader's previous reading and publishing activities with visual augmentations, and Threddy (Threddy, 2018) and Synergi (Srinivas et al., 2018) allow users to clip citing sentences and references to explore related themes and papers in the literature. More closely related to our work, there is a line of research that focused on enhancing both efficiency and comprehension during paper reading. Specifically, to help readers traverse the complex language and notation used in scientific papers, Paper Plain (Brock et al., 2015) provides definitions for unfamiliar terms and in-situ summaries of sections, and ScholarPhi (Zhu et al., 2018) surfaces position-sensitive definitions for unique terms and symbols. Also, to facilitate skimming of papers, Scim (Scim, 2018) highlights salient passages of the paper to direct readers' focus, and Spotlights (Zhu et al., 2018) surfaces important objects as temporary overlays to help readers identify them even as they quickly scroll through the paper. Finally, since most scholarly papers are available as PDFs, various approaches have aimed at overcoming the limitations of this format to increase accessibility (Zhu et al., 2018; Zhu et al., 2018) and dynamism (e.g., embedding animations (Zhu et al., 2018) or interactive elements (Zhu et al., 2018)). While prior work have focused on designs that can support specific user needs such as skimming (Scim, 2018) or simplification (Brock et al., 2015), in this work, we explore how incorporating talk videos has the potential to embody multiple user needs when reading a paper. Specifically, a talk video can present an author-curated summary for the paper, highlight significant aspects of the work. Linking video segments back to their corresponding passages in the papers also has the potential of allowing readers to skim the paper based on the passages that the authors selected to include in their talk videos. Furthermore, talk videos include additional commentary, audibly narrate the content which can supplement screen readers, and dynamically illustrate aspects of the work such as animations and screen recordings. ### Video-based Knowledge Consumption Videos are increasingly becoming a predominant channel through which people consume and learn knowledge. 
According to Mayer and Moreno's principles (Mayer and Moreno, 2018), videos can be cognitively beneficial as verbal and visual explanations allow viewers to build two mental representations (Zhu et al., 2018; Zhu et al., 2018) without mental overload as audio and visual channels can be processed simultaneously (Zhu et al., 2018). As support to these principles, various studies have demonstrated that videos can benefit learners in various domains (Zhu et al., 2018; Zhu et al., 2018; Zhu et al., 2018). While effective for consumption of knowledge, videos represent a continuous stream of frames, and it can be inherently difficult to skim through or locate information in videos, which prior work had shown to be a common need for scholars (Scim, 2018). To overcome this limitation and harness the potential of videos, various tools have been designed to facilitate video navigation in learning contexts (Zhu et al., 2018; Zhu et al., 2018; Zhu et al., 2018). In this work, we investigate the benefits of talk videos for consumption of research, and how to combine these with papers to support both video and paper navigation--allowing scholars to fluidly switch between the two formats. ### Bridging Text Documents and Videos To overcome the difficulty in skimming and efficiently navigating videos, prior work has investigated various approaches to bridge videos with relevant text documents in a variety of domains. In education, Video Digests (Zhu et al., 2018) and VideoDoc (Zhu et al., 2018) segment lecture videos into sections so that students can navigate between different parts of a lecture with transcript summaries, and Shin et al. (Shin et al., 2018) further combined transcripts with extracted blackboard notes. Beyond lecture videos, Truong et al. (Truong et al., 2018) transform transcripts into hierarchical tutorials for instructional makeup videos, and Sceneskim (Sceneskim, 2018) facilitates searching and browsing by temporally aligning movies with their captions, scripts and summaries. Further, Codemotion (Zhu et al., 2018) automatically extracts code shown in programming tutorials to allow the user to navigate tutorials based on code-related steps. While existing research above focused on improving video navigation with text extracted from the same videos (e.g., audio transcripts or blackboard notes extracted from the frames), in this work, we explore how to bridge talk videos with research papers, which are separate entities and different media, and investigate how combining them can facilitate navigation for both media and help scholars better comprehend prior research. ## 3. Formative and Co-design Study To explore the design space for combining research papers and talk videos, we conducted a formative study where participants explored the opportunities and challenges in combining the two formats from the perspectives of both readers and authors. ### Participants We invited 14 researchers who had previously published at least one paper and created accompanying talk videos. 10 were doctoral students, 2 were Master's students, and the remaining 2 were a postdoc and an undergraduate student. 10 of the 14 participants identified their discipline as human-computer interaction (HCI) or related sub-fields (e.g., visualizations, AI fairness), 3 as natural language processing (NLP), 2 as machine learning (ML), and 1 as computer vision (CV).1 Footnote 1: Several participants identified with multiple disciplines. 
### Apparatus Consuming scholarly papers and talk videos at the same time is a new experience that may be hard for participants to imagine. In a preliminary version of this formative study, we gave participants (n=4) a paper and talk video pair side-by-side and instructed them to "_understand the content of the paper based on your real-life habits_". Although participants could freely choose how they wished to consume the paper and video, they all watched the whole video first and then delved into the paper. Participants expressed how this was not due to a lack of desire to jump to the paper while watching the video, but due to the prohibitively high cost of cross-referencing between formats. This preliminary study revealed that unaugmented papers and videos were inadequate to explore how readers wanted to leverage both formats together. Thus, we developed a technology probe (Zhu et al., 2017) (Fig. 2) where we could pre-link segments of a talk video to relevant passages in the paper (e.g., paragraphs, figures) and color-code them so that participants could switch between the two formats with lower cost. Before the study, one of the authors manually created the links between the papers and videos for three papers in each of the recruited participants' research fields (e.g., empirical HCI, systems HCI, NLP, CV). To create these links, the author followed criteria that were based on insights from the preliminary study: segment the video on slide transitions, and link segments to paragraphs based on content similarity (e.g., phrases, figures) while following the paper's reading order. ### Study Procedure The study consisted of two consecutive sessions. First, there was a formative session where participants took the perspective of paper readers and used the technology probe (Fig. 2) to read a paper where several passages were pre-linked to relevant segments of the talk video. Then, in a co-design session, participants took the perspective of paper authors and considered designs for combining their own research papers and talk videos. For the formative session, participants chose their preferred paper from the set of pre-linked paper-video pairs and, while thinking aloud, read the paper using the technology probe for 20 minutes. In the probe, linked passages in the paper were highlighted, and participants could click on a linked passage to automatically navigate to the corresponding segment in the video. The video segments were also displayed under the video timeline, and participants could click on a video segment to scroll to the corresponding passage in the paper. After the reading period, participants were asked about the benefits and drawbacks of using the probe and the talk video during paper reading. Figure 2. Technology probe used during the formative studies. On the left, a PDF reader for the paper where passages linked to video segments are highlighted (A). On the right, a video player for the talk video accompanied by an interactive timeline and a bar displaying the location and length of segments linked to the paper (B). Linked passage-segment pairs are color-coded. Then, participants took the perspective of authors and participated in a co-design session where they considered designs for combining their own research paper and talk video. To stimulate the participants and illustrate how to sketch designs, participants were provided with a slide deck that showed three example designs for interfaces that combined papers and videos. 
Participants were asked to think aloud and sketch designs in the slide deck, which was pre-populated with screenshots of the pages and key frames of the participant's paper and talk video that they provided prior to the study. To sketch out their designs, participants could resize and crop the screenshots, draw shapes, and use text boxes to describe the designs. During the session, one or two of the authors helped with the sketching by making edits based on participants' descriptions, and asked questions to encourage participants to elaborate further on their ideas or to consider alternative designs. Aside from one in-person participant, all participants joined remotely through Google Meet.2 This study was approved by our internal review board, and each participant was paid 45 USD for their time. Footnote 2: [https://meet.google.com](https://meet.google.com) ### Findings During the study sessions, we recorded participants' screens and the audio, which were then manually transcribed. Through a thematic analysis, the transcripts were coded and these codes were grouped into themes to identify the main insights from the study. Additionally, a thematic analysis was also conducted on the various designs for the co-design sessions to typify these designs based on their similarities. Based on insights from the reader and author sessions, we distilled design goals for augmenting research papers with talk videos. #### 3.4.1. As Readers In contrast to participants in the preliminary study, participants in this study followed different consumption patterns with the probe: five mainly read the paper and occasionally switched to the video, and nine followed the video while intermittently pausing to dive into the paper. Based on their experiences with the technology probe, participants noted various ways in which talk videos enriched the paper. Specifically, most participants (11/14) mentioned that the video provided summaries that were easier to consume than _"dense parts of the paper"_ (P5). Asides from summarizing, participants (7/14) also mentioned that videos explain details differently and that these alternative explanations were useful when they struggled to understand the paper. Participants also noted the significance of the audio-visual nature of videos. Several participants liked authors' narrations in videos (4/14) as listening could be less demanding or more _"passive"_ than reading (P10), and since they could have the _"author narrate [figures] for [them]"_ (P2). In terms of the visuals, various participants (5/14) described how illustrations, animations, or clips in the talk videos could better illustrate certain aspects of the paper. For example, P14 mentioned how a clip showing a demo of an interface helped them _"get a more clear idea of what the interaction would look like"_. Finally, a majority of participants (11/14) mentioned how watching the videos or skimming the video-based highlights in the paper gave them an overview of the papers and allowed them to _"make note"_ (P1) of details they wanted to dive deeper into--serving as a _"launching pad"_ into the paper (P3). Despite these benefits, however, there were various interaction challenges that limited participants' use of talk videos even with the support of our technology probe (Fig 2). For example, as a paper automatically scrolled to the relevant passage when the video progressed to the next segment, participants mentioned that the probe could disrupt their reading (3/14) or cause them to get lost (6/14). 
Additionally, participants (4/14) mentioned how they could not predict what information would be contained in a video segment before actually watching the segment and, therefore, could not anticipate when a segment would be useful or not. Finally, as video segments were linked to relatively lengthy passages in papers, various participants (8/14) mentioned how it was difficult to locate a detail mentioned in a video segment in the paper, or to distinguish what in the paper passages had been covered or not by the segment. #### 3.4.2. As Authors During the co-design sessions, participants produced a variety of designs for paper and video combinations. As seen in Table 1, several of the participants' designs shared structural similarities, but differed in terms of specific details or features. Participants considered both designs where the video supported paper reading and where the paper enhanced video watching, and some participants envisioned new formats where neither format was the main one. Based on participants' designs and their comments during the sessions, we distilled the main goals that participants considered when designing the combinations. One of the main goals that participants (12/14) mentioned was to enable users to flexibly switch the level of detail at which they consume the content. Specifically, the user can switch from the video to the paper to _"expand to see more details"_ (P1) or switch from the paper to the video to _"skip"_ (P5) sections that are less interesting. Beyond consumption, several participants (7/14) considered combinations that visually represented paper passages with the video to support navigation in the paper. For example, P2's design presented slides from the video as a visual outline that the user can use to navigate the paper. Finally, due to their difficulties in locating details from the video in the paper and vice-versa during the reading session, several participants (5/14) designed interfaces that supported more fine-grained links (e.g., highlighting passages in the paper that were mentioned in the video). Beyond revealing what authors wanted from the combinations, the co-design sessions also revealed constraints to possible designs. While several participants created designs that replaced paper passages with video elements, most participants (7/14) advocated against replacing content. Some participants mentioned that _"videos are rarely a one-to-one representation of a paper"_ (P3) and that replacing could _"delete information"_ (P10), while others noted how one format provided _"supplementary information"_ for the other (P6) and it could be more beneficial to consume them together. Additionally, P2 mentioned how they dedicated _"significant effort"_ in authoring their paper and video, and that they would want users to look at both artifacts. Another constraint was that, despite considering designs where the user mainly interacted with the video, most participants (7/14) considered the video as _"a way to advertise"_ their paper (P4) and that _"ultimately"_ (P11) they wanted to direct the user to their paper. This was reflected through their _"guiding tooltip"_ designs (Table 1). Finally, we asked participants about whether they would be willing to create links between their papers and talk videos to enable the combinations they designed. All participants mentioned that they would create these links as it could increase the visibility of their work and _"help as many people as possible to read and understand [my paper]"_ (P9). 
Although several participants mentioned that they would want the process of linking to be as easy as possible, all participants also mentioned that they would not want the process to be completely automatic. Instead, they would need to be _"involved in the process"_ (P3) to check and edit links made by an automatic pipeline. Interestingly, some participants even expressed how they would be willing to change how they author videos to make this semi-automatic linking process easier and more accurate: _"I might start baking this stuff into the slide deck"_ (P10) and _"It might have a positive influence on [...] how I design the the slides like making them more correlated to the paper"_ (P6). #### 3.4.3. Design Goals Based on the insights from the reader and author sessions, we distilled the following design goals for combining research papers and talk videos: * DG1: Allow readers to both focus on either the paper or video, but also enable them to fluidly switch between the two formats when needed. * DG2: Surface visuals from the video to help readers anticipate its content and to visually outline the paper. * DG3: Present fine-grained links that aid in the association of related details across formats. * DG4: Avoid occluding or replacing the content in a format with content from the other. * DG5: Aid in the creation of links between papers and videos but grant authors control over how they want to present their work. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline **Primary Format** & **Type of Design** & **Feature Differences** \\ \hline \multirow{8}{*}{**Paper**} & **Linked video popups:** display popup with video segment when user interacts with a linked paper passages. & Link popups on text (P4, P8, P11, P13), figures or tables (P2, P6, P8), or definitions and sections headers (P14). \\ & & \\ \cline{2-3} & **Univolved in the process** & Display popup based on user’s selected text (P3). \\ & & Display thumbnail instead of video segment (P6, P7). \\ \cline{2-3} & **Overlaid videos**: overlaying video segments on relevant passages of the paper. & Overlay on videos on figures (P4, P8, P13). \\ \cline{2-3} & & Overlay visual guides from video on tables or figures (P2, P6, P8), or mathematical equations (P8). \\ \cline{2-3} & **Video-based outline:** an outline or table of contents for the paper based on the links between video segments and paper passages. & Panel that displays a list of the slides extracted from the video as a navigational map (P2). \\ \hline \multirow{8}{*}{**Video**} & **Position-sensitive details**: hovering over elements in a video frame to reveal a tooltip with related details from the paper. & Hovering over keywords to see definitions (P7, P10), summarized tables to reveal the detailed tables from the paper (P10), or elements of a system to reveal related explanations from the paper (P13). \\ \cline{2-3} & **Guiding tooltips**: tooltips that appear as the video plays to encourage the viewer to check related sections of the paper. & Tooltip is accessible through an icon that is overlaid on the video (P10, P14) or text is shown next to the video (P1). \\ \cline{2-3} & **Side commentary**: panel next to the video that displays relevant passages from the paper as the video plays. 
& Commentary can include the full passages from the passage that is not included in the video (P5), or a summary of the passages (P5, P13) \\ \hline \multirow{8}{*}{**Combined**} & **Interweaved paper and video**: new format that interweaves elements from the paper with those from the video. & Embedding images, animated GIFs, and clips from the video inbetween passages of text (P6), inbetween summarized passages of text (P3), or replace text with the video elements (P2, P9). \\ \cline{2-3} & **Adaptive side-by-side**: paper and video displayed side-by-side but adaptively changes the size of each format. & User can manually change the amount of space taken by each format or the interfaces automatically changes them by inferring the user’s needs (P4). \\ \hline \hline \end{tabular} \end{table} Table 1. Overview of the co-design session that captured how authors envisioned combining their papers and talk videos. The table describes the types of designs that authors produced and the features that authors proposed for the different design types. Additionally, the design types were categorized based on their primary consumption formats. ## 4. Papeos Based on the design goals, we developed _Papeos_ (Figure 3), a novel reading experience that augments research papers with localized clips from the corresponding talk videos. In this section, we first illustrate the reading interface for Papeos. Then, we describe a mixed-initiative interface that allows paper authors to create Papeos for their papers and talk videos with lowered effort. ### Papeo Reading Interface The Papeo reader is designed to support a variety of use cases, such as leveraging linked video segments to guide users when text skimming (SS4.1.1), support users in fluidly switching between reading text passages and watching video segments to adjust the level of details they wish to consume (SS4.1.2), and allow users to continuously watch a talk video while having access to additional details in corresponding text passages (SS4.1.3). For this, the Papeo reader presents video segments as _video notes_ placed on the right side of paper pages and localized approximately next to their linked passages (Fig. 3). Since each page of a paper could contain multiple linked passages and video notes, Papeo renders color-coded _highlight bars_ next to passages and video notes alongside a paper for linked paper passages and video segments (DG4). #### 4.1.1. Video-Supported Skimming Researchers often skim read to get a high level understanding of research papers (Krishnan et al., 2017). By scrolling through the Papeo reader, the user can skim the paper by looking through the highlight bars and accompanying video notes. The highlight bars (Fig. 3a) reveal the portions of the paper that the author considered important when creating their video. The video notes (Fig. 3b) reveal the content of the video segment through the thumbnail (i.e., the first frame of the video segment) and the first line from the transcript which can, respectively, visually represent and summarize these passages of importance. By skimming based on these features, for example, a reader could prioritize reading high-level descriptions of a user interface and a few important quotes from the user study that were included in the conference presentation, instead of reading all implementation details and quotes that were not included. 
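To make the structure behind video notes and highlight bars concrete, the following is a minimal sketch of how the links between a paper and its talk video could be represented. It is an illustration rather than the actual Papeo schema; all class and field names (e.g., `VideoSegment`, `PapeoLink`) are placeholders chosen for this example.

```python
# Illustrative data model (not the actual Papeo schema): each link pairs one
# video segment with the paper passages it covers, plus optional fine-grained
# "synchronized highlight" phrase pairs.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VideoSegment:
    start: float                 # segment start time (seconds)
    end: float                   # segment end time (seconds)
    transcript: List[str]        # transcript lines spoken within the segment
    thumbnail_url: str           # first frame of the segment, shown in the video note

@dataclass
class PaperPassage:
    page: int                                  # page index in the PDF
    bbox: Tuple[float, float, float, float]    # bounding box of the passage on the page
    text: str                                  # extracted text of the paragraph or figure caption

@dataclass
class PapeoLink:
    segment: VideoSegment
    passages: List[PaperPassage]               # coarse-grained links (highlight bars)
    sync_highlights: List[Tuple[str, str]] = field(default_factory=list)
    # fine-grained (paper phrase, transcript phrase) pairs shown as synchronized highlights

# A Papeo is then a paper, a talk video, and an ordered list of PapeoLink objects;
# the reader renders one video note and one color-coded highlight bar per link.
```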
By remembering the thumbnails and their relevant locations in the paper, the user can also develop a "spatial mental map" (Steintein et al., 2017) of the paper to help them return to desired content in the paper (DG2). If the thumbnail or transcript line surfaces insufficient information about the video segment, the user can also hover and scrub over the highlight bar to peek into different moments in the segment (Fig. 4a). #### 4.1.2. Fluid Switching between Paper and Video As the user is reading through the paper, they may struggle to understand certain passages or may be less interested in particular sections. For example, an expert user might need to learn the implementation details of a machine learning paper but was already familiar with the background and related work. In these cases, if a video note is linked, the user can watch an alternative and/or summarized explanation of the passage by clicking on the highlight bar or video note itself (DG1). Clicking on the bar or note "activates" the video note (Fig. 4b): the thumbnail switches into a video player that starts playing the segment, the full transcript for the segment is shown, and the note increases in size. If it is only approximately aligned with the highlight bar, the note also moves to be exactly aligned--pushing away other notes if they would overlap. As the video plays, lines of the transcript are highlighted so that the user can discern what has already been spoken. Figure 3. The Papeo reader extends a PDF reader by incorporating highlight bars (A) alongside passages in a research paper that are linked to segments in the corresponding talk video. These video segments are displayed as video notes (B) that are localized next to the linked passages and present a thumbnail, a line from the transcript, and the total duration of the segment. While watching the video note, the user may want to read up on the same information in the paper to acquire more details or to take in a more formalized explanation. To focus back on the reading, the user can pause the video note through the player controls or by clicking anywhere outside the note to "deactivate" it. As users may start reading while the video note plays and forget to deactivate it, each video note only streams one video segment to minimize disruption. Thus, by default, once the video note reaches the end of the current segment, the player stops instead of progressing to the next segment in the video--unlike the preliminary research probe (DG1). Finally, when switching between the two formats mid-segment, the user may struggle to identify a detail in one format in the other due to the wording differences or the amount of text they have to traverse through. For example, if a reader watches a progressive animation explaining the architecture of a machine learning model and becomes curious about a specific hyper-parameter, it can be difficult for them to find the value of the hyper-parameter in the paper. To address this challenge, the Papeo reader provides _synchronized highlights_ (Fig. 4c). Based on how the paper author created the Papeo, certain words or phrases in the video transcript and paper are bold and underlined. When the user hovers over these words or phrases, they are highlighted and the related words or phrases in the other format are also highlighted to help the user discern and match details across the formats (DG3). #### 4.1.3. 
Video-Centric Consumption Besides skimming the text of the paper and switching between text and video segments, Papeos also support users if they wish to watch multiple segments or even the entire video continuously. While each video note only streams one segment from the video, the Papeo reader also allows the user to focus on and watch the video notes in order (DG1). When a video note ends, the user is provided with the option to re-watch the video segment or to jump to the next. To watch the whole video with no interruptions, the user can activate the "autoplay" setting to automatically navigate and watch through all video segments. Whenever the user navigates between video notes, the paper scrolls automatically to the location of the next video note to allow the user to also check and read the linked paper passages (DG1). Figure 4. Illustration of features supported by the Papeo reader: (A) hovering and scrubbing over highlight bars allows users to quickly scrub through the linked video segments; (B) activated video notes present users with player controls, the full transcript for the segment, and a segmented timeline for the whole video that presents the paper section where a note is located when the user hovers a segment; and (C) synchronized highlights are shown as blue text in the paper and bold text in the video transcript, and, when the user hovers over them, they become highlighted in sync. Figure 5. During video note-centric scrolling, the user can navigate to the video note for the next video segment, which takes over the viewport position of the current video note. With the video note fixed in position, the paper scrolls to the passages linked to the next video segment. This allows the user to continuously watch video segments without interruption while always having access to the linked passages next to the current video playback. To minimize disruption during autoplay, the Papeo reader employs _video note-centric scrolling_ (Fig. 5). In this type of scrolling, the different video notes stay fixed in the same position while the paper scrolls to the corresponding linked passages as the videos play. Thus, while the user is technically navigating between video notes and scrolling through the paper, they can continue to watch the video by fixing their gaze on the same part of their screen (DG1). Above activated video notes, the Papeo reader also provides a timeline (top in Fig. 4b) to allow the user to navigate between video notes and, consequently, navigate the paper based on these (DG2). The timeline is fragmented so that each block represents a video note, and the user can navigate to these notes by clicking on the blocks--navigation occurs through _note-centric scrolling_. To help the user track where they are in the video and what they have already watched, the block for the current video note is color-coded and blocks for notes that have been watched are opaque. Before navigating to a note, the user can hover over a block to see the title of the section or sub-section where the note is located ("_4.2 Pipeline_" in Fig. 4b)--allowing them to check where they will navigate to and what type of content may be contained in the video note (DG2). ### Papeo Authoring Interface To create Papeos, we propose an authoring interface (Figure 6) through which paper authors can link their papers and talk videos--granting them control over how these formats are linked (DG5). We developed this interface through an iterative design process.
With early versions of the interface, we observed that authors dedicated significant effort to segment their videos and to search for paper passages that were relevant to these segments. To address this challenge, we adopted a mixed-initiative design for the authoring interface by providing automatic suggestions for segmenting videos and for linking papers and videos. To start authoring, the author first uploads a PDF of their paper and the talk video with transcript. Then, they access the authoring interface that consists of two panels: a video segmenter where the author can select segments of the video, and a paper annotator where they can then choose the passages to link to the segment (Fig. 6). To start linking their paper and video, the author first needs to create a video segment. To do so, they can watch the video, click on the timeline to create an initial segment, and drag the start and end thumbs to select a time range (Fig. 6a). Alternatively, authors can read the transcript and directly select a group of transcript lines (Fig. 6b). To improve efficiency, the interface also automatically groups transcript lines at the sentence-level to act as segment suggestions. When the author clicks on a group, the interface selects a segment that contains all of the lines in the group. When creating a segment from the transcript, the author can select or de-select lines to correct errors in the segment suggestions, or further fine-tune the start and end times by using the thumbs in the timeline since transcript lines do not always align with sentence boundaries (Fig. 6c). Figure 6. The Papeo authoring interface consists of a parsed PDF and a video segmenter. The segmenter timeline (A) displays the segments that have been created so far. The user can create a segment by clicking on the timeline or selecting lines in the transcript (B), and then dragging on the thumbs to fine-tune its length (C). Then, the user can manually click on relevant passages (D) to link them to a video segment, or review and adopt the automatically generated linking suggestions (E,F). After creating a video segment, the author can then link it to relevant passages (e.g., paragraphs, figures) in the paper. Instead of requiring authors to manually select paragraphs or figures, the interface presents these as clickable targets so that authors can select entire paragraphs with single clicks (Fig. 6d). This is made possible by leveraging the pre-trained VIA model to automatically parse the paper PDF and identify paragraph, figure, and table boundaries (Zhu et al., 2018). Since AI models can occasionally make mistakes, the author can also click-and-drag over an area of the paper to manually select a passage to recover from errors. One remaining challenge here is that it can be time consuming to search through the paper for relevant passages. For this, the interface suggests the five most likely passages based on the current video segment (i.e., link suggestions). After a video segment is created, the paper automatically scrolls to the highlighted top-1 suggestion for the author to review (Fig. 6f). The author can further review the top 2-5 suggestions using the suggestion navigation bar (Fig. 6e). Beyond the coarse-grained links between paper passages and video segments, the Papeo reader interface also supports fine-grained links (i.e., synchronized highlights) to help readers identify specific details. 
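The sentence-level grouping that drives these segment suggestions can be sketched as follows. This is a simplified illustration rather than the code used in the authoring interface: consecutive transcript lines are grouped until a line ends a sentence, and a clicked group maps to a segment spanning all of its lines.

```python
# Simplified sketch of sentence-level transcript grouping for segment
# suggestions (not the code used in the authoring interface).
from typing import List, Tuple

Line = Tuple[float, float, str]      # (start_time, end_time, text)

def group_transcript(lines: List[Line]) -> List[List[Line]]:
    groups, current = [], []
    for line in lines:
        current.append(line)
        if line[2].rstrip().endswith((".", "?", "!")):   # sentence-final punctuation
            groups.append(current)
            current = []
    if current:                                          # trailing, unfinished sentence
        groups.append(current)
    return groups

def suggested_segment(group: List[Line]) -> Tuple[float, float]:
    # The suggestion covers all lines in the group; authors can then fine-tune
    # the start/end with the timeline thumbs, since transcript lines do not
    # always align with sentence boundaries.
    return group[0][0], group[-1][1]

lines = [(0.0, 2.1, "We present Papeo,"),
         (2.1, 4.0, "a new reading experience."),
         (4.0, 6.5, "First, we describe the reader interface.")]
print([suggested_segment(g) for g in group_transcript(lines)])   # [(0.0, 4.0), (4.0, 6.5)]
```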
To create these fine-grained links, authors can select a paper passage or video segment that has been linked and click on the "Create Sync Highlight" button at the top of the interface. Then, the author can select words or phrases in the passages and the transcript of the video segment that they wish to link. After selecting the words, the author stores the synchronized highlight by clicking on the "Save Sync Highlight" button, and can proceed to create more synchronized highlights for the linked segment and passages. #### 4.2.1. Automatic Suggestions To make authoring Papeos more efficient, Papeos' authoring interface generates automatic suggestions for video segmentation and for paper-video linking. During development, to evaluate multiple algorithms and AI models for generating suggestions, we collected a small ground-truth dataset by having three of the authors and five recruited researchers link their papers and talk videos (total of 8 pairs) using an initial version of the authoring interface without automatic suggestions. For techniques with no tunable hyperparameters, we evaluated the technique on the whole ground-truth dataset. For techniques with hyperparameters, we performed 4-fold cross-validation where 25% of the data was used to identify the best hyperparameter values and the remaining 75% was used to evaluate the technique with the best identified hyperparameter values. For each technique, we specify the hyperparamters, if any. **Segment Suggestions**: We tested three different techniques for automatically segmenting videos (i.e., shot detection): (1) calculating pixel changes in the HSV (i.e., Hue, Saturation, and Value) colorspace between adjacent frames (Hue et al., 2018), (2) template matching which calculates the spatial similarity between a frame and the previous key frame (Zhu et al., 2018), and (3) segmenting the video at every transcript line containing a punctuation--as authors are likely to transition between scenes at the end of sentences. Both the HSV change and template matching techniques had two hyperparameters: minimum length of a segment, and threshold (i.e., HSV change or spatial similarity value that needs to be exceeded to predict a segment boundary). For evaluation, we calculated the number of predicted segment boundaries that were within 3 seconds of ground-truth boundaries to calculate precision, recall and F1-score. Based on the interaction we designed, we expected that it would be easier (i.e., fewer clicks) for authors to merge segment suggestions compared to splitting them, so we used the F3-score, which gives more weight to favor over-segmenting (i.e., more segments) and decided to adopt the punctuation-based auto-segmenter (Table 2). **Linking Suggestions**: Currently, the authoring interface provides _text_ passage linking suggestions that appear immediately _after_ a video segment was created. We initially aimed to also automatically identify video frames similar to figures and tables in the papers since authors often reuse figures and tables in their presentations. However, it became clear in early design iterations that mapping figures between papers and videos was a relatively simple task for test users. In contrast, they spent much greater effort when trying to find the right passages when mapping to text. 
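For concreteness, the segment-boundary evaluation used above for Table 2 can be sketched as follows; the greedy matching strategy shown here is an assumption, since the text only states that predicted boundaries within 3 seconds of a ground-truth boundary count as correct.

```python
# Sketch of the boundary evaluation: a predicted boundary counts as correct
# when it lies within 3 s of an (unclaimed) ground-truth boundary, and
# precision/recall are combined into an F-beta score (beta = 3 weights recall,
# i.e., favors over-segmentation). The greedy matching below is an assumption.
from typing import List

def match_boundaries(pred: List[float], truth: List[float], tol: float = 3.0):
    matched, hits = set(), 0
    for p in pred:
        candidates = [(abs(p - t), i) for i, t in enumerate(truth)
                      if i not in matched and abs(p - t) <= tol]
        if candidates:
            matched.add(min(candidates)[1])
            hits += 1
    precision = hits / len(pred) if pred else 0.0
    recall = hits / len(truth) if truth else 0.0
    return precision, recall

def f_beta(precision: float, recall: float, beta: float = 3.0) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = match_boundaries(pred=[10.2, 31.0, 58.5], truth=[9.0, 30.0, 45.0, 60.0])
print(round(p, 3), round(r, 3), round(f_beta(p, r), 3))   # 1.0 0.75 0.769
```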
Therefore, we focused on providing text passage linking suggestions, and used the ground-truth video segments from our dataset to test the following measures for matching text from the segments' transcripts to text in paper passages: (1) cosine similarity based on two text embedding models (i.e., Specter (Speer et al., 2018) and MiniLM (Zhu et al., 2018)), (2) ROUGE-L score (Zhu et al., 2018), and (3) a baseline that chooses the first paragraph of a random section in the paper. We designed the baseline based on the assumption that talk videos provide an overview of the paper and, as a result, might state information included in the overviews of each section (i.e., the first paragraphs). As seen from the results (Table 3), ROUGE-L had the highest top-1 accuracy while MiniLM embeddings had the highest top-5 accuracy. We then combined these two measures by simply adding the two scores which achieved higher top-1 and top-5 accuracies. Finally, we noticed how videos typically present information content in the same order as the paper--i.e., after linking a segment and passage, the next video segment would likely link to passages that appear later in the paper. Based on this, we developed an additional technique that adapts the Viterbi algorithm (Viterbi, 2017). Using this technique, we can consider, simultaneously, the semantic similarity between paper text and video transcripts, and how information might be presented in similar ordering in the two formats (e.g., background, methods, and then evaluation). More specifically, the potential links between segments and passages are considered to be states, and an observation is whether the segment and passage are actually linked. In this context, we first normalized the combined measure of MiniLM + ROUGE to use as the emission probability (i.e., probability of linking a segment to each passage). Then, we modeled the transition probability as a hyperparameter of the likelihood of \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Algorithm** & **Precision** & **Recall** & **F1** & **F2** & **F3** \\ \hline \hline \multicolumn{1}{l}{Punctuation} & 0.405 & **0.906** & 0.541 & 0.701 & **0.786** \\ HSV Change & 0.499 & 0.805 & 0.605 & **0.706** & 0.751 \\ Template Match & **0.577** & 0.758 & **0.635** & 0.698 & 0.725 \\ \hline \hline \end{tabular} \end{table} Table 2. Recall, precision, and F1-, F2- and F3-scores for the algorithms tested for video segmentation. Highest values for each metric are shown in bold, and the technique used in the authoring interface is shown in blue. linking a video segment to a passage in-order and the remaining probability becomes the likelihood of linking in reverse order.3 This technique improved on both the top-1 and top-5 accuracies and was used to provide suggestions in the authoring interface. Footnote 3: Based on the +fold cross-validation and grid-search, the transition probability was set to 0.7, 0.5, 0.6, and 0.6 in each fold, respectively. #### 4.2.2. Preliminary User Evaluation To test the feasibility and costs of authors creating Papeos for their readers, we conducted a preliminary evaluation with 6 researchers (3 systems HCI, 3 empirical HCI, and 1 computer vision) to author a Papeo using their own research paper and talk videos. In general, participants mentioned that it was easy to use the authoring interface to link their papers and videos, and that they were enthusiastic to author Papeos for future papers through the interface. 
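The Viterbi-style linking described above can be illustrated with the following simplified sketch. The emission scores stand in for the row-normalized MiniLM + ROUGE-L similarities, and the exact form of the transition model is an assumption: the text only states that a single hyperparameter gives the probability of linking in order, with the remainder assigned to out-of-order links.

```python
# Simplified illustration (not the exact implementation) of Viterbi-style
# linking of video segments to paper passages. sim[s, p] plays the role of the
# normalized MiniLM + ROUGE-L similarity between segment s and passage p.
import numpy as np

def viterbi_link(sim: np.ndarray, p_order: float = 0.7) -> list:
    n_seg, n_pas = sim.shape
    log_em = np.log(sim + 1e-12)
    dp = np.zeros((n_seg, n_pas))
    back = np.zeros((n_seg, n_pas), dtype=int)
    dp[0] = log_em[0]
    for s in range(1, n_seg):
        for p in range(n_pas):
            # transitions reward linking to a passage at or after the previous one
            trans = np.where(np.arange(n_pas) <= p, np.log(p_order), np.log(1.0 - p_order))
            scores = dp[s - 1] + trans
            back[s, p] = int(np.argmax(scores))
            dp[s, p] = scores[back[s, p]] + log_em[s, p]
    path = [int(np.argmax(dp[-1]))]          # best passage for the last segment
    for s in range(n_seg - 1, 0, -1):        # backtrack through earlier segments
        path.append(int(back[s, path[-1]]))
    return path[::-1]

sim = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.5, 0.3],
                [0.1, 0.3, 0.6]])
print(viterbi_link(sim))   # [0, 1, 2]: links progress through the paper in order
```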
This preliminary evaluation demonstrated that participants spent an average of 25 minutes and 22 seconds (SD=5:31, max=30:17, min=15:19) to fully link their paper and video4. Additionally, we measured how frequently the authors used at least one of the top-5 suggestions when linking a segment to passages, and saw that suggestions were used for 71.3% (SD=11.6%, max=82.1%, min=57.1%) of all linked segments. In sum, we showed that authors can use our current authoring interface to create Papeos for their own papers with reasonable effort, and leave further automation and evaluation for future work. ### Implementation Details We implemented the reading and authoring interfaces for Papeos in around 6,500 lines of TypeScript, ReactJS, and CSS. For the PDF reader, we adapted our own open-source PDF reader library5 and, for the video player, we used the ReactPlayer package.6 The backend and AI-based suggestions were implemented using around 1,600 lines of Python code. We used a Flask server, the HuggingFace Transformers library7 for the SPECTER (Fan et al., 2017) and MiniLM (Wang et al., 2018) models, and the PySceneDetect8 and OpenCV9 packages for shot detection based on the HSV colorspace and template matching, respectively. Footnote 5: [https://github.com/allenai/pdf-component-library](https://github.com/allenai/pdf-component-library) Footnote 6: [https://github.com/cookpete/react-player](https://github.com/cookpete/react-player) Footnote 7: [https://huggingface.co/docs/transformers/index](https://huggingface.co/docs/transformers/index) Footnote 8: [https://www.scenedetect.com/](https://www.scenedetect.com/) ## 5. User Study Through our formative study, we observed that talk videos and papers provided different benefits to users. Specifically, we found evidence that talk videos have the potential of complementing paper reading so that the reader can quickly get an overview but also selectively dive deeper into details. However, the interaction cost of fluidly consuming the two formats together can be prohibitively high, which led to a set of design goals that drove the development of Papeo. Thus, we conducted a within-subjects study to investigate whether Papeos can help readers to both acquire a comprehensive understanding of the paper and efficiently identify relevant details. We compared three conditions: Papeos with linked papers and videos, separated papers and talk videos, and papers only. With each condition, participants were asked to read the systems section of an assigned paper and to write a summary for the section that was _comprehensive_ and _detailed_. Through this task, we investigated the following research questions: * RQ1. Can Papeos reduce the cognitive load involved in reading and understanding research papers? * RQ2. How do Papeos affect researchers' navigation of research papers and talk videos? * RQ3. Can Papeos help researchers to both comprehensively cover significant aspects of papers and read these in detail? ### Study Design #### 5.1.1. Participants We recruited 16 early-stage researchers in HCI for the study through the authors' social media (Twitter) and snowball sampling. 12 of the participants were first to third year doctoral students, and 4 were Master's students. Our study focused on early-stage researchers as they may receive the greatest benefit from augmenting paper reading with talk videos--e.g., talk videos can simplify and visually represent complex explanations and highlight important aspects of a paper.
All participants reported reading research papers at least once a week to several times a day. The study lasted a total of 90 minutes, and participants were compensated 45 USD for their time. The study was approved by our internal review broad. #### 5.1.2. Conditions During the study, participants read and wrote summaries for three different papers. For each paper, they used one of the following conditions: Papeo, Paper+Video, and only Paper. The ordering of the conditions was counterbalanced to mitigate the influence of ordering effects. In the Papeo condition, the participants used the Papeo reader. In the Paper+Video condition, participants used a basic PDF reader and a basic video player in separate tabs or windows, and, in the Paper condition, they only used the PDF reader. The basic PDF reader and video player were developed using the same base libraries and packages as the Papeo reader, and provided all basic functionalities available in other similar readers and players (e.g., zoom in, zoom out, playback speed controls). #### 5.1.3. Reading Materials All of the participants read the same three papers (Han et al., 2017; Wang et al., 2018; Wang et al., 2018) in the same order. We chose the papers from the initial dataset of linked papers and video used to evaluate the automatic suggestions (SS4.2.2). Specifically, we chose HCI papers that presented systems that incorporated AI or algorithmic pipelines, and whose "System" sections were of relatively similar length. We focused on systems papers as they present interfaces that may be \begin{table} \begin{tabular}{l c c} \hline \hline **Algorithm** & **Top-1** & **Top-5** \\ \hline Random first paragraph of a section & 0.029 & 0.080 \\ SPECTER Embeddings & 0.399 & 0.623 \\ MiniLM Embeddings & 0.464 & 0.768 \\ ROUGE-L Score & 0.493 & 0.739 \\ Combined (MiniLM + ROUGE-L) & 0.572 & 0.797 \\ Viterbi with Combined & 0.626 & **0.863** \\ \hline \hline \end{tabular} \end{table} Table 3. Top-1 and top-5 accuracy for the algorithms tested for linking paper passages and video segments. Highest values for each metric are shown in bold, and the technique used in the authoring interface is shown in blue. easier to understand with videos. Additionally, as our goal was to evaluate whether Papeos can help readers identify details, we narrowed down to systems that incorporated pipelines as they may include a substantial amount of design and implementation details. To match these criteria, we chose two papers written by authors of this paper. In Appendix A, we provide a quantitative analysis of these Papeos to illustrate how they did not differ significantly from those authored by other researchers. #### 5.1.4. Procedure The study was conducted through a popular video conferencing software. After a brief introduction to the overall study, participants performed the task for each paper in order. For each paper, participants were first provided with a short tutorial to the interface(s) that they would be using and, using a example paper and video, were allowed to use and test the interfaces for 5 minutes. Then, participants proceeded to the assigned paper and were instructed to first fully read the paper's abstract. After they read the abstract, participants were given 15 minutes to read the systems section of the paper and simultaneously write a summary that maximized the following criteria: * _Comprehensive_: how well the summary provides an overview of the entire section. 
* _Detailed_: how many specific details on the interactions and underlying models are included in the summary. * _Coherent_: how well the summary flows or, in other words, how well the sentences connect logically. (This criteria was included to prevent summaries that simply listed details.) To focus on capturing what they learned during the sessions, participants were informed that they could write a maximum of 14 sentences, were not allowed to copy-paste, and that the quality of their writing (e.g., spelling, grammar) would not be evaluated. Once the given time passed, participants were asked to complete a survey about the task and, then, proceeded to the next task. After all the tasks, we conducted a semi-structured interview about participants overall experience. #### 5.1.5. Measures To evaluate the summaries, we developed a rubric for each paper where we listed all of the details contained in the system section of the paper, and we grouped these details according to the aspect of the system that they described (e.g., feature, pipeline component). Then, for each summary, we annotated whether the summary presents each of these details and rated its coherency on a 7-point Likert scale. To measure detail, we counted the number of details included in the summary, and, to measure comprehensiveness, we calculated the proportion of system aspects that were covered by the included details. Two of the authors who did not observe the studies performed the annotations while being blind to the conditions that generated the summaries. To verify reliability, the two authors first independently annotated the summaries for one paper, compared annotations and discussed to reach a consensus on the annotation process, and then independently annotated the paper again. This resulted on a Cohen's kappa of 0.712 for annotating the details and Krippendorff's alpha of 0.744 for coherency ratings. As the agreement was substantial, each of the authors was assigned with one of the remaining papers, and they independently annotated the summaries for that paper. Additionally, we collected participants ratings, on a 7-point Likert scale, to the following five questions from the survey: "_I found it easy to write the summary", "I found it easy to orient myself (i.e., know where information is) in the paper/video"_, and "_I found it easy to navigate to different parts of the paper/video"_. The survey also contained five questions from the NASA-TLX questionnaire (McCarthy et al., 2016) to measure perceived workload--excluding the question on physical demand. Finally, we analyzed interaction logs to measure how frequently participants (1) switched between the formats, (2) scrolled in the paper, and (3) scrubbed in the video. For switches, we counted every instance where the user interacted with one format after interacting with the other format, for scrolling and scrubbing, all consecutive actions within one second and in the same direction were counted as one action. ### Results Our results revealed that Papeos helped reduce participants' mental load during reading, facilitated and promoted navigation of both the paper and video, and led to more comprehensive summaries. For the statistic analysis of each measure, we first conducted a Shapiro-Wilk test to determine if the data was parametric or non-parametric. 
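To make the test-selection procedure (described here and completed in the next paragraph) concrete, the sketch below shows one way it could be coded; the use of scipy and statsmodels is an assumption made for illustration, as the paper does not name its statistics software.

```python
# Illustrative sketch of the test selection: Shapiro-Wilk decides between a
# repeated-measures ANOVA and a Friedman test (three conditions), or between a
# paired t-test and a Wilcoxon signed-rank test (two conditions). The scipy /
# statsmodels calls are assumptions made for this example.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def compare_three_conditions(df: pd.DataFrame, alpha: float = 0.05):
    """df: long format with columns pid, condition, score (one row per participant and condition)."""
    groups = [g.sort_values("pid")["score"].to_numpy() for _, g in df.groupby("condition")]
    if all(stats.shapiro(g).pvalue > alpha for g in groups):          # parametric
        return AnovaRM(df, depvar="score", subject="pid", within=["condition"]).fit()
    return stats.friedmanchisquare(*groups)                           # non-parametric

def compare_two_conditions(x: np.ndarray, y: np.ndarray, alpha: float = 0.05):
    if stats.shapiro(x - y).pvalue > alpha:                           # parametric
        return stats.ttest_rel(x, y)
    return stats.wilcoxon(x, y)                                       # non-parametric

# Toy data: 16 participants, one 7-point rating per condition
rng = np.random.default_rng(1)
df = pd.DataFrame({"pid": np.repeat(np.arange(16), 3),
                   "condition": ["Paper", "Paper+Video", "Papeo"] * 16,
                   "score": rng.integers(1, 8, size=48)})
print(compare_three_conditions(df))
```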
Then, when comparing between all three conditions, we used a one-way, repeated measures ANOVA when parametric and a Friedman test when non-parametric When comparing between the Paper+Video and Papeo conditions, we used a paired t-test when parametric and Wilcoxon signed-rank test when non-parametric. #### 5.2.1. **Enhance Understanding and Decrease Mental Load** As seen in Figure 7, participants perceived the reading and summarizing task to be easiest with Papeos. The ANOVA analysis showed a significant effect of the condition on participants' perceived ease (Q=6.982, p=0.030) and a gradual increase between conditions, with the task perceived to be easiest in the Papeo condition. This indicates that talk videos could facilitate the task for participants, but the support was not perceived as significant until they were integrated into the reading experience in Papeos. This is also reflected Figure 7. Perceived ease and mental demand were significantly affected by the condition used by participants. With Papeos, task ease was perceived to be highest and mental demand the lowest. by responses to the NASA-TLX questionnaire as there was significant effect of the conditions on mental demand (Q=12.182, p=0.002) and demand was perceived to be lowest when participants used Papeos. Furthermore, although these results were not significant, perceived temporal demand, effort and frustration were lowest and perceived performance was highest with Papeos (Table 4). According to participants' comments, these results could be attributed to the various ways (i.e., summaries, modalities, alternative explanations) in which talk videos supported understanding and how Papeos made these benefits available on demand. For example, P14 mentioned how Papeos summarized dense technical details but granted access to these details if needed: _"The video is high-level summary. It was easier to understand and, if I need to understand technical details, I can look the highlighted section."_ Additionally, P12 mentioned how Papeos allowed them to combine and consume multiple modalities simultaneously: _"Absolutely [preferenced Papeos] because I was visualizing and hearing the voice and reading the text. It was like three senses were active."_ Finally, beyond helping them understand, P8 described how Papeos allowed them to check their understanding by listening to alternative explanations: _"English is not my first language so sometimes I will have a concern whether I understand the author's intention correctly. But, with the video, usually they will discuss their research in more informal way."_ #### 5.2.2. **One Format as a Guide for the Other** As Papeos linked papers and videos, participants were able to use one format to guide their exploration of the other (Fig. 8). Specifically, we observed that the condition had a significant effect on participants' perceived navigation ease within the paper (Q=6.704, p=0.035), where participants perceived it to be easiest with Papeos and similar in the Paper and Paper+Video conditions. According to participants, the links between the paper and video in Papeos allowed them to navigate at a more fine-grained level than it was possible through the typical features of a paper. P11 mentioned, _"It breaks down the structure of the paper even more than the subsection headings. 
It also allows me to easily look for further details in the paper."_ Additionally, P16 described how they were able to _"move through the paper seamlessly"_ by navigating according to the video notes through the autoplay feature. In the opposite direction, participants perceived that it was significantly easier to orient themselves within the video in the Papeo condition when compared to the Paper+Video condition (W=27.000, p=0.034). This signifies that it was easier for participants to know and remember where specific information was found within the talk video when using Papeos. P2 described that, with Papeos, it was _"clear which part of video [was] linked to"_ to a specific passage of the paper, making it easier for them to find information they needed from the video. Through the localized video notes, participants could immediately access video segments that they needed when they needed them--without searching for them through the video. Thus, in Papeos, the video supported navigation in the paper, and the paper supported orientation in the video. #### 5.2.3. **Explore Casily, Engage More** Our analysis of the interaction logs (Fig. 8) revealed that participants engaged more with both formats when using Papeos. Participants switched between formats significantly more frequently in the Papeo condition compared to the Paper+Video condition (W=0.000, p=0.000). During the study, we observed that, in the Paper+Video condition, most participants watched the whole video first and then focused only on the paper during the remaining duration of the task. However, in the Papeo condition, participants continuously switched back-and-forth between the formats. Our analysis also revealed that the condition had a significant effect on how much participants scrolled in the \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Condition** & **Mental** & **Temp.** & **Effort** & **Perf.** & **Frus.** \\ \hline Paper & 5.50 & 5.13 & 5.25 & 4.44 & 4.00 \\ & (1.27) & (1.63) & (1.48) & (1.46) & (1.79) \\ Paper + & 5.31 & 5.19 & 5.25 & 4.63 & 3.88 \\ Video & (0.79) & (1.38) & (0.93) & (1.15) & (1.15) \\ Papeo & **4.50** & **4.38** & **4.63** & **4.94** & **3.63** \\ & (1.03) & (1.63) & (1.31) & (1.39) & (1.59) \\ \hline p-value & **0.002** & 0.230 & 0.249 & 0.355 & 0.715 \\ \hline \hline \end{tabular} \end{table} Table 4. Mean and standard deviation (in parentheses) of NASA-TLX questionnaire responses on mental demand, temporal demand, effort, performance, and frustration. (n=48, p-value based on Friedman tests.) Figure 8. Results for perceived ease of navigation and orientation within the paper and the video. The condition had a significant effect on navigation within the paper and orientation within the video, with both perceived to be easiest with Papeos. paper (F=7.065, p=0.003) with participants scrolling to a similar degree in the Papeo and Paper conditions, and scrolling less in the Paper+Video condition. Additionally, participants scrubbed in the video to a similar degree in both the Papeo and Paper+Video conditions (t=-1.810, p=0.090). Considering how participants considered that it was easier to navigate in the paper and orient oneself in the video with Papeos, these results suggest that Papeos encouraged participants to engage with both formats, and to seek for and leverage their content during the task. #### 5.2.4. **More Comprehensive Coverage** The analysis of participants' summaries (Fig. 
10) revealed that there was significant effect of the condition on the comprehensiveness of participants' summaries (F=3.497, p=0.043). Summaries in the Papeo condition were rated to be the most comprehensive while those in the Paper and Paper+Video condition were relatively similar. A plausible reason for this result is that, as Papeos facilitated exploration of the content, participants were able to delve into details throughout the section and were thus able to include these in their summaries. In terms of the other measures, there was no observed effect of the condition on the detail (Q=2.000, p=0.368) or coherency (Q=2.772, p=0.250) of participants' summaries. This demonstrates that, despite participants interacting with both formats more with Papeos and writing more comprehensive summaries, this was not at the expense of other qualities in participants' summaries. ## 6. Field Deployment To further investigate how researchers would engage with Papeos in the wild, we deployed this new format during CSCW 2022. During the duration of the conference, we promoted our interface through social media channels and a daily newsletter sent to conference attendees. Through a portal website, conference attendees could access our reading interface and consume Papeos for specific papers that were being presented during the conference. To prepulate this set of Papeos, we contacted several authors that were presenting in the conference and asked if they would like to use our authoring interface to create Papeos to promote their papers. Through this, we collected a set of 12 Papeos (or around six hours of volunteered authoring). The portal website also provided tutorials for using and creating Papeos and described what data is collected by the interfaces. During the two weeks of the conference, our reading interface was visited by 288 unique users and, on average, each user visited a total of 1.20 different Papeos (min=1, max=5). To analyze the interaction logs, we identified user sessions (i.e., sequence of actions between entering and leaving the interface) and removed anomalous sessions (e.g., user left the interface immediately after entering, or user entered the interface but only interacted with it hours later). We observed that readers were actively engaged with Papeos. The average number of actions per session (e.g., scroll, play video, scrub) was 32.02 (min=2, max=255) and the average session length was 5.74 minutes (min=1.02, max=40.21). In addition to these statistics, various researchers expressed positive comments about Papeos on social media. One researcher expressed how Papeos were _"easily scannable and digestible"_, which reflected our study findings, and another researcher noted how the format can _"humanize"_ papers by letting the reader _"hear the author's voice saying words that are often part of the fabric of the paper."_ Beyond these benefits, a researcher noted how Papeos can _"do more than just replicate the print experience"_ and _"help so all the effort we put into presentation videos doesn't get completely buried after a conference"_. In sum, through a field deployment, researchers found value in Papeo for real-world use cases, and wider adoption may require further lowering the cost of authoring Papeos. Figure 10. Results for the evaluation of participants’ summaries according to comprehensiveness, detail, and coherency. The condition had a significant effect on comprehensiveness of the summaries, with summaries evaluated to be the most comprehensive in the Papeo condition. 
Figure 9. Analysis of the frequency of switching between formats, scrolling in the paper, and scrubbing in the video showed that the condition had a significant effect on switching and scrolling. ## 7. Discussion In this paper, we propose Papeos, a novel reading experience that augments research papers with localized segments from talk videos to support skimming, navigation, and comprehension. While designers and researchers have taken various steps to reach the vision of _dynamic reading_, as discussed by Victor (Victor, 2017; Victor, 2018), the experiences they proposed required a prohibitive amount of effort to realize (e.g., authoring animations or demos (Papeos et al., 2019; Zhang et al., 2020)). In fact, _Distill_, a peer-reviewed journal that published interactive articles, cited authoring effort as a reason for their discontinuation (Victor, 2018). In our work, we instead recognize that researchers have already dedicated significant effort in authoring talk videos that may already possess features that can enhance academic papers, like progressive animations and demo walkthroughs. Our Papeo experience leverages these talk videos to, with relatively minimal additional effort, augment the experience of reading academic papers--taking a step towards the vision of _dynamic reading_. To extend on this vision, we identify various directions for enhancing and expanding on Papeos: automating the creation of Papeos to expand their availability, extending to other types of videos or content (i.e., blog posts), and leveraging paper-video links to generate talk videos from papers. In this section, we elaborate on the potential of Papeos and on these directions for future work. ### Papeos Everywhere Through our user study, we identified that Papeos can support understanding and navigation of papers--lowering various barriers of research consumption. Although they can aid early-stage researchers to access a larger body of knowledge, the coverage of papers that are supported with Papeos is limited by the paper authors' willingness to create Papeos. In our work, we focused on providing authors control over how their Papeos are created due to their concerns regarding automation errors. While this decision respects their preferences as authors, researchers as readers may desire a fully automatic approach as, despite possible errors, this enables them to leverage talk videos in more papers--a conflicting sentiment shared by various participants in the formative study. To increase the coverage of Papeos, future work could further develop the AI-based pipeline used for suggestions in the authoring interface. Specifically, the talk video segmentation algorithm can be enhanced by combining both visual and textual features. Additionally, while our work used general-purpose, state-of-the-art text embedding models, a small-scale dataset of paper-video links could be collected to fine-tune a sentence transformer (Victor, 2018) for this specific setting. However, as an improved pipeline may still present errors, the reading interface should be enhanced to provide users with error-recovery mechanisms--e.g., present multiple passages that could link to a video segment and allow the user to override erroneous links. With these improvements, future work can widen the availability of Papeos and lower the floor for early-stage researchers. 
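As one possible, hypothetical realization of the fine-tuning direction mentioned above, a MiniLM-style bi-encoder could be adapted on collected pairs of video-segment transcripts and their linked paper passages; the model name, loss, and training settings below are illustrative assumptions, not choices made in this work.

```python
# Hypothetical sketch of fine-tuning a bi-encoder on (segment transcript,
# linked passage) pairs with an in-batch contrastive loss. Model, loss, and
# hyperparameters are assumptions for illustration only.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

pairs = [  # in practice: pairs collected through the Papeo authoring interface
    ("So we split the talk at sentence boundaries in the transcript.",
     "The interface segments the video at every transcript line containing punctuation."),
    ("Here the tool suggests which paragraphs this slide is talking about.",
     "The interface suggests the five most likely passages based on the current video segment."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")
train_examples = [InputExample(texts=[transcript, passage]) for transcript, passage in pairs]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)   # other in-batch passages act as negatives

model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=10)
model.save("minilm-paper-video-links")
```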
### Beyond Talks and Videos While our work focused on augmenting papers with talk videos, researchers employ an assortment of varying formats to communicate their research, and these could also be adopted to augment papers. For videos, there are various formats that exist aside from recordings of conference talks: video figures, demo videos, recordings of invited talks or thesis defenses, and, more recently, paper "explainers" on platforms like YouTube.10 These video formats may differ from talk videos and can therefore provide different benefits when employed in Papeos. For example, demo videos can present systems and their features in more detail, invited talks or thesis defenses can contextualize a paper within a extended thread of work, and "explainer" videos can simplify the content further as their target audience can include non-researchers. Instead of depending on existing videos, authors could also create custom video clips to augment their papers with Papeos in different forms. For example, while talk videos are constrained in length and were thus useful to summarize and skim the paper, custom video clips would not be constrained and may allow authors to augment their papers with extensive, additional commentary or comprehensive walkthroughs of interfaces. Footnote 10: Example channels include Two Minute Papers and AI Caffe Break. Aside from the visual aspect of videos, study participants and users from the deployment noted the significance of incorporating audio into the papers: enabling consumption with various modalities and "humanizing" papers. Future iterations of Papeo could support authors in creating additional audio-based notes to weave their voices into their papers. As an additional benefit, these audio notes could supplement screen readers and help increase the accessibility of papers by providing authors with a lightweight mechanism for creating alternative descriptions. Beyond videos, researchers frequently promote their research through other channels such as blog posts and social media (e.g., Twitter threads), and Papeos could integrate content from these formats as text-based notes. As research is increasingly distributed through a greater number of formats, Papeos can serve as a first step to connect these forms into one cohesive experience. ### Generating Videos for Papers As talk videos only cover a subset of the paper, Papeos can surface the important passages of the paper but, due to the same reason, they cannot provide video notes for the other passages. In our user study, several participants expressed how they could struggle to understand a passage, but were disappointed to not find any video notes to assist them. To remedy this limitation, future work could extend on existing work on document-to-video generation (Kumar et al., 2019) to automatically generate video segments from paper passages. Specifically, with passages as input, a pipeline could generate summaries for the video's transcript (Victor, 2018) and slides for the frames (Papeos et al., 2019; Victor, 2018). Then, the pipeline could produce video segments by combining these and incorporating audio with text-to-speech models--or even add an artificial talking head (Kumar et al., 2019). To train and tune the AI models involved in this generative pipeline, future work could use our authoring interface to collect a larger dataset of paper passage and video segment pairs. 
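To sketch the shape of such a pipeline, the skeleton below strings together the steps suggested above (summarize a passage into a narration, render a slide, synthesize speech, assemble a clip). It is hypothetical: only the summarization step uses a real library call, the checkpoint named is merely an example, and the remaining steps are stubbed out since no specific tools are prescribed here.

```python
# Hypothetical skeleton (no such pipeline exists in this work) of generating a
# video segment from a paper passage: summarize -> slide -> speech -> assemble.
# Only the summarization step uses a real library call; the remaining steps are
# placeholders because no specific models or tools are prescribed above.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def passage_to_segment(passage_text: str) -> dict:
    narration = summarizer(passage_text, max_length=60, min_length=15)[0]["summary_text"]
    slide_png = render_slide(narration)          # placeholder: lay out title/key phrases on a frame
    audio_wav = synthesize_speech(narration)     # placeholder: any text-to-speech model
    return {"narration": narration, "slide": slide_png, "audio": audio_wav}

def render_slide(text: str) -> str:
    return "slide.png"        # stub for this sketch

def synthesize_speech(text: str) -> str:
    return "narration.wav"    # stub for this sketch

print(passage_to_segment("The Papeo reader presents video segments as video notes "
                         "placed next to their linked passages in the paper."))
```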
By presenting these generated video segments when requested by the reader, future Papeos can more comprehensively support the paper reading experience. ### Limitations Our studies revealed various benefits of Papeos that we believe can generalize beyond the set of papers we have tested. At the same time, we acknowledge that several factors could affect the usefulness of Papeos: * Type of work: Formative study participants noted that videos were more useful for work involving interactive and/or dynamic artifacts (e.g., HCI systems). * Paper sections covered: User study participants expressed how Papeos were especially helpful for summarizing information-dense sections. * Visuals: Formative and user study participants noted that supplemental visuals in videos, especially those animated or presented gradually, were effective at illustrating information in the paper. * Communication style: Formative and user study participants appreciated videos that communicated paper content in a different style (e.g., informal language). Future work should investigate the effectiveness of our approach according to these factors. Additionally, to fit within 90 minutes, our user study focused on HCI papers with system contributions and only investigated the benefits of Papeos when reading one section of the paper. However, we argue that our various studies together demonstrated benefits of our approach that can generalize across papers, types of work, and domains: highlights, summaries, and audio narrations. For example, even for a qualitative paper, our approach can highlight important paper fragments (e.g., author-selected themes and quotes), and provide the authors' audio narrations and summaries. Future work can conduct additional studies to investigate the significance of these benefits with papers of diverse domains and contributions. ## 8. Conclusion This paper presents Papeos, a novel reading experience that integrates segments from talk videos as localized margin notes in academic papers. To facilitate the creation of Papeos, we introduce an authoring interface that aids paper authors in linking video segments and paper passages through algorithmic and AI-based suggestions. Through a within-subjects user study (n=16), we found that Papeos could enhance understanding of papers by providing summaries of complex passages and allowing readers to consume information in multiple modalities. With Papeos, participants leveraged each format (i.e., paper and video) to guide their navigation in the other format, which in turn facilitated navigation in both formats and encouraged more comprehensive reading of the paper. These findings and responses from researchers in a field deployment suggest the potential for leveraging existing, alternative forms of research communication to augment research papers and enable more dynamic reading experiences. ###### Acknowledgements. The authors would like to thank Doug Downey, Shannon Zejiang Shen, Evie Cheng, and Juho Kim for their insightful discussions and feedback. We also thank the anonymous reviewers for their constructive feedback. Finally, we would like to thank the various researchers that participated in our various studies and the deployment of our system.
2306.04685
RelSIM: A Relativistic Semi-implicit Method for Particle-in-Cell Simulations
We present a novel Relativistic Semi-Implicit Method (RelSIM) for particle-in-cell (PIC) simulations of astrophysical plasmas, implemented in a code framework ready for production runs. While explicit PIC methods have gained widespread recognition in the astrophysical community as a reliable tool to simulate plasma phenomena, implicit methods have been seldom explored. This is partly due to the lack of a reliable relativistic implicit PIC formulation that is applicable to state-of-the-art simulations. We propose the RelSIM to fill this gap: our new method is relatively simple, being free of nonlinear iterations and only requiring a global linear solve of the field equations. With a set of one- and two-dimensional tests, we demonstrate that the RelSIM produces more accurate results with much smaller numerical errors in the total energy than standard explicit PIC, particularly when characteristic plasma scales (skin depth and plasma frequency) are heavily underresolved on the numerical grid. By construction, the RelSIM also performs much better than the Relativistic Implicit-Moment Method (RelIMM), originally proposed for semi-implicit PIC simulations in the relativistic regime. Our results are promising to conduct large-scale (in terms of duration and domain size) PIC simulations of astrophysical plasmas, potentially reaching physical regimes inaccessible by standard explicit PIC codes.
Fabio Bacchini
2023-06-07T18:00:03Z
http://arxiv.org/abs/2306.04685v2
# RelSIM: A Relativistic Semi-implicit Method for Particle-in-Cell Simulations ###### Abstract We present a novel Relativistic Semi-Implicit Method (RelSIM) for particle-in-cell (PIC) simulations of astrophysical plasmas, implemented in a code framework ready for production runs. While explicit PIC methods have gained widespread recognition in the astrophysical community as a reliable tool to simulate plasma phenomena, implicit methods have been seldom explored. This is partly due to the lack of a reliable relativistic implicit PIC formulation that is applicable to state-of-the-art simulations. We propose the RelSIM to fill this gap: our new method is relatively simple, being free of nonlinear iterations and only requiring a global linear solve of the field equations. With a set of one- and two-dimensional tests, we demonstrate that the RelSIM produces more accurate results with much smaller numerical errors in the total energy than standard explicit PIC, particularly when characteristic plasma scales (skin depth and plasma frequency) are heavily underresolved on the numerical grid. By construction, the RelSIM also performs much better than the Relativistic Implicit-Moment Method (RelIMM), originally proposed for semi-implicit PIC simulations in the relativistic regime. Our results are promising to conduct large-scale (in terms of duration and domain size) PIC simulations of astrophysical plasmas, potentially reaching physical regimes inaccessible by standard explicit PIC codes. + Footnote †: journal: ApJS 0000-0002-3091-8885]Fabio Bacchini 0000-0002-4880-7888]Fabio Bacchini ## 1 Introduction and Review of the Particle-in-Cell Panorama Modern astrophysical research is tightly linked with numerical simulations carried out on supercomputers. In many cases, pen-and-paper calculations do not suffice to analyze the behavior of complex astrophysical systems, whose dynamics is often highly nonlinear and multiphysics, multiscale in nature. Of particular importance in astrophysics is the modeling of plasmas, which are ubiquitous in the Universe. In several astrophysical environments (e.g. the surroundings of black holes and neutron stars, in supernova remnants, X-ray binary systems, etc.), plasma dynamics interacts with strong gravity, radiation physics, QED effects, and electromagnetic fields of extreme strengths. This interaction often results in strong plasma-energization processes, through which plasma particles (e.g. electrons, positrons, and ions) can be accelerated to relativistic energies. Nonlinear plasma dynamics, particularly coupled with such effects, is hard to describe analytically, resulting in an ever-growing need for advanced simulation tools. The Particle-in-Cell (PIC) method is among the most successful approaches for the numerical simulation of relativistic astrophysical plasmas. With their origin dating back to the 1960s, PIC methods have attracted widespread attention after the birth of high-performance computing. Nowadays, massively parallel PIC codes are routinely employed for the study of plasma phenomena, particularly in the collisionless regime where binary particle-particle encounters are negligibly rare, and particles only interact via self-generated electromagnetic fields. Other computational approaches to model collisionless plasmas exist, but they typically rely on specific assumptions that discard small-scale physics (e.g. hybrid methods) or they involve prohibitive computational costs in most practical cases (e.g. Vlasov methods). 
Even after decades of evolution, the basic structure of a PIC code remains as described in classical textbooks (e.g. Birdsall & Langdon, 1991): a computational grid is employed to solve a set of Maxwell's equations (employ ing CGS units here and in the remainder of the text), \[\frac{\partial\mathbf{E}}{\partial t}=c\mathbf{\nabla}\mathbf{\times}\mathbf{B}-4\pi\mathbf{J}, \tag{1}\] \[\frac{\partial\mathbf{B}}{\partial t}=-c\mathbf{\nabla}\mathbf{\times}\mathbf{E}, \tag{2}\] \[\mathbf{\nabla}\mathbf{\cdot}\mathbf{E}=4\pi\rho, \tag{3}\] \[\mathbf{\nabla}\mathbf{\cdot}\mathbf{B}=0, \tag{4}\] where \(c\) is the speed of light, \(\mathbf{E}\) and \(\mathbf{B}\) are the electric and magnetic fields, and the current and charge density \(\mathbf{J}\) and \(\rho\) are source terms linked to the particle motion. In a PIC code, a large number of computational particles are evolved according to the relativistic equations of motion, \[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}=\mathbf{v}, \tag{5}\] \[\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}=\frac{q}{m}\left(\mathbf{E}+\frac{\mathbf{v}}{c} \mathbf{\times}\mathbf{B}\right), \tag{6}\] where \(\mathbf{x}\) and \(\mathbf{u}\) are the particle position and the spatial part of the 4-velocity (i.e. a 3-vector), \(q\) and \(m\) are the particle charge and mass, and \(\mathbf{v}=\mathbf{u}/\gamma\) with \(\gamma=\sqrt{1+u^{2}/c^{2}}=1/\sqrt{1-v^{2}/c^{2}}\) the relativistic Lorentz factor. Solving the equations of motion with a finite number of particles is essentially equivalent to sampling the particle distribution function \(f(\mathbf{x},\mathbf{u},t)\) (i.e. a solution of the Vlasov equation) with a Monte Carlo approach; in this sense, \(\mathbf{J}\) and \(\rho\) entering Maxwell's equations are moments of \(f\) gathered from particle quantities onto the grid. Because the number of particles in a PIC simulations is limited by computational resources, the distribution function in PIC runs is (sometimes heavily) affected by numerical noise. Nevertheless, PIC codes have become the primary tool for investigating astrophysical plasmas from first principles, owing to their simplicity, reliability and remarkable performance on parallel computing architectures. PIC codes employed in the astrophysical community can be divided in two main categories, i) "explicit" codes and ii) "implicit" codes. Algorithmically, explicit methods involve an explicit discretization of the field equations, which are therefore solved explicitly, followed by a particle "push" based on an implicit discretization of the equations of motion which can however be recast into an explicit solution procedure. The explicit approach for fields and particles can vary in flavor, but it typically consists of a temporal leap-frogging procedure where fields and particles are decoupled. This approach is easy to implement and extremely versatile in terms of parallelization and performance. Several codes employed in astrophysical and laboratory plasma research (e.g. EPOCH, Arber et al., 2015; OSIRIS, Fonseca et al., 2002; SHARP, Shalaby et al., 2017; Smilei, Derouillat et al., 2018; Tristan v2, Hakobyan et al., 2023; VOR-PAL, Nieter & Cary, 2004; VPIC, Bird et al., 2022; WarpX, Fedeli et al., 2022; Zeltron, Cerutti et al., 2013; Bacchini et al., 2022; etc.1) have been employing the explicit-PIC approach for a long time, and have successfully attacked many open problems in relativistic astrophysics (e.g. 
Spitkovsky, 2008; Zhdankin et al., 2017; Comisso & Sironi, 2018; Guo et al., 2021; Werner & Uzdensky, 2021; Sironi, 2022), even including quantum-electrodynamics and strong-gravity effects (e.g. Parfrey et al., 2019; Crinquand et al., 2020; Sridhar et al., 2021; El Mellah et al., 2022; Galishnikova et al., 2023; Groselj et al., 2023; Hakobyan et al., 2023), or frame transformations appropriate e.g. for expanding/shearing plasmas (Riquelme et al., 2012; Hoshino, 2015; Sironi & Narayan, 2015; Bacchini et al., 2022; Tran et al., 2023). The simplicity of explicit PIC also allows for efficient implementation on new architectures such as GPUs (e.g. PIConGPU, Burau et al., 2010; Entity, Hakobyan et al., 2023, in prep). Footnote 1: The author apologizes for any omissions from this list, which could grow infinitely long in principle. Despite their widespread success, explicit codes also suffer from severe limitations linked to numerical instabilities: the explicit discretization introduces artificial unstable modes that essentially destroy the simulation results very rapidly when certain criteria are not met. In particular, locally underresolving temporal and spatial scales such as (the inverse of) the plasma frequency \(\omega_{\mathrm{p}}=\sqrt{4\pi q^{2}n/m}\) (where \(n\) is the plasma number density) and skin depth \(c/\omega_{\mathrm{p}}\) in many cases produces unphysical results. If these stability conditions are violated even in one single computational cell, the entire simulation may be irremediably compromised. Depending on the physical case, other scales may require full resolution on the grid, e.g. the Debye length \(\lambda_{\mathrm{D}}=\sqrt{kT/(4\pi q^{2}n)}\) (where \(T\) is the plasma temperature). In many cases, these restrictions are not problematic, because the phenomena of interest take place precisely at the aforementioned spatiotemporal scales which therefore need to be resolved to accurately capture the corresponding processes. However, when this is not the case, severe limitations arise on the applicability of explicit codes, where the time step and the grid spacing are determined by the most restrictive plasma conditions in the whole simulation domain. For example, several problems involve a very large separation of scales in which only the phenomena at the largest scales are important, and the smaller scales could in principle be left underresolved; explicit methods require instead to resolve all scales. Similarly, when large density gradients are involved, the local skin depth and inverse plasma frequency could vary dramatically within the domain; interesting physics may take place only in very localized regions where \(c/\omega_{\rm p}\) and \(\omega_{\rm p}^{-1}\) are very small, hence only those regions would require finer grids and smaller time steps, but explicit methods will instead impose restrictive simulation parameters everywhere (e.g. in compact-object magnetospheres, Cerutti and Beloborodov, 2016; Hakobyan et al., 2023). Finally, when considering the presence of multiple plasma species, restrictions in the numerical parameters arise due to the need to resolve the scales of the lighter species (usually electrons) whereas many interesting phenomena primarily involve the large scales determined by the heavier species (usually ions), resulting in extremely intensive computations (e.g. ion-scale magnetic reconnection, large-scale wave decay, solar-wind turbulence, and shocks; Spitkovsky, 2008; Werner et al., 2018; Verscharen et al., 2019; Bacchini et al., 2022). 
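For a rough sense of when these constraints bite, the characteristic scales above can be evaluated directly. The following is a minimal sketch in Python, assuming Gaussian-CGS units and purely illustrative plasma parameters (none of these numbers refer to a specific simulation in this work):

```python
import numpy as np

# Minimal sketch (Gaussian-CGS units): compute the characteristic plasma
# scales quoted above and check whether a candidate grid spacing dx and
# time step dt would satisfy typical explicit-PIC resolution requirements.
q = 4.803e-10      # electron charge [esu]
m = 9.109e-28      # electron mass [g]
c = 2.998e10       # speed of light [cm/s]
k_B = 1.381e-16    # Boltzmann constant [erg/K]

n = 1.0e10         # number density [cm^-3] (illustrative)
T = 1.0e6          # temperature [K] (illustrative)

omega_p = np.sqrt(4.0 * np.pi * q**2 * n / m)         # plasma frequency
skin_depth = c / omega_p                              # electron skin depth
debye = np.sqrt(k_B * T / (4.0 * np.pi * q**2 * n))   # Debye length

dx, dt = 0.5 * skin_depth, 0.1 / omega_p              # candidate resolution
print(f"resolve skin depth: {dx < skin_depth}, "
      f"resolve 1/omega_p: {dt < 1.0 / omega_p}, "
      f"resolve Debye length: {dx < debye}")
```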
Especially in the latter case, it may be undesirable to simply switch to a different paradigm (e.g. hybrid codes, Caprioli et al., 2018; Bott et al., 2021; Squire et al., 2022), since doing so implies discarding potentially interesting electron physics that can still occur at ion scales. In short, explicit PIC codes may simply not suffice, in specific cases of interest, to carry out simulations over large spatial and temporal scales due to an intrinsic limitation of the numerical method that results in prohibitive computing costs. For these reasons, extensive research has been dedicated to developing implicit PIC methods. These methods do not suffer from the instabilities affecting explicit PIC, and can in principle allow for simulations where spatiotemporal scales are arbitrarily underresolved2. Implicit PIC codes essentially involve an implicit discretization of Maxwell's equations, together with a particle push which may or may not be nonlinearly coupled to the field-solver step. If this nonlinear coupling is retained, the resulting approach is usually labeled "fully implicit" (Lapenta and Markidis, 2011; Markidis and Lapenta, 2011; Bacchini et al., 2019; Chen et al., 2020; Angus et al., 2023) and may involve the solution of a very large, nonlinear system of equations, whose dimension can be of the order of the total number of particles in a simulation. Such extremely large systems are hard to handle in practice, since convergence of iterative solution methods is not guaranteed; even with advanced preconditioning, it is not straightforward to obtain acceptable scaling behavior on supercomputing infrastructures. Several approaches have been developed to ameliorate the problem, e.g. the reduction of the nonlinear system via nonlinear substitution of the particle equations into the field equations ("kinetic enslavement", e.g. Markidis and Lapenta, 2011; Taitano et al., 2013; Bacchini et al., 2019 and references therein). Even with such improvements, fully implicit PIC codes have not reached a level of maturity that makes them applicable in practical situations. Footnote 2: Note that the physics occurring at underresolved scales is not captured accurately, but is rather averaged over. When the particle push and the field advance are decoupled in implicit PIC methods, these are termed "semi-implicit" and present a much lower level of complexity with respect to fully implicit PIC methods. The decoupling essentially consists of rewriting the source terms in Maxwell's equations as linear functions of the electromagnetic fields, thereby removing any nonlinearity and reducing the problem to a linear-solve step on the grid. This decoupling can be carried out in several fashions (see next Sections), either approximately (i.e. using a linearization) via the so-called Implicit-Moment Method (IMM, e.g. Brackbill and Forslund, 1982; Lapenta et al., 2006) or exactly in the case of the Energy-Conserving Semi-Implicit Method (ECSIM, Lapenta, 2017). The latter is particularly interesting because, as the name suggests, solving the implicit equations without approximations results in the _exact_ (i.e. to machine precision) conservation of total energy throughout the numerical simulation, a feat that no currently employed explicit method achieves. Although the original method has been later refined and improved (e.g. 
Chen and Toth, 2019; Campos Pinto and Pages, 2022), conservation of energy is particularly important for stability, and the ECSIM has demonstrated the capability to allow for very long simulations on very large spatial scales (e.g. Park et al., 2019; Zhou et al., 2019; Arro et al., 2022; Pezzini et al., 2023, in prep.). The important caveat here is that the IMM and ECSIM are _nonrelativistic_, i.e. by assuming that particle speeds are much smaller than \(c\), the particle-push step is replaced with its nonrelativistic counterpart where Lorentz factors are unitary and the equations of motion simplify to the Newtonian limit. In the relativistic regime instead, constructing a semi-implicit PIC method is more involved precisely due to the presence of the Lorentz factor, which introduces an intrinsic nonlinearity in the particle equations of motion. This detail is crucial: a relativistic version of the IMM (termed RelIMM, Noguchi et al., 2007; Kempf et al., 2015) can be formulated, but its applicability is severely hindered by the nonlinearity of the particle equations (see Section 2). The ECSIM, instead, simply cannot be directly extended to relativistic applications while also retaining its exact energy-conservation properties, because the reformulation of the Maxwell sources into linear functions of the fields cannot be carried out without approximations in the relativistic case (see Section 3). As a consequence, the only relativistic semi-implicit PIC method presented in literature so far is the aforementioned RelIMM, which however performs poorly in practical applications (see Section 4) and has never been applied in production runs. The focus of this work is a novel, simple, and reliable semi-implicit PIC method that is ready for production simulations of astrophysical plasma phenomena in the relativistic regime. We call the new method the Relativistic Semi-Implicit Method (RelSIM); like the ECSIM, the RelSIM retains a simple formulation based on "mass matrices" (see Section 3), is free of nonlinear iterations, and surpasses the RelIMM in terms of performance and quality of the results (see Section 4). Due to the intrinsic nonlinear nature of the relativistic Vlasov-Maxwell system, our approach to remove nonlinear iterations necessarily sacrifices exact energy conservation; however, we demonstrate that the new RelSIM still possesses excellent energy-conservation properties, which make it superior to the RelIMM. Because the new method is implicit in nature, it can be employed in situations where plasma scales are dramatically underresolved without loss of stability, in contrast with explicit methods. This work is organized as follows: in Section 2 we review (and improve upon, with a generalized reformulation) the RelIMM originally presented in Noguchi et al. (2007). In Section 3 we derive and present the new RelSIM. In Section 4 we present quantitative comparisons between relativistic explicit and semi-implicit PIC methods (including the new RelSIM) in a number of representative test cases. Finally, in Section 5 we discuss and summarize our results. ## 2 Review of the Relativistic Implicit-Moment Method In this Section we review the RelIMM (Noguchi et al., 2007; Kempf et al., 2015), to which we add modifications and improvements.
The original RelIMM is based on a \(\theta\)-scheme applied to Maxwell's equations for each grid element \(g\), \[\frac{\mathbf{E}_{g}^{n+\theta}-\mathbf{E}_{g}^{n}}{\theta\Delta t}=c\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{g}^{n+\theta}-4\pi\mathbf{J}_{g}^{n+1/2}, \tag{7}\] \[\frac{\mathbf{B}_{g}^{n+\theta}-\mathbf{B}_{g}^{n}}{\theta\Delta t}=-c\mathbf{\nabla}\mathbf{\times}\mathbf{E}_{g}^{n+\theta}, \tag{8}\] \[\mathbf{\nabla}\mathbf{\cdot}\mathbf{E}_{g}^{n+\theta}=4\pi\rho_{g}^{n+\theta}, \tag{9}\] where electromagnetic fields at \(n+\theta\) are calculated as a linear interpolation between integer temporal steps, e.g. \(\mathbf{E}^{n+\theta}=\theta\mathbf{E}^{n+1}+(1-\theta)\mathbf{E}^{n}\) with \(\theta\in[1/2,1]\). The condition \(\mathbf{\nabla}\mathbf{\cdot}\mathbf{B}=0\) is automatically satisfied at all times if the computational grid possesses mimetic properties (i.e. it preserves the basic analytic properties of differential operators, see e.g. Lipnikov et al., 2014 for a review). For each particle \(p\), the relativistic equations of motion are \[\frac{\mathbf{x}_{p}^{n+1}-\mathbf{x}_{p}^{n}}{\Delta t}=\overline{\mathbf{v}}_{p}, \tag{10}\] \[\frac{\mathbf{u}_{p}^{n+1}-\mathbf{u}_{p}^{n}}{\Delta t}=\frac{q_{p}}{m_{p}}\left(\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})+\frac{\overline{\mathbf{v}}_{p}}{c}\mathbf{\times}\mathbf{B}^{n}(\mathbf{x}_{p}^{n+1/2})\right), \tag{11}\] where the half-step particle position \(\mathbf{x}^{n+1/2}=(\mathbf{x}^{n+1}+\mathbf{x}^{n})/2\), and \(\overline{\mathbf{v}}\) is an arbitrarily defined half-step velocity. The precise definition of \(\overline{\mathbf{v}}\) is what distinguishes different particle pushers. In the nonrelativistic regime (i.e. \(\mathbf{u}=\mathbf{v}\)), the unambiguous definition \(\overline{\mathbf{v}}\equiv(\mathbf{v}^{n+1}+\mathbf{v}^{n})/2\) provides second-order accuracy and allows for solving the momentum equation (11) with a simple operator-split approach (the "Boris" method, Boris, 1970). In the relativistic regime, defining \(\overline{\mathbf{v}}\) is non-straightforward due to the nonlinearity in the Lorentz factor (see Ripperda et al., 2018 for a review on the subject). Several definitions of \(\overline{\mathbf{v}}\) have been presented in literature (e.g. Boris, 1970; Vay, 2008; Lapenta & Markidis, 2011; Higuera & Cary, 2017); in most cases, the half-step velocity is of the form \[\overline{\mathbf{v}}=\frac{\mathbf{u}^{n+1}+\mathbf{u}^{n}}{2\bar{\gamma}}, \tag{12}\] such that no work is exerted on computational particles by magnetic fields3, reflecting reality. This is verified regardless of the definition of \(\bar{\gamma}\), which then acts as the true discriminant between relativistic particle pushers. A popular choice is the relativistic Boris pusher, where Footnote 3: This can be seen by dotting eqs. (12) and (11) and noticing that only electric fields contribute to a particle’s change in energy. \[\bar{\gamma}=\sqrt{1+\left[\mathbf{u}^{n}+q\Delta t\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})/(2m)\right]^{2}/c^{2}}, \tag{13}\] which allows the direct solution of eq. (11) since \(\bar{\gamma}\) can be computed from known quantities (if particle positions and electromagnetic fields are known).
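For concreteness, the velocity update obtained with the \(\bar{\gamma}\) of eq. (13) is the familiar relativistic Boris rotation. The sketch below is a schematic NumPy version of that update, assuming the fields have already been interpolated to the particle position; it is meant as an illustration, not as the implementation used in any particular code:

```python
import numpy as np

def boris_push(u_n, E, B, q, m, dt, c=1.0):
    """Schematic relativistic Boris velocity update: advances the 4-velocity
    u^n -> u^{n+1} under fields E, B already interpolated to the particle
    position, i.e. eq. (11) solved with the gamma-bar of eq. (13)."""
    eps = q * dt / (2.0 * m) * E
    u_minus = u_n + eps                                          # half electric kick
    gamma_bar = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)   # eq. (13)
    t = q * dt / (2.0 * m * c) * B / gamma_bar                   # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)                     # magnetic rotation
    u_plus = u_minus + np.cross(u_prime, s)
    return u_plus + eps                                          # second half electric kick

# Usage example: with E = 0 the particle only gyrates and |u| is preserved,
# consistent with magnetic fields doing no work.
u_new = boris_push(np.array([1.0, 0.0, 0.0]), np.zeros(3),
                   np.array([0.0, 0.0, 2.0]), q=-1.0, m=1.0, dt=0.1)
```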
Another option, considered in the original RelIMM algorithm (Noguchi et al., 2007), and later discussed in detail by Lapenta & Markidis (2011), is the definition \[\bar{\gamma}=(\gamma^{n+1}+\gamma^{n})/2, \tag{14}\] which possesses important properties for energy conservation: with this \(\bar{\gamma}\), it is straightforward to show that \[m_{p}\bar{\mathbf{v}}_{p}\mathbf{\cdot}(\mathbf{u}_{p}^{n+1}-\mathbf{u}_{p}^{n}) =m_{p}c^{2}(\gamma_{p}^{n+1}-\gamma_{p}^{n}) \tag{15}\] \[=\bar{\mathbf{v}}_{p}\Delta t\mathbf{\cdot}q_{p}\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})\] \[=(\mathbf{x}_{p}^{n+1}-\mathbf{x}_{p}^{n})\mathbf{\cdot}q_{p}\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2}),\] resulting in the physically correct consequence that the change in a particle's kinetic energy \(mc^{2}\gamma\) between two time steps is exactly equal to the work done by the electric field during that time step. Other choices of \(\bar{\gamma}\) do not respect this condition. However, choosing the half-step Lorentz factor (14) implies a much more complicated solution of the momentum equation (11) than in the Boris case (see Section 2.2 and Appendix A). Ultimately, the RelIMM can be formulated with any sensible choice of \(\bar{\gamma}\), as we will show later. ### RelIMM: Field Solver To construct a RelIMM scheme from our discretized Maxwell's equations, we start by combining Ampere's and Faraday's laws: by taking the curl of eq. (8) and inserting it into eq. (7) we get \[\mathbf{E}_{g}^{n+\theta} +(c\theta\Delta t)^{2}\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\mathbf{E}_{g}^{n+\theta} \tag{16}\] \[=\mathbf{E}_{g}^{n}+c\theta\Delta t\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{g}^{n}-4\pi\theta\Delta t\mathbf{J}_{g}^{n+1/2}.\] Then, we expand the curl term \(\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\mathbf{E}^{n+\theta}=\mathbf{\nabla}\mathbf{\nabla}\mathbf{\cdot}\mathbf{E}^{n+\theta}-\mathbf{\nabla}^{2}\mathbf{E}^{n+\theta}\) and use Gauss's law (9) to obtain \[\mathbf{E}_{g}^{n+\theta} -(c\theta\Delta t)^{2}\mathbf{\nabla}^{2}\mathbf{E}_{g}^{n+\theta} \tag{17}\] \[=\mathbf{E}_{g}^{n}+c\theta\Delta t\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{g}^{n}-4\pi\theta\Delta t\mathbf{J}_{g}^{n+1/2}\] \[\quad-4\pi(c\theta\Delta t)^{2}\mathbf{\nabla}\rho_{g}^{n+\theta}.\] Finally, we can employ a (\(\theta\)-scheme-discretized) charge-continuity equation \(\partial\rho/\partial t=-\mathbf{\nabla}\mathbf{\cdot}\mathbf{J}\) to express the charge density at \(n+\theta\) as a function of the half-step current, \[\rho_{g}^{n+\theta}=\rho_{g}^{n}-\theta\Delta t\mathbf{\nabla}\mathbf{\cdot}\mathbf{J}_{g}^{n+1/2}, \tag{18}\] which inserted into the previous equation gives \[\mathbf{E}_{g}^{n+\theta} -(c\theta\Delta t)^{2}\mathbf{\nabla}^{2}\mathbf{E}_{g}^{n+\theta} \tag{19}\] \[=\mathbf{E}_{g}^{n}+c\theta\Delta t\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{g}^{n}-4\pi\theta\Delta t\mathbf{J}_{g}^{n+1/2}\] \[\quad-4\pi(c\theta\Delta t)^{2}\mathbf{\nabla}\rho_{g}^{n}+4\pi c^{2}(\theta\Delta t)^{3}\mathbf{\nabla}\mathbf{\nabla}\mathbf{\cdot}\mathbf{J}_{g}^{n+1/2}.\] Eq. (19) is the central point of interest of the IMM.
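Once the right-hand side has been assembled from particle data (as described next), eq. (19) reduces to a global linear solve for \(\mathbf{E}^{n+\theta}\). A minimal one-dimensional sketch, assuming a periodic finite-difference Laplacian and a placeholder right-hand side, illustrates the structure of this solve:

```python
import numpy as np

# Minimal 1D sketch of the linear field solve in eq. (19): the equation
# reduces to a sparse linear system for E^{n+theta} once the right-hand side
# is known. Here the Laplacian is a periodic second-order finite difference
# and the right-hand side is a random placeholder for the source terms.
n_cells, dx, c, theta, dt = 64, 1.0, 1.0, 0.5, 0.5
coeff = (c * theta * dt)**2 / dx**2

lap = (np.diag(-2.0 * np.ones(n_cells)) +
       np.diag(np.ones(n_cells - 1), 1) + np.diag(np.ones(n_cells - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0                     # periodic boundaries

A = np.eye(n_cells) - coeff * lap                 # E - (c*theta*dt)^2 * laplacian(E)
rhs = np.random.default_rng(0).standard_normal(n_cells)   # placeholder sources
E_theta = np.linalg.solve(A, rhs)                 # the "global linear solve"
```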
In principle, the sources and specifically the half-step current \(\mathbf{J}^{n+1/2}\) provide a nonlinear coupling of Maxwell's equations with the particle equations of motion: the current is defined as \[\mathbf{J}_{g}^{n+1/2}=\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\bar{\mathbf{v}}_{p}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g}), \tag{20}\] where \(\Delta V_{g}\) is the volume associated with each grid element and \(W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g})\) is a chosen interpolation function (usually a first-order b-spline). Note that here \(\bar{\mathbf{v}}\) is a function of \(\mathbf{E}^{n+\theta}\) via eq. (11), and \(\mathbf{x}^{n+1/2}\) is a function of \(\bar{\mathbf{v}}\) (and thus of \(\mathbf{E}^{n+\theta}\)) via eq. (10). Because of this, eq. (19) and the particle equations of motion in principle constitute a very large (of size \(\sim N_{p}\)), fully coupled nonlinear system to be solved in order to advance the numerical solution. The core of the IMM approach consists instead of solving a _linear_ system to find \(\mathbf{E}^{n+\theta}\), by recasting \(\mathbf{J}^{n+1/2}\) as a linear function of the electric field. To do so, we first expand \(W\) around \(\mathbf{x}_{p}^{n+1/2}\), \[W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x})= \tag{21}\] \[W(\mathbf{x}_{p}^{n}-\mathbf{x})-(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{p}^{n})\mathbf{\cdot}\mathbf{\nabla}W(\mathbf{x}_{p}^{n}-\mathbf{x})\] \[+\frac{1}{2}(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{p}^{n})(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{p}^{n})\mathbf{:}\mathbf{\nabla}\mathbf{\nabla}W(\mathbf{x}_{p}^{n}-\mathbf{x})\] \[+\ldots\] and recognizing \(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{p}^{n}=(\Delta t/2)\bar{\mathbf{v}}_{p}\), we can substitute in the expression for the current (20) keeping terms up to first order in \(\Delta t\), \[\mathbf{J}_{g}^{n+1/2} =\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\bar{\mathbf{v}}_{p}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g}) \tag{22}\] \[-\frac{\Delta t}{2\Delta V_{g}}\mathbf{\nabla}\mathbf{\cdot}\sum_{p}q_{p}\bar{\mathbf{v}}_{p}\bar{\mathbf{v}}_{p}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g})\] \[+\mathcal{O}(\Delta t^{2}),\] where we have also used vector identities to bring the divergence operator out of the summation. In this way, we have removed the nonlinear dependence of the interpolation function on \(\bar{\mathbf{v}}\) (and therefore on \(\mathbf{E}^{n+\theta}\)). Next, we will construct a _linear_ dependence of the velocity on the unknown electric field. We consider the momentum equation (11) and assume a definition of the half-step velocity \(\bar{\mathbf{v}}=(\mathbf{u}^{n+1}+\mathbf{u}^{n})/(2\bar{\gamma})\), \[\bar{\gamma}_{p}\bar{\mathbf{v}}_{p}=\mathbf{u}_{p}^{n}+\frac{q_{p}\Delta t}{2m_{p}}\left(\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})+\frac{\bar{\mathbf{v}}_{p}}{c}\mathbf{\times}\mathbf{B}^{n}(\mathbf{x}_{p}^{n+1/2})\right). \tag{23}\] This equation could be easily solved for \(\bar{\mathbf{v}}\) if \(\bar{\gamma}\) were known. However, the latter is in the most general case a nonlinear function of \(\bar{\mathbf{v}}\), which prevents the formal inversion of the equation above. In addition, to construct \(\mathbf{J}^{n+1/2}\) as a linear function of \(\mathbf{E}^{n+\theta}\), we need \(\bar{\mathbf{v}}\) itself to be a linear function of \(\mathbf{E}^{n+\theta}\).
For this reason we are forced to introduce an approximation where we replace the unknown \(\bar{\gamma}\simeq\Gamma\), such that \[\Gamma_{p}\bar{\mathbf{v}}_{p}=\mathbf{u}_{p}^{n}+\frac{q_{p}\Delta t}{2m_{p}}\left( \mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})+\frac{\bar{\mathbf{v}}_{p}}{c}\mathbf{\times}\bm {B}^{n}(\mathbf{x}_{p}^{n+1/2})\right), \tag{24}\] and we assume that \(\Gamma\) can be computed explicitly without knowing \(\bar{\gamma}\). The definition of \(\Gamma\) depends on the chosen particle pusher: for the Boris pusher, we can approximate eq. (13) as \[\bar{\gamma}_{p}\simeq\Gamma_{p}\equiv\sqrt{1+\left[\mathbf{u}_{p}^{n}+q_{p} \Delta t\mathbf{E}^{n}(\mathbf{x}_{p}^{n})/(2m_{p})\right]^{2}/c^{2}}, \tag{25}\] using only known quantities to calculate \(\Gamma\). Likewise, for the Lapenta-Markidis definition (14), \[\bar{\gamma}_{p}\simeq\Gamma_{p}\equiv\gamma_{p}^{n}+\frac{q_{p}\Delta t}{2m_ {p}c^{2}}\mathbf{E}^{n}(\mathbf{x}_{p}^{n})\mathbf{\cdot}\mathbf{v}_{p}^{n}, \tag{26}\] and so on for other choices of particle pushers. Our formulation here provides a more general version of the original RelIMM (Noguchi et al., 2007), since here we allow for choices of pushers other than the Lapenta-Markidis one. With the approximation introduced via \(\Gamma\), we can write an explicit solution of eq. (11), \[\bar{\mathbf{v}}_{p}=\mathbf{\alpha}_{p}\mathbf{u}_{p}^{n}+\frac{q_{p}\Delta t}{2m_{p}} \mathbf{\alpha}_{p}\mathbf{E}_{p}^{n+\theta}, \tag{27}\] where \[\mathbf{\alpha}_{p}=\frac{1}{\Gamma_{p}(1+\beta_{p}^{2})}\left[\mathbb{I}-\mathbb{ I}\mathbf{\times}\mathbf{\beta}_{p}/\Gamma_{p}+\mathbf{\beta}_{p}\mathbf{\beta}_{p}/\Gamma_{p}^{2 }\right], \tag{28}\] with \(\mathbf{\beta}_{p}=q_{p}\Delta t\mathbf{B}_{p}^{n}/(2m_{p}c)\) and we have used the shorthand notation \(\mathbf{E}_{p}^{n+\theta}=\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})\), \(\mathbf{B}_{p}^{n}=\mathbf{B}^{n}(\mathbf{x}_{p}^{n+1/2})\). Inserting eq. (27) into eq. (22) and keeping first-order terms yields \[\mathbf{J}_{g}^{n+1/2} \simeq\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\mathbf{\alpha}_{p}\mathbf{u}_{p }^{n}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g}) \tag{29}\] \[+\frac{\Delta t}{2\Delta V_{g}}\sum_{p}\frac{q_{p}^{2}}{m_{p}}\bm {\alpha}_{p}\mathbf{E}_{p}^{n+\theta}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g})\] \[-\frac{\Delta t}{2\Delta V_{g}}\mathbf{\nabla}\mathbf{\cdot}\sum_{p}q_{p}( \mathbf{\alpha}_{p}\mathbf{u}_{p}^{n})(\mathbf{\alpha}_{p}\mathbf{u}_{p}^{n})W(\mathbf{x}_{p}^{n}- \mathbf{x}_{g})\] \[+\mathcal{O}(\Delta t^{2}).\] To finally obtain an expression for \(\mathbf{J}^{n+1/2}\) as a linear function of \(\mathbf{E}^{n+\theta}\), we need to introduce further approximations. First, since \(\mathbf{x}_{p}^{n+1/2}\) needed to evaluate \(\mathbf{E}_{p}^{n+\theta}\) and \(\mathbf{B}_{p}^{n}\) is not known when calculating the current, we employ \[\mathbf{B}_{p}^{n}\simeq\mathbf{B}^{n}(\mathbf{x}_{p}^{n})=\sum_{g^{\prime}}\mathbf{B}_{g^{ \prime}}^{n}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g^{\prime}}), \tag{30}\] \[\mathbf{E}_{p}^{n+\theta}\simeq\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n})=\sum_{g^{\prime}} \mathbf{E}_{g^{\prime}}^{n+\theta}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g^{\prime}}). 
\tag{31}\] Second, we bring the electric field out of the summation, \[\sum_{p} \frac{q_{p}^{2}}{m_{p}}\mathbf{\alpha}_{p}\left(\sum_{g^{\prime}}\mathbf{E}_{ g^{\prime}}^{n+\theta}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g^{\prime}})\right)W(\mathbf{x}_{p}^{n}- \mathbf{x}_{g}) \tag{32}\] \[\simeq\left(\sum_{p}\frac{q_{p}^{2}}{m_{p}}\mathbf{\alpha}_{p}W(\mathbf{ x}_{p}^{n}-\mathbf{x}_{g})\right)\mathbf{E}_{g}^{n+\theta}.\] This equality is exactly true when \(W\) is a zeroth-order b-spline (i.e. the interpolation is of nearest-grid-point type). This choice of \(W\) is very uncommon as it introduces high levels of noise in the interpolated data. For the more common choice of first-order b-splines, the operation above introduces a (rather crude) approximation, which as we will show results in artificial energy damping. This choice however is functional to obtain a final expression of the current that solely requires particle quantities at the previous time step and that is linear in the unknown electric field, \[\mathbf{J}_{g}^{n+1/2} \simeq\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\mathbf{\alpha}_{p}\mathbf{u}_{p }^{n}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g}) \tag{33}\] \[+\frac{\Delta t}{2\Delta V_{g}}\left(\sum_{p}\frac{q_{p}^{2}}{m_{p }}\mathbf{\alpha}_{p}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g})\right)\mathbf{E}_{g}^{n+\theta}\] \[-\frac{\Delta t}{2\Delta V_{g}}\mathbf{\nabla}\mathbf{\cdot}\sum_{p}q_{p}( \mathbf{\alpha}_{p}\mathbf{u}_{p}^{n})(\mathbf{\alpha}_{p}\mathbf{u}_{p}^{n})W(\mathbf{x}_{p}^{n}- \mathbf{x}_{g}).\] Inserting eq. (33) into eq. (19) yields the final field equation of the RelIMM, \[(\mathbb{I}+\mathbf{\mu}_{g})\mathbf{E}_{g}^{n+\theta}-(c\theta\Delta t)^{2} \left[\mathbf{\nabla}^{2}\mathbf{E}_{g}^{n+\theta}+\mathbf{\nabla}\mathbf{\nabla}\mathbf{\cdot}(\mathbf{ \mu}_{g}\mathbf{E}_{g}^{n+\theta})\right] \tag{34}\] \[=\mathbf{E}_{g}^{n}+\theta\Delta t\left[c\mathbf{\nabla}\mathbf{\times}\mathbf{B} _{g}^{n}-4\pi\left(\widehat{\mathbf{J}}_{g}-\frac{\Delta t}{2}\mathbf{\nabla}\mathbf{\cdot} \widehat{\mathbf{\Pi}}_{g}\right)\right]\] \[\quad-4\pi(c\theta\Delta t)^{2}\left[\mathbf{\nabla}\rho_{g}^{n}-\theta \Delta t\mathbf{\nabla}\mathbf{\cdot}\left(\widehat{\mathbf{J}}_{g}-\frac{\Delta t}{2}\mathbf{ \nabla}\mathbf{\cdot}\widehat{\mathbf{\Pi}}_{g}\right)\right],\] where \[\mathbf{\mu}_{g}=\frac{2\pi\theta\Delta t^{2}}{\Delta V_{g}}\sum_{p}\frac{q_{p}^{2}}{m _{p}}\mathbf{\alpha}_{p}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g}), \tag{35}\] \[\widehat{\mathbf{J}}_{g}=\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\mathbf{\alpha}_{p}\mathbf{u}_{p }^{n}W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g}), \tag{36}\] \[\widehat{\mathbf{\Pi}}_{g}=\frac{1}{\Delta V_{g}}\sum_{p}q_{p}(\mathbf{\alpha}_{p}\mathbf{u}_{ p}^{n})(\mathbf{\alpha}_{p}\mathbf{u}_{p}^{n})W(\mathbf{x}_{p}^{n}-\mathbf{x}_{g}). \tag{37}\] Eq. (34) is linear in \(\mathbf{E}^{n+\theta}\) and thus can be solved efficiently with any standard linear solver, once the source terms have been calculated from known particle quantities. The magnetic field can then be updated using eq. (8) and extrapolating to \(\mathbf{B}^{n+1}=(\mathbf{B}^{n+\theta}-(1-\theta)\mathbf{B}^{n})/\theta\). ### RelIMM: Particle Push Once eq. (34) is solved on the grid, the particles can be evolved according to eqs. (10) and (11). Because the two equations are nonlinearly coupled (\(\mathbf{x}^{n+1/2}\) depends on \(\bar{\mathbf{v}}\) and vice versa), the particle push is carried out iteratively, starting from an initial guess for \(\bar{\mathbf{v}}\), according to the following steps: 1. 
Compute \(\mathbf{x}^{n+1}\) (and thus \(\mathbf{x}^{n+1/2}\)) using the current \(\bar{\mathbf{v}}\); 2. Interpolate \(\mathbf{E}^{n+\theta}\) and \(\mathbf{B}^{n}\) from the grid onto the current particle position \(\mathbf{x}^{n+1/2}\); 3. Compute \(\bar{\gamma}\) according to the preferred definition (e.g. Boris, Lapenta-Markidis, etc.); 4. Compute \(\mathbf{u}^{n+1}\) (and thus \(\bar{\mathbf{v}}\)) using \(\bar{\gamma}\) and the interpolated fields. Step 3 above is carried out differently for different particle pushers. For the Lapenta-Markidis pusher Lapenta and Markidis (2011), however, no explicit expression for \(\bar{\gamma}\) has been presented in literature, to the best of our knowledge. In Appendix A we report such an explicit solution for the first time. Once \(\bar{\gamma}\) is known, we can obtain the new particle 4-velocity as \(\mathbf{u}^{n+1}=2\bar{\gamma}\bar{\mathbf{v}}-\mathbf{u}^{n}\), assuming the half-step velocity has been defined as \(\bar{\mathbf{v}}=(\mathbf{u}^{n+1}+\mathbf{u}^{n})/(2\bar{\gamma})\). The iterative solution of the particle equations of motion can then continue as illustrated above. In typical implementations of the (Rel)IMM, this iteration is not carried out until convergence (which is not guaranteed due to the nonlinearly implicit nature of the equations), but rather steps 1-4 above are repeated a fixed number of times to avoid excessive computational costs. ### Summary of the RelIMM One complete time iteration of the RelIMM is composed of the following steps: 1. Gather the source terms \(\rho^{n}\), \(\widehat{\mathbf{J}}\), \(\widehat{\mathbf{\Pi}}\) on the grid from known particle quantities at time step \(n\). 2. Solve eq. (34) for \(\mathbf{E}^{n+\theta}\) using any preferred linear solver. 3. Update the position and 4-velocity of all particles iteratively by solving the coupled system (10)-(11) with the preferred definition of \(\bar{\gamma}\) and using \(\mathbf{E}^{n+\theta}\) and \(\mathbf{B}^{n}\). 4. Finalize the field solution on the grid by computing \(\mathbf{E}^{n+1}\) and \(\mathbf{B}^{n+1}\). Although this formulation of the RelIMM is rather simple as it only involves a linear solve for the grid quantities, it also presents several drawbacks and approximations: * To interpolate source quantities from particles to grid, the interpolation function is expanded around \(\mathbf{x}_{p}^{n+1/2}\), since \(\mathbf{x}_{p}^{n+1/2}\) is not known when gathering the sources (see eq. (21)). This results in the need to calculate an additional source quantity \(\widehat{\mathbf{\Pi}}\). * In the definition of the rotation matrix \(\mathbf{\alpha}\), \(\bar{\gamma}\) is approximated to an expression (which depends on the chosen pusher) such that it can be evaluated using known field and particle quantities at the previous time step (see e.g. eq. (26)). * Crucially, to make the dependence of \(\mathbf{J}^{n+1/2}\) on \(\mathbf{E}^{n+\theta}\) linear, it is assumed that the particle-grid interpolation functions are zeroth-order b-splines (see eq. (32)), which is rarely the case due to excessive numerical noise introduced by low-order interpolation. * Since the particle position and 4-velocity are synchronized in time, the particle-push step involves an iteration that needs to be carried out until convergence (in principle, although in practice a fixed number of iterations are realized instead). The performance of the standard RelIMM, even compared to an explicit relativistic PIC method, is severely affected by the shortcomings listed above. 
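As an illustration of the fixed-iteration push outlined in steps 1-4 of Section 2.2, the following sketch loops over those steps a prescribed number of times. Here `interp_E`, `interp_B`, and `push_velocity` are hypothetical callables standing in for the grid interpolation and the chosen pusher; they are not functions of any particular code, and the Lapenta-Markidis average is used for the half-step velocity only as one possible choice:

```python
import numpy as np

def relimm_particle_push(x_n, u_n, interp_E, interp_B, push_velocity,
                         dt, n_iter=3, c=1.0):
    """Sketch of the fixed-iteration RelIMM particle update (steps 1-4).
    interp_E/interp_B: callables returning fields at a given position;
    push_velocity: callable implementing the chosen pusher, returning
    u^{n+1} from (u^n, E, B). All names here are placeholders."""
    gamma_n = np.sqrt(1.0 + np.dot(u_n, u_n) / c**2)
    v_bar = u_n / gamma_n                      # initial guess for the half-step velocity
    for _ in range(n_iter):                    # fixed number of iterations, not to convergence
        x_half = x_n + 0.5 * dt * v_bar        # step 1: half-step position
        E = interp_E(x_half)                   # step 2: interpolate fields
        B = interp_B(x_half)
        u_np1 = push_velocity(u_n, E, B)       # steps 3-4: gamma-bar and new 4-velocity
        gamma_np1 = np.sqrt(1.0 + np.dot(u_np1, u_np1) / c**2)
        # half-step velocity with the Lapenta-Markidis gamma-bar of eq. (14)
        v_bar = (u_np1 + u_n) / (gamma_np1 + gamma_n)
    return x_n + dt * v_bar, u_np1
```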
As we will show, the approximations introduced result in large errors in the total energy when employing coarse grid resolutions. Since reducing the number of cells in relativistic PIC simulations is in principle the main advantage of implicit methods, this particular point renders the standard RelIMM rather unattractive. In the next Section, we present a new method that eliminates many of the drawbacks affecting the RelIMM. ## 3 The New RelSIM Formulation Here, we present a new Relativistic Semi-Implicit Method (RelSIM) for PIC that substantially improves over the standard RelIMM. Our approach extends the nonrelativistic ECSIM method (Lapenta, 2017) to relativistic regimes, and is free of many of the drawbacks affecting the RelIMM. The new method necessarily sacrifices exact energy conservation in order to discard nonlinear iterations, but energy errors are much smaller than those observed when applying the RelIMM (see Section 4). We construct the new RelSIM method starting with the same \(\theta\)-scheme employed in the IMM for the discretized Maxwell's equations, \[\frac{\mathbf{E}_{g}^{n+\theta}-\mathbf{E}_{g}^{n}}{\theta\Delta t}=c\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{g}^{n+\theta}-4\pi\mathbf{J}_{g}^{n+1/2}, \tag{38}\] \[\frac{\mathbf{B}_{g}^{n+\theta}-\mathbf{B}_{g}^{n}}{\theta\Delta t}=-c\mathbf{\nabla}\mathbf{\times}\mathbf{E}_{g}^{n+\theta}, \tag{39}\] and a slightly modified particle pusher, \[\frac{\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{p}^{n-1/2}}{\Delta t}=\frac{\mathbf{u}_{p}^{n}}{\gamma_{p}^{n}}, \tag{40}\] \[\frac{\mathbf{u}_{p}^{n+1}-\mathbf{u}_{p}^{n}}{\Delta t}=\frac{q_{p}}{m_{p}}\left(\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})+\frac{\bar{\mathbf{v}}_{p}}{c}\mathbf{\times}\mathbf{B}^{n}(\mathbf{x}_{p}^{n+1/2})\right), \tag{41}\] where the position update is now staggered with respect to the velocity and can be carried out separately from the velocity update. Like for the RelIMM, the choice of \(\bar{\mathbf{v}}\) is free but we assume it to have the form \(\bar{\mathbf{v}}=(\mathbf{u}^{n+1}+\mathbf{u}^{n})/(2\bar{\gamma})\). The velocity update is then performed by choosing \(\bar{\gamma}\) according to the form given by available particle pushers (Boris, Lapenta-Markidis, etc.). ### RelSIM: Field Solver Next, we derive the field solver of the new RelSIM. We follow the same steps presented in Section 2.1 to recast Maxwell's equations into the form \[\begin{split}\mathbf{E}_{g}^{n+\theta}&+(c\theta\Delta t)^{2}\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\mathbf{E}_{g}^{n+\theta}\\ &=\mathbf{E}_{g}^{n}+c\theta\Delta t\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{g}^{n}-4\pi\theta\Delta t\mathbf{J}_{g}^{n+1/2},\end{split} \tag{42}\] but we now avoid the expansion of the \(\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\) term4 that was performed for eq. (17). Next, we employ the expression for the half-step current, Footnote 4: This expansion can be in principle still carried out, but the following substitution \(\mathbf{\nabla}\mathbf{\cdot}\mathbf{E}=4\pi\rho\) (which is operated for the RelIMM) would result in a loss of energy conservation. \[\mathbf{J}_{g}^{n+1/2}=\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\bar{\mathbf{v}}_{p}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g}), \tag{43}\] which can now be collected from the particles at position \(\mathbf{x}^{n+1/2}\), computed from the known velocity \(\mathbf{u}^{n}\). Here we can directly substitute the (approximate) solution of eq.
(41) for \(\bar{\mathbf{v}}\) and separate out to find \[\begin{split}\mathbf{J}_{g}^{n+1/2}&\simeq\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\mathbf{\alpha}_{p}\mathbf{u}_{p}^{n}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g})\\ &+\frac{\Delta t}{2\Delta V_{g}}\sum_{p}\frac{q_{p}^{2}}{m_{p}}\mathbf{\alpha}_{p}\mathbf{E}_{p}^{n+\theta}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g}),\end{split} \tag{44}\] where \(\mathbf{\alpha}\) is again given by \[\mathbf{\alpha}_{p}=\frac{1}{\Gamma_{p}(1+\beta_{p}^{2})}\left[\mathbb{I}-\mathbb{I}\mathbf{\times}\mathbf{\beta}_{p}/\Gamma_{p}+\mathbf{\beta}_{p}\mathbf{\beta}_{p}/\Gamma_{p}^{2}\right], \tag{45}\] and \(\mathbf{\beta}_{p}=q_{p}\Delta t\mathbf{B}_{p}^{n}/(2m_{p}c)\), \(\mathbf{E}_{p}^{n+\theta}=\mathbf{E}^{n+\theta}(\mathbf{x}_{p}^{n+1/2})\), \(\mathbf{B}_{p}^{n}=\mathbf{B}^{n}(\mathbf{x}_{p}^{n+1/2})\). Through \(\mathbf{\alpha}\) we have introduced the first (and only) approximation needed to construct our new method, i.e. the assumption \(\bar{\gamma}\simeq\Gamma\) with \(\Gamma\) defined according to the chosen particle pusher (see Section 2). Now, differently from the IMM approach, we bring the unknown electric field out of the summation over \(p\) in a manner that does not introduce any further approximations, i.e. \[\begin{split}\sum_{p}&\frac{q_{p}^{2}}{m_{p}}\mathbf{\alpha}_{p}\left(\sum_{g^{\prime}}\mathbf{E}_{g^{\prime}}^{n+\theta}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g^{\prime}})\right)W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g})\\ &=\sum_{g^{\prime}}\mathbf{M}_{gg^{\prime}}\mathbf{E}_{g^{\prime}}^{n+\theta},\end{split} \tag{46}\] where \[\mathbf{M}_{gg^{\prime}}=\sum_{p}\frac{q_{p}^{2}}{m_{p}}\mathbf{\alpha}_{p}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g^{\prime}})W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g}) \tag{47}\] is the _mass matrix_ first introduced by Lapenta (2017) for the original nonrelativistic ECSIM. The difference here is the presence of the relativistic Lorentz factor \(\Gamma\) in the definition of \(\mathbf{\alpha}\) above. Comparing to the RelIMM (eq. (32)), we observe that here we are not introducing any assumption on the interpolation functions \(W\). For the common choice of first-order b-splines, eq. (46) requires the calculation of 9 mass matrices (when the electric field has 3 components) per grid point. Inserting the equation above into the definition of the current, we obtain a final expression for the field equation of the RelSIM, \[\begin{split}\mathbf{E}_{g}^{n+\theta}&+(c\theta\Delta t)^{2}\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\mathbf{E}_{g}^{n+\theta}+\sum_{g^{\prime}}\mathbf{\mu}_{gg^{\prime}}\mathbf{E}_{g^{\prime}}^{n+\theta}\\ &=\mathbf{E}_{g}^{n}+c\theta\Delta t\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{g}^{n}-4\pi\theta\Delta t\widehat{\mathbf{J}}_{g},\end{split} \tag{48}\] where \[\mathbf{\mu}_{gg^{\prime}}=\frac{2\pi\theta\Delta t^{2}}{\Delta V_{g}}\mathbf{M}_{gg^{\prime}}, \tag{49}\] \[\widehat{\mathbf{J}}_{g}=\frac{1}{\Delta V_{g}}\sum_{p}q_{p}\mathbf{\alpha}_{p}\mathbf{u}_{p}^{n}W(\mathbf{x}_{p}^{n+1/2}-\mathbf{x}_{g}). \tag{50}\] Eq. (48) is linear in \(\mathbf{E}^{n+\theta}\) and can be handled with a linear solver. Like in the case of the RelIMM, \(\mathbf{B}^{n}\) can be updated once the electric field is known. ### RelSIM: Particle Push The particle update is easier for the RelSIM than for the RelIMM, since the former involves no iteration (see Section 2.2). Because particle positions and velocities are staggered in time, here we need to update the position via eq. (40) before the field solve.
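To make the mass-matrix accumulation of eq. (47) concrete, the following is a minimal one-dimensional sketch assuming first-order b-spline (linear) weights on a periodic grid and a scalar \(\mathbf{\alpha}\) (unmagnetized limit); in general each entry is a \(3\times 3\) block, and this is an illustration rather than the implementation of any specific code:

```python
import numpy as np

def mass_matrices_1d(x_p, q_p, m_p, alpha_p, grid_x0, dx, n_cells):
    """Sketch of the mass-matrix accumulation of eq. (47) for a 1D periodic
    grid with first-order b-spline (linear) weights. alpha_p is treated as a
    scalar per particle (unmagnetized limit); in general it is a 3x3 matrix."""
    M = np.zeros((n_cells, n_cells))
    for x, q, m, alpha in zip(x_p, q_p, m_p, alpha_p):
        s = (x - grid_x0) / dx
        g = int(np.floor(s))                       # left grid node
        w_right = s - g                            # linear interpolation weights
        weights = {g % n_cells: 1.0 - w_right, (g + 1) % n_cells: w_right}
        for gi, wi in weights.items():             # accumulate q^2/m * alpha * W_g' * W_g
            for gj, wj in weights.items():
                M[gi, gj] += q**2 / m * alpha * wi * wj
    return M
```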
Once \(\mathbf{E}^{n+\theta}\) is known on the grid, the velocity update (41) can be carried out according to the preferred relativistic particle pusher. ### Summary of the RelSIM One complete time iteration of the RelSIM is composed of the following steps: 1. Update the position of all particles by solving eq. (40). 2. Compute the current \(\widehat{\mathbf{J}}\) and mass matrices \(\mathbf{M}\) on the grid from known particle quantities (\(\mathbf{x}^{n+1/2}\) and \(\mathbf{u}^{n}\)). 3. Solve eq. (48) for \(\mathbf{E}^{n+\theta}\) using any preferred linear solver. 4. Update the 4-velocity of all particles by solving the momentum equation (41) with the preferred definition of \(\bar{\gamma}\) and using \(\mathbf{E}^{n+\theta}\) and \(\mathbf{B}^{n}\). 5. Finalize the field solution on the grid by computing \(\mathbf{E}^{n+1}\) and \(\mathbf{B}^{n+1}\). In contrast with the RelIMM, the RelSIM does not need any nonlinear iterations for the particle update and does not require the calculation of the dielectric tensor \(\widehat{\mathbf{\Pi}}\); furthermore, the RelSIM does not rely on any assumptions other than an approximation of the half-step Lorentz factor \(\Gamma\simeq\bar{\gamma}\) needed to linearize the field equations. As a drawback, the RelSIM requires the calculation of the mass matrices, which adds to the complexity of the field solver. However, we will show that this downside is largely compensated by the superior energy-conservation properties of the RelSIM with respect to the original RelIMM. ## 4 Validation Tests In this Section, we perform several test simulations to assess the numerical performance of the new RelSIM. In general, we compare the results obtained with the RelSIM to those obtained with the RelIMM as well as with a standard explicit-PIC code. For the explicit runs, we employ Zeltron, which is a state-of-the-art tool utilized for many production applications in relativistic astrophysics (e.g. Cerutti et al., 2013; Zhdankin et al., 2017; Werner et al., 2018; Parfrey et al., 2019; Mehlhaff et al., 2021; Bacchini et al., 2022; Galishnikova et al., 2023). Zeltron uses a standard explicit leapfrog discretization for particle and field equations and a numerical grid based on a Yee lattice. The RelIMM and RelSIM are implemented in the basic framework employed by iPic3D and ECSIM, i.e. a grid with colocated electromagnetic fields (see e.g. Markidis et al., 2010) and the discretization discussed in the previous Sections for field and particle equations. To solve the linear problem for the electric-field update, we employ a Jacobian-free Newton-Krylov iterative solver. ### Beam Instabilities in 1D As a first test, we consider the textbook case employed as a sanity check for every PIC code, i.e. a one-dimensional beam instability. In a 1D periodic domain \(x\in[0,L]\) we initialize two counterpropagating neutral beams of electron-positron plasma (i.e. \(m_{i}=m_{e}\)). Particle velocities are drawn from a relativistic Maxwell-Juttner distribution with mean Lorentz factor \(\gamma_{0}=1/\sqrt{1-v_{0}^{2}/c^{2}}=2\) and a thermal spread \(\Theta_{0}=kT_{0}/(m_{e}c^{2})=0.001\). We consider both the simple electrostatic case in which the beam drift direction is along \(x\) (i.e. a two-stream instability or TSI) and the electromagnetic case where the beams propagate perpendicularly to \(x\) (i.e. a filamentation instability or FI). 
These instabilities have maximum growth rates \(\Gamma^{\rm TSI}/\omega_{\rm p,b}=1/(2\gamma_{0}^{3/2})\) and \(\Gamma^{\rm FI}/\omega_{\rm p,b}=(v_{0}/c)\sqrt{2/\gamma_{0}}\) (where \(\omega_{\rm p,b}\) is the plasma frequency calculated with the density of a single beam; see e.g. Bret et al., 2010) respectively. These two classical tests are useful to assess the basic properties of standard PIC schemes, and we simulate both with an explicit method as well as with the RelIMM and the RelSIM. In Fig. 1, we show the results for both test cases. Figure 1: Paradigmatic one-dimensional test cases for PIC methods, simulating the interaction of two counterpropagating electron-positron plasma beams with initial mean Lorentz factor \(\gamma_{0}=2\). Left column: electric-energy evolution (top) and relative error on the total energy (bottom) for the electrostatic two-stream instability (where the beam velocity is along \(x\)). Right column: magnetic-energy evolution (top) and relative error on the total energy (bottom) for the electromagnetic filamentation instability (where the beam velocity is perpendicular to \(x\)). For both instabilities, the system’s evolution follows the theoretical growth rate relatively well (before reaching a statistically similar nonlinear state) when employing an explicit-PIC method, the standard RelIMM from Noguchi et al. 2007, and the new RelSIM presented here. Errors in the energy are always much smaller for the new RelSIM with respect to the other two methods. For the TSI, the numerical domain is of size \(L=32c/\omega_{\rm p,b}\) divided into 64 cells, with 156 particles per cell for each species (electrons and positrons). The Courant-Friedrichs-Lewy (CFL) ratio is kept such that \(c\Delta t/\Delta x=0.25\). The evolution of the electric energy is shown in the top-left panel of Fig. 1, for the explicit-PIC case, the RelIMM, and the RelSIM. The reference theoretical growth rate is also shown for comparison. We observe that the electrostatic instability is captured well by all methods during the linear stage. The nonlinear stage shows (expected) differences between the methods, but an overall agreement in the saturation level of the electric energy. The bottom-left panel of the same Figure shows the evolution in time of the relative error on the total energy of the system, which should be conserved exactly in principle. We immediately notice that while the explicit approach and the new RelSIM keep energy errors well controlled, the RelIMM introduces much larger deviations, up to 10% of the total energy. For the electromagnetic FI, the domain is of size \(L=12.8c/\omega_{\rm p,b}\) and we employ a grid with 256 cells and 20 particles per cell per species. The CFL ratio is such that \(c\Delta t/\Delta x=0.5\). The top-right panel of Fig. 1 shows the evolution of the magnetic energy, again for all methods, compared with the theoretical linear growth rate. We observe that all runs capture the linear stage relatively well, albeit with small deviations from theoretical expectations (potentially due to the low resolution employed). The saturation level during the nonlinear stage is again similar for all methods; however, it is interesting to notice that in this case the explicit method displays the largest energy errors, shown in the bottom-right panel of the same Figure. The RelIMM shows errors similar to the explicit method by the end of the run, while the RelSIM keeps energy errors roughly two orders of magnitude lower. For the two very simple test cases considered, we conclude that all methods perform relatively well (as expected), at least in terms of capturing the linear stage of the instability.
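For reference, the theoretical growth rates quoted above can be evaluated in a few lines for the beam parameters used here; a small sketch, with \(\gamma_{0}=2\) as in the runs above:

```python
import numpy as np

# Quick sketch: evaluate the maximum growth rates quoted above for the
# beam parameters used in these tests (gamma_0 = 2, i.e. v_0/c ~ 0.866).
gamma0 = 2.0
v0_over_c = np.sqrt(1.0 - 1.0 / gamma0**2)

growth_tsi = 1.0 / (2.0 * gamma0**1.5)          # Gamma_TSI / omega_pb
growth_fi = v0_over_c * np.sqrt(2.0 / gamma0)   # Gamma_FI / omega_pb
print(f"TSI: {growth_tsi:.3f}, FI: {growth_fi:.3f}")  # ~0.177 and ~0.866
```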
The RelSIM in particular distinguishes itself by introducing smaller errors on the total energy with respect to both the RelIMM and explicit methods, in all cases. This is an important property for numerical methods in general, but we will show in the next Sections that it can actually prove fundamental in physical cases of interest. ### Ion-electron Shock in 1D As a second test, we consider a one-dimensional shock problem in an ion-electron plasma with large mass ratio \(m_{i}/m_{e}\gg 1\), which is relevant for e.g. particle acceleration at supernova remnants, plasma-expansion experiments, solar flares, and the solar wind (Dieckmann et al., 2010; Jones, 2011; Park et al., 2013; Liseykina et al., 2015; Caprioli et al., 2018). This test case is a representative example of a fully kinetic system where the global dynamics is almost entirely driven by the ions and occurs over ion-related length and time scales. Electrons mostly act as a background fluid, with little to no effect on the overall system evolution. Such a problem is challenging for explicit codes, which need to resolve all scales down to the electron skin depth and plasma frequency, whereas implicit codes in principle allow for underresolving electron scales. To set up a simplified shock problem, we initialize two plasma beams of uniform density \(n_{0}=n_{0,i}=n_{0,e}\) traveling in a uniform background magnetic field \(\mathbf{B}_{0}=(0,0,B_{0})\) such that the ion magnetization \(\sigma_{0,i}=B_{0}^{2}/(4\pi n_{0}m_{i}c^{2})=0.01\) everywhere. Ions and electrons in the beams have equal temperature \(T_{0}\) such that the initial thermal spread \(\Theta_{0,i}=\Theta_{0,e}/(m_{i}/m_{e})=kT_{0}/(m_{i}c^{2})=10^{-6}\). The beams are initialized in a domain \(x\in[0,L]\) with initial mean velocity \(\mathbf{v}_{0}=(\pm v_{0},0,0)\) (with a \(+\) sign if \(x<L/2\) and a \(-\) sign otherwise) where \(v_{0}/c=0.1\). The domain size here is \(L=20\tilde{\rho}_{\mathrm{C},i}\), where \(\tilde{\rho}_{\mathrm{C},i}\equiv m_{i}cv_{0}/(qB_{0})\) (note that with the chosen \(\sigma_{0,i}=0.01\), \(\tilde{\rho}_{\mathrm{C},i}=c/\omega_{\mathrm{p},i}\)). Finally, the initial electric field is set equal to \(-\mathbf{v}_{0}\mathbf{\times}\mathbf{B}_{0}\). With this setup, two shock waves are created at \(t=0\) at the domain center; both shocks then travel toward the boundaries with speed \(\sim v_{0}\). Although the initial state is nonperiodic in nature, we employ periodic boundary conditions for simplicity, assuming that spurious boundary effects do not drastically influence the solution until the shocks reach \(x=0\) or \(x=L\) around \(t\simeq 100\omega_{\mathrm{p},i}^{-1}\). For this reason, we halt the simulation at \(t=85\omega_{\mathrm{p},i}^{-1}\), before boundary effects come into play. We simulate our simple shock setup with an explicit method as well as with the RelIMM and the new RelSIM. In the top panel of Fig. 2, we show a representative snapshot of both ions and electrons in the \(x-v_{x}\) phase space at \(t=84\omega_{\mathrm{p},i}^{-1}\). The shock fronts are visible at \(x\simeq L/2\pm 9c/\omega_{\mathrm{p},i}\).
The ions (in red) show signatures of multiple reflections in the shock downstream, and a generally complex distribution in phase space as a result of the shock propagation. Electrons (in purple), instead, are predominantly behaving as a thermal background. We first test the convergence of the RelSIM with respect to the numerical resolution. Fixing \(c\Delta t/\Delta x=0.7\), we employ a mass ratio \(m_{i}/m_{e}=100\) and vary the grid spacing \(\Delta x/(c/\omega_{\mathrm{p},i})=1,0.1,0.01\). In terms of electron scales, this corresponds to \(\Delta x/(c/\omega_{\mathrm{p},e})=10,1,0.1\), i.e. the electron skin depth goes from dramatically underresolved to relatively well resolved. In all runs we initialize 100 particles per cell per species. In the bottom-left panel of Fig. 2, we show the results of this test in terms of the evolution of total magnetic and ion energy. We observe that the results converge when the ion scales are well resolved by the computational grid, i.e. further increasing the resolution beyond \(\Delta x/(c/\omega_{\mathrm{p},i})=0.1\) does not dramatically alter the evolution of the system. A substantial difference arises when ion scales are only marginally resolved (\(\Delta x/(c/\omega_{\mathrm{p},i})=1\)) which is unsurprising, given that the characteristic length scales of the problem are not well captured. As a second test, we consider the exact same initial conditions but with a realistic mass ratio \(m_{i}/m_{e}=1836\). In this case, electron and ion scales are even more separated and our reference resolutions \(\Delta x/(c/\omega_{\mathrm{p},i})=1,0.1,0.01\) are such that \(\Delta x/(c/\omega_{\mathrm{p},e})\simeq 42,4.2,0.42\) in terms of electron scales. Moreover, from our choice \(c\Delta t/\Delta x=0.7\) it follows that \(\Delta t/\omega_{\mathrm{p},e}^{-1}\simeq 29,2.9,0.29\), i.e. both electron length and time scales are largely underresolved in two out of three runs. The results for the case \(\Delta x/(c/\omega_{\mathrm{p},i})=0.1\) (i.e. \(\Delta x/(c/\omega_{\mathrm{p},e})\simeq 4.2\)) are shown in the bottom-right panel of Fig. 2 in terms of the evolution of the total energy of the system, which should be exactly conserved, for the explicit run and for the RelIMM and RelSIM cases. For the explicit method, we observe that errors in the total energy rapidly increase right from the start of the simulation. We find that this energy error precisely corresponds to an unphysical increase in electron energy (not shown) that occurs when electron scales are underresolved in the explicit run. The RelIMM and RelSIM runs behave much better, introducing much smaller errors; in particular, the RelSIM displays the smallest errors out of the three methods. The same qualitative conclusion applies to all runs conducted here, including those not shown in Fig. 2. These results show that the new RelSIM is superior to the standard explicit PIC when underresolving electron scales. This is rather unsurprising, given that the implicit approach eliminates stability constraints affecting explicit methods; however, the RelSIM also produces better results than those obtained with the original RelIMM, by introducing smaller energy errors in all cases. An additional interesting feature is that the RelSIM retains stability even in demanding cases in which ion scales are marginally resolved or underresolved. As shown in Fig.
2, when ion scales are not accurately captured the dynamics of the system at those scales is approximated, but not completely lost; underresolving ion scales does not result in a loss of stability of the method, i.e. the underresolved simulation is not disrupted by numerical artifacts growing unboundedly. Figure 2: One-dimensional ion-electron shock test. Two ion-electron beams propagating in a uniform out-of-plane magnetic field with opposite initial velocities collide along the \(x\)-direction and create outgoing shock waves. Top panel: Ions display a complex dynamics with multiple reflections at the shock fronts, while electrons mostly act as a thermal background. Bottom-left panel: The evolution in time of magnetic and ion energy shows that the new RelSIM produces converged results when the ion scales are well resolved, even if electron scales are underresolved. Bottom-right panel: The evolution of the total energy during the simulation shows that explicit PIC methods rapidly introduce large errors when electron scales are underresolved. Implicit methods remain stable, but the standard RelIMM performs much worse than the new RelSIM in terms of energy conservation. ### Relativistic Reconnection in 2D In this Section we consider a two-dimensional setup for magnetic reconnection as a test for PIC algorithms in multiple dimensions. Reconnection is a ubiquitous process in the Universe, conjectured to play a major role in many high-energy astrophysical environments as well as in solar, heliospheric, and fusion plasmas (see e.g. Hoshino & Lyubarsky, 2012 for a review on relativistic reconnection). We initialize two relativistic Harris sheets (e.g. Harris, 1962; Melzani et al., 2013) in a double-periodic domain \((x,y)\in[L_{x}\times L_{y}]\) where an upstream, magnetized ion-electron thermal plasma flows into a region of magnetic-polarity inversion and experiences acceleration as reconnection dissipates magnetic energy. To set up the initial Harris equilibrium we first impose the upstream conditions in terms of the background ion thermal spread \(\Theta_{0,i}=kT_{0}/m_{i}c^{2}\) and ion magnetization \(\sigma_{0,i}=B_{0}^{2}/(4\pi n_{0}m_{i}c^{2})\) where we have assumed that \(n_{0}=n_{0,i}=n_{0,e}\) and \(T_{0}=T_{0,i}=T_{0,e}\). The ion temperature and magnetization are free parameters, while the background electron thermal spread follows from \(\Theta_{0,e}=\Theta_{0,i}(m_{i}/m_{e})\). Then, the plasma conditions inside the current sheets can be calculated from the initial magnetic-field profile, \[B_{x}(y)=\begin{cases}-B_{0}\tanh\left(\frac{y-L_{y}/4}{\delta}\right)&\text{if}\quad y<L_{y}/2\\ B_{0}\tanh\left(\frac{y-3L_{y}/4}{\delta}\right)&\text{if}\quad y>L_{y}/2\end{cases}, \tag{51}\] where \(\delta\) is the current-sheet half-thickness (a free parameter of the setup). By imposing \(c\boldsymbol{\nabla}\times\boldsymbol{B}=4\pi\boldsymbol{J}\) and pressure balance across a current sheet, we can find the plasma drift velocity \(\boldsymbol{v}_{\text{CS}}=(0,0,\pm v_{\text{CS}})\) (equal and opposite for the two species) and temperature at the current-sheet center (e.g. Melzani et al., 2013), \[\frac{v_{\text{CS}}\Gamma_{\text{CS}}}{c}=\frac{B_{0}}{8\pi q\alpha n_{0}\delta}, \tag{52}\] \[\Theta_{\text{CS},i}=\Theta_{\text{CS},e}/(m_{i}/m_{e})=\frac{B_{0}^{2}\Gamma_{\text{CS}}}{16\pi\alpha n_{0}m_{i}c^{2}}=\frac{\sigma_{0,i}\Gamma_{\text{CS}}}{2}, \tag{53}\] where \(q=|q_{i}|=|q_{e}|\) and \(\alpha\) is the ratio of plasma density between current-sheet center and upstream.
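Equations (52)-(53) translate directly into a small routine. The sketch below, in Gaussian-CGS units with purely illustrative inputs, solves eq. (52) for the drift velocity via the spatial 4-velocity and returns the current-sheet temperature of eq. (53); it is not taken from any particular code:

```python
import numpy as np

def harris_sheet_parameters(B0, n0, delta, alpha, q, m_i, c=2.998e10):
    """Sketch of eqs. (52)-(53): given upstream field B0, density n0,
    half-thickness delta, overdensity alpha, and charge q (Gaussian-CGS),
    return the current-sheet drift velocity, Lorentz factor, and ion
    thermal spread. Illustrative only; values are not from the paper."""
    u_over_c = B0 / (8.0 * np.pi * q * alpha * n0 * delta)   # = v_CS*Gamma_CS/c, eq. (52)
    gamma_cs = np.sqrt(1.0 + u_over_c**2)
    v_cs = c * u_over_c / gamma_cs
    if v_cs >= c:  # cannot happen with the 4-velocity form above; kept as a sanity check
        raise ValueError("alpha and delta violate the Harris equilibrium (v_CS >= c)")
    sigma_0i = B0**2 / (4.0 * np.pi * n0 * m_i * c**2)
    theta_cs_i = 0.5 * sigma_0i * gamma_cs                   # eq. (53)
    return v_cs, gamma_cs, theta_cs_i
```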
The drift motion determines a Lorentz factor \(\Gamma_{\text{CS}}=1/\sqrt{1-v_{\text{CS}}^{2}/c^{2}}\) of the drifting plasma inside the current sheet. The overdensity ratio \(\alpha\) is a free parameter like \(\delta\), but note that \(\alpha\) and \(\delta\) must be chosen5 such that \(v_{\text{CS}}<c\). Footnote 5: Choosing \(\alpha\) and \(\delta\) such that \(v_{\text{CS}}>c\) is equivalent to choosing parameters for which the Harris equilibrium cannot be satisfied. As a first test, we consider the simple pair-plasma case, \(m_{i}=m_{e}\), and impose an upstream magnetization \(\sigma_{0}=\sigma_{0,i}=\sigma_{0,e}=10\) (calculated with both species) and temperature \(\Theta_{0}=\Theta_{0,i}=\Theta_{0,e}=0.01\). We consider a domain of size \(L_{x}=L_{y}/2=51.2c/\omega_{\text{p}}\), where \(\omega_{\text{p}}\) includes the density of both species combined. The current-sheet half-thickness \(\delta/(c/\omega_{\text{p}})=1\) and the overdensity ratio \(\alpha=5\). We perform a set of simulations where we progressively decrease the grid spacing \(\Delta x/(c/\omega_{\text{p}})=4,2,1,0.5\) keeping \(c\Delta t/\Delta x=1/\sqrt{2}\) fixed. In all cases we initialize 64 particles per cell per species, and we do not perturb the initial equilibrium, such that the onset of the tearing instability leading to reconnection is only determined by numerical noise. We employ a high-resolution simulation with \(\Delta x/(c/\omega_{\text{p}})=0.05\) as a reference result. A representative snapshot of the reference solution when reconnection is fully developed is shown in Fig. 3: the out-of-plane current-density distribution during the nonlinear stage of the simulation (left panel) features the typical "plasmoids" created by the fragmentation of the initial current sheets. The evolution of the system's energetics (right panel) is such that magnetic energy is depleted in favor of kinetic energy, before reaching a statistical steady state. Our system size is relatively small, such that the reconnection process is halted within a few hundred plasma periods. Figure 3: Representative pair-plasma reconnection simulation at high numerical resolution. Left: Spatial distribution of the out-of-plane current density during the nonlinear stage of the evolution, showing the typical “plasmoid” structures originating from the fragmentation of the initial current sheets. Right: Typical evolution of the system’s energetics, where magnetic energy (in blue) is converted into kinetic energy (in red) while the total energy (in black) is conserved. Fig. 4 shows the results, for the resolutions indicated above, of the pair-plasma simulation run with the explicit method (left column), the RelIMM (middle column), and the RelSIM (right column) in terms of the total (top row), magnetic (middle row), and kinetic (bottom row) energy over time. For the explicit method, we observe how numerical errors rapidly destroy the solution when \(\Delta x>c/\omega_{\rm p}\) (note that the case \(\Delta x=4c/\omega_{\rm p}\) immediately fails after a few time steps and is therefore not shown). Both implicit methods, instead, remain stable for all resolutions considered. As the number of grid points decreases the solution becomes less accurate, and particles and fields exchange less and less energy.
It is however interesting to note that the RelSIM already provides relatively well-converged results for \(\Delta x\leq 2c/\omega_{\rm p}\), at least in terms of the rate of depletion of magnetic energy (and the corresponding increase in kinetic energy). Conversely, the RelIMM still shows larger differences in the solutions between \(\Delta x\leq c/\omega_{\rm p}\) and \(\Delta x\leq 0.5c/\omega_{\rm p}\). The evolution of the total energy for the two implicit methods appears tightly linked with how well those methods converge: the RelSIM systematically shows much lower energy errors, and converges faster, than the RelIMM, suggesting that energy conservation is of primary importance to produce qualitatively accurate results even at low resolutions. As a second test case, we consider an ion-electron plasma with realistic mass ratio \(m_{i}/m_{e}=1836\). We initialize the system similarly to the pair-plasma case, but with important differences. The upstream ion magnetization and temperature are \(\sigma_{0,i}=10\) and \(\Theta_{0,i}=0.01\), and by choosing \(T_{0,e}=T_{0,i}\) this implies \(\Theta_{0,e}=18.36\), i.e. upstream electrons are relativistically hot (with mean Lorentz factor \(\gamma_{0,e}\simeq 55\)). The domain size is \(L_{x}=L_{y}/2=51.2c/\omega_{\rm p,i}\), and we choose \(\delta/(c/\omega_{\rm p,i})=1\) and \(\alpha=5\). Because \(\gamma_{0,e}\gg 1\), this setup is representative of the so-called "semirelativistic" reconnection regime, relevant for e.g. accreting black-hole coronae and blazar jets (e.g. Rowan et al., 2017; Ball et al., 2018; Werner et al., 2018; Kilian et al., 2019). Figure 4: Simulations of pair-plasma reconnection at different resolutions with an explicit PIC method (left column), the RelIMM (middle column), and the RelSIM (right column). For an array of grid spacings \(\Delta x/(c/\omega_{\rm p})=4,2,1,0.5\) with 64 particles per cell per species (electrons and positrons), we show the evolution of the total (top row), magnetic (middle row), and kinetic (bottom row) energy. Note that when \(\Delta x/(c/\omega_{\rm p})=4\) the explicit simulation fails after only a few time steps, hence the corresponding lines are not shown in the left column. 2020). With respect to the pair-plasma case, ion and electron spatiotemporal scales are now separated, but less than they would be in a completely nonrelativistic scenario: the relativistic electron skin depth is indeed \(c/\omega_{\mathrm{p,e}}^{\mathrm{r}}=c\sqrt{\gamma_{0,\mathrm{e}}}/\omega_{\mathrm{ p,e}}\simeq 0.17c/\omega_{\mathrm{p,i}}\), i.e. a factor \(\sqrt{\gamma_{0,\mathrm{e}}}\) larger than the corresponding nonrelativistic counterpart. Since relativistic explicit PIC codes must resolve \(c/\omega_{\mathrm{p,e}}^{\mathrm{r}}\) on the numerical grid, the presence of relativistic electrons helps relaxing the stability criterion for explicit simulations. In our test, we employ numerical resolutions \(\Delta x/(c/\omega_{\mathrm{p,i}})=4,2,1,0.5\) corresponding to \(\Delta x/(c/\omega_{\mathrm{p,e}}^{\mathrm{r}})\simeq 23.1,11.6,5.8,2.9\); we also keep \(c\Delta t/\Delta x=1/\sqrt{2}\) fixed. As a result, in all cases the electron spatial and temporal scales are underresolved. Fig. 5 shows the same quantities as Fig. 4, now for the ion-electron case: the evolution of total, magnetic, and total kinetic energy (i.e. of ions and electrons combined) for our array of simulations using the explicit method, the RelIMM, and the RelSIM. 
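As a side note on the resolution bookkeeping quoted above, the ratios \(\Delta x/(c/\omega_{\mathrm{p,e}}^{\mathrm{r}})\) follow from the mass ratio and the mean electron Lorentz factor stated in the text; a two-line sketch (values as quoted, everything else illustrative):

```python
import numpy as np

mass_ratio, gamma_0e = 1836.0, 55.0
d_e_rel = np.sqrt(gamma_0e / mass_ratio)   # relativistic electron skin depth in units of c/omega_pi
for dx in (4.0, 2.0, 1.0, 0.5):            # grid spacings in units of c/omega_pi
    print(f"dx = {dx:>3} c/w_pi  ->  dx / (c/w_pe^r) = {dx / d_e_rel:.1f}")
# -> 23.1, 11.6, 5.8, 2.9, matching the values quoted in the text
```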
The results are very similar to the pair-plasma case: while the explicit runs either quickly fail or display large numerical errors, the implicit runs remain stable even when both electron and ion scales are dramatically underresolved. The RelSIM again performs systematically better than the RelIMM, introducing smaller numerical errors and converging much faster to the expected behavior (i.e. the dissipation of magnetic energy corresponding to an increase of kinetic energy). As a last experiment, we consider the effect of different particle pushers on the performance of the RelSIM in the ion-electron case. While the global evolution of the reconnection layers is similar between the \(m_{i}=m_{e}\) and \(m_{i}\gg m_{e}\) cases, the individual species behave differently in the latter scenario and in particular they receive different amounts of energy from the reconnection process (e.g. Werner et al., 2018). In Fig. 6 (top panel), we plot the evolution in time of the ion (in red) and electron (in purple) kinetic energy during the reconnection simulation run with the RelSIM and with resolution \(\Delta x=0.5c/\omega_{\mathrm{p,i}}\). We distinguish between the evolution produced by the Boris and the Lapenta-Markidis pushers. Interestingly, we observe a completely opposite behavior in the two cases: with the Boris pusher, electrons gain more energy than ions, while the reverse occurs with the Lapenta-Markidis pusher. The latter behavior (ions gaining larger amounts of energy), for our choice of relatively low ion magnetization \(\sigma_{0,i}=10\), corresponds to theoretical expectations and earlier numerical experiments (for much larger magnetizations the two species gain approximately the same amount of energy; see Werner et al., 2018). Hence, we conclude that the Lapenta-Markidis pusher produces a more accurate result for this specific physical case. It is also instructive to measure energy conservation in this run, which we show in the bottom panel of Fig. 6. The evolution of the energy error in the two simulations is such that the Lapenta-Markidis pusher produces smaller energy errors at all times, resulting in a \(\sim 3\%\) energy deviation at the end of the run, whereas the Boris pusher reaches errors around 3 times larger and also inverts the behavior of electron and ion energy gain. This result is not particularly surprising, considering that the Lapenta-Markidis pusher intrinsically possesses superior energy-conservation properties, as discussed in Section 2. Our experiments for the simple two-dimensional reconnection setup considered here lead us to conclude that the new RelSIM performs much better than both the explicit method and the original RelIMM, producing physically meaningful results even at very low resolutions where ion and electron time and length scales are dramatically underresolved.

## 5 Discussion and Summary

We have presented a novel Relativistic Semi-Implicit Method (RelSIM) for fully kinetic simulations of astrophysical plasmas. Implicit PIC methods in general possess superior stability and energy-conservation properties with respect to standard explicit methods, but an implicit _relativistic_ PIC method suitable for production runs is currently missing from the panorama of available approaches. We propose the RelSIM, currently implemented in the framework of the ECSim code (Lapenta, 2017), as a production-ready tool for large-scale PIC simulations.
The work presented in this paper can be summarized as follows: * We have reviewed the Relativistic Implicit-Moment Method (RelIMM), originally presented by Noguchi et al. (2007), generalizing it to be compatible with different particle pushers available in literature. In doing so, we have also presented for the first time an explicit solution for the Lapenta-Markidis relativistic pusher (Lapenta & Markidis, 2011), which provides better energy conservation with respect to other standard approaches such as the Boris pusher. * We have constructed the new RelSIM method as a relativistic extension of the Energy-Conserving Semi-Implicit method (ECSIM) first presented in Lapenta (2017). The new method can also be employed with several particle pushers available in literature. To derive the RelSIM, we have introduced one single approximation in the discrete Vlasov-Maxwell system, in contrast with several heavy approximations on which the RelIMM is based. The RelSIM is also free of nonlinear iterations, only requiring a linear solver on the field quantities stored on the grid. * We have thoroughly tested the RelSIM in a number of idealized setups in one and two spatial dimensions, comparing its performance to that of the RelIMM and of a standard explicit-leapfrog PIC method (implemented in the state-of-the-art code Zeltron). For this purpose, we have employed idealized setups for one-dimensional beam instabilities in pair plasmas, a one-dimensional ion-electron shock, and a two-dimensional reconnection setup for pair plasmas and for ion-electron plasmas. In all our experiments, the RelSIM performs distinctively better than both a standard explicit method and the original RelIMM. We have quantified this performance in terms of i) errors in the total energy of the system; ii) stability and convergence of the method when relevant plasma scales (e.g. ion and electron skin depth and plasma period) are dramatically underresolved; and iii) behavior with different particle pushers. In our tests, we found that the RelSIM produces much smaller energy errors than the other methods, and that compared to explicit methods it retains stability even when plasma scales are underresolved by orders of magnitude (as expected). In these underresolved cases, explicit PIC methods rapidly produce unphysical results, while the RelSIM simply approximates the solution capturing the physics correctly up to the resolved scales. In our two-dimensional relativistic reconnection simulation of an ion-electron plasma with realistic mass ratio, we also found that using the Lapenta-Markidis integrator produces more physically realistic results than the standard Boris integrator. First-principles simulations in the collisionless regime are an extremely powerful tool to study relativistic plasmas, but are often limited by numerical constraints imposed by standard explicit methods. We showed here that implicit PIC methods such as the new RelSIM do Figure 5: As in Fig. 4 but for an ion-electron reconnection setup with realistic mass ratio \(m_{i}/m_{e}=1836\). not suffer from these limitations; while more computationally intensive than explicit PIC, the RelSIM compensates by allowing for lower resolutions when it is not necessary to resolve the smallest scales in a certain system. We provided an example by simulating an ion-electron shock case in one dimension, where the scales of the problem are those of ions, and electrons mostly provide a neutralizing background. 
In this case, it is interesting to retain the electron physics since high-energy electrons could in principle interact with ion-scale structures and experience acceleration; but the problem does not intrinsically require the full modeling of electron scales, which explicit methods are bound to resolve. We showed that the RelSIM can indeed completely neglect electron-scale physics while retaining stability, allowing for cheap ion-scale simulations that also include kinetic electrons, in contrast with e.g. hybrid methods. In multidimensional simulations, the gain factor of implicit methods is even larger, because computational time can be saved by reducing the grid resolution along each spatial dimension. We provided an example with a paradigmatic two-dimensional reconnection problem, where the RelSIM retains stability even with a coarse grid resolution that underresolves the largest scales. In such a scenario, it is legitimate to question whether employing a poor numerical resolution makes sense at all, since in doing so the reconnection physics may be lost. In such a case, we envision the application of our method in combination with a nonuniform grid (e.g. Chacon and Chen, 2016; Croonen et al., 2023, in prep.) that concentrates resolution in the reconnection region, while dramatically underresolving the upstream-plasma scales (where plasma simply flows uniformly toward the current sheets). In this way, resolved reconnection physics could be retained while also speeding up calculations in the upstream without loss of stability. While the RelSIM is in principle ready for production runs, ample ground is available for improvements and future developments: * Differently from its nonrelativistic counterpart, the RelSIM does not conserve energy to machine precision, due to intrinsic nonlinearities in the field equations. By removing these nonlinearities we obtain a simpler, linear system, but we also introduce (small) energy errors. It is in principle possible to retain exact energy conservation by discarding our approximation and iterating on the field-particle equations up to exact nonlinear convergence; alternatively, a fixed amount of iterations could also help in improving energy conservation without reaching exact accuracy (see e.g. Angus et al., 2023). * While in our tests we have employed the strategy by Chen and Toth, 2019 to ensure that Gauss's law for \(\mathbf{E}\) is satisfied, the algorithm does not by default conserve charge. This is because by construction we cannot adopt charge-conserving deposition schemes (e.g. Villasenor and Buneman, 1992; Esirkepov, 2001) in our implicit method6. As was shown for the nonrelativistic ECSIM, charge conservation can be imposed exactly in several ways (e.g. Chen and Toth, 2019; Campos Pinto and Pages, 2022), or approximately via divergence-cleaning Figure 6: Comparison of different particle pushers applied to representative ion-electron reconnection problem with resolution \(\Delta x=0.5c/\omega_{p,i}\). Top panel: Evolution in time of the ion (in red) and electron (in purple) kinetic energy for two runs employing the standard Boris pusher and the Lapenta-Markidis pusher (triangle and square markers respectively). Bottom panel: The evolution of the relative error in the total energy for the same runs. schemes (e.g. Marder, 1987); in the future, we will explore different strategies to impose charge conservation in the RelSIM optimally. 
* To construct our method, we have also staggered particle positions and velocities in time, such that no iteration is needed to advance the particles. Similarly, we have also decentered \(\mathbf{B}\) in the particle momentum equation, such that the magnetic field needed to advance the particles is immediately available. These choices may influence the behavior of the method in specific cases, e.g. by modifying the \(\mathbf{E\times B}\)-motion of particles in electromagnetic fields. We will consider these numerical issues and possible improvements in future work (see also Angus et al., 2023 and references therein). * In our first implementation, we have not considered the possibility of subcycling on the particle update or to employ smoothing to combat numerical noise in the solution. Both operations could be readily added to the RelSIM in exactly the same fashion that is employed for the nonrelativistic ECSIM (e.g. Lapenta, 2023). We leave the exploration of these possibilities for future work. In summary, even in our first implementation, the new RelSIM provides a reliable, production-ready alternative to standard explicit PIC methods for astrophysical plasma simulations. We specifically target scenarios where large scale separation exists between different plasma species, or where the physics of interest only occurs in localized regions, or where the time and length scales involved become prohibitive for explicit approaches. Our method could also be combined with existing hybrid approaches for multiscale simulations (e.g. Toth et al., 2016; Markidis et al., 2018; Bacchini et al., 2020), to further extend its reach to even larger-scale systems. In such situations, the RelSIM could provide dramatic speedup, helping to probe astrophysical regimes so far inaccessible with state-of-the-art codes. ## Acknowledgements The author would like to thank Sasha Philippov, Anatoly Spitkovsky, Jean-Luc Vay, Stefano Markidis, Giuseppe Arro, and Giovanni Lapenta for useful discussions and suggestions throughout the development of this work. F.B. acknowledges support from the FEDtWIN programme (profile Prf-2020-004, project "ENERGY") issued by BELSPO. The computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation Flanders (FWO) and the Flemish Government - department EWI. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. ## Appendix A Explicit solution for the Lapenta-Markidis momentum update The Lapenta-Markidis particle mover (Lapenta and Markidis, 2011) possesses superior energy-conservation properties with respect to other standard relativistic pushers. However, contrary to other popular movers, no explicit solution of the momentum equation employed in this approach has been presented in literature. We report such a solution here for the first time. Recall, for each particle \(p\), that for the Lapenta-Markidis definition \(\mathbf{\bar{v}}_{p}=\mathbf{\bar{u}}_{p}/\bar{\gamma}_{p}\), \(\bar{\gamma}_{p}=(\gamma_{p}^{n+1}+\gamma_{p}^{n})/2\), and dotting eq. (11) with \(\mathbf{\bar{v}}_{p}\) and rearranging terms gives \[c^{2}\bar{\gamma}_{p}(\bar{\gamma}_{p}-\gamma_{p}^{n})=\frac{q_{p}\Delta t}{2m _{p}}\mathbf{E}_{p}^{n+\theta}\mathbf{\cdot\bar{u}}_{p},\] (A1) where \(\mathbf{\bar{u}}_{p}=(\mathbf{u}_{p}^{n+1}+\mathbf{u}_{p}^{n})/2\). Now, from eq. 
(11) we can write an explicit expression for \(\mathbf{\bar{u}}_{p}\) in terms of \(\bar{\gamma}_{p}\), following exactly the same procedure that allowed us to write eq. (27): with the shorthand notation \(\mathbf{\beta}_{p}=q_{p}\Delta t\mathbf{B}_{p}^{n}/(2m_{p}c)\), \(\mathbf{\epsilon}_{p}=q_{p}\Delta t\mathbf{E}_{p}^{n+\theta}/(2m_{p})\), \(\mathbf{u}_{p}^{\prime}=\mathbf{u}_{p}^{n}+\mathbf{\epsilon}_{p}\), \[\mathbf{\bar{u}}_{p}=\frac{\mathbf{u}_{p}^{\prime}+(\mathbf{u}_{p}^{\prime}\mathbf{\cdot\beta}_ {p})\mathbf{\beta}_{p}/\bar{\gamma}_{p}^{2}+\mathbf{u}_{p}^{\prime}\mathbf{\times\beta}_{p} /\bar{\gamma}_{p}}{1+\beta_{p}^{2}/\bar{\gamma}_{p}^{2}}.\] (A2) Combining these two equations provides a fourth-order polynomial, \[-\bar{\gamma}_{p}^{4}+\gamma_{p}^{n}\bar{\gamma}_{p}^{3}+\xi\bar{\gamma}_{p}^ {2}+\eta\bar{\gamma}_{p}+\zeta=0,\] (A3) where the coefficients of the polynomial are \(\xi=\mathbf{u}_{p}^{\prime}\mathbf{\cdot}\mathbf{\epsilon}_{p}/c^{2}-\beta_{p}^{2}\), \(\eta=(\mathbf{u}_{p}^{\prime}\mathbf{\times}\mathbf{\beta}_{p})\mathbf{\cdot}\mathbf{\epsilon}_{p}/c ^{2}+\mathbf{\beta}_{p}^{2}\eta_{p}^{n}\), and \(\zeta=(\mathbf{u}_{p}^{\prime}\mathbf{\cdot}\mathbf{\beta}_{p})(\mathbf{\beta}_{p}\mathbf{\cdot} \mathbf{\epsilon}_{p}/c^{2})\). Solving for \(\bar{\gamma}\) can be done with any preferred method, and we find that a direct solution provides the fastest result: root analysis shows that \(\bar{\gamma}\geq 1\) only for \[\bar{\gamma}_{p}=\frac{\gamma_{p}^{n}}{4}+\frac{1}{2}\sqrt{2P+\frac{Q}{4\sqrt{ P+R}}-R}+\frac{1}{2}\sqrt{P+R},\] (A4) where \[P=\frac{2}{3}\xi+\frac{(\gamma_{p}^{n})^{2}}{4},\qquad Q=4\xi\gamma^{n}+8\eta +(\gamma_{p}^{n})^{3},\] \[R=\frac{S}{3T}+\frac{T}{3},\qquad S=\xi^{2}-3\eta\gamma_{p}^{n}-12\zeta,\] \[T=\sqrt[3]{\frac{U+\sqrt{U^{2}-4S^{3}}}{2}},\qquad U=-2\xi^{3}+9\xi\eta\gamma _{p}^{n}-72\xi\zeta+27\eta^{2}-27\zeta(\gamma_{p}^{n})^{2}.\] Pathological cases for this solution exist but are straightforward to handle, e.g. when \(\mathbf{\epsilon}_{p}=\mathbf{0}\) the solution reduces to \(\bar{\gamma}_{p}=\gamma_{p}^{n}\). Once \(\bar{\gamma}_{p}\) is known, the new particle 4-velocity can be calculated from eq. (A2) via extrapolation, \(\mathbf{u}_{p}^{n+1}=2\mathbf{\bar{u}}_{p}-\mathbf{u}_{p}^{n}\).
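The closed form (A4) can be cross-checked numerically. The sketch below (illustrative Python, not the ECSim/RelSIM implementation; it obtains \(\bar{\gamma}_{p}\) by solving the quartic (A3) with a generic root finder rather than transcribing (A4)) performs one Lapenta-Markidis momentum update through Eqs. (A2)-(A3) and verifies the case \(\mathbf{\epsilon}_{p}=\mathbf{0}\), for which \(\bar{\gamma}_{p}=\gamma_{p}^{n}\).

```python
import numpy as np

def lapenta_markidis_push(u_n, E, B, q, m, dt, c=1.0):
    """One Lapenta-Markidis momentum update (sketch of Eqs. A2-A3).
    u_n : particle 4-velocity u^n; E, B : fields at the particle position
    (playing the role of E^{n+theta} and B^n in the paper's notation)."""
    u_n = np.asarray(u_n, dtype=float)
    gamma_n = np.sqrt(1.0 + np.dot(u_n, u_n) / c**2)
    beta = q * dt * np.asarray(B, dtype=float) / (2.0 * m * c)
    eps  = q * dt * np.asarray(E, dtype=float) / (2.0 * m)
    u_prime = u_n + eps

    # Coefficients of the quartic (A3): -g^4 + gamma^n g^3 + xi g^2 + eta g + zeta = 0
    xi   = np.dot(u_prime, eps) / c**2 - np.dot(beta, beta)
    eta  = np.dot(np.cross(u_prime, beta), eps) / c**2 + np.dot(beta, beta) * gamma_n
    zeta = np.dot(u_prime, beta) * np.dot(beta, eps) / c**2

    roots = np.roots([-1.0, gamma_n, xi, eta, zeta])
    real = roots[np.abs(roots.imag) < 1e-8].real
    gamma_bar = real[real >= 1.0 - 1e-8].max()   # the physical root, gamma_bar >= 1

    # Eq. (A2): time-averaged 4-velocity
    u_bar = (u_prime
             + np.dot(u_prime, beta) * beta / gamma_bar**2
             + np.cross(u_prime, beta) / gamma_bar) / (1.0 + np.dot(beta, beta) / gamma_bar**2)

    return 2.0 * u_bar - u_n, gamma_bar          # extrapolated u^{n+1} and gamma_bar

# Sanity check: with E = 0 the update must return gamma_bar = gamma^n
u0 = np.array([0.5, 0.0, 0.0])
u1, gbar = lapenta_markidis_push(u0, E=[0.0, 0.0, 0.0], B=[0.0, 0.0, 1.0],
                                 q=-1.0, m=1.0, dt=0.1)
assert np.isclose(gbar, np.sqrt(1.25))
```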
2310.11208
Parabolic frequency monotonicity on the conformal Ricci flow
This paper is devoted to the investigation of the monotonicity of parabolic frequency functional under conformal Ricci flow defined on a closed Riemannian manifold of constant scalar curvature and dimension not less than 3. Parabolic frequency functional for solutions of certain linear heat equation coupled with conformal pressure is defined and its monotonicity under the conformal Ricci flow is proved by applying Bakry-Emery Ricci curvature bounds. Some consequences of the monotonicity are also presented.
Abimbola Abolarinwa, Shahroud Azami
2023-10-17T12:37:50Z
http://arxiv.org/abs/2310.11208v2
# Parabolic frequency monotonicity on the conformal Ricci flow

###### Abstract.

This paper is devoted to the investigation of the monotonicity of parabolic frequency functional under conformal Ricci flow defined on a closed Riemannian manifold of constant scalar curvature and dimension not less than 3. Parabolic frequency functional for solutions of certain linear heat equation coupled with conformal pressure is defined and its monotonicity under the conformal Ricci flow is proved by applying Bakry-Emery Ricci curvature bounds. Some consequences of the monotonicity are also presented.

Key words and phrases: Frequency functional; Conformal Ricci flow; Drifting Laplacian; Monotonicity; weighted measure

2010 Mathematics Subject Classification: 53C21, 53E20, 35K65, 58J35

###### Contents

* 1 Introduction and main results
* 1.1 Frequency functionals
* 1.2 Conformal Ricci flow
* 1.3 Main results
* 2 Notation and Preliminaries
* 3 Proof of main theorem and its applications
* Proof of Theorem 1.1

[MISSING_PAGE_POST]
## 1. Introduction and main results

The principal aim of the present paper is to extend monotonicity of the parabolic frequency functional for the linear heat equation to the setting of a compact (without boundary) Riemannian manifold evolving by the conformal Ricci flow, and then to investigate what geometric condition(s) is/are required for such monotonicity, as well as its possible applications. The precise definitions, history, applications of, and relevant literature on the frequency functional and the conformal Ricci flow are discussed in what follows.

### Frequency functionals

Let \(q\) be a fixed point in \(\mathbb{R}^{n}\) and \(\mathcal{B}(q,r)\) be a ball of radius \(r>0\) centred at \(q\). Almgren [1] introduced the frequency functional (known in the literature as the elliptic frequency functional) for a harmonic function \(v(x)\) on \(\mathbb{R}^{n}\), i.e., \(\Delta_{\mathbb{R}^{n}}v(x)=0\) on \(\mathbb{R}^{n}\), as follows
\[E_{F}(r)=\frac{r\int_{\mathcal{B}(q,r)}|\nabla v(x)|^{2}dx}{\int_{\partial\mathcal{B}(q,r)}|v(x)|^{2}dS}, \tag{1.1}\]
where \(\partial\mathcal{B}(q,r)\) and \(dS\) are respectively the boundary of \(\mathcal{B}(q,r)\) and the induced \((n-1)\)-dimensional Hausdorff measure on \(\partial\mathcal{B}(q,r)\). Here, \(\nabla\) and \(\Delta_{\mathbb{R}^{n}}\) are the gradient and Laplace operators on \(\mathbb{R}^{n}\), respectively. The rate of growth of the harmonic function \(v\) near the fixed point \(q\) is determined by the functional in (1.1). Furthermore, Almgren [1] proved that \(E_{F}(r)\) is monotone nondecreasing with respect to \(r\), a consequence of which led to the study of the local regularity of harmonic functions and minimal surfaces. Since the work of Almgren [1], the monotonicity of the elliptic functional \(E_{F}(r)\) has been successfully applied in the analysis of more general elliptic and parabolic partial differential equations, and there have also been considerable generalizations to Riemannian manifolds. We mention but a few: Garofalo and Lin [12, 13] investigated the unique continuation properties for elliptic operators by using the monotonicity of the frequency functional on Riemannian manifolds. The authors in [16, 20, 21] applied monotonicity of the frequency functional to estimate the size of nodal and critical sets of solutions to elliptic and parabolic equations.
Colding and Minicozzi in [9] applied frequency monotonicity to prove finite dimensionality of the space of harmonic functions of polynomial growth on manifolds with nonnegative Ricci curvature and Euclidean volume growth, while in [10] they extended the result to the case of a static manifold using the drifting Laplacian. The counterpart of \(E_{F}(r)\) for solutions to the heat equation on \(\mathbb{R}^{n}\) is called the parabolic frequency functional, which was first introduced by Poon [26] in the study of the unique continuation of solutions to parabolic equations on \(\mathbb{R}^{n}\). Consider a smooth solution \(u=u(x,t)\) to the heat equation
\[\partial_{t}u-\Delta_{M}u=0\quad\text{in }M\times[0,T], \tag{1.2}\]
where \((M,g)\) is a complete Riemannian manifold, \(\Delta_{M}\) is the Laplace-Beltrami operator on \(M\) and \(T>0\). The parabolic frequency for \(u\) is defined by [25] (see also [26])
\[P_{F}(t)=t\cdot\frac{\int_{M}|\nabla u|^{2}(x,T-t)H(x,y;t)d\mu(g)}{\int_{M}u^{2}(x,T-t)H(x,y;t)d\mu(g)}, \tag{1.3}\]
where \(H(x,y;t)\) is the fundamental solution to the heat equation (1.2), \(y\) being a reference point (not so important) in \(M\), and \(d\mu(g)\) is the volume form with respect to the Riemannian metric \(g\). Restricting \(M\) to possessing nonnegative sectional (or bisectional for holomorphic functions) curvature and parallel Ricci curvature, Poon [26] and Ni [25] proved that \(P_{F}(t)\) is monotone nondecreasing by using Hamilton's matrix Harnack estimate [15]. Recently, these results have been generalized to more general Riemannian manifolds by Li and Wang [27]. Let \(\tau(t)\) be the backward time, \(\kappa(t)\) be a time-dependent function and \(d\nu\) be the weighted measure. For a solution \(u(t)\) of the heat equation, Baldauf and Kim [5] defined the parabolic frequency as follows
\[U(t)=-\frac{\tau(t)||\nabla u||_{L^{2}(d\nu)}^{2}}{||u||_{L^{2}(d\nu)}^{2}}e^{-\int\frac{1-\kappa(t)}{\tau(t)}dt}.\]
In the above definition, the exponential term involving the time-dependent function \(\kappa\) serves as a correction term which depends on the geometry of the flow, analogous to the error term involving \(r\) in the elliptic case. The authors [5] proved that the parabolic frequency \(U(t)\) for the solution of the heat equation is monotone increasing along the Ricci flow with bounded Bakry-Emery Ricci curvature. See also the recent preprints [3, 4, 6, 18] for related results under the Ricci-Bourguignon, Ricci-harmonic and mean curvature flows. Motivated by the above cited works, we study monotonicity of a well defined parabolic frequency function (see (1.8) below) for a form of linear heat equation defined in (1.7) along the conformal Ricci flow. This study is more interesting since the conformal Ricci flow performs better than (and is even complementary to) the Ricci flow in searching for certain geometric features, and has wider applications in the conformal geometry of constant scalar curvature.

### Conformal Ricci flow

The conformally modified Ricci flow was introduced by Fischer in [11] and named conformal Ricci flow as a result of the role played by conformal geometry in the derivation of its equations. Precisely, let \((M,g_{0})\) be a smooth \(n\)-dimensional (\(n\geq 3\)) closed connected manifold together with a Riemannian metric \(g_{0}\) of constant scalar curvature \(R_{0}\).
The conformal Ricci flow is defined by a one-parameter family of metrics \(g(t)\) satisfying the following parabolic system
\[\left\{\begin{aligned} &\frac{\partial g(t)}{\partial t}+2\Big(Ric(t)-\frac{R_{0}}{n}g(t)\Big)=-2p(t)g(t),&&(x,t)\in M\times(0,T),\\ & R_{g(t)}=R_{0},&&(x,t)\in M\times[0,T),\end{aligned}\right. \tag{1.4}\]
together with the initial condition \(g(0)=g_{0}\) and a family of functions \(p(t)\), \(t\in[0,T)\), where \(Ric(t)\) and \(R_{g(t)}\) are the Ricci tensor and scalar curvature of the evolving metric \(g(t)\), respectively. By the constraint equation \(R_{g(t)}=R_{0}\) in system (1.4), the flow is known to preserve constant scalar curvature of the evolving metric. Indeed, this accounts for naming the function \(p=p(t)\) the conformal pressure, since it serves as a time-dependent Lagrange multiplier and makes the term \(-p(t)g(t)\) act as the constraint force necessary to preserve the scalar curvature constraint. Consequently, \(p(t)\) is known to solve a time-dependent elliptic partial differential equation under the flow
\[(n-1)\Delta p+R_{0}p=-\left|Ric-\frac{R_{0}}{n}g\right|^{2}\qquad\text{in }\ M\times[0,T). \tag{1.5}\]
Considering the role of the conformal pressure, the function \(p(t)\) is expected to be zero at an equilibrium point and strictly positive otherwise. Hence, the equilibrium points of the conformal Ricci flow are characterised by Einstein metrics, and the term \((Ric-\frac{R_{0}}{n}g)\) can then be regarded as a measure of deviation from an equilibrium point. Since the volume of a Riemannian manifold \((M,g)\) is a positive real number and the scalar curvature is a real-valued function on \(M\), the constraint on \(R_{g(t)}\) is considerably more drastic than the volume constraint of the Ricci flow [14, 15]. The Ricci flow in general does not preserve the property of constant scalar curvature. Thus, the configuration space of the conformal Ricci flow equations is considerably smaller than that of the Ricci flow. Obviously, the conformal Ricci flow will perform better in searching for certain geometric features, since working on a smaller configuration space is more advantageous than working on a larger one. More concrete similarities and differences between the conformal Ricci flow and the classical Ricci flow, as well as some possible applications of the conformal Ricci flow to 3-manifold geometry and conformal geometry, are highlighted in [11]. Fischer's paper [11] presented a proof of the short-time existence and uniqueness of the conformal Ricci flow on closed manifolds with negative constant scalar curvature \(R_{0}<0\). In that same paper, he also observed that the Yamabe constant is strictly increasing along the flow on closed manifolds of negative Yamabe type. For a detailed discussion of the Yamabe problem, interested readers can consult the book by Aubin [2]. The references [7, 8, 17, 19, 22, 23, 24] can be consulted for further studies on the conformal Ricci flow.

### Main results

Denote the partial derivative of any time-dependent quantity by \(\partial_{t}\) (i.e., \(\partial_{t}u(x,t)=u_{t}(x,t)\)). Consider a one-parameter family of metrics \(g=g(t)\), \(t\in[0,T)\), \(T>0\), on an \((m+1)\)-dimensional closed manifold with initial metric \(g(0)\) having constant scalar curvature \(-m(m+1)\), which is preserved under the flow as \(R_{g(t)}\equiv-m(m+1)\).
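As a worked normalization step (a reading aid, consistent with the system (3.1) used later), note how the pressure equation (1.5) reads in this setting, where \(n=m+1\) and \(R_{0}=-m(m+1)\):
\[(n-1)\Delta p+R_{0}\,p=-\Big|Ric-\tfrac{R_{0}}{n}g\Big|^{2}\;\Longrightarrow\;m\,\Delta p-m(m+1)\,p=-\big|Ric+m\,g\big|^{2}\;\Longleftrightarrow\;\big(-\Delta+(m+1)\big)p=\tfrac{1}{m}\,\big|Ric+m\,g\big|^{2}.\]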
Referring to (1.4) and (1.5), one sees that \((M^{m+1},g(t),p(t))\) evolves by the conformal Ricci flow given in the following system
\[\left\{\begin{aligned} &\frac{\partial g(t)}{\partial t}=-2\Big(Ric(t)+\big(m+p(t)\big)g(t)\Big),&&(x,t)\in M\times(0,T),\\ & R_{g(t)}=-m(m+1),&&(x,t)\in M\times[0,T),\end{aligned}\right. \tag{1.6}\]
with initial condition \(g(0)=g_{0}\). We couple the flow (1.6) with the linear heat equation
\[\partial_{t}v-\Delta_{g(t)}v=\bar{p}(t)v\qquad\text{in }\ M\times[0,T), \tag{1.7}\]
where \(\bar{p}(t):=\max_{x\in M}p(x,t)\) denotes the spatial maximum of the conformal pressure, which is finite under the flow whenever the Ricci curvature admits a uniform bound (see Lemma 3.1 below).
Thus, \(\bar{p}(t)\) is viewed as a material-constant (or space-constant) pressure function. Following [18] we define the parabolic frequency functional \(Q(t)\) for the solutions of the heat equation (1.7) along the conformal Ricci flow as follows
\[Q(t)=\frac{h(t)\int_{M}|\nabla_{g(t)}v|^{2}_{g(t)}dV_{g(t)}}{\int_{M}v^{2}dV_{g(t)}}e^{-\int_{t_{0}}^{t}\big(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\big)ds}, \tag{1.8}\]
where \(h\) and \(k\) are smooth functions with respect to the time-variable \(t\in[t_{0},t]\subset[0,T)\), and \(dV_{g(t)}\) is the weighted measure on \((M,g(t))\) (see the appropriate definition of \(dV_{g(t)}\) near (2.4) below). The involvement of a finite time-dependent scalar function \(p\) in the exponential term of the functional is natural: first, to reflect the coupling of the solution of the heat equation with the conformal pressure, and secondly, to show the conformal nature of \(Q(t)\) in preserving the scalar curvature constraint. (See [11] for a complete description of the pressure field \(\bar{p}(t)\).) In this paper we prove monotonicity of (1.8) along the flow (1.6) coupled with the heat equation (1.7) and then obtain our main result. Let \(\mathscr{R}ic_{f}\) and \(\mathscr{L}_{f}\) be the Bakry-Emery Ricci curvature tensor and the drifting Laplacian, respectively.
**Theorem 1.1**.: _Let \((M^{m+1},g(t),p(t))\), \(t\in[0,T)\), be a solution to the conformal Ricci flow (1.6) with \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}+\frac{R_{g}}{m+1})g\) and \(R_{g}=-m(m+1)\)._

1. _If_ \(h(t)\) _is positive (i.e.,_ \(h(t)>0\)_), then the parabolic frequency_ \(Q(t)\) _is monotone nonincreasing along the conformal Ricci flow._
2. _If_ \(h(t)\) _is negative (i.e.,_ \(h(t)<0\)_), then the parabolic frequency_ \(Q(t)\) _is monotone nondecreasing along the conformal Ricci flow._

_Furthermore, \(Q^{\prime}(t)=0\) only if \(v\) is an eigenfunction of \(\mathscr{L}_{f}\) satisfying \(-\mathscr{L}_{f}v=c(t)v\)._

As an application we have the following corollaries.

**Corollary 1.2**.: _(Backward uniqueness). Assume the hypotheses of Theorem 1.1 hold. If \(h(t)<0\) and \(v(\cdot,b)=0\), then \(v(\cdot,t)=0\) for any \(t\in[a,b)\subset(0,T)\), \(a<b\)._

**Corollary 1.3**.: _(Eigenvalue monotonicity). Define the first nonzero eigenvalue of \((M^{m+1},g(t),p(t))\) with respect to the drifting Laplacian on the weighted measure \(dV_{g(t)}\) by_
\[\lambda(g(t)):=\inf\left\{\frac{\int_{M}\langle\mathscr{L}_{f}u,u\rangle dV_{g(t)}}{\int_{M}u^{2}dV_{g(t)}}:\ u\in W^{1,2}(M^{m+1})\setminus\{0\},\ \int_{M}u\,dV_{g(t)}=0\right\},\]
_where \(u(t)\) solves the linear heat equation (1.7). Suppose \((M^{m+1},g(t),p(t))\), \(t\in[0,T)\), solves (1.6) with \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}+\frac{R_{g}}{m+1})g\) and \(R_{g}=-m(m+1)\). Then for any \(t\in[t_{0},t_{1}]\subset(0,T)\)_

_(i) If \(h(t)>0\) then \(h(t)\lambda(g(t))\) is a monotone decreasing function._

_(ii) If \(h(t)<0\) then \(h(t)\lambda(g(t))\) is a monotone increasing function._

_Remark 1.4_.: The Bakry-Emery Ricci condition \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}+\frac{R_{g}}{m+1})g\) is equivalent to \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}-m)g\) by the assumption that \(R_{g}=-m(m+1)\) is preserved by the conformal Ricci flow. Note that the equation \(\mathscr{R}ic_{f}-\frac{1}{m+1}R_{g}\,g=0\) is the quasi-Einstein equation since \(R_{g}\) is constant, which in this case can be compared with Ricci solitons.

_Remark 1.5_.: The above result (Theorem 1.1) is expected to have further applications in the setting of homogeneous \(3\)-manifold geometry and conformal geometry of dimension greater than \(3\) in the spirit of Fischer [11].

Lastly, we will consider a more general parabolic equation coupled with the nondynamical conformal pressure \(p(t)\),
\[|(\partial_{t}-\Delta)u|\leq\bar{p}(t)(|u|+|\nabla u|), \tag{1.9}\]
along the conformal Ricci flow. The frequency \(Q(t)\) for \(u\) need not be monotone, but its derivative will be suitably bounded, yielding backward uniqueness of solutions when \(h(t)>0\) (see Theorem 4.1). The outline of the rest of this paper is as follows. The next section (Section 2) is devoted to other notation and some preliminaries. The proof of the main result and its applications are discussed in Section 3. The last section (Section 4) is devoted to the proof of backward uniqueness of solutions to (1.9).

## 2. Notation and Preliminaries

Other notation that will be required in the sequel is presented first. Let the dimension of the underlying manifold \(M\) be a number \(m+1\) not less than \(3\). Let \(t\) be the abstract time parameter in the half-open interval \([0,T)\), \(T>0\). The scalar curvature, Ricci curvature and volume element of \((M,g(t))\) are respectively denoted by \(R=R_{g(t)}\), \(Ric=Ric(g(t))\) and \(d\mu=d\mu_{g(t)}\).
In local coordinates \((x^{1},\cdots,x^{m+1})\),
\[d\mu_{g(t)}:=\sqrt{\det g_{ij}(t)}dx^{1}\wedge\cdots\wedge dx^{m+1}.\]
Let \(\nabla\) and \(\Delta=\Delta_{g(t)}\) be the Levi-Civita connection and the Laplace-Beltrami operator with respect to \(g(t)\). Denote by \(|\cdot|_{g(t)}=g(t)(\cdot,\cdot)^{\frac{1}{2}}\) the \(g(t)\)-metric norm, e.g., \(|\nabla u|_{g(t)}^{2}=\langle\nabla u,\nabla u\rangle_{g(t)}\), where \(\langle\cdot,\cdot\rangle_{g(t)}\) is the inner product with respect to the metric \(g(t)\). The time-dependent drifting Laplacian or \(f\)-Laplacian (also called the weighted Laplacian) for a smooth function \(f\) on \(M\) is denoted by
\[\mathscr{L}_{f}(\cdot):=\mathscr{L}_{g(t),f}(\cdot):=e^{f}\mathrm{div}(e^{-f}\nabla(\cdot))=\Delta(\cdot)-\langle\nabla f,\nabla(\cdot)\rangle.\]
The weighted form of the Ricci curvature tensor is the so-called Bakry-Emery curvature tensor
\[\mathscr{R}ic_{f}(t):=Ric(t)+\mathrm{Hess}f,\]
where \(\mathrm{Hess}f\) is the Hessian of the function \(f\). Note that having a condition of the form \(\mathscr{R}ic_{f}=\kappa g\) is saying that \((M,g(t))\) is a Ricci soliton, which is a special solution to the Ricci flow [15] and very useful in the singularity analysis of the Ricci flow. These notations with and without the subscript \(g(t)\) are used interchangeably without resulting in any confusion. It is well known that \(\mathscr{L}_{f}\) and \(\mathscr{R}ic_{f}\) are related via the weighted (or drifting) Bochner formula for an at least \(C^{3}\)-function \(h\):
\[\frac{1}{2}\mathscr{L}_{f}(|\nabla h|^{2})=|\text{Hess }h|^{2}+\langle\nabla h,\nabla\mathscr{L}_{f}h\rangle+\mathscr{R}ic_{f}(\nabla h,\nabla h). \tag{2.1}\]
Along the flow (1.4) on a closed manifold of dimension \(\geq 3\), we let \(\tau(t)=T-t\) be the backward time and define the following conjugate heat equation on \((M^{m+1},g(t),p(t))\) as
\[\partial_{t}H(t)=-\Delta_{g(t)}H(t)+(m+1)p(t)H(t) \tag{2.2}\]
with the fundamental solution
\[H(t)=(4\pi\tau(t))^{-\frac{m+1}{2}}e^{-f(t)}.\]
One can then show that \(f(t)\) satisfies the conjugate heat equation
\[\partial_{t}f(t)=-\Delta_{g(t)}f(t)+|\nabla_{g(t)}f(t)|^{2}. \tag{2.3}\]
Define the weighted volume form as
\[dV_{g(t)}=H(t)d\mu_{g(t)}=(4\pi\tau(t))^{-\frac{m+1}{2}}e^{-f(t)}d\mu_{g(t)}\]
satisfying \(\int_{M}dV_{g(t)}=1\). Recall that \(d\mu_{g(t)}\) evolves under the conformal Ricci flow (1.6) by the formula (see [11, 19])
\[\partial_{t}(d\mu_{g(t)})=-(m+1)p(t)d\mu_{g(t)}. \tag{2.4}\]
Therefore one can compute that
\[\partial_{t}(dV_{g(t)})=\big(\partial_{t}H(t)-(m+1)p(t)H(t)\big)d\mu_{g(t)}=-\frac{\Delta_{g(t)}H(t)}{H(t)}dV_{g(t)}. \tag{2.5}\]
For a smooth function \(v:M\times[t_{0},t_{1}]\to\mathbb{R}\) with \(v(\cdot,t),\partial_{t}v(\cdot,t)\in W^{2,2}_{0}(dV)\) for any \(t\in[t_{0},t_{1}]\subset[0,T)\), we define the quantities \(I(t)\) and \(E(t)\) as follows
\[I(t)=\int_{M}v^{2}dV_{g(t)}, \tag{2.6}\]
\[E(t)=-h(t)\int_{M}\langle v,\mathscr{L}_{f}v\rangle dV_{g(t)}=h(t)\int_{M}|\nabla v|^{2}_{g(t)}dV_{g(t)}. \tag{2.7}\]
To this end, referring to the definitions in (2.6) and (2.7) and reverting to (1.8), \(Q(t)\) is thus written as
\[Q(t)=\frac{E(t)}{I(t)}e^{-\int_{t_{0}}^{t}\big(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\big)ds}, \tag{2.8}\]
where \(h(t)\) and \(k(t)\) are both smooth functions with respect to the time-variable \(t\in[0,T)\).

## 3. Proof of main theorem and its applications

At first, we state some important results that will be required in the proof of the main theorem.
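As a reading aid before the main computations, we record how the weighted-measure evolution (2.5), used repeatedly below, follows from (2.2) and (2.4) in one line:
\[\partial_{t}(dV_{g(t)})=(\partial_{t}H)d\mu_{g(t)}+H\,\partial_{t}(d\mu_{g(t)})=\big(-\Delta_{g(t)}H+(m+1)pH\big)d\mu_{g(t)}-(m+1)pH\,d\mu_{g(t)}=-\Delta_{g(t)}H\,d\mu_{g(t)}=-\frac{\Delta_{g(t)}H}{H}\,dV_{g(t)}.\]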
Here and in what follows, by virtue of (1.6) and (1.7), the triple \((g(t),p(t),v(t))\), \(t\in[0,T)\), solves the following system
\[\left\{\begin{array}{l}\partial_{t}g=-2(Ric(t)+(m+p(t))g(t)),\\ (-\Delta+(m+1))p(t)=\frac{1}{m}|Ric+mg|^{2},\\ \partial_{t}v-\Delta_{g(t)}v=\bar{p}(t)v,\\ (g(0),v(0))=(g_{0},v_{0})\end{array}\right. \tag{3.1}\]
on a closed connected manifold of dimension \(m+1\geq 3\) with constant scalar curvature. Since the conformal pressure \(p(t)\) is a non-dynamical field, no initial value of \(\bar{p}(t)=\max_{x\in M}p(t)\) is required.

**Lemma 3.1**.: _(Finiteness of \(p(t)\)). Let \((g(t),p(t))\), \(t\in[0,T)\), be a smooth solution to the conformal Ricci flow on a closed manifold with \(R_{0}=-m(m+1)\) satisfying \(|Ric|(x,t)\leq K(t)\) for all \((x,t)\in M^{m+1}\times[0,T).\) Then the conformal pressure \(p(t)\) satisfies \(0\leq p(t)\leq K^{2}(t)\), for all \(t\in[0,T)\), that is, \(p(t)\) is finite under the flow \(g(t)\)._

Proof.: We know that \(p(t)\) solves the elliptic equation (i.e., the second equation) in system (3.1) on \(M^{m+1}\times[0,T)\). So by the strong maximum principle, \(p(t)\geq 0\) for all \((x,t)\in M^{m+1}\times[0,T)\). The following conclusion can be reached: either (a) \(p(t)=0\) and \(Ric+mg=0\) or (b) \(p(t)>0\) and \(Ric+mg\neq 0\) (see [11, Proposition 3.3] for details). Suppose \((x_{0},t)\) is the maximum point, that is, \(\bar{p}(t)=p(x_{0},t)=\max_{x\in M}p(x,t)\); we have \(\nabla p(x_{0},t)=0\) and \(\Delta p(x_{0},t)\leq 0\). Hence
\[m(m+1)p(x_{0},t)\leq|Ric+mg|^{2}(x_{0},t)=|Ric|^{2}(x_{0},t)-m^{2}(m+1)\leq K^{2}(t).\]
Hence, \(0\leq p(x,t)\leq K^{2}(t)\) for all \((x,t)\in M^{m+1}\times[0,T)\).

**Lemma 3.2**.: _Let \((g(t),p(t),v(t))\), \(t\in[0,T)\), solve the system (3.1). The following identities hold:_
\[\partial_{t}\left(|\nabla v|^{2}_{g(t)}\right)=2(Ric+mg(t))(\nabla v,\nabla v)+2(p(t)+\bar{p}(t))|\nabla v|^{2}_{g(t)}+2\langle\nabla v,\nabla\Delta v\rangle, \tag{3.2}\]
\[\left(\partial_{t}-\Delta_{g(t)}\right)|\nabla v|^{2}_{g(t)}=2(p(t)+\bar{p}(t))|\nabla v|^{2}_{g(t)}+2mg(t)(\nabla v,\nabla v)-2|\mbox{Hess }v|^{2}_{g(t)}, \tag{3.3}\]
_where \(\bar{p}(t):=\max_{x\in M}p(t)\)._

Proof.: Following the standard computation under a geometric flow we have
\[\partial_{t}(|\nabla v|^{2})=-[\partial_{t}g](\nabla v,\nabla v)+2\langle\nabla v,\nabla\partial_{t}v\rangle.\]
Reverting to system (3.1), we have values for the quantities \(-[\partial_{t}g]\) and \(\partial_{t}v\), which when substituted into the last expression together with the fact that \(\nabla\bar{p}(t)=0\) yields (3.2). Combining (3.2) with the classical Bochner formula proves (3.3).

**Lemma 3.3**.: _For all \(u,v\in W^{1,2}_{0}(dV_{g(t)})\), the drifting Laplacian \(\mathscr{L}_{f(t)}\) satisfies the integration by parts formula, i.e.,_
\[\int_{M}u\mathscr{L}_{f(t)}vdV_{g(t)}=-\int_{M}\langle\nabla u,\nabla v\rangle_{g(t)}dV_{g(t)},\]
_and it is self-adjoint with respect to the weighted measure \(dV_{g(t)}\), i.e.,_
\[\int_{M}u\mathscr{L}_{f(t)}vdV_{g(t)}=\int_{M}(\mathscr{L}_{f(t)}u)vdV_{g(t)}.\]

Proof.: Recall that \(\mathscr{L}_{f(t)}v:=e^{f(t)}\mathrm{div}(e^{-f(t)}\nabla v)\) and \(dV_{g(t)}:=(4\pi\tau(t))^{-\frac{m+1}{2}}e^{-f(t)}d\mu_{g(t)}\).
Direct computation using classical integration by parts gives
\[\int_{M}u\mathscr{L}_{f}v\,dV=(4\pi\tau)^{-\frac{m+1}{2}}\int_{M}ue^{f}\mathrm{div}(e^{-f}\nabla v)e^{-f}d\mu=-(4\pi\tau)^{-\frac{m+1}{2}}\int_{M}e^{-f}\langle\nabla u,\nabla v\rangle d\mu=-\int_{M}\langle\nabla u,\nabla v\rangle dV,\]
and, integrating by parts once more,
\[-\int_{M}\langle\nabla u,\nabla v\rangle dV=(4\pi\tau)^{-\frac{m+1}{2}}\int_{M}\mathrm{div}(e^{-f}\nabla u)\,v\,d\mu=(4\pi\tau)^{-\frac{m+1}{2}}\int_{M}e^{f}\mathrm{div}(e^{-f}\nabla u)\,v\,e^{-f}d\mu=\int_{M}(\mathscr{L}_{f}u)v\,dV.\]
That is,
\[\int_{M}u\mathscr{L}_{f}v\,dV_{g(t)}=-\int_{M}\langle\nabla u,\nabla v\rangle_{g(t)}dV_{g(t)}=\int_{M}(\mathscr{L}_{f}u)v\,dV_{g(t)},\]
which proves both identities.
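Before turning to the proof of Theorem 1.1, it may help to record explicitly how (3.3) of Lemma 3.2 follows from (3.2) together with the classical Bochner formula \(\Delta|\nabla v|^{2}=2|\mathrm{Hess}\,v|^{2}+2\langle\nabla v,\nabla\Delta v\rangle+2Ric(\nabla v,\nabla v)\) (a short worked step):
\[(\partial_{t}-\Delta)|\nabla v|^{2}=\Big[2(Ric+mg)(\nabla v,\nabla v)+2(p+\bar{p})|\nabla v|^{2}+2\langle\nabla v,\nabla\Delta v\rangle\Big]-\Big[2|\mathrm{Hess}\,v|^{2}+2\langle\nabla v,\nabla\Delta v\rangle+2Ric(\nabla v,\nabla v)\Big]=2(p+\bar{p})|\nabla v|^{2}+2mg(\nabla v,\nabla v)-2|\mathrm{Hess}\,v|^{2}.\]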
\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f} \mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L}_{f}\mathscr{L of \(I(t)\) (recall that \(I(t)\) is defined in (2.6)) as follows \[I^{\prime}(t) =\int_{M}\left(2vv_{t}-v^{2}\frac{\Delta H}{H}\right)dV\] \[=\int_{M}\left(2v(v_{t}-\Delta v)-2|\nabla v|^{2}\right)dV\] \[=2\bar{p}(t)\int_{M}v^{2}dv-2\int_{M}|\nabla v|^{2}dV=2\bar{p}I(t )-\frac{2}{h(t)}E(t).\] Similarly, we can compute derivative of the energy \(E(t)\) (recall that \(E(t)\) is defined in (2.7)) as follows \[E^{\prime}(t)=h^{\prime}(t)\int_{M}|\nabla v|^{2}dV+h(t)\int_{M}(\partial_{t}- \Delta)\nabla v|^{2}dV \tag{3.5}\] Applying (3.3) of Lemma 3.2 into (3.5), we obtain due to the condition \(h(t)>0\) and the fact that \(\mathfrak{p}(t)\leq\mathfrak{p}(t)\). Combining the last equation with drifting Reilly formula of Lemma 3.4 we get which leads to the following inequality by invoking the Bakry-Emery curvature bound \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}+\frac{R_{g}}{m+1})g\). Proceeding from here, we can express the frequency function \(Q(t)\) in terms of rescaled quantities \(\widetilde{I}(t)\) and \(\widetilde{E}(t)\) as follows such that \[\widetilde{I}(t)=I(t)e^{-\int_{t_{0}}^{t}2p(s)ds}\] and \[\widetilde{E}(t)=E(t)e^{-\int_{t_{0}}^{t}\left(4p(s)+\frac{k(s)+h^{\prime}(s)} {h(s)}\right)ds}\] with their respective derivatives computed as follows \[\widetilde{I}^{\prime}(t)\geq-\frac{2}{h(t)}E(t)e^{-\int_{t_{0}}^{t}2p(s)ds}\] and \[\widetilde{E}^{\prime}(t)\leq-2h(t)e^{-\int_{t_{0}}^{t}\left(4p(s)+\frac{k(s) +h^{\prime}(s)}{h(s)}\right)ds}\int_{M}(\mathscr{L}_{f}v)^{2}dV.\] Now using the bound for \(\widetilde{E}(t)\) will allow the derivative of the frequency functional \(Q(t)\) to be bounded. 
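It may help to record explicitly how the rescaled quantities recombine into the frequency functional; both identities below follow directly from the definitions above and are used without comment in the next computation and in the proof of Corollary 1.3:
\[Q(t)=\frac{\widetilde{E}(t)}{\widetilde{I}(t)}=\frac{E(t)}{I(t)}\,e^{-\int_{t_{0}}^{t}\left(2p(s)+\frac{k(s)+h^{\prime}(s)}{h(s)}\right)ds},\qquad \widetilde{I}^{2}(t)Q^{\prime}(t)=\widetilde{I}(t)\widetilde{E}^{\prime}(t)-\widetilde{I}^{\prime}(t)\widetilde{E}(t),\]
the latter being valid as long as \(\widetilde{I}(t)\neq 0\), which holds while \(v(\cdot,t)\not\equiv 0\).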
Suppose \(h(t)>0\) and denote by \(\Gamma(t)\) the following integral \[\Gamma(t):=\int_{t_{0}}^{t}\left(4p(s)+\frac{k(s)+h^{\prime}(s)}{h(s)}\right)ds.\] Then \[\widetilde{I}^{2}(t)Q^{\prime}(t) =\widetilde{I}(t)\widetilde{E}^{\prime}(t)-\widetilde{I}^{\prime}(t)\widetilde{E}(t)\] \[\leq e^{-\Gamma(t)}\left[-2h(t)I(t)\int_{M}(\mathscr{L}_{f}v)^{2}dV+\frac{2}{h(t)}E^{2}(t)\right]\] \[=-2h(t)e^{-\Gamma(t)}\left[I(t)\int_{M}(\mathscr{L}_{f}v)^{2}dV-\left(\frac{1}{h(t)}E(t)\right)^{2}\right]\] \[=-2h(t)e^{-\Gamma(t)}\left[\left(\int_{M}v^{2}dV\right)\left(\int_{M}(\mathscr{L}_{f}v)^{2}dV\right)-\left(\int_{M}|\nabla v|^{2}dV\right)^{2}\right]\] \[\leq 0.\] The first inequality is due to the bounds for \(\widetilde{I}^{\prime}(t)\) and \(\widetilde{E}^{\prime}(t)\) obtained above, whilst the last inequality is due to the integration by parts formula and the Cauchy-Schwarz inequality. That is, \[\int_{M}|\nabla v|^{2}dV=-\int_{M}v\,\mathscr{L}_{f}v\,dV\leq\int_{M}|v||\mathscr{L}_{f}v|\,dV,\] which implies \[\left(\int_{M}|v|^{2}dV\right)\left(\int_{M}|\mathscr{L}_{f}v|^{2}dV\right)-\left(\int_{M}|\nabla v|^{2}dV\right)^{2}\geq 0. \tag{3.6}\] Hence, \(Q(t)\) is a nonincreasing function along the conformal Ricci flow if \(h(t)>0\). The proof for the case of \(h(t)<0\) follows suit. Moreover, suppose \(Q^{\prime}(t)=0\); then the equality case in the Cauchy-Schwarz inequality (3.6) implies \(-\mathscr{L}_{f}v=c(t)v\), where \(c(t)=\frac{1}{h(t)}Q(t)e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}\).

**Proof of Corollary 1.2**.: Recall that \(I^{\prime}(t)=2\bar{p}(t)I(t)-\frac{2}{h(t)}E(t)\). Therefore \[\frac{d}{dt}\left(\log I(t)\right)=\frac{I^{\prime}(t)}{I(t)} =2\bar{p}(t)-\frac{2}{h(t)}\frac{E(t)}{I(t)}\] \[=2\bar{p}(t)-\frac{2}{h(t)}Q(t)e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}.\] Integrating both sides of the last equation on \([a,b]\subset[t_{0},t_{1})\) and using the monotonicity of \(Q\) in Theorem 1.1, we have \[\log I(b)-\log I(a) \geq 2\int_{a}^{b}\bar{p}(t)dt-2\int_{a}^{b}\frac{Q(t)}{h(t)}e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}dt\] \[\geq 2\int_{a}^{b}\bar{p}(t)dt-2Q(a)\int_{a}^{b}\frac{1}{h(t)}e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}dt.\] Exponentiating yields (since \(p(t)\) and \(h(t)\) are finite) \[\frac{I(b)}{I(a)}\geq\exp\left\{2\int_{a}^{b}\bar{p}(t)dt-2Q(a)\int_{a}^{b}\frac{1}{h(t)}e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}dt\right\}.\] Therefore, if \(v(\cdot,b)=0\), then \(I(b)=0\) and the last inequality implies \(I(a)=0\). Since \(a\) is arbitrary we conclude that \(I(t)=0\) for any \(t\in[a,b]\subset(0,T)\). Hence \(v(\cdot,t)=0\) for any \(t\in[a,b]\subset(0,T)\).

**Proof of Corollary 1.3**.: Consider a time interval \([t_{0},t_{1}]\subset(0,T)\). Let \((g(t),p(t),v(t))\), \(t\in[0,T)\), solve (3.1), that is, \(v(t)\) solves \(v_{t}=\Delta v+\bar{p}(t)v\). By the hypothesis, \(v(\cdot,t_{0})\) is the eigenfunction of \(-\mathscr{L}_{f(t_{0})}\) corresponding to the first eigenvalue \(\lambda(t_{0}):=\lambda(g(t_{0}))\); thus \(-\mathscr{L}_{f(t_{0})}v(\cdot,t_{0})=\lambda(t_{0})v(\cdot,t_{0})\). Then we have the frequency functional \(Q(t)\) for \(v(t)\) by (2.8) as follows \[Q(t):=\frac{-h(t)\int_{M}v\,\mathscr{L}_{f}v\,dV}{\int_{M}v^{2}dV}\,e^{-\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}.\] Based on Theorem 1.1, we know that \(Q(t)\) is nonincreasing for \(h(t)>0\) and nondecreasing for \(h(t)<0\).
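For clarity, the first equality in the display that follows is just the evaluation of (2.8) at \(t=t_{0}\): there the exponential factor equals one and \(-\mathscr{L}_{f(t_{0})}v(\cdot,t_{0})=\lambda(t_{0})v(\cdot,t_{0})\), so that
\[Q(t_{0})=\frac{-h(t_{0})\int_{M}v\,\mathscr{L}_{f(t_{0})}v\,dV}{\int_{M}v^{2}dV}=\frac{h(t_{0})\lambda(t_{0})\int_{M}v^{2}dV}{\int_{M}v^{2}dV}=h(t_{0})\lambda(t_{0}).\]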
To this end, we refer to the definition of \(\lambda(t):=\lambda(g(t))\) (see the statement of the corollary) and have for any \(h(t)>0\) and any \(t\in[t_{0},t_{1}]\subset(0,T)\) that \[h(t_{0})\lambda(t_{0})=Q(t_{0})\geq Q(t)\geq h(t)\lambda(t)e^{-\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}.\] As a consequence of this, it is clear that \(h(t)\lambda(t)\) is monotone decreasing for \(h(t)>0\). The second part of the corollary can be proved in a similar way.

## 4. General parabolic equations

In this section we consider some more general parabolic equations coupled with the non-dynamical conformal pressure \(p(t)\) along the conformal Ricci flow. Here the parabolic frequency \(Q(t)\) associated with a solution \(v\) of \[|(\partial_{t}-\Delta)v|\leq\bar{p}(t)(|v|+|\nabla v|), \tag{4.1}\] where \(\bar{p}(t)=\max_{x\in M}p(t)\), along the conformal Ricci flow need not be monotone, but its derivative can be suitably bounded, which is enough to imply backward uniqueness of the solution when \(h(t)>0\). The main theorem of this section is the following:

**Theorem 4.1**.: _Let \(v:M^{m+1}\times[t_{0},t_{1}]\to\mathbb{R}\), \(t_{0}<t_{1}\), satisfy (4.1) along the conformal Ricci flow \((M^{m+1},g(t),p(t))\), \(t\in[t_{0},t_{1}]\), with \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}+\frac{R_{g}}{m+1})g\) and \(R_{g}=-m(m+1)\). If \(v(\cdot,t_{1})=0\), then \(v(\cdot,t)\equiv 0\) for all \(t\in[t_{0},t_{1}]\)._

As in Section 2, define \(\tau(t)=T-t\) to be the backward time and the conjugate fundamental solution at some point \((x,t)\) to be \[H(t)=(4\pi\tau(t))^{-\frac{m+1}{2}}e^{-f(t)}\] on \((M^{m+1},g(t),p(t))\), so that the weighted measure remains \(dV=H\,d\mu_{g(t)}\). For a smooth function \(v:M\times[t_{0},t_{1}]\to\mathbb{R}\) with \(v(\cdot,t),\partial_{t}v(\cdot,t)\in W^{2,2}_{0}(dV)\) and for any \(t\in[t_{0},t_{1}]\subset[0,T)\), we define the quantities \(I(t)\), \(E(t)\) and the parabolic frequency functional \(Q(t)\) as in (2.6)-(2.8).

**Proposition 4.2**.: _Let \(v:M^{m+1}\times[t_{0},t_{1}]\to\mathbb{R}\) satisfy (4.1) along the conformal Ricci flow \((M^{m+1},g(t),p(t))\), \(t\in[t_{0},t_{1}]\), with \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}+\frac{R_{g}}{m+1})g\) and \(R_{g}=-m(m+1)\).
Then_ \[(\log I(t))^{\prime}\geq-3\bar{p}(t)-\frac{\bar{p}(t)+2}{h(t)}Q(t)e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}, \tag{4.2}\] \[Q^{\prime}(t)\leq\bar{p}^{2}(t)\left[Q(t)+h(t_{0})\right] \tag{4.3}\] _and_ \[\left(\log\left[Q(t)+h(t_{0})\right]\right)^{\prime}\leq\bar{p}^{2}(t). \tag{4.4}\]

Proof.: Applying the integration by parts formula again we have \[I^{\prime}(t) =\int_{M}\left(2vv_{t}-v^{2}\frac{\Delta H}{H}\right)dV\] \[=\int_{M}\left(2vv_{t}-2v\Delta v-2|\nabla v|^{2}\right)dV\] \[=2\int_{M}v\left(\partial_{t}-\Delta+\mathscr{L}_{f}\right)vdV\] \[=2\int_{M}v\left[\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right]vdV+\int_{M}v(\partial_{t}-\Delta)vdV.\] Letting \[\widetilde{I}(t)=I(t)e^{-\int_{t_{0}}^{t}2p(s)ds},\] we then have \[\widetilde{I}^{\prime}(t)=\Bigg{\{}2\int_{M}v\left[\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right]vdV+\int_{M}v(\partial_{t}-\Delta)vdV-2p(t)I(t)\Bigg{\}}e^{-\int_{t_{0}}^{t}2p(s)ds}. \tag{4.5}\] Note that we can also write \(E(t)\) as follows \[E(t) =-h(t)\int_{M}v\mathscr{L}_{f}vdV\] \[=-h(t)\int_{M}v\left[\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right]vdV+\frac{h(t)}{2}\int_{M}v(\partial_{t}-\Delta)vdV. \tag{4.6}\] Along the conformal Ricci flow, and with the aid of the classical Bochner formula, we have \[(\partial_{t}-\Delta)|\nabla v|^{2}=2mg(t)(\nabla v,\nabla v)+2p(t)|\nabla v|^{2}-2|\text{Hess }v|^{2}+2\langle\nabla v,\nabla(\partial_{t}-\Delta)v\rangle. \tag{4.7}\]
Computing the derivative of \(E(t)\) using (4.7) we have \[E^{\prime}(t) =h^{\prime}(t)\int_{M}|\nabla v|^{2}dV+h(t)\int_{M}(\partial_{t}-\Delta)|\nabla v|^{2}dV\] \[=h(t)\left[\frac{h^{\prime}(t)}{h(t)}\int_{M}|\nabla v|^{2}dV+\int_{M}(\partial_{t}-\Delta)|\nabla v|^{2}dV\right]\] \[=h(t)\int_{M}\Big{[}\frac{h^{\prime}(t)+2h(t)p(t)}{h(t)}|\nabla v|^{2}+2mg(t)(\nabla v,\nabla v)-2|\text{Hess }v|^{2}+2\langle\nabla v,\nabla(\partial_{t}-\Delta)v\rangle\Big{]}dV.\] Applying the integration by parts formula and the drifting Reilly formula (Lemma 3.4) gives \[E^{\prime}(t)=h(t)\int_{M}\Big{[}\frac{h^{\prime}(t)+2h(t)p(t)}{h(t)}|\nabla v|^{2}-2|\mathscr{L}_{f}v|^{2}+2(\mathscr{R}ic_{f}+mg)(\nabla v,\nabla v)-2(\mathscr{L}_{f}v)(\partial_{t}-\Delta)v\Big{]}dV.\] Applying the Bakry-Emery Ricci curvature bound \(\mathscr{R}ic_{f}\leq(\frac{k(t)}{2h(t)}+\frac{R_{g}}{m+1})g\) with \(R_{g}=-m(m+1)\) gives \[E^{\prime}(t) \leq-2h(t)\int_{M}\left[|\mathscr{L}_{f}v|^{2}+(\mathscr{L}_{f}v)(\partial_{t}-\Delta)v\right]dV+(k(t)+h^{\prime}(t)+2h(t)p(t))\int_{M}|\nabla v|^{2}dV\] \[=-2h(t)\int_{M}\left[\left|\left(\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right)v\right|^{2}-\frac{1}{4}\left|(\partial_{t}-\Delta)v\right|^{2}\right]dV+\left[2p(t)+\frac{h^{\prime}(t)+k(t)}{h(t)}\right]E(t),\] where we have completed the square. Also letting \[\widetilde{E}(t)=E(t)e^{-\int_{t_{0}}^{t}\big{(}4p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\big{)}ds} \tag{4.8}\] and using that \(p(t)\geq 0\) and \(E(t)\geq 0\) (recall \(h(t)>0\)), we then obtain \[\widetilde{E}^{\prime}(t)\leq-2h(t)e^{-\int_{t_{0}}^{t}\big{(}4p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\big{)}ds}\int_{M}\left[\left|\left(\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right)v\right|^{2}-\frac{1}{4}\left|(\partial_{t}-\Delta)v\right|^{2}\right]dV. \tag{4.9}\]
Combining (4.5), (4.6) and (4.8) we get \[\widetilde{I}^{\prime}(t)\widetilde{E}(t)=e^{-\Gamma_{2}(t)}\Bigg{\{}2\int_{M}v\left[\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right]vdV+\int_{M}v(\partial_{t}-\Delta)vdV-2p(t)I(t)\Bigg{\}}\] \[\qquad\qquad\times\Bigg{\{}-h(t)\int_{M}v\left[\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right]vdV+\frac{h(t)}{2}\int_{M}v(\partial_{t}-\Delta)vdV\Bigg{\}}, \tag{4.10}\] where \[\Gamma_{2}(t):=\int_{t_{0}}^{t}\left(6p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds\] collects the exponential factors carried by \(\widetilde{I}(t)\) and \(\widetilde{E}(t)\). Together with (4.9),
we obtain \[\widetilde{I}^{2}(t)Q^{\prime}(t)=\widetilde{I}(t)\widetilde{E}^{\prime}(t)-\widetilde{I}^{\prime}(t)\widetilde{E}(t)\] \[\leq e^{-\Gamma_{2}(t)}\Bigg{\{}-2h(t)I(t)\int_{M}\left[\left|\left(\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right)v\right|^{2}-\frac{1}{4}\left|(\partial_{t}-\Delta)v\right|^{2}\right]dV\] \[\qquad\qquad+2h(t)\left(\int_{M}v\left[\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right]vdV\right)^{2}-\frac{h(t)}{2}\left(\int_{M}v(\partial_{t}-\Delta)vdV\right)^{2}\Bigg{\}}\] \[=-2h(t)e^{-\Gamma_{2}(t)}\Bigg{\{}I(t)\int_{M}\left|\left(\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right)v\right|^{2}dV-\left(\int_{M}v\left[\mathscr{L}_{f}+\frac{1}{2}(\partial_{t}-\Delta)\right]vdV\right)^{2}\Bigg{\}}\] \[\qquad\qquad+\frac{h(t)}{2}e^{-\Gamma_{2}(t)}\Bigg{\{}I(t)\int_{M}\left|(\partial_{t}-\Delta)v\right|^{2}dV-\left(\int_{M}v(\partial_{t}-\Delta)vdV\right)^{2}\Bigg{\}}\] \[\leq\frac{h(t)}{2}I(t)\left(\int_{M}\left|(\partial_{t}-\Delta)v\right|^{2}dV\right)e^{-\Gamma_{2}(t)}\] \[\leq\frac{h(t)}{2}\bar{p}^{2}(t)I(t)\left(\int_{M}\left(|v|+|\nabla v|\right)^{2}dV\right)e^{-\Gamma_{2}(t)}\] \[\leq\bar{p}^{2}(t)I(t)\Big{(}h(t)I(t)+E(t)\Big{)}e^{-\Gamma_{2}(t)}.\] Here the first inequality uses (4.9) and (4.10); the second one uses the Cauchy-Schwarz inequality to discard the first (nonnegative) curly bracket and drops the nonpositive term \(-\left(\int_{M}v(\partial_{t}-\Delta)vdV\right)^{2}\); the third one uses (4.1); and the last one uses \((|v|+|\nabla v|)^{2}\leq 2(v^{2}+|\nabla v|^{2})\) together with \(E(t)=h(t)\int_{M}|\nabla v|^{2}dV\). Using the condition \(h(t)>0\) and the definition of \(Q(t)\), we therefore have \[Q^{\prime}(t)\leq\bar{p}^{2}(t)\left(\frac{E(t)}{I(t)}+h(t)\right)e^{-\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}\leq\bar{p}^{2}(t)\left(Q(t)+h(t_{0})\right),\] which proves (4.3) and (4.4) of the proposition.

**Corollary 4.3**.: _Let \(v:M^{m+1}\times[t_{0},t_{1}]\to\mathbb{R}\) satisfy (4.1) along the conformal Ricci flow \((M^{m+1},g(t),p(t))\), \(t\in[t_{0},t_{1}]\subset[0,T)\). Then_ \[I(t_{1})\geq I(t_{0})\exp\Bigg{\{}-3(t_{1}-t_{0})\sup_{t\in[t_{0},t_{1}]}\max_{x\in M}p(t)-\Bigg{[}\left(2+\sup_{t\in[t_{0},t_{1}]}\max_{x\in M}p(t)\right) \tag{4.11}\] \[\times\Big{(}Q(t_{0})+h(t_{0})\Big{)}\left(e^{\int_{t_{0}}^{t_{1}}\bar{p}^{2}(t)dt}\right)\Bigg{]}\int_{t_{0}}^{t_{1}}\frac{1}{h(t)}e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}dt\Bigg{\}}.\] _Moreover, if \(v(\cdot,t_{1})=0\), then \(v(\cdot,t)\equiv 0\) for all \(t\in[t_{0},t_{1}]\)._

This corollary is indeed a restatement of Theorem 4.1.

Proof.: Integrating (4.2) on the interval \([t_{0},t_{1}]\) gives \[\log I(t_{1})-\log I(t_{0}) \geq-3\int_{t_{0}}^{t_{1}}\bar{p}(t)dt-\int_{t_{0}}^{t_{1}}\frac{\bar{p}(t)+2}{h(t)}Q(t)e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}dt\] \[\geq-3(t_{1}-t_{0})\sup_{t\in[t_{0},t_{1}]}\bar{p}(t)-\left(2+\sup_{t\in[t_{0},t_{1}]}\bar{p}(t)\right)\int_{t_{0}}^{t_{1}}\frac{Q(t)}{h(t)}e^{\int_{t_{0}}^{t}\left(2p(s)+\frac{h^{\prime}(s)+k(s)}{h(s)}\right)ds}dt. \tag{4.12}\] Integrating (4.4) we get \[\log[Q(t)+h(t_{0})]\leq\log[Q(t_{0})+h(t_{0})]+\int_{t_{0}}^{t_{1}}\bar{p}^{2}(t)dt.\] Therefore, \(Q(t)\) is bounded by \[Q(t)\leq(Q(t_{0})+h(t_{0}))e^{\int_{t_{0}}^{t_{1}}\bar{p}^{2}(t)dt}-h(t_{0}). \tag{4.13}\] Inserting the bound (4.13) into (4.12) and exponentiating yields (4.11). The conclusion in the second part of the corollary follows immediately from the first part. Note that the finiteness of \(p(t)\geq 0\) and the assumption that \(h(t)>0\) (uniformly) for all \(t\in[t_{0},t_{1}]\subset[0,T)\) make the integral appearing in (4.11) finite.
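To make the very last step explicit, write \(C(t,t_{1})\) for the full exponent appearing in (4.11) when the corollary is applied on a subinterval \([t,t_{1}]\subset[t_{0},t_{1}]\); it is a finite real number by the remark above. Then
\[I(t_{1})\geq I(t)\,e^{-C(t,t_{1})},\qquad |C(t,t_{1})|<\infty,\]
so \(v(\cdot,t_{1})=0\) forces \(I(t_{1})=0\), hence \(I(t)=0\) and \(v(\cdot,t)\equiv 0\) for every \(t\in[t_{0},t_{1}]\), which is precisely the statement of Theorem 4.1.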
The authors declare that there is no conflict of interest.
2308.02980
The charge and mass symmetry breaking in the $KK\bar{K}$ system
In the framework of the Faddeev equations in configuration space, we investigate the $K$(1460) meson as a resonant state of the $KK\bar{K}$ kaonic system. We perform calculations for the particle configurations $K^{0}K^{+}K^{-}$ and $K^{0}K^{+}\overline{{K}^{0}}$ within two models: the $ABC $ model, in which all three particles are distinguishable, and the $AAC$ model when two particles are identical. The models differ in their treatment of the kaon mass difference and the attractive Coulomb force between the $K^{+}K^{-}$ pair. We found that the Coulomb shift adds over 1 MeV to the three-body binding energy. The expected correction to the binding energy due to mass redistribution from $AA$ to $AB$ is found to be negligible, up to a maximum of 6\% of the relative mass correction. At the same time, the symmetry of the wave function is distorted depending on the mass ratio value. We found that the repulsive $KK$ interaction plays essential role in the binding energy of the $KK\bar K$ system and report the mass of 1461.8 or 1464.1 MeV for the neutral $K^{0}$(1460) and 1466.5 or 1468.8 MeV for the charged $K^{+}$(1460) resonances, respectively, depending on the parameter sets for $KK$ and $K\bar{K}$ interactions.
Igor Filikhin, Roman Ya. Kezerashvili, Branislav Vlahovic
2023-08-06T00:53:58Z
http://arxiv.org/abs/2308.02980v1
# The charge and mass symmetry breaking in the \(Kk\bar{K}\) system ###### Abstract In the framework of the Faddeev equations in configuration space, we investigate the \(K(1460)\) meson as a resonant state of the \(KK\bar{K}\) kaonic system. We perform calculations for the particle configurations \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\bar{K^{0}}\) within two models: the \(ABC\) model, in which all three particles are distinguishable, and the \(AAC\) model when two particles are identical. The models differ in their treatment of the kaon mass difference and the attractive Coulomb force between the \(K^{+}K^{-}\) pair. We found that the Coulomb shift adds over 1 MeV to the three-body binding energy. The expected correction to the binding energy due to mass redistribution from \(AA\) to \(AB\) is found to be negligible, up to a maximum of 6% of the relative mass correction. At the same time, the symmetry of the wave function is distorted depending on the mass ratio value. We found that the repulsive \(KK\) interaction plays essential role in the binding energy of the \(KK\bar{K}\) system and report the mass of 1461.8 or 1464.1 MeV for the neutral \(K^{0}(1460)\) and 1466.5 or 1468.8 MeV for the charged \(K^{+}(1460)\) resonances, respectively, depending on the parameter sets for \(KK\) and \(K\bar{K}\) interactions. ## I Introduction Few-body physics has received interest for decades. Since 1961, when the Faddeev equations [1] in the momentum representation were formulated and a few years later the Faddeev-Noyes equations in configuration space were suggested [2], special attention has been given to three-body systems constituted by nucleons, mesons, two nucleons and a meson, two mesons and a nucleon, quarks, and three-particle cluster systems. At low energies, the general approach for solving the three-body problem is based on the use of methods for studying the dynamics of three particles in discrete and continuum spectra. Among the most powerful approaches are the method of Faddeev equations in momentum [1; 3] or coordinate [2; 4; 5; 6] spaces. However, the method of hyperspherical harmonics, the variational method in the harmonic-oscillator basis, and the variational method complemented with the use of explicitly correlated Gaussian basis functions has been successfully employed for the solution of a few-body problem in atomic, nuclear, high energy physics, and even in condensed matter physics [7]. Three-body systems can be composed of three identical particles (\(AAA\) model), two identical and the third one (\(AAC\) model), and three non-identical particles (\(ABC\) model). The \(nnn\), \(nns\), and \(ssn\) baryons are examples of the quantum system with three and two identical quarks, while \(3\alpha\) (cluster model for \({}^{12}\)C), \(ppn(^{3}\)He), and \(nnp(^{3}\)H) nucleon systems are composed of three and two identical particles [8], respectively. One can also have systems with a meson and two baryons or two mesons and a baryon, such as kaonic clusters \(NN\bar{K}\), \(\bar{K}\bar{K}N\), \(K\bar{K}N\), and \(KK\bar{K}\), which are considered as systems with two identical or three non-identical particle, depending on the configuration of the particles or theoretical approach for the description of kaonic clusters. When two identical particles are fermions or bosons the wave functions of the three-body kaonic clusters are antisymmetric or symmetric, correspondingly, with respect to the two identical particles exchange. 
The kaonic clusters \(NN\bar{K}\), \(\bar{K}\bar{K}N\), and \(K\bar{K}N\) were intensively studied in the framework of the Faddeev equations in momentum and coordinate representation [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. The Faddeev equations were used to study four-body kaonic clusters (see review [20]). It is a very challenging task to solve the Faddeev equations exactly, and one usually introduces some reasonable approximations, such as the use of separable potentials, energy-independent kernels, on-shell two-body scattering amplitudes, or the Faddeev-type Alt-Grassberger-Sandhas equations [21]. In the framework of the fixed-center approximation for the Faddeev equations, dynamically generated three-body resonances formed via meson-meson and meson-baryon interactions were studied (see [22; 23; 24; 25] and references therein).

The \(K(1460)\) pseudoscalar has been a subject of interest since the middle of the seventies. In 1976, the first high statistics study of the \(K^{\pm}p\to K^{\pm}\pi^{+}\pi^{-}p\) process was carried out at SLAC, using a 13 GeV incident \(K^{\pm}\) beam [26]. The \(J^{\pi}=0^{-}\) partial-wave analysis of the \(\pi\pi K\) system in this reaction led to the report of the first evidence for a strangeness-one pseudoscalar meson with a mass of \(\sim 1400\) MeV and a width of \(\sim 250\) MeV. A few years later, the ACCORD collaboration [27], using the SLAC \(K^{-}\) beam at 63 GeV, investigated the diffractive process \(K^{-}p\to K^{-}\pi^{+}\pi^{-}p\), and data sufficient for a partial-wave analysis extending up to a mass of 2.1 GeV were collected. The data analysis confirmed the existence of a broad \(0^{-}\) resonance with a mass of \(\sim 1460\) MeV. However, even the 2018 PDG [28] did not list the \(K(1460)\) as an "established particle". In the most recent studies of the resonance structure in \(D^{0}\to K^{\mp}\pi^{\pm}\pi^{\pm}\pi^{\mp}\) decays, using \(pp\) collision data collected at 7 and 8 TeV with the LHCb experiment [29], precise measurements of the \(a_{1}^{+}(1260)\), \(K_{1}^{-}(1270)\) and \(K(1460)\) resonances were made. Within a model-independent partial-wave analysis performed for the \(K(1460)\) resonance, it is found that the mass is roughly consistent with previous studies [26; 27]. They showed the evidence for the \(K(1460)\) in the \(\bar{K}^{*}(892)\pi^{-}\) and \([\pi^{+}\pi^{-}]^{L=0}K^{-}\) channels.

In the present work, we investigate the \(K(1460)\) meson as a resonant state of the \(KK\bar{K}\) system in the framework of the Faddeev equations in configuration space, in order to address the issues related to the charge and mass symmetry breaking in the \(KK\bar{K}\) system and the validity of the \(AAC\) model. In the \(AAC\) and \(ABC\) models, we are using \(s\)-wave \(K\bar{K}\) and \(KK\) two-body potentials [35] and, in the \(ABC\) model, experimental kaon masses as the inputs. This paper is organized as follows. In Secs. II and III we present the Faddeev equations in configuration space for the \(ABC\) and \(AAC\) models and their application for the \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) configurations. The averaged mass and potential approaches, which lead to the reduction of the \(ABC\) model to the \(AAC\) one, are given in Secs. IV and V, respectively. Results of numerical calculations for the \(AAC\) and \(ABC\) models are presented in Sec. VI. The concluding remarks follow in Sec. VII.

## II Faddeev equations in configuration space

The three-body problem can be solved in the framework of the Schrodinger equation or using the Faddeev approach in the momentum [1; 3] or configuration [2; 6; 49] spaces. The Faddeev equations in the configuration space have different forms depending on the type of particles and can be written for: i. three non-identical particles; ii. three particles when two are identical; iii. three identical particles. The identical particles have the same masses and quantum numbers. In the particle configurations \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) the \(K^{+}\) and \(K^{-}\), and \(K^{0}\) and \(\overline{K^{0}}\), have equal masses, respectively. However, these particle configurations cannot be considered in the framework of the \(AAC\) model because \(K^{-}\) and \(\overline{K^{0}}\) are antiparticles of \(K^{+}\) and \(K^{0}\), respectively, and hence are non-identical particles. Moreover, in the \(K^{0}K^{+}K^{-}\) particle configuration the \(K^{+}\) and \(K^{-}\) are a particle and antiparticle, respectively, which have the same masses but different charges. The non-identical particles are un-exchangeable bosons. Thus, the \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) particle configurations each consist of three non-identical particles and must be treated within the \(ABC\) model.

### Faddeev equations for the \(Abc\) model

In the Faddeev method in configuration space, alternatively to finding the wave function of the three-body system using the Schrodinger equation, the total wave function is decomposed into three components [2; 6; 49]: \[\Psi(\mathbf{x}_{1},\mathbf{y}_{1})=U(\mathbf{x}_{1},\mathbf{y}_{1})+W(\mathbf{x}_{2},\mathbf{y}_{2})+Y(\mathbf{x}_{3},\mathbf{y}_{3}). \tag{1}\] Each Faddeev component corresponds to a separation of particles into configurations \((kl)+i\), \(i\neq k\neq l=1,2,3\). The Faddeev components are related to their own set of the Jacobi coordinates \(\mathbf{x}_{i}\) and \(\mathbf{y}_{i}\), \(i=1,2,3\). There are three sets of Jacobi coordinates. The total wave function is presented by the coordinates of one of the sets shown in Eq. (1) for \(i=1\).
The mass-scaled Jacobi coordinates \(\mathbf{x}_{i}\) and \(\mathbf{y}_{i}\) are expressed via particle coordinates \(\mathbf{r}_{i}\) and masses \(m_{i}\) in the following form: \[\mathbf{x}_{i}=\sqrt{\frac{2m_{k}m_{l}}{m_{k}+m_{l}}}(\mathbf{r}_{k}-\mathbf{ r}_{l}),\qquad\mathbf{y}_{i}=\sqrt{\frac{2m_{i}(m_{k}+m_{l})}{m_{i}+m_{k}+m_{l}}}( \mathbf{r}_{i}-\frac{m_{k}\mathbf{r}_{k}+m_{l}\mathbf{r}_{l}}{m_{k}+m_{l}}). \tag{2}\] In Eq. (1), each component depends on the corresponding coordinate set which is expressed in terms of the chosen set of mass-scaled Jacobi coordinates. The orthogonal transformation between three different sets of the Jacobi coordinates has the form: \[\left(\begin{array}{c}\mathbf{x}_{i}\\ \mathbf{y}_{i}\end{array}\right)=\left(\begin{array}{cc}C_{ik}&S_{ik}\\ -S_{ik}&C_{ik}\end{array}\right)\left(\begin{array}{c}\mathbf{x}_{k}\\ \mathbf{y}_{k}\end{array}\right),\ \ C_{ik}^{2}+S_{ik}^{2}=1, \tag{3}\] where \[C_{ik}=-\sqrt{\frac{m_{i}m_{k}}{(M-m_{i})(M-m_{k})}},\ \ \ S_{ik}=(-1)^{k-i}\mbox{sign}(k-i) \sqrt{1-C_{ik}^{2}}.\] Here, \(M\) is the total mass of the system. The components \(U(\mathbf{x_{1},y_{1}})\), \(W(\mathbf{x}_{2},\mathbf{y}_{2})\), and \(Y(\ \mathbf{x}_{3},\mathbf{y}_{3})\) satisfy the Faddeev equations in the coordinate representation written in the form [6]: \[\begin{array}{l}(H_{0}+V_{23}(\mathbf{x_{1}})-E)U(\mathbf{x},\mathbf{y})=- V_{23}(\mathbf{x_{1}})(W(\mathbf{x}_{2},\mathbf{y}_{2})+Y(\ \mathbf{x}_{3},\mathbf{y}_{3}),\\ (H_{0}+V_{13}(\mathbf{x_{2}})-E)W(\mathbf{x},\mathbf{y})=-V_{13}(\mathbf{x_{2 }})(U(\mathbf{x}_{1},\mathbf{y}_{1})+Y(\ \mathbf{x}_{3},\mathbf{y}_{3}),\\ (H_{0}+v_{12}(\mathbf{x_{3}})-E)Y(\mathbf{x},\mathbf{y})=-V_{12}(\mathbf{x_{3 }})(U(\mathbf{x}_{1},\mathbf{y}_{1})+W(\ \mathbf{x}_{2},\mathbf{y}_{2}).\end{array} \tag{4}\] Here, \(H_{0}=-(\Delta_{\bf x}+\Delta_{\bf y})\) is the kinetic energy operator with \(\hbar^{2}=1\) and \(V_{kl}\) is the interaction potential between the pair of particles (\(kl\)), \(i\neq k\neq l\). In the system of equations (4), the independent variables are \({\bf x}\) and \({\bf y}\) and can be chosen to be \({\bf x_{i}}\) and \({\bf y_{i}}\), where \(i\) is 1 or 2, or 3. After that, the remaining coordinates are expressed by the chosen coordinates according to the Jacobi coordinates transformation (3). ### Faddeev equations for the _Aac_ model The system of Eqs. (4) can be reduced to a simpler form for a case of two identical particles when a particle \(B\) in the \(ABC\) model is replaced by a particle \(A\). In this case, for the Bose particles, the total wave function of the system is decomposed into the sum of the Faddeev components \(U\) and \(W\) corresponding to the \((AA)C\) and \((AC)A\) types of rearrangements: \[\Psi=U+W+PW,\] where \(P\) is the permutation operator for two identical particles. Consequently, Eqs. (4) can be rewritten as follows [49]: \[\begin{array}{l}(H_{0}+V_{AA}-E)U=-V_{AA}(W+PW),\\ (H_{0}+V_{AC}-E)W=-V_{AC}(U+PW).\end{array} \tag{5}\] In Eqs. (5) \(V_{AA}\) and \(V_{AC}\) are the interaction potentials between identical and non-identical particles, respectively. III The \(Abc\) model versus \(Aac\) for \(K^{0}k^{+}k^{-}\) and \(K^{0}k^{+}\overline{K^{0}}\) particle configurations We study the \(KK\bar{K}\) system using the available \(s\)-wave effective phenomenological \(KK\) and \(K\bar{K}\) potentials [35] and considering the \(K\) and \(\bar{K}\) kaon's experimental masses and charges based on Ref. [31]. 
This leads to the consideration of the \(K(1460)\) resonance according to the following neutral or charged particle configurations: \(K^{0}K^{0}\bar{K}\), \(K^{0}K^{+}K^{-}\), \(K^{+}K^{+}\bar{K}\), \(K^{0}K^{+}\overline{K^{0}}\). For the description of the \(KK\bar{K}\) system within the \(ABC\) and \(AAC\) models, we focus, as on the representative, on the following two particle configurations: \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\bar{K}^{0}\). The configuration \(K^{0}K^{+}K^{-}\) includes the Coulomb interaction and \(K^{+}\) and \(K^{-}\) have the same masses but are non-identical that make three particles distinguishable. The configuration \(K^{0}K^{+}\overline{K^{0}}\) does not include the Coulomb interaction and two particles \(K^{0}\) and \(\overline{K^{0}}\) have the same masses but are non-identical. Therefore, the exact treatment of the \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) particle configurations must be done within the \(ABC\) model. We consider \(s\)-wave pair potentials. Hence, the bound state problem for the \(KK\bar{K}\) system should be formulated using the Faddeev equations in the \(s\)-wave approach [17]. In the \(s\)-wave approach for the \(K^{0}K^{+}K^{-}\) particle configuration Eqs. (4) reads: \[\begin{array}{l}(H_{0}+v_{K^{0}K^{+}}+v_{G}^{U}-E){\cal U}=-v_{K^{0}K^{+}}({ \cal W}+{\cal Y}),\\ (H_{0}+v_{K^{+}K^{-}}+v_{G}^{V}-E){\cal W}=-v_{K^{+}K^{-}}({\cal U}+{\cal Y}), \\ (H_{0}+v_{K^{0}K^{-}}+v_{C}^{V}-E){\cal Y}=-v_{K^{0}K^{-}}({\cal U}+{\cal W}). \end{array} \tag{6}\] In Eqs. (6) \(v_{K^{0}K^{+}}\), \(v_{K^{+}K^{-}}\) and \(v_{K^{0}K^{-}}\) are the \(s\)-wave \(KK\) and \(K\bar{K}\) potentials, while \(v_{C}^{U}\), \(v_{C}^{{\cal W}}\) and \(v_{C}^{{\cal Y}}\) are the components of the Coulomb potential related to the \(K^{+}K^{-}\) electrostatic interaction depending on the mass-scaled Jacobi coordinate of the corresponding Faddeev components \({\cal U}\), \({\cal W}\), and \({\cal Y}\). A consideration of the Coulomb interaction in the framework of the Faddeev formalism is a challenging problem [51]. Following [8], we consider the Coulomb \(K^{+}K^{-}\) interaction included on the left-hand side of the Faddeev equations (6) as a perturbation. The Coulomb attraction in the \(K^{0}K^{+}K^{-}\) violates the \(AAC\) symmetry and makes three kaons distinguishable. If one neglects the Coulomb attraction in the \(K^{0}K^{+}K^{-}\) in Eqs. (6), even the equality of \(K^{+}\) and \(K^{-}\) masses does not allow to treat the \(K^{0}K^{+}K^{-}\) configuration in the framework of the \(AAC\) model. However, if one considers \(K^{0}\) and \(K^{+}\) as identical particles with masses equal to the average of their masses and neglects the Coulomb attraction, \(K^{0}K^{+}K^{-}\) can be considered within the \(AAC\) model. A schematic for the \(K^{0}K^{+}K^{-}\) is presented in Fig. 1 when it is treated as \(ABC\) and \(AAC\) models. Within the \(AAC\) model the Faddeev equations (5) for two identical particles in the three-body \(K^{0}K^{+}K^{-}\) system for the \(s\)-wave interparticle interactions can be written in the following form: \[\begin{array}{l}(H_{0}+v_{K^{0}K^{+}}-E){\cal U}=-v_{K^{0}K^{+}}(1+P){\cal W }\;,\\ (H_{0}+v_{K^{+}K^{-}}-E){\cal W}=-v_{K^{+}K^{-}}(U+P{\cal W})\;.\end{array} \tag{7}\] In Eqs. (7) the \({\cal U}\) and \({\cal W}\) are the \(s\)-wave Faddeev components of the wave function, and the exchange operator \(P\) acts on the particles' coordinates only. 
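Before moving on, the kinematics entering Eqs. (2)-(7) is simple enough to check numerically. The short Python sketch below is a minimal illustration, not part of the authors' numerical apparatus: the particle labelling, the random test configuration and the printed quantities are our own choices. It constructs the mass-scaled Jacobi coordinates of Eq. (2) for the experimental \(K^{0}K^{+}K^{-}\) masses and verifies the property that underlies the orthogonal transformation (3), namely that \({\bf x}_{i}^{2}+{\bf y}_{i}^{2}\) takes the same value in all three Jacobi sets.

```python
import numpy as np

# Experimental kaon masses in MeV as quoted in the text
# (illustrative labelling: particle 1 = K0, 2 = K+, 3 = K-).
m = {1: 497.6, 2: 493.7, 3: 493.7}
M = sum(m.values())

def jacobi(i, r):
    """Mass-scaled Jacobi coordinates (x_i, y_i) of Eq. (2) for the partition (kl)+i."""
    k, l = [j for j in (1, 2, 3) if j != i]
    x = np.sqrt(2.0 * m[k] * m[l] / (m[k] + m[l])) * (r[k] - r[l])
    y = np.sqrt(2.0 * m[i] * (m[k] + m[l]) / M) * (
        r[i] - (m[k] * r[k] + m[l] * r[l]) / (m[k] + m[l]))
    return x, y

def C(i, k):
    """Coefficient C_ik of the orthogonal transformation, Eq. (3)."""
    return -np.sqrt(m[i] * m[k] / ((M - m[i]) * (M - m[k])))

# A random three-body configuration (Jacobi coordinates are translation invariant).
rng = np.random.default_rng(1)
r = {j: rng.normal(size=3) for j in (1, 2, 3)}

# The "hyperradius" x_i^2 + y_i^2 is the same in every Jacobi set, which is what
# makes the C_ik/S_ik transformation of Eq. (3) norm-preserving (orthogonal).
for i in (1, 2, 3):
    x, y = jacobi(i, r)
    print(f"set {i}:  x_i^2 + y_i^2 = {np.dot(x, x) + np.dot(y, y):.6f}")

# C_12 is close to -1/2 because the K0 and K+ masses are nearly equal,
# and C_ik^2 + S_ik^2 = 1 holds by construction.
print("C_12 =", C(1, 2))
```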
The \(K^{0}K^{+}\overline{K^{0}}\) particle configuration also can be considered within the \(AAC\) model, if one considers \(K^{0}\) and \(K^{+}\) as identical particles with masses equal to the average of their masses. In this case, Eqs. (7) describe \(K^{0}K^{+}\overline{K^{0}}\) configuration with only difference that \(v_{K^{+}K^{-}}\) interaction should be replaced by the \(v_{K^{0}\overline{K^{0}}}\) potential. ## IV The mass difference of kaons: from the \(Aac\) to \(Abc\) model In the previous studies \(KK\bar{K}\) system was considered within the \(AAC\) model. Such consideration is valid if we ignore the difference between \(K^{+}\) and \(K^{0}\) masses and the Coulomb interaction between charged kaons. As the first step for a realistic consideration, we are using the experimental kaon masses instead of the average kaon mass used in the \(AAC\) model. After that, we consider that the \(AB\) pair is the kaonic pair and antikaon is considered as the particle \(C\). Within this \(ABC\) model, the masses of \(A\) and \(B\) kaons are varied around the average value of the kaon mass \(m=(m_{A}+m_{B})/2\). These variations have different signs for the \(A\) and \(B\) kaons. In other words, we consider the \(ABC\) model with variable masses of the \(A\) and \(B\) kaons but keeping the sum of masses of the \(AB\) kaon pair constant. This approach allows us to understand how the binding energy of the \(ABC\) is sensitive to the variation of the \(A\) and \(B\) masses. Consider the particles in the \(ABC\) model, where \(C\) is the antiparticle, have masses \(m_{A}\), \(m_{C}\), and \(m_{B}\), and can be numbered manually as \[\begin{array}{l}m_{1}=(m_{A}+m_{B})/2+m_{A}/2-m_{B}/2=m+\Delta m,\\ m_{2}=(m_{A}+m_{B})/2+m_{B}/2-m_{A}/2=m-\Delta m,\\ m_{3}=m_{C},\end{array} \tag{8}\] where \(\Delta m=(m_{A}-m_{B})/2\) is small and kaons are particles 1 and 2 and the antikaon is particle 3. Following Friar at al. [50], we write the kinetic energy operator \(\hat{H_{0}}\) in terms of the individual momenta of the particles in the center-of-mass as: \[\hat{H_{0}}=\sum_{i=1,2,3}\frac{\pi_{i}^{2}}{2m_{i}}\approx\frac{\pi_{3}^{2}} {2m_{3}}+\frac{\pi_{1}^{2}}{2m}+\frac{\pi_{2}^{2}}{2m}-\frac{\Delta m}{m} \frac{\pi_{1}^{2}}{2m}+\frac{\Delta m}{m}\frac{\pi_{2}^{2}}{2m}. \tag{9}\] This expression follows from the Taylor series for the \(1/m\). In the first order perturbation theory, the correction for the energy can be presented as \(\langle\hat{H_{0}}\rangle\approx\langle\hat{H_{0}}^{\ \Delta m=0}\rangle+\langle \Delta H_{0}\rangle\), where \(\langle\hat{H_{0}}^{\ \Delta m=0}\rangle=\langle\Psi^{R}|\frac{\pi_{3}^{2}}{2m_{3}}+\frac{\pi_{2}^{ 2}}{2m}+\frac{\pi_{2}^{2}}{2m}|\Psi^{R}\rangle\) and \[\langle\Delta H_{0}\rangle=\langle\Psi^{R}|\frac{\Delta m}{m}\frac{\pi_{2}^{2 }}{2m}-\frac{\Delta m}{m}\frac{\pi_{1}^{2}}{2m}|\Psi^{R}\rangle. \tag{10}\] Here, the \(\Psi^{R}\) is the coordinate part of the wave function \(\Psi=\eta_{isospin}\otimes\Psi^{R}\). Within the \(AAC\) model, \(\langle\Delta H_{0}\rangle=0\), due to \(\Delta m=0\). In the \(ABC\) model, this relation is approximately satisfied \[\langle\Delta H_{0}\rangle\approx 0. \tag{11}\] The possible value for the \(\Delta m\) is restricted so that \(|\Delta m/m|\ll 1\). The linear approximation Eq. (9) is applicable when we consider only the first two terms of the Taylor series for the function \(1/m\) near the point \(m=1\). The next Figure 1: The schematic represents the reduction of the \(ABC\) model (\(a\)) to \(AAC\) model (\(b\)). 
The kaons and antikaon of the \(K^{0}K^{+}K^{-}\) particle configuration are shown in blue and red colors, respectively, together with the experimental masses. The kaon pair with the Coulomb interaction is encircled by the oval. The crosses indicate the position of the middle point between \(A\) and \(B\) particles and two \(A\) particles in the \(ABC\) and \(AAC\) models, respectively. The Jacobi coordinates related to configurations \((AB)C\) and \((AA)C\) are shown by arrows. quadratic terms of the expansion cannot be compensated similarly to Eq. (11) due to the alternating series of the Tailor expansion: \(m/m_{i}=\sum_{n}(-1)^{n}(\Delta m/m)^{n}\), \(i=1,2\). Therefore, we can assume that the symmetrical variation of kaons' masses described by Eq. (8) for the \(ABC\) model does not lead to a significant change in the \(AAC\) binding energy when \(|\Delta m/m|\ll 1\). This assumption follows from the compencation effect for the three-body Hamiltonian expressed by Eqs. (10)-(11). ## V Coulomb interaction: from the \(Abc\) to \(Aac\) model Let us ignore the repulsive potential acting between the \(K^{0}\) and \(K^{+}\) kaons. Such truncation shifts the three-body energy to a fixed value. After that the equation for the Faddeev component \({\cal U}\) in Eqs. (6) is eliminated. Also, we can neglect the Coulomb interaction and consider only the nuclear interaction between kaons and antikaon. The corresponding system of equations reads: \[\begin{array}{l}(H_{0}+v_{2}-E){\cal W}=-v_{2}{\cal Y},\\ (H_{0}+v_{3}-E){\cal Y}=-v_{3}{\cal W},\end{array} \tag{12}\] where, for simplicity, we denoted \(v_{K^{+}K^{-}}=v_{2}\) and \(v_{K^{0}K^{-}}=v_{3}\). Assuming that the difference \(\Delta v=|(v_{3}-v_{2})/2|\) of the potentials \(v_{2}\) and \(v_{3}\) is small one can introduce the average potential \[\overline{v}=\frac{(v_{2}+v_{3})}{2}, \tag{13}\] so that \[v_{2}=\frac{(v_{2}+v_{3})}{2}+\frac{(v_{2}-v_{3})}{2},\quad v_{3}=\frac{(v_{2 }+v_{3})}{2}+\frac{(v_{3}-v_{2})}{2}, \tag{14}\] and rewrite Eqs. (12) in the form \[\begin{array}{l}(H_{0}+\overline{v}+\Delta v-E){\cal W}=-v_{2}{\cal Y},\\ (H_{0}+\overline{v}-\Delta v-E){\cal Y}=-v_{3}{\cal W}.\end{array} \tag{15}\] In the first order of the perturbation theory, by averaging Eqs. (15), one obtains \[\begin{array}{l}\langle{\cal W}_{0}|(H_{0}+\overline{v}+\Delta v-E)|{\cal W }_{0}\rangle=-\langle{\cal W}_{0}|(\overline{v}+\Delta v)|{\cal Y}_{0}\rangle,\\ \langle{\cal Y}_{0}|(H_{0}+\overline{v}-\Delta v-E)|{\cal Y}_{0}\rangle=- \langle{\cal Y}_{0}|(\overline{v}-\Delta v)|{\cal W}_{0}\rangle,\end{array} \tag{16}\] where \({\cal W}_{0}\) and \({\cal Y}_{0}\) are the solutions of Eqs. (15) with the potential \(\overline{v}\), when \(\Delta v\) is omitted, that gives the energy of \(E_{0}\). Therefore, from (16) we obtain \[\begin{array}{l}\langle{\cal W}_{0}|E_{0}+\Delta v-E)|{\cal W}_{0}\rangle=- \langle{\cal W}_{0}|\Delta v|{\cal Y}_{0}\rangle,\\ \langle{\cal Y}_{0}|(E_{0}-\Delta v-E)|{\cal Y}_{0}\rangle=-\langle{\cal Y}_{ 0}|(-\Delta v)|{\cal W}_{0}\rangle.\end{array} \tag{17}\] For equal kaon masses the functions \({\cal W}_{0}\) and \({\cal Y}_{0}\) are the same and, therefore, \(\langle{\cal W}_{0}|\Delta v|{\cal W}_{0}\rangle=\langle{\cal Y}_{0}|\Delta v |{\cal Y}_{0}\rangle\) and \(\langle{\cal W}_{0}|\Delta v|{\cal Y}_{0}\rangle=\langle{\cal Y}_{0}|\Delta v |{\cal W}_{0}\rangle\). By adding the algebraic equations in (17), we obtain \(E=E_{0}\). In other words, in the first order of perturbation theory, the average potential gives \(E=E_{0}\). 
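The first-order cancellation behind \(E=E_{0}\) can be mimicked with a deliberately simple two-channel toy model. The sketch below is only an analogy, not the Faddeev system (15)-(17): the \(2\times 2\) matrix, the value of the channel coupling and the trial splittings \(\Delta v\) are arbitrary illustrative choices. It shows that splitting two coupled channels symmetrically about their average potential moves the lowest eigenvalue only at second order in \(\Delta v\).

```python
import numpy as np

def lowest_eigenvalue(dv, v_avg=-11.0, coupling=-3.0):
    # Toy 2x2 "channel" Hamiltonian: the average potential sits on the diagonal,
    # split symmetrically by +/- dv, with a fixed coupling between the channels
    # (all numbers are arbitrary and quoted in MeV for readability).
    H = np.array([[v_avg + dv, coupling],
                  [coupling,   v_avg - dv]])
    return np.linalg.eigvalsh(H)[0]

E0 = lowest_eigenvalue(0.0)
for dv in (0.1, 0.2, 0.4):
    shift = lowest_eigenvalue(dv) - E0
    print(f"dv = {dv:4.1f} MeV   shift = {shift:+.5f} MeV   shift/dv^2 = {shift / dv**2:+.4f}")
# The shift scales like dv^2 (slope close to -1/(2|coupling|)), i.e. it vanishes at
# first order in dv, mirroring the E = E_0 conclusion of the perturbative argument above.
```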
Now let us assume that the masses of \(K^{0}\) and \(K^{+}\) are equal to their average mass. The Coulomb potential is a perturbation with respect to the strong kaon-kaon interaction and can be treated within the scheme given above. One can denote the Coulomb part of the \(K^{+}K^{-}\) potential as \(v_{C}^{\cal W}\). The \(v_{C}^{\cal W}\) is proportional to \(\frac{1}{x}\), and \(v_{C}^{\cal Y}\) (\(v_{C}^{\cal U}\)) is proportional to \(\frac{1}{x^{\prime\prime}}\) (\(\frac{1}{x^{\prime}}\)). Here, the mass-scaled Jacobi coordinate \(x^{\prime}=|{\bf x_{1}}|\) corresponds to the \({\cal U}\) channel and is expressed by the coordinates \(x=|{\bf x_{2}}|\) and \(y=|{\bf y_{2}}|\) of the channel \({\cal W}\). The \(x^{\prime\prime}=|{\bf x_{3}}|\) is the coordinate in the channel \({\cal Y}\) conjugated to the \({\cal W}\) channel and is expressed via the coordinates \(x\) and \(y\) of the channel \({\cal W}\) (see Eq. (4)). The average potential is \(\overline{v}_{C}=(v_{C}^{\cal W}+v_{C}^{\cal Y})/2\). The \(\Delta v_{C}=(v_{C}^{\cal W}-v_{C}^{\cal Y})/2\) defines the difference between the channels' potentials \(v_{C}^{\cal W}\) and \(v_{C}^{\cal Y}\). One can repeat the procedure presented by Eqs. (16)-(17) and finally, for the \(K^{0}K^{+}K^{-}\) with the Coulomb interaction, obtain the following equations that correspond to the \(AAC\) model: \[\begin{array}{l}(H_{0}+v_{KK}+\overline{v}_{C}^{\cal U}-E){\cal U}=-v_{KK}(1+P){\cal W},\\ (H_{0}+v_{KK^{-}}+\overline{v}_{C}^{\cal W}-E){\cal W}=-v_{KK^{-}}({\cal U}+P{\cal W}),\end{array} \tag{18}\] where \[\begin{array}{c}v_{KK}=v_{K^{0}K^{+}},\quad\overline{v}_{C}^{\mathcal{U}}=-n/x^{\prime},\\ v_{KK^{-}}=v_{K^{0}K^{-}},\quad\overline{v}_{C}^{\mathcal{W}}=-n(1/x^{\prime\prime}+1/x)/2,\end{array} \tag{19}\] and the masses of the kaons are equal to their average mass. The \(n\) is the Coulomb charge parameter.

## VI Numerical results

### Interaction potentials

In the present work, we use the \(s\)-wave effective potentials for \(KK\) and \(K\bar{K}\) interactions from Ref. [35]. The \(K\bar{K}\) is an attractive interaction that makes the \(KK\bar{K}\) system bound, while the \(KK\) interaction is described by a repulsive potential. The \(K\bar{K}\) and \(KK\) potentials are written in the form of a one-range Gaussian: \[v_{\bar{K}K}(r)=v_{0}\exp(-(r/b)^{2}), \tag{20}\] where \(v_{0}\) and \(b\) are the strength (depth) and the range of the potential, respectively. Two model potentials, A and B, with the parameter sets listed in Table 1, are used in the calculations.

### From the \(Aac\) to \(Abc\) model: Effects of kaons mass difference and Coulomb force

Within the theoretical formalism presented in the previous sections, we calculate the binding energies \(E_{3}\) of the \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) and \(E_{2}^{K\bar{K}}\) of the bound \(K\bar{K}\) pairs. The three-particle energy \(E_{3}(V_{KK}=0)\), due to the kaon-antikaon interactions but with the interaction between the two kaons omitted, is another significant characteristic of the three-particle kaonic system. An analysis of the \(E_{3}(V_{KK}=0)\) shows that the repulsive \(KK\) interaction plays an essential role in the resonance energy. The results of calculations of these energies for the \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) particle configurations in the \(AAC\) and \(ABC\) models are presented in Table 2. The analysis of the results leads to the following conclusions: i.
the binding energies \(E_{3}\) and \(E_{3}(V_{KK}=0)\) of \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) calculated in the \(AAC\) model, with the average kaon masses \(\overline{m}_{K}=495.7\) MeV, and \(ABC\) model are the same. Thus, the difference of the kaon masses does not affect \(E_{3}\) and \(E_{3}(V_{KK}=0)\): the mass distinguishability is not important for \(E_{3}\) and \(E_{3}(V_{KK}=0)\) energies when the Coulomb interaction is neglected. The difference of 2.5 MeV for \(E_{3}\) and 2.3 MeV for \(E_{3}(V_{KK}=0)\) is related to the parameter sets A and B for \(KK\) and \(K\bar{K}\) interactions, respectively; ii. the mass distinguishability has a small effect on the binding energy of \(K\bar{K}\) pairs; iii. the consideration of the Coulomb interaction in the framework of the \(ABC\) model leads to an increase of the binding energy \(E_{3}\) and the energy \(E_{2}^{K^{+}K^{-}}\) of the bound \(K^{+}K^{-}\) pair in the \(K^{0}K^{+}K^{-}\) particle configuration for the parameter sets A and B for \(KK\) and \(K\bar{K}\) interactions, respectively; iv. the repulsive \(KK\) interaction plays an essential role in the binding energy of the \(KK\bar{K}\) system. The comparison of the \(E_{3}\) and \(E_{3}(V_{KK}=0)\) of \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) energies shows that contribution of the repulsive \(KK\) interaction decreases the three-particle binding energy by about 38% and 25% for the set of parameters A and B, respectively; v. finally, the mass of neutral resonance \(K(1460)\) in the \(ABC\) (\(K^{0}K^{+}K^{-}\)) model is 1464.1 Mev and 1461.8 MeV for the parameter sets A and B for \(KK\) and \(K\bar{K}\) interactions, respectively. The mass of the charged resonance \(K^{+}(1460)\) in the \(ABC\) (\(K^{0}K^{+}\overline{K^{0}}\)) model is 1468.8 MeV and 1466.5 MeV for the parameter sets A and B for \(KK\) and \(K\bar{K}\) \begin{table} \begin{tabular}{c c|c c c} \hline \hline \multicolumn{5}{c|}{Parameters of potential} \\ \hline & \multicolumn{2}{c|}{Set A (\(b=0.66\) fm)} & \multicolumn{2}{c}{Set B (\(b=0.47\) fm)} \\ \cline{2-5} Interaction & \(v_{0}\) (\(I=0\)), MeV & \(v_{0}\) (\(I=1\)), MeV & \(v_{0}\) (\(I=0\)), MeV & \(v_{0}\) (\(I=1\)), MeV \\ \hline \(K\overline{K}\) & \(-630.0-210i\) & \(-630.0-210i\) & \(-1155.0-283i\) & \(-1155.0-283i\) \\ \(KK\) & \(0\) & \(104\) & \(0\) & \(313\) \\ \hline \hline \end{tabular} \end{table} Table 1: The parameter sets A and B for \(KK\) and \(K\bar{K}\) interactions. interactions, respectively. Our results for the mass of neutral and charged \(K(1460)\) resonance obtained within the \(ABC\) model are in reasonable agreement with the reported experimental value of the \(K(1460)\) mass [31]. Note, in contrast to the mass of the \(K(1460)\) resonance, calculations of the width using the Faddeev equations for the \(AAC\) model in momentum representation (\(\Gamma=50\) MeV) [35] and configuration space (\(\Gamma=104\) MeV) [40], variational method (\(\Gamma=110\) MeV) [35], hyperspherical harmonics method (\(\Gamma=49\) MeV) [36] did not reproduce the quite sizeable experimental width, 335.60\(\pm\)6.20\(\pm\)8.65 MeV [29; 31]. In Ref. [33] reported the width of approximately 200 MeV for the \(K^{+}K^{+}\overline{K^{0}}\) resonance and [34] presented the estimation for the width of \(\Gamma\geq 100\) MeV for \(K(1460)\) resonance. The study of the \(KK\bar{K}\) within the non-perturbative three-body dynamics did not calculate the width for this system. 
We calculate the width follow [40] using A and B sets for the potential parameters listed in Table 1. The comparison of the widths obtained in the \(ABC\) and \(AAC\) models shows that they are close enough with a negligible difference about 1 - 2 MeV: \(\Gamma_{KK\bar{K}}=104-106\) MeV and \(\Gamma_{KK\bar{K}}=117-119\) MeV for the potential with A and B parameter sets, respectively. In this work we study the resonance \(K(1460)\) considering only the channel \(KK\bar{K}\). One could also have channels like \(\pi\pi K\) and \(\pi\eta K\). These channels are included, in an effective way, in [34], using Faddeev equations in momentum space by Martinez Torres et al. [35], and employing the complex-scaling method [52] in the semi-relativistic framework in Ref. [39]. While the formation of the resonance \(K(1460)\) could be due to the \(KK\bar{K}\) channel mainly, the inclusion of other channels may have a relevant role in its mass and, more importantly, in its width. However, consideration of these channels in [35] gives a significantly smaller width (\(\Gamma=50\) MeV) than our single-channel result. Resonance positions in three-channel \(KK\bar{K}-\pi\pi K-\pi\eta K\) and two-channel \(KK\bar{K}-\pi\pi K\) and \(KK\bar{K}-\pi\eta K\) calculations presented in [39] demonstrate that the coupling to the \(\pi\pi K\) channel is significant to reproduce the large width of the resonance and the coupling to \(\pi\eta K\) channel makes a large contribution to the mass of the resonance. However, the consideration of the coupled-channel \(\pi\pi K\) and \(\pi\eta K\) and variation of the interaction range and strength of the one-range Gaussian potentials did not reproduce the experimental width. In Ref. [38] the channels \(K^{*}_{0}(1430)\pi\), \(K\rho\), and \(K^{*}_{0}(892)\pi\) are quoted as "decaying channels". These channels require two-body dynamics either beyond \(s-\)wave (to form \(K^{*}(892)\) or \(\rho\)) or well above 1 GeV (to form \(K^{*}(1430)\)), and these effects are not included in [34; 35; 39]. It is then natural that the width reported in those and present works is much smaller than the one quoted indicated in [26; 27] (\(\Gamma\sim 250\) MeV) or in the recent LHCb analysis [29] (\(\Gamma\sim 335\) MeV). 
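Since the three-body inputs are just the one-range Gaussian potentials of Eq. (20), the two-body \(K\bar{K}\) energies and widths that feed Table 2 can be estimated with a few lines of code. The following sketch is purely illustrative and is not the authors' code: the grid size, box radius and the selection of the lowest-lying complex eigenvalue are our own choices, and its output should only be expected to land in the same ballpark as the \(E_{2}\approx-11\) MeV and \(\Gamma_{2}\approx 59\) MeV values quoted in the text, not to reproduce them exactly.

```python
import numpy as np

HBARC = 197.327  # MeV fm

def gaussian_pair_state(m1, m2, v0, b, rmax=15.0, n=800):
    """s-wave state of the one-range Gaussian potential of Eq. (20) (schematic solver).

    The radial equation for u(r) = r*psi(r) is discretized on a uniform grid with
    u(0) = u(rmax) = 0.  Since v0 is complex, the Hamiltonian matrix is non-Hermitian
    and the selected eigenvalue E = E_R - i*Gamma_2/2 is complex.
    """
    mu = m1 * m2 / (m1 + m2)              # reduced mass in MeV (using mc^2 values)
    coef = HBARC**2 / (2.0 * mu)          # kinetic prefactor, MeV fm^2
    r = np.linspace(0.0, rmax, n + 2)[1:-1]
    dr = r[1] - r[0]
    v = v0 * np.exp(-(r / b) ** 2)        # Eq. (20)
    H = np.diag(2.0 * coef / dr**2 + v) \
        - coef / dr**2 * (np.eye(n, k=1) + np.eye(n, k=-1))
    evals = np.linalg.eigvals(H)
    return evals[np.argmin(evals.real)]   # the state lying lowest in Re(E)

# Kbar-K pair with the average kaon mass and the K- mass (AAC row of Table 2),
# for the parameter sets A and B of Table 1.
for label, v0, b in (("A", -630.0 - 210.0j, 0.66), ("B", -1155.0 - 283.0j, 0.47)):
    E = gaussian_pair_state(495.7, 493.7, v0, b)
    print(f"set {label}:  E2 = {E.real:6.2f} MeV,  Gamma_2 = {-2.0 * E.imag:6.1f} MeV")
```

The complex energy appears because the \(K\bar{K}\) potentials of Table 1 carry a negative imaginary part, which effectively accounts for the open two-body channels discussed above.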
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{\(AAC\) model} \\ \hline Resonance & System & Mass, MeV Potentials & \(E_{3}\) & \(E_{3}(V_{K^{0}K^{+}}=0)\) & \(E_{2}^{K\bar{K}}\) \\ \hline & & \(m_{K^{-}}=493.7\) & A & \(-19.4\) & \(-31.2\) & \(-10.95\) \\ & & \(\overline{m}_{K}=495.7\) & B & \(-21.9\) & \(-28.9\) & \(-11.03\) \\ \cline{2-6} \(K^{+}(1460)\) & \(K^{0}K^{+}\overline{K^{0}}\) & \(\overline{m}_{K}=495.7\) & A & \(-20.1\) & \(-32.1\) & \(-11.40\) \\ & & \(m_{\overline{K^{0}}}=497.6\) & B & \(-22.4\) & \(-29.6\) & \(-11.34\) \\ \hline & & \multicolumn{6}{c}{\(ABC\) model} \\ \hline Resonance & System & Mass, MeV Potentials & \(E_{3}\) & \(E_{3}(V_{K^{0}K^{+}}=0)\) & \(E_{2}^{K^{+}\bar{K}}\) & \(E_{2}^{K^{0}\bar{K}}\) \\ \hline & & A & \(-19.4\) & \(-31.2\) & \(-10.74\) & \(-11.17\) \\ \(K^{0}(1460)\) & \(K^{0}K^{+}K^{-}\) & \(m_{K^{+}}=493.7\) & B & \(-21.9\) & \(-28.9\) & \(-10.86\) & \(-11.17\) \\ & & \(m_{K^{0}}=497.6\) & A\({}_{C}\) & \(-20.9\) & \(-\) & \(-12.37\) & \(-11.17\) \\ & & \(B_{C}\) & \(-23.2\) & \(-\) & \(-12.27\) & \(-11.17\) \\ \hline & & \(m_{K^{+}}=493.7\) & A & \(-20.1\) & \(-32.1\) & \(-11.17\) & \(-11.62\) \\ \(K^{+}(1460)\) & \(K^{0}K^{+}\overline{K^{0}}\) & \(m_{K^{0}}=497.6\) & B & \(-22.4\) & \(-29.6\) & \(-11.17\) & \(-11.49\) \\ & & \(m_{\overline{K^{0}}}=497.6\) & & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: The binding energy of the \(K^{0}K^{+}K^{-}\) and \(K^{+}K^{0}\bar{K}^{0}\) particle configurations in the \(AAC\) and \(ABC\) model. Calculations are performed for the potentials with the A and B parameter sets, respectively. In the \(AAC\) model, the masses of the \(K^{+}\) and \(K^{0}\) kaons are equal to the average value of their masses (\(\overline{m}_{K}=495.7\) MeV) and the Coulomb attraction is omitted. A\({}_{C}\) and B\({}_{C}\) correspond to the calculations in the \(ABC\) model when the Coulomb attraction is included. The \(E_{3}\) is the binding energy of the \(KK\bar{K}\) system and \(E_{3}(V_{K^{0}K^{+}}=0)\) is the three-body energy, when the repulsive interaction between \(K^{+}\) and \(K^{0}\) is omitted. The \(E_{2}^{KK}\) and \(E_{2}^{K^{+}\bar{K}}\), \(E_{2}^{K^{0}\bar{K}}\) are the energy of the bound \(K\bar{K}\) and \(K\bar{K}\), \(K^{0}\bar{K}\) pairs, in \(AAC\) and \(ABC\) models, respectively. The energies are given in MeV. ### The mass-symmetry breaking in the \(Aa\) pair: from the \(Aac\) to \(Abc\) model Let us start with the bosonic \(AAC\) model with two identical particles with the average mass of two kaons and violate the mass-symmetry of this model by changing the masses of two identical particles but keeping the total mass of the \(AA\) pair constant. Such mass redistribution leads to the transformation of the \(AAC\) model with the symmetric wave function with respect to the exchange of \(AA\) particles to the \(ABC\) model with a lack of this symmetry. Now, in the \(AA\) pair of "identical" particles, we have the masses \[m_{K^{0}}(\zeta)=\overline{m}_{K}(1-\zeta),\qquad m_{K^{+}}(\zeta)=\overline{m }_{K}(1+\zeta),\] Figure 3: (**Left panel**) The mass redistribution effect on the root mean squared distances between kaons in the \(K^{0}K^{+}K^{-}\) system. The results of calculations for the \(AAC\) model (\(a\)), the \(ABC\) model using the experimental kaon masses (\(b\)), the \(ABC\) model with the different unrealistic masses for the \(K^{0}\) and \(K^{+}\) kaons (\(c\)). 
The total mass of the \(K^{0}K^{+}\) pair is constant and equals to the sum of experimental masses. In (\(a\))-(\(c\)) cases the total binding energy \(E_{3}(\zeta)=-21.9\) MeV is the same, while the \(r.m.s.\) distances between kaons are different. (**Right panel**) The \(r.m.s.\) distances between \(K^{+}K^{-}\) (solid curve) and \(K^{0}K^{-}\) (dashed curve) kaons in three-body systems as functions of the \(\zeta\) parameter. The vertical line corresponds to the parameter \(\zeta\) related to experimental masses of \(K^{+}\) and \(K^{0}\) kaons. Calculations are performed with the set B for \(KK\) and \(K\bar{K}\) interactions. where \(\overline{m}_{K}=\left(m_{K^{0}}+m_{K^{+}}\right)/2\) the average mass of the \(AB\) pair and \(\zeta\) is a mass scaling parameter that can be varied. The total mass of this pair is constant: \(m_{K^{0}}(\zeta)+m_{K^{+}}(\zeta)=m_{K^{0}}+m_{K^{+}}\). The kaonic system with the mass \(m_{K^{-}}\) and variable masses \(m(K^{0})\) and \(m(K^{+})\), must be considered within the \(ABC\) model. The cases when \(\zeta=0\) and \(\zeta=0.004\) correspond to the \(AAC\) model with average masses of \(K^{0}\) and \(K^{+}\) kaons and the \(ABC\) model that describes \(K^{0}K^{+}K^{-}\) with the experimental masses of kaons, respectively. Results of calculations of the binding energy within the \(ABC\) model with variable masses of two kaons and the energy of the bound \(AC\) and \(BC\) kaon pairs as functions of the mass scaling parameter \(\zeta\) are shown in Fig. 2. The total energy \(E_{3}(\zeta)\) does not depend on the mass redistribution between \(A\) and \(B\) kaons up to \(\zeta\leq 0.06\). The later value shows when the limit of the approximation (11) is reached. However, the bound \(AC\) and \(BC\) kaon pairs energies are sensitive to the variation of the parameter \(\zeta\). The \(K^{+}K^{-}\) and \(K^{0}K^{-}\) kaon pairs energies \(E_{2}^{+}(\zeta)\) and \(E_{2}^{-}(\zeta)\), which correspond to the increase of the \(K^{0}\) mass and the decrease of the \(K^{+}\) mass from the average value to the experimental mass, increases and decreases, respectively, with the \(\zeta\) increase. Thus, the redistribution of the mass between two kaons violates the exchange symmetry of the wave function in the \(AAC\) model and leads to the \(ABC\) model that gives the same total energy as the \(AAC\) model, but increases \(E_{2}^{+}(\zeta)\) and decreases \(E_{2}^{-}(\zeta)\) energies of the bound kaon pairs. The violation of the wave function symmetry from the symmetric with respect to the exchange of \(AA\) particles to a wave function without such symmetry should affect the average distance between kaons. To demonstrate that we calculated the root mean squared (\(r.m.s.\)) distances between kaons. The latter is illustrated in Fig. 3, the left panel, by presenting the \(r.m.s.\) distances between kaon pairs for the \(K^{0}K^{+}K^{-}\) particle configuration for the different values of \(\zeta\). In the \(AAC\) model \(\zeta=0\) and one gets the isosceles triangle. Consideration of experimental masses in the \(ABC\) model leads to the different root mean squared distances between particles. In Fig. 3, the right panel, is shown the dependence of the \(r.m.s.\) distances on \(\zeta\). One can conclude that the mass flow affects the \(r.m.s.\) distances between kaons up to \(\zeta\sim 0.06\). This effect is non-linear and tends to saturate. According to Eq. (2) and Eq. 
(4), the mass-scaled coordinates in the \(AAC\) model are \({\bf x}_{1}=\sqrt{\overline{m}_{K}}{\bf r}_{23}\) and \({\bf x}_{2}=\sqrt{\overline{m}_{K}m_{\xi}}\cdot{\bf r}_{13}\). These coordinates in the \(ABC\) model are \({\bf x}_{1}=\sqrt{\frac{\overline{m}_{K}^{2}-(\Delta m)^{2}}{\overline{m}_{K} }}{\bf r}_{23}\), \({\bf x}_{2}=\sqrt{\frac{2(\overline{m}_{K}+\Delta m)m_{\xi}}{\overline{m}_{K} +\Delta m+m_{K}}}{\bf r}_{13}\), \({\bf x}_{3}=\sqrt{\frac{2(\overline{m}_{K}-\Delta m)m_{\xi}}{\overline{m}_{K} -\Delta m+m_{K}}}{\bf r}_{12}\) and have a \(\Delta m\)-dependence. Thus, the dependence of pair potentials as a function of the \({\bf x}\) coordinate in Eq. (4) is expressed as a \(\Delta m\)-dependence. One would assume that three-body kinetic energy operator is affected by the mass scaling parameter \(\zeta\). However, in the first order perturbation theory for \(\zeta\leq 0.064\), as we have shown in Section IV, the kinetic energy matrix element \(\langle\hat{H}_{0}\rangle\) does not depend on \(\zeta\). The binding energy \(E=\langle\hat{H}_{0}\rangle+\langle(v_{K^{0}\bar{K}}+v_{K^{+}\bar{K}}+v_{KK})\rangle\) is also a constant. This means the matrix element of the total potential energy does not changed with the \(\zeta\) increasing. Because the width \(\Gamma\) is evaluated from the imaginary part of the complex potentials, the three-body width is also independent on the parameter \(\zeta\) and is the same in the \(AAC\) and \(ABC\) models. The two-body \(K^{+}\bar{K}\) and \(K^{0}\bar{K}\) subsystems do not have the compensation mechanism due to the absence of the third particle that has the three-body \(KK\bar{K}\) system as expressed by Eqs. (10)-(11). Two-body widths depend on the mass scaling parameter and repeat the \(E_{2}(\zeta)\) dependence as depicted in Fig. 4. The widths of the \(K^{+}\bar{K}\) and \(K^{0}\bar{K}\) pairs are the same, \(\Gamma_{2}=59.2\) MeV, in the \(AAC\) model (\(\zeta=0\)). In the \(ABC\) model, \(\zeta=0.004\), the widths are \(\Gamma_{2}=59.8\) MeV and \(\Gamma_{2}=58.6\) MeV for the \(K^{+}\bar{K}\) and \(K^{0}\bar{K}\), respectively, for the set of the parameters B. Figure 4: The two-body widths \(\Gamma_{2}\) of \(K^{+}K^{-}\) (solid line) and \(K^{0}K^{-}\) (dashed line) versus the \(\zeta\) parameter. Calculations are performed with the set B for \(K\bar{K}\) interaction. The vertical line shows the parameter \(\zeta\) related to experimental values for masses of \(K^{+}\) and \(K^{0}\) kaons. ### From the \(Aac\) to \(Abc\) model through asymmetry of \(Ac\) and \(Bc\) potentials Above we considered the bosonic \(AAC\) model and its transformation to the \(ABC\) model due to changing the masses of two identical particles. However, the bosonic \(AAC\) model can be also transformed to the \(ABC\) model in the case when interactions in the \(AC\) and \(BC\) pairs are different. For example, in Ref. [39] to interpret the three-body resonance as K(1460) were used two-body potentials with the different strength and range parameters. In our consideration of the \(K^{0}K^{+}K^{-}\) particle configuration the kaon-antikaon interaction we use the same strong interaction in the \(K^{0}K^{-}\) and \(K^{+}K^{-}\) pairs. However, interactions in the \(K^{0}K^{-}\) and \(K^{+}K^{-}\) pairs are different due to the Coulomb attraction between \(K^{+}\) and \(K^{-}\). The latter means that even if one considers the average mass for \(K^{0}\) and \(K^{+}\) kaons, the \(K^{0}K^{+}K^{-}\) configuration should be described within the \(ABC\) model. 
Let us demonstrate this in general via a model where a strong interaction in the \(AC\) and \(BC\) pairs is different by introducing scaled potentials \[v_{AC}\to v_{K^{0}K^{-}}(\xi)=\overline{v}(1-\xi),\qquad v_{BC}\to v_{K^{+}K^{ -}}(\xi)=\overline{v}(1+\xi).\] In the last expressions \(\overline{v}\) is the average potential of the \(AC\) and \(BC\) pairs, \(\xi\) is the potential scaling parameter and \(v_{AC}=v_{BC}\) when \(\xi=0\). Within this model we calculate the binding energies of the \(K^{0}K^{+}K^{-}\) and the bound pairs \(K^{0}K^{-}\) and \(K^{+}K^{-}\) by keeping the average potential constant. In Fig. 2 are shown dependencies of the binding energy \(E_{3}(\xi)\) of the \(K^{0}K^{+}K^{-}\) and energies \(E_{2}^{K^{0}K^{-}}\) (\(E_{2}^{+}(\xi)\)) and \(E_{2}^{K^{+}K^{-}}(E_{2}^{-}(\xi))\) of the \(K^{0}K^{-}\) and \(K^{+}K^{-}\) pairs, respectively, on the potential scaling factor \(\xi\). Results when \(\xi=0\) correspond to the \(AAC\) model, while the increment of \(\xi\) leads to the \(ABC\) model. The increase of \(\xi\) leads to the increase or decrease of the two-body energies of the \(K^{0}K^{-}\) and \(K^{+}K^{-}\) pair, respectively. At the same time, the binding energy of the \(K^{0}K^{+}K^{-}\) system remains unchanged up to a value of \(\xi=0.03\), as is demonstrated in Fig. 2 and can be seen from Eq. (17). A similar situation was demonstrated in the previous subsection when the mass scaling parameter \(\zeta\) increases up to the value of \(\zeta=0.06\). The Coulomb attraction acting in the single kaon-antikaon pair leads also to the \(ABC\) model and increases three-body energy by 1.5 MeV and 1.3 MeV for the parameter sets A and B for \(KK\) and \(K\bar{K}\) interactions, respectively. It can be noted that the averaging procedures presented in sections IV and V allow us to consider the Coulomb interaction in the \(AAC\) model. ## VII Conclusions In the framework of the Faddeev equations in configuration space, we investigated the \(K(1460)\) resonance dynamically generated via the \(KK\bar{K}\) system. We considered the \(KK\bar{K}\) system as the \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) particle configurations that are analyzed using \(AAC\) and \(ABC\) models. We demonstrated that the \(ABC\) model can be reduced to the \(AAC\) one, where the wave function is symmetric with respect to the exchange of identical particles. The reduction is possible by averaging the masses of the \(AB\) pair or averaging \(AC\) and \(BC\) potentials, if they are different. It is shown that the repulsive \(KK\) interaction plays essential role in the binding energy of the \(KK\bar{K}\) system: contribution of the repulsive \(KK\) interaction decreases the three-particle binding energy by about 38% and 25% for the A and B parameter sets, respectively. Our three-body non-relativistic single-channel model predicts a quasi-bound state for the \(KK\bar{K}\) system. The mass of neutral \(K(1460)\) resonance calculated in the \(ABC\) model for the \(K^{0}K^{+}K^{-}\) particle configuration is 1464.1 MeV or 1461.8 MeV, while the mass of the charged \(K^{+}(1460)\) resonance for the \(K^{0}K^{+}\overline{K^{0}}\) particle configuration is 1468.8 MeV or 1466.5 MeV. These values are obtained for the parameter sets A and B for \(KK\) and \(K\bar{K}\) interactions, respectively. The results are in fair agreement with the experimental value of the \(K(1460)\) mass, 1482.40\(\pm\)3.58\(\pm\)15.22 MeV [31]. 
Due to the Coulomb attraction, the three-body binding energy \(E_{3}\) increases; the energy shift is 1.5 MeV and 1.3 MeV for the parameter sets A and B for the \(KK\) and \(K\bar{K}\) interactions, respectively. Let us note that within the \(AAC\) model, consideration of the Coulomb attraction is also possible using the averaging procedure for the Coulomb potential. The binding energies \(E_{3}\) of \(K^{0}K^{+}K^{-}\) and \(K^{0}K^{+}\overline{K^{0}}\) calculated in the \(AAC\) model with the average kaon masses and in the \(ABC\) model with the experimental kaon masses are the same. Effectively, the \(AAC\) model can reproduce the binding energy obtained in the \(ABC\) model if the corresponding relative correction of the masses is not larger than 6%. Thus, the small difference of the kaon masses does not affect the binding energy of the \(K^{0}K^{+}K^{-}\) kaonic system when the Coulomb interaction is neglected. An increase in the mass difference of the kaons violates the mass symmetry of the system while leaving the binding energy unchanged. However, when the relative mass correction exceeds 6%, the binding energies calculated using the \(AAC\) and \(ABC\) models differ. It should be noted that this effect has not been reported before. The three-body kaonic system allows us to demonstrate this effect clearly. One can consider similar nuclear systems, for example, \(np\Lambda\) or \(np\alpha\), instead of the \(KK\bar{K}\) kaonic system, but the effect will be small due to the small difference between the proton and neutron masses. We have found that the mass correction does not change the three-body energy; however, it violates the exchange symmetry, which affects the symmetry of the \(ABC\) model wave function. In the \(AAC\) model, there is an exchange symmetry related to the symmetric localization of the identical particles. In contrast, consideration of the experimental masses in the \(ABC\) model leads to the violation of this symmetry. This is indicated by the different root mean squared distances between the kaons. The different \(r.m.s.\) distances between kaons are due to the different kaon masses and/or potentials in the \(AAC\) and \(ABC\) models. We considered the \(KK\bar{K}\) system using a single-channel description with effective \(s\)-wave potentials. Some refinements can be made, such as using more realistic two-body potentials, including \(p\)-wave components, and/or considering a coupled-channel approach. It is important to note that the choice of the model and the assumptions made in our analysis can always have an impact on the results. Therefore, it is essential to carefully consider the limitations and uncertainties in the description of the \(KK\bar{K}\) system related to the mass- and charge-symmetry breaking. ###### Acknowledgements. This work is supported by the National Science Foundation HRD-1345219 and DMR-1523617 awards, the Department of Energy/National Nuclear Security Administration under Award Number NA0003979, and the DOD-ARO grant #W911NF-13-0165.
2308.13655
Atmospheric muon fluxes at sub-orbital neutrino detectors
Very-high-energy and ultra-high-energy neutrinos are messengers of energetic sources in the universe. Sub-orbital and satellite-based neutrino telescopes employ detectors of the atmospheric Cherenkov emission from extensive air showers (EASs) generated by charged particles. These Cherenkov detectors can be pointed below or above the Earth's limb. Cherenkov emissions produced from directions below the limb are from upward-going EASs produced in the atmosphere sourced by Earth-skimming neutrinos. When the Cherenkov telescope is pointed slightly above the Earth's limb, signals from EASs are initiated by cosmic ray interactions in the atmosphere. For sub-orbital detectors, muons produced from cosmic rays in the atmosphere can directly hit the Cherenkov telescope. Using a semi-analytic technique with cascade equations for atmospheric particle fluxes, we quantify the atmospheric muon flux that reaches sub-orbital telescopes like Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2). We assess this potential background to the EAS signals. The calculation technique may also provide an understanding of the evolution of the muon content in individual EAS.
Diksha Garg, Mary Hall Reno
2023-08-25T20:01:45Z
http://arxiv.org/abs/2308.13655v1
# Atmospheric muon fluxes at sub-orbital neutrino detectors ###### Abstract: Very-high-energy and ultra-high-energy neutrinos are messengers of energetic sources in the universe. Sub-orbital and satellite-based neutrino telescopes employ detectors of the atmospheric Cherenkov emission from extensive air showers (EASs) generated by charged particles. These Cherenkov detectors can be pointed below or above the Earth's limb. Cherenkov emissions produced from directions below the limb are from upward-going EASs produced in the atmosphere sourced by Earth-skimming neutrinos. When the Cherenkov telescope is pointed slightly above the Earth's limb, signals from EASs are initiated by cosmic ray interactions in the atmosphere. For sub-orbital detectors, muons produced from cosmic rays in the atmosphere can directly hit the Cherenkov telescope. Using a semi-analytic technique with cascade equations for atmospheric particle fluxes, we quantify the atmospheric muon flux that reaches sub-orbital telescopes like Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2). We assess this potential background to the EAS signals. The calculation technique may also provide an understanding of the evolution of the muon content in individual EAS. Introduction Neutrinos, neutral and weakly interacting particles, can travel astronomical distances unhindered and act as messengers from the distant universe. To detect the very-high-energy (VHE) (\(E>10^{15}\) eV) neutrino flux, large volume neutrino targets/detectors are needed. One approach is to use Earth as a neutrino converter, where neutrinos propagating through the Earth interact to produce charged leptons (electrons, muons, and tau-leptons). The charged leptons can exit the Earth and create extensive-air-showers (EASs) in the atmosphere. The EASs have optical Cherenkov, radio emission and fluorescent radiation associated to them. The radiation can be detected by ground-based, sub-orbital, and orbital neutrino telescopes such as IceCube [1], EUSO-SPB2 [2, 3], and a future POEMMA [4]. Another feature of cosmic ray interactions in the atmosphere is the generation of atmospheric neutrino and muon fluxes. The atmospheric muon flux incident on EUSO-SPB2 is the subject of this study. EUSO-SPB2 had a fluorescence telescope (FT) (nadir pointing) and a Cherenkov telescope (CT) that pointed above the limb to detect EAS from cosmic rays. It flew at an altitude of 33 km from the surface of the Earth. The muons produced from cosmic rays interactions in the atmosphere produce charged pions and kaons that can decay to muons. These muons can directly hit the CT and the FT. This is depicted in fig. 1, where \(\alpha\) describes the incident direction of the cosmic rays, which can be below or above the telescope's horizon. For the telescopes at an altitude of 33 km, \(\alpha>84.2^{\circ}\) for cosmic ray trajectories to be above the Earth's limb. Direct muon hits from the atmospheric muon flux can act as a potential background to the EAS signals measured by the CT and the FT. The atmospheric lepton fluxes are well-studied for ground-based instruments (see, e.g., refs. [5, 6, 7, 8, 9]), but less so for sub-orbital instruments like EUSO-SPB2. Here, we make a first estimate of the rate of atmospheric muons hitting the EUSO-SPB2 telescopes. 
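As a quick check of the viewing geometry quoted above, the limb direction follows from elementary geometry: the nadir angle of the Earth's limb seen from altitude \(h\) is \(\alpha_{\rm limb}=\arcsin\left[R_{E}/(R_{E}+h)\right]\). The short script below is an illustration added here, not part of the original analysis; the mean Earth radius \(R_{E}=6371\) km and the function name are our assumptions.

```python
import math

R_EARTH_KM = 6371.0   # mean Earth radius; assumed value, not taken from the paper
ALTITUDE_KM = 33.0    # EUSO-SPB2 float altitude quoted in the text

def limb_nadir_angle_deg(altitude_km: float, r_earth_km: float = R_EARTH_KM) -> float:
    """Nadir angle of the Earth's limb as seen from the given altitude.

    Viewing directions with nadir angle alpha larger than this value pass
    above the limb, i.e. they never intersect the Earth's surface.
    """
    return math.degrees(math.asin(r_earth_km / (r_earth_km + altitude_km)))

if __name__ == "__main__":
    print(f"alpha_limb at {ALTITUDE_KM:.0f} km: {limb_nadir_angle_deg(ALTITUDE_KM):.1f} deg")
    # prints about 84.2 deg, the limb angle used in the flux integrations of Section 3
```

Directions with \(\alpha\) just above this limb angle graze the atmosphere without intersecting the ground, which is why the solid-angle integrations in Section 3 start at \(\alpha=84.2^{\circ}\).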
## 2 Atmospheric lepton fluxes with the \(Z\)-moment approximation The atmospheric particle flux \(\phi_{j}(E,X)\) for particle \(j\), as a function of energy \(E\) and column depth \(X\) can be written as \[\frac{d\phi_{j}(E,X)}{dX} = -\frac{\phi_{j}(E,X)}{\lambda_{j}(E)}-\frac{\phi_{j}(E,X)}{ \lambda_{j}^{\rm dec}(E)}+\sum S(k\to j)\,, \tag{1}\] \[S(k\to j) = \int_{E}^{\infty}dE^{\prime}\frac{\phi_{k}(E^{\prime},X)}{ \lambda_{k}(E^{\prime})}\frac{dn(k\to j;E^{\prime},E)}{dE}\,, \tag{2}\] for all particles except for muons. The symbol \(\lambda_{j}(E)\) and \(\lambda_{j}^{\rm dec}(E)=E\tau_{j}\rho/m_{j}\) are the interaction length and the decay length, respectively, of the particle \(j\) in the atmosphere. The quantities \(E\), \(\tau_{j}\), \(m_{j}\) are the energy, lifetime and mass of the particle \(j\). We have suppressed the angular (\(\alpha\)) dependence of \(\phi_{j}\) and \(X\). The column depth is given as: \[X(\ell,\alpha)=\int_{\ell}^{\infty}d\ell^{\prime}\rho(h(\ell^{\prime},\alpha)), \tag{3}\] where \(h(\ell,\alpha)\) is the altitude, \(\ell\) is the trajectory distance and \(\alpha\) is the nadir angle, also shown in fig. 1. The atmospheric density (\(\rho\)) considered here is given by an exponential distribution, \[\rho=\rho_{0}\exp(-h/h_{0})\, \tag{4}\] where \(h_{0}=6.4\) km and \(\rho_{0}h_{0}=1300\) g/cm\({}^{2}\). We approximate particle trajectories as one-dimensional, a good approximation above 10 GeV [9]. The interaction and decay distributions in the source term \(S(k\to j)\) (eq. (2)) in terms of cross-section (\(\sigma\)) and decay width (\(\Gamma\)) are \[\frac{dn(k\to j;E^{\prime},E)}{dE} = \frac{1}{\sigma_{kA}(E^{\prime})}\frac{d\sigma(kA\to jY;E^{\prime},E)}{dE} \quad(\mbox{interaction})\,, \tag{5}\] \[\frac{dn(k\to j;E^{\prime},E)}{dE} = \frac{1}{\Gamma_{k}(E^{\prime})}\frac{d\Gamma(k\to jY;E^{\prime},E)}{dE} \quad(\mbox{decay})\,\,. \tag{6}\] For muons, electromagnetic energy loss is incorporated in the continuous-energy-loss approximation, where \[\left\langle\frac{dE}{dX}\right\rangle\simeq\frac{dE}{dX}\simeq-(a+bE)\equiv- \beta(E)\,. \tag{7}\] Here, \(a\) and \(b\) are the energy loss parameters accounting for ionization, and bremsstrahlung, pair production and photo-nuclear energy loss, respectively. In this approximation \[\frac{d\phi_{\mu}(E,X)}{dX}=-\frac{\phi_{\mu}(E,X)}{\lambda_{j}^{\rm dec}(E)} +\frac{\partial}{\partial E}\left[\beta(E)\phi_{\mu}(E,X)\right]+\sum S(k \to\mu)\,. \tag{8}\] We use the same analytic approximation method involving spectrum-weighted \(Z\)-moments that is successful for calculating the atmospheric lepton fluxes at the Earth's surface [5, 6]. The analytic approximation to determine atmospheric lepton fluxes relies on flux-weighted integrals of differential distributions for particle production and decay. The source of particle \(j\) from initial Figure 1: Sub-orbital telescope (like, EUSO-SPB2) at 33 km altitude. In red is the particle trajectory to the telescope surface at different \(\alpha\) angles (\(\alpha=0^{\circ}\) at nadir). The cosmic rays (\(p\)) interact in the atmosphere to create pions and kaons which decay to muons. At a given point in the particle’s trajectory, \(h\) represents the altitude of that point and \(\ell\) represents the remaining trajectory distance from that point to the telescope. The figure is not to scale. particle \(k\), denoted \(S(k\to j)\), is given by eq. 
(2) and can be re-written in terms of \(Z\)-moment as \[S(k\to j) \simeq \left[\int_{E}^{\infty}dE^{\prime}\frac{\phi_{k}^{0}(E^{\prime})}{ \phi_{k}^{0}(E)}\frac{\lambda_{k}(E)}{\lambda_{k}(E^{\prime})}\frac{dn(k\to j;E ^{\prime},E)}{dE}\right]\frac{\phi_{k}(E,X)}{\lambda_{k}(E)} \tag{9}\] \[\equiv Z_{kj}(E)\frac{dn(k\to j;E^{\prime},E)}{dE}\;,\] where \(\phi_{k}(E,X)=\phi_{k}^{0}(E)f(X)\) and \(\Lambda_{k}=\lambda_{k}/(1-Z_{kk})\) so that \(f(X)=\exp{(-X/\Lambda_{k})}\). To evaluate the flux of muons, we step through the column depth of the atmosphere with steps \(\Delta X\) that are small relative to \(\lambda_{\mu}^{\rm dec}\) and \(\lambda_{j}\) for interactions and decays. For cosmic ray nucleons \(N\), and for the pions and kaons they produce in the atmosphere, this translates for each step to \[\phi_{N}(E,X+\Delta X) = \phi_{N}(E,X)\left(1-\frac{\Delta X}{\Lambda_{N}}\right) \tag{10}\] \[\phi_{j}(E,X+\Delta X) = \phi_{j}(E,X)\left(1-\frac{\Delta X}{\Lambda_{j}}-\frac{\Delta X }{\lambda_{j}^{\rm dec}}\right)+Z_{N\to j}\phi_{N}(E,X)\frac{\Delta X}{ \lambda_{N}} \tag{11}\] where \(j=\pi,K\). For muons \[\phi_{\mu}(E,X+\Delta X) = \left[\phi_{\mu}(E^{\prime},X)\left(1-\frac{\Delta X}{\lambda_{ \mu}^{\rm dec}}\right)+\sum_{j=\pi,K}Z_{j\to\mu}\phi_{j}(E^{\prime},X)\frac{ \Delta X}{\lambda_{j}^{\rm dec}}\right]\exp(b\Delta X) \tag{12}\] \[E^{\prime} = (E+a/b)\exp(-b\Delta X)-a/b\;. \tag{13}\] To simplify the calculation further, we make several approximations. The energy loss parameters \(a\), \(b\), and the \(Z\)-moments are taken to be energy independent with their respective values are taken from ref. [5], including pion and kaon decays as sources of muons in the atmosphere. We don't account for the Earth's magnetic field, an effect most important for muons with energies below 10 GeV [9]. We approximate the cosmic ray flux (\(\phi_{N}^{0}\)) as a function of energy per nucleon \(E\) by \[\phi_{N}^{0}(E)\left[\frac{\rm nucleons}{\rm cm^{2}\,s\,sr\,GeV}\right]=1.7\;(E /{\rm GeV})^{-2.7},\quad E<5\cdot 10^{6}\;{\rm GeV}\;.\] Figure 2: Muon energy (left) and the survival probability of muon (right) produced \(\ell\) distance away from the EUSO-SPB2 balloon. The balloon is at \(\ell=0\) km. It is shown for three final muon energies at the balloon. The dashed line is for \(\alpha=87^{\circ}\) and solid line is for \(\alpha=90^{\circ}\). The plot on the left shows the muon reaching the balloon (at \(\ell=0\) km) with a final energy of \(E_{\mu,f}=1,5,10\) GeV starts its trajectory at a higher energy. The muon flux is evaluated according to eq. (12). Our evaluation of the atmospheric muon flux on the ground is in agreement with the results in ref. [5]. For total atmospheric column depth \(X(\alpha)\), the muon energy at the beginning of the trajectory has initial energy \(E_{i}=(E+a/b)\,\exp(bX)-a/b\), and for each of the \(n\) steps in column depth, the muon energy is decreased according to eq. (13). Therefore, the muons reaching the telescope with a final energy \(E_{\mu,f}\) (\(E_{\mu}=E_{\mu,f}\) at \(\ell=0\)) will have started their journey with a higher initial energy, as shown in the left panel of fig. 2 for two different column depths corresponding to \(\alpha=87^{\circ}\) and \(\alpha=90^{\circ}\). Muon energy loss also impacts the survival probability of the muons, as shown in the left panel of fig. 2 which shows the muon survival probability as a function of \(\ell\) for three final muon energies and two column depths. 
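To make the bookkeeping of eqs. (10)-(13) concrete, the following sketch steps the nucleon, meson and muon fluxes down a single vertical column of the exponential atmosphere of eq. (4), for which \(\rho(X)=X/h_{0}\). It is an illustration added here rather than the calculation of the paper: the paper integrates along slanted trajectories via eq. (3) and takes its energy-independent \(Z\)-moments and energy-loss parameters \(a,b\) from ref. [5], whereas the numbers below are rough, illustrative stand-ins; meson regeneration moments are omitted, exponential survival factors replace the linearized \((1-\Delta X/\lambda)\) factors (equivalent for small steps but numerically safer high in the atmosphere), and the shifted energy is taken at the previous, higher value so that muons lose energy as they propagate. All function and variable names are ours.

```python
import numpy as np

# --- illustrative constants (not the values used in the paper) ------------------
H0_CM = 6.4e5                                       # scale height h0 = 6.4 km, in cm
LAMBDA_INT = {"N": 86.0, "pi": 116.0, "K": 138.0}   # interaction lengths [g/cm^2]
Z_NN = 0.30                                         # nucleon regeneration moment
Z_PROD = {"pi": 0.079, "K": 0.012}                  # production moments Z_{N->j}
Z_DECAY = {"pi": 0.67, "K": 0.25}                   # decay moments Z_{j->mu}, E^-2.7 spectrum
CTAU_CM = {"pi": 780.0, "K": 371.0, "mu": 6.586e4}  # c*tau
MASS_GEV = {"pi": 0.1396, "K": 0.4937, "mu": 0.1057}
A_LOSS, B_LOSS = 2.0e-3, 3.5e-6                     # muon a [GeV cm^2/g], b [cm^2/g]

def decay_column_length(p, energy, rho):
    """lambda_dec = (E / m) * c*tau * rho, in g/cm^2 (rho in g/cm^3)."""
    return (energy / MASS_GEV[p]) * CTAU_CM[p] * rho

def vertical_muon_flux(x_target, n_steps=20000):
    """Step eqs. (10)-(12) from the top of the atmosphere down to column depth x_target.

    Vertical column only: rho(X) = X / h0 for the exponential atmosphere of eq. (4).
    Returns the energy grid and the muon flux [1/(cm^2 s sr GeV)] at x_target.
    """
    e = np.logspace(0, 6, 121)                  # 1 GeV .. 1e6 GeV
    phi_n = 1.7 * e ** -2.7                     # primary nucleon flux at X = 0
    phi = {"pi": np.zeros_like(e), "K": np.zeros_like(e), "mu": np.zeros_like(e)}
    dx = x_target / n_steps
    lam_n_att = LAMBDA_INT["N"] / (1.0 - Z_NN)  # nucleon attenuation length Lambda_N

    for i in range(n_steps):
        rho = max(((i + 0.5) * dx) / H0_CM, 1e-15)   # local density along the column

        # muons: continuous energy loss (cf. eq. 13), decay, and meson-decay source
        e_prev = (e + A_LOSS / B_LOSS) * np.exp(B_LOSS * dx) - A_LOSS / B_LOSS
        surv_mu = np.exp(-dx / decay_column_length("mu", e_prev, rho))
        source_mu = np.zeros_like(e)
        for j in ("pi", "K"):
            lam_dec = decay_column_length(j, e_prev, rho)
            rate_tot = 1.0 / LAMBDA_INT[j] + 1.0 / lam_dec
            # mesons removed this step that decay rather than interact;
            # reduces to the dX / lambda_dec source of eq. (12) for small steps
            decayed = np.interp(e_prev, e, phi[j]) * (1.0 - np.exp(-dx * rate_tot)) \
                      * (1.0 / lam_dec) / rate_tot
            source_mu += Z_DECAY[j] * decayed
        phi["mu"] = (np.interp(e_prev, e, phi["mu"]) * surv_mu + source_mu) * np.exp(B_LOSS * dx)

        # charged mesons: attenuation + decay losses, production off nucleons (eq. 11)
        for j in ("pi", "K"):
            lam_dec = decay_column_length(j, e, rho)
            surv = np.exp(-dx / LAMBDA_INT[j] - dx / lam_dec)
            phi[j] = phi[j] * surv + Z_PROD[j] * phi_n * dx / LAMBDA_INT["N"]

        # nucleons: attenuation with Lambda_N (eq. 10)
        phi_n = phi_n * np.exp(-dx / lam_n_att)

    return e, phi["mu"]

if __name__ == "__main__":
    # A vertical column down to the ground is ~1300 g/cm^2; above a 33 km balloon
    # it is only a few g/cm^2, which is why near-limb directions dominate the rate.
    e, phi_mu = vertical_muon_flux(x_target=1300.0)
    i10 = np.searchsorted(e, 10.0)
    print(f"vertical muon flux near 10 GeV: {phi_mu[i10]:.2e} / (cm^2 s sr GeV)")
```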
## 3 Results The cosmic rays interact after traversing \(X\simeq 120\) g/cm\({}^{2}\) in the atmosphere to form charged mesons, like pions and kaons, which can decay to muons. The column depth depends on the density of the air in the atmosphere, and is given by eq. (3). The atmosphere near the surface of the Earth is denser and its density decreases as we go farther up from the surface. Therefore, understanding the Figure 4: Muon flux (scaled by \(E_{\mu,f}^{2,7}\)) as a function of final muon energy reaching the detector. It is shown for different trajectories above and below the horizon. Right: Muon flux (scaled by \(E_{\mu,f}^{2.7}\)) as a function of \(\alpha\). It is shown for different final muon energies reaching the detector. Figure 3: Left: Altitude \(h\) (left) and trajectory distance \(\ell\) (right) as a function of column depth \(X\) starting from very high altitude, for trajectories below the horizon, at the horizon and above the horizon (different \(\alpha\) values) for an instrument at an altitude of 33 km. The atmospheric density is approximated by an exponential function. column depth variation along a particle's trajectory as a function of \(h\) and \(\ell\) is important. Figure 3 shows the column depth variation in the atmosphere as a function of altitude (left) and particle's trajectory distance (right). It is shown for different trajectories above and below the horizon. Our main results for the muon flux with energy \(E_{\mu,f}\) at a detector at an altitude of 33 km are shown in fig. 4. Figure 4 (left) shows the muon flux scaled by the \(E_{\mu,f}^{2.7}\), as a function of \(E_{\mu,f}\), for several values of \(\alpha\), from both below and above the horizon of the telescope. Figure 4 (right) shows the muon flux scaled by \(E_{\mu,f}^{2.7}\) as a function of \(\alpha\) for fixed final muon energies at the balloon. The particle trajectories with smaller \(\alpha\), have to travel longer distances to reach the balloon as compared to the particle trajectories with higher \(\alpha\). For higher \(\alpha\), the density of the atmosphere is lower, and it leads to the cosmic rays interacting to produce charged pions and kaons closer to the detector. This means the production of muons is also closer to the detector, increasing the muon flux reaching the telescopes on the balloon for higher \(\alpha\) trajectories. This is why the muon flux Figure 5: The ratio of muon flux produced from charged pions to charged kaons, as a function of final muon energies. It is shown for different trajectories below and above the horizon. Figure 6: Muon flux reaching per cm\({}^{2}\) of area of the detector per second scaled by final muon energies, as a function of final muon energy. Solid line is for the detector pointing to the horizon, and dashed line is for the detector pointing in nadir direction. reaching the balloon with a final muon energy of 1 GeV, for example, is lower for smaller \(\alpha\) but increases with higher \(\alpha\) (shown in fig. 4 right). We can see in fig. 4 (left), the muon flux increases with increasing muon final energies as more muons are able to reach the balloon. But the flux starts falling for \(E_{\mu,f}\gtrsim 10^{3}\) GeV. This is because fewer muons at higher energies are being produced as the probability of interaction for pions and kaons is higher than their decay probability. For reference, we show the ratio of the muon flux reaching the balloon produced from charged pion decays to the flux from charged kaon decays in fig. 
5 for different values of \(\alpha\). The muon flux from pion decays is a factor of \(\sim 8-11\) higher than the flux from kaon decays at lower final muon energies. For higher final muon energies, the ratio of the muon flux produced from pions to that from kaons is nearly constant as a function of energy, with a value of \(\sim 1.5\). To calculate the rate of muons incident on the CT pointed horizontally, and from the upward muon flux incident on the FT pointing to the nadir, we integrate the flux of muons over the appropriate solid angle, starting with \(\alpha=84.2^{\circ}\) at the Earth's limb. For these two cases, the flux integrated over solid angle and energy above \(E_{\mu,f}\), denoted by \(\Phi_{\mu}(E_{\mu,f})\), is \[\Phi_{\mu}(E_{\mu,f})\equiv\int_{E_{\mu,f}}dE_{\mu}\int_{\varphi=-\pi/2}^{\varphi=\pi/2}\int_{\alpha=84.2^{\circ}}^{\alpha=180^{\circ}}\phi_{\mu}(E_{\mu},\alpha)\sin^{2}\alpha\cos\varphi\,d\alpha d\varphi,\quad\mbox{horizontal}\tag{14}\] \[\Phi_{\mu}(E_{\mu,f})\equiv\int_{E_{\mu,f}}dE_{\mu}\int_{\varphi=0}^{\varphi=2\pi}\int_{\alpha=84.2^{\circ}}^{\alpha=90^{\circ}}\phi_{\mu}(E_{\mu},\alpha)\cos\alpha\sin\alpha\,d\alpha d\varphi,\quad\mbox{nadir}\,.\tag{15}\] The results for \(E_{\mu,f}\times d\Phi_{\mu}/dE_{\mu,f}\) as a function of final muon energy are shown in fig. 6 for each orientation. The rate of up-going muons in the atmospheric flux that reach the FT, parallel to the Earth (pointing to the nadir), is much smaller than the rate incident on the CT pointing to the horizon. Integrating over energy with a minimum energy of \(E_{\mu,f}=1\) GeV yields \(6.6\times 10^{-3}\)/cm\({}^{2}\)/s on the horizontally-pointing CT, and \(2.1\times 10^{-6}\)/cm\({}^{2}\)/s on the nadir-pointing FT. For comparison, the flux of muons on the ground is approximately 1/cm\({}^{2}\)/min \(\simeq 1.7\times 10^{-2}\)/cm\({}^{2}\)/s [10]. Our one-dimensional treatment of the cascade equations for a given \(\alpha\) is a good approximation for \(E_{\mu,f}\gtrsim 10\) GeV, but it is less reliable for lower-energy muons, where the Earth's magnetic field bends the muon trajectory (and makes it longer) [11]. For reference, \(\Phi_{\mu}(E_{\mu,f}=10\) GeV) is \(3.6\times 10^{-4}\)/cm\({}^{2}\)/s for the horizontal orientation of the telescope, and \(1.3\times 10^{-6}\)/cm\({}^{2}\)/s for the nadir orientation. ## 4 Discussion Our muon flux calculation is an approximate result, but for the orientation of the CT pointing to its horizon, or near to it, it is interesting enough to pursue with a more detailed evaluation. As muons pass through the plane of the detector, they are ionizing and deposit energy. More detailed modeling of the detector is required to understand their impact. Given the area of the CT, if we assume that 100% of the muon rate triggers the detector, the atmospheric muon rate for muons above 1 GeV is 1.21 Hz and above 10 GeV is 0.066 Hz on the CT of area 184 cm\({}^{2}\) with horizontal pointing. There may be an opportunity to measure the atmospheric muon flux with EUSO-SPB2 using data collected to determine the air glow background. **Acknowledgements** We thank J. Krizmanic and J. Szabelski for illuminating discussions. This work was supported in part by the US Department of Energy grant DE-SC-0010113.
2301.01982
Emotion-Cause Pair Extraction as Question Answering
The task of Emotion-Cause Pair Extraction (ECPE) aims to extract all potential emotion-cause pairs of a document without any annotation of emotion or cause clauses. Previous approaches on ECPE have tried to improve conventional two-step processing schemes by using complex architectures for modeling emotion-cause interaction. In this paper, we cast the ECPE task to the question answering (QA) problem and propose simple yet effective BERT-based solutions to tackle it. Given a document, our Guided-QA model first predicts the best emotion clause using a fixed question. Then the predicted emotion is used as a question to predict the most potential cause for the emotion. We evaluate our model on a standard ECPE corpus. The experimental results show that despite its simplicity, our Guided-QA achieves promising results and is easy to reproduce. The code of Guided-QA is also provided.
Huu-Hiep Nguyen, Minh-Tien Nguyen
2023-01-05T09:33:41Z
http://arxiv.org/abs/2301.01982v2
# Emotion-Cause Pair Extraction as Question Answering ###### Abstract The task of Emotion-Cause Pair Extraction (ECPE) aims to extract all potential emotion-cause pairs of a document without any annotation of emotion or cause clauses. Previous approaches on ECPE have tried to improve conventional two-step processing schemes by using complex architectures for modeling emotion-cause interaction. In this paper, we cast the ECPE task to the question answering (QA) problem and propose simple yet effective BERT-based solutions to tackle it. Given a document, our _Guided-QA_ model first predicts the best emotion clause using a fixed question. Then the predicted emotion is used as a question to predict the most potential cause for the emotion. We evaluate our model on a standard ECPE corpus. The experimental results show that despite its simplicity, our Guided-QA achieves promising results and is easy to reproduce. The code of Guided-QA is also provided. ## 1 Introduction Emotion Cause Extraction (ECE) is the task of detecting the cause behind an emotion given the emotion annotation Lee et al. (2010); Gui et al. (2016), see Figure 1 (Top). The text was divided into clauses and the task was to detect the clause containing the cause, given the clause containing the emotion. However, the applicability of ECE is limited due to the fact that emotion annotations are required at test time. Recently, Xia and Ding (2019) introduced the more challenging Emotion-Cause Pair Extraction (ECPE) task: extracting all possible emotion-cause clause pairs in a document without annotations. Figure 1 (Bottom) shows an example of the ECPE task. The input is a document of six clauses. Clauses c4 and c5 contain emotion with the emotion expressions "happy" and "worried". The emotion c4 has two causes c3 and c2, the emotion c5 has one cause c6, so the expected output is {(c4,c2), (c4,c3), (c5,c6)}. _Why cause-effect pair extraction?_ We argue that _independent_ extraction of cause and emotion may be ineffective. For a given document, ECPE models may predict correct cause but incorrect emotion. This makes the output _incomplete_, and subsequent processing steps less reliable Ding et al. (2020); Wei et al. (2020); Chen et al. (2020); Yan et al. (2021). We make a toy example of two models using the document in Figure 1. Model-1 predicts (c4,c1) and (c6,c3) as emotion-cause pairs. Its emotion, cause and pair accuracy scores are 0.5, 0.33 and 0.0. Model-2 predicts (c4, c2) and (c6, c1) as emotion-cause pairs. Its emotion, cause and pair accuracy scores are 0.5, 0.33 and 0.33. From the perspective of the pair extraction task, Model-2 is better. Figure 1: Illustration of ECE and ECPE tasks. Previous studies addressed the ECPE task by using sequence labeling (Lee et al., 2010; Cheng et al., 2021), clause-level classification (Gui et al., 2016; Ding et al., 2020; Chen et al., 2020), ranking (Wei et al., 2020), or recurrent synchronization (Chen et al., 2022). The methods achieved promising results, yet the use of interaction between emotion and cause clauses is still an open question. For example, c4 and c2 share "the old man" tokens, which refer to "him" in c3; and c5 and c6 share "he", which mentions "the old man" in c2 and c4. Based on this observation, we introduce a paradigm shift (Sun et al., 2022) for ECPE by using _span extraction_. As far as we know, (Gui et al., 2017) is the first work that uses question answering for emotion-cause detection. 
However, their work addresses the ECE task only, which requires the annotation of emotion for cause prediction. In contrast, our paradigm shift is applied to the ECPE task, which is more challenging and does not require the annotation of emotion for cause prediction. The paradigm bases on two hypotheses. First, information from emotion clauses can be used to infer cause clauses. Second, emotion and cause clauses share implicit interaction. The design of our model is based on these two hypotheses. For the first hypothesis, we form questions based on emotional information which is used to predict emotion clauses. For the second hypothesis, we used predicted emotion as the guided question for cause prediction. The model is trained by using the BERT-QA architecture (Devlin et al., 2018) in form of SQuAD task (Rajpurkar et al., 2016). Our paper makes three main contributions. * We formulate the ECPE task as a QA problem and propose a Guided-QA model to implicitly capture the relationship between emotion and cause clauses, in which the predicted emotion is used as a guided question for cause prediction. The model can capture the implicit interaction between emotions and causes with a simple but effective architecture. To the best of our knowledge, we are the first to address the ECPE task by using QA formulation. * We evaluate our model on the standard ECPE corpus (Xia and Ding, 2019; Fan et al., 2020). Experimental results show that our approach achieves promising results compared to previous methods. * We promote the reproducibility (Houghton et al., 2020) by providing the source code of our methods as well as rerunning publicly available source codes of the compared methods. ## 2 Related Work ECE and ECPE tasksThe ECE task was formulated as sequence-labeling by (Lee et al., 2010) and refined as clause-level by (Gui et al., 2016). Recently, the more challenging ECPE task (Xia and Ding, 2019) has attracted a lot of contributions with several strong methods (Ding et al., 2020; Wei et al., 2020; Chen et al., 2020; Cheng et al., 2021; Chen et al., 2022). For example, (Ding et al., 2020) introduced ECPE-MLL, which uses a sliding window for a multi-label learning scheme. ECPE-MLL extracts the emotion and cause by using the iterative synchronized multitask learning. (Chen et al., 2022) proposed a similar approach, recurrent synchronization network (RSN), that explicitly models the interaction among different tasks. (Wei et al., 2020) presented RankCP, a transition-based framework, by transforming the ECPE problem into directed graph construction, from which emotions and the corresponding causes can be extracted simultaneously based on labeled edges. The PairGCN model (Chen et al., 2020) used Graph Convolutional Networks to model three types of dependency relations among local neighborhood candidate pairs and facilitate the extraction of pair-level contextual information. We share the purpose of addressing the ECE and ECPE tasks with prior studies, however, instead of using classification or sequence labeling, we address the tasks with a new paradigm shift by using span extraction. It allows us to take into account the implicit interaction between emotion and cause clauses and to design a simple but effective BERT-based model for ECE and ECPE. (Bi and Liu, 2020) derived a span-based dataset and formulated a new ECSP (Emotion Cause Span Prediction) task from (Xia and Ding, 2019) but it has not attracted much attention. The accessibility of the dataset and source code may be the reason. 
We leave span-based ECSP evaluation as future work. **Paradigm shift in natural language processing** A paradigm is a general modeling framework or a family of methods to solve a class of tasks. For instance, sequence labeling is a mainstream paradigm for Part-of-speech (POS) tagging and Named entity recognition (NER). The sequence-to-sequence (Seq2Seq) paradigm is a popular tool for summarization and machine translation. Different paradigms usually require different formats of input and output, and therefore highly depend on the annotation of the tasks. Paradigm shift indicates the job of solving one NLP task in a new paradigm by reformulating the task along with changing the input-output formats. Paradigm shift in NLP has been explored scatter-ringly in recent years and with the advent of pre-trained language models, it became a rising trend (Li et al., 2019; Khashabi et al., 2020). An excellent survey of paradigm shifts in NLP has been done by (Sun et al., 2022). In this work, we realize such a paradigm shift for the ECPE task, i.e., we reformulate the clause-based text classification task as span extraction. **Span-based extractive question answering** Our formulation for the tasks of ECE and ECPE relates to span-based extractive QA, which has been widely investigated (Khashabi et al., 2020). More precisely, we design our model based on the pretrained language models (PLMs) such as BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019). This is because applying PLMs as the backbone of QA systems has become a standard procedure. For detailed information, please refer to (Devlin et al., 2018). Figure 2 reproduced from (Devlin et al., 2018) shows how BERT is applied to the extractive QA task. Tokens of question \(q=q_{1},..,q_{n}\) and context \(C=c_{1},..,c_{m}\) are concatenated before being encoded by BERT. The contextual representations of tokens \(T_{i}\) are put into a feed-forward layer followed by a softmax. Each candidate span for the answer is scored as the product of start/end probabilities. The maximum scoring span is used as the prediction. The training objective is the loglikelihood of the correct start and end positions. By casting the ECPE to QA problem, our work leverages the powerful models of the BERT family (Devlin et al., 2018) to detect clause-level emotions and causes as well as emotion-cause pairs. ## 3 Method ### Problem Statement Given a document of \(n\) clauses \(d=(c_{1},c_{2},..,c_{n})\), the goal of ECPE is to detect all potential emotion-cause pairs \(P=\{..(c_{e},c_{c}),..\}\) where \(c_{e}\) is an emotion clause, and \(c_{c}\) is the corresponding cause clause (Xia and Ding, 2019). We formulated the ECPE task as a QA problem. Given a set of questions \(\{q_{e},q_{c}\}\) (\(q_{e}\) is for emotion and \(q_{c}\) is for cause) and a context document \(d\) with \(n\) clauses, the model learns to predict start and end positions of each \(c_{e}\) and \(c_{c}\): \(s_{c_{e}},e_{c_{e}}=f(d,q_{e}|\Theta)\) and \(s_{c_{c}},e_{c_{c}}=f(d,q_{c}|\Theta)\) to form \(P\). \(\Theta\) can be learnt by using independent or guided extraction. ### Independent Emotion, Cause Extraction We first introduce a simple version of our model, Indep-QA in Figure 3. Indep-QA receives a fixed question (for emotion or cause) and then pulls out corresponding emotion or cause clauses independently. Question formulationBecause no emotion/cause information is provided beforehand, we have to detect them first with generic questions. 
It is possible to use pre-defined questions for extraction (Mengge et al., 2020); however, we argue that the definition of questions is time-consuming, needs domain knowledge, and does not guarantee a semantic relationship between the questions and the context documents. Instead, we use two short questions, "emotion" and "cause", as implicit indicators that provide additional information for the model. We leave the analysis of using generic questions such as "What is the emotion?" and "What is the cause?" as future work. **Learning and prediction** Given a document \(d\) and a question ("emotion" or "cause"), we concatenated all clauses of \(d\) and the question to form a single sequence \(C\). The sequence was fed to a pre-trained language model (PLM) to obtain hidden representations of the tokens, which were subsequently fed into a feed-forward layer followed by a softmax layer. Each candidate span was scored as the product of its start/end probabilities. The maximum-scoring span was used as the prediction. Figure 2: BERT-based extractive Question Answering. **Mapping predicted answer span to clauses** The predicted answer span may overlap with one or several clauses. We applied a span-to-clause mapping rule to determine which clauses are predicted results: the clause that overlaps most with the predicted span is returned. Ties are broken arbitrarily. For instance, in Figure 3, the predicted span for "emotion" overlaps with clauses \(c_{2}\) and \(c_{3}\), with \(c_{2}\) being more overlapped. As a result, \(c_{2}\) is the predicted emotion. **EC pair prediction** Given predicted emotion/cause clauses \(c_{e}\) and \(c_{c}\), Indep-QA simply predicts (\(c_{e}\), \(c_{c}\)) as an emotion-cause pair. As illustrated in Figure 3, \((c_{2},c_{4})\) is the predicted emotion-cause pair. 
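The span scoring and the span-to-clause mapping rule described above are simple enough to spell out. The sketch below is our illustration rather than the authors' released code: it assumes the per-token start/end probabilities have already been produced by the softmax over a BERT-style QA head, and that clause boundaries are available as character offsets; the function names (`best_span`, `map_span_to_clause`), the toy offsets and probabilities, and the `max_len` cap are ours.

```python
from typing import List, Tuple

def best_span(start_probs: List[float], end_probs: List[float],
              max_len: int = 50) -> Tuple[int, int, float]:
    """Return (start_token, end_token, score), maximizing start_prob * end_prob
    over candidate spans with end >= start -- the scoring rule described above."""
    best = (0, 0, 0.0)
    for s, p_start in enumerate(start_probs):
        for e in range(s, min(s + max_len, len(end_probs))):
            score = p_start * end_probs[e]
            if score > best[2]:
                best = (s, e, score)
    return best

def map_span_to_clause(span_chars: Tuple[int, int],
                       clause_offsets: List[Tuple[int, int]]) -> int:
    """Return the index of the clause that overlaps most with the predicted span
    (character offsets); ties are broken arbitrarily (first maximum here)."""
    s, e = span_chars
    overlaps = [max(0, min(e, ce) - max(s, cs)) for cs, ce in clause_offsets]
    return max(range(len(overlaps)), key=overlaps.__getitem__)

if __name__ == "__main__":
    # Toy "emotion" pass over a 3-clause document; token i covers chars [5i, 5i+5).
    token_offsets = [(5 * i, 5 * i + 5) for i in range(12)]
    clause_offsets = [(0, 20), (20, 40), (40, 60)]        # c1, c2, c3
    start_p, end_p = [0.01] * 12, [0.01] * 12
    start_p[3], end_p[5] = 0.9, 0.8                       # best span covers tokens 3..5
    s_tok, e_tok, _ = best_span(start_p, end_p)
    span_chars = (token_offsets[s_tok][0], token_offsets[e_tok][1])
    print("predicted clause:", map_span_to_clause(span_chars, clause_offsets))  # -> 1 (c2)
```

In the Guided-QA variant introduced next, the same two steps are reused unchanged; only the question changes, from the generic word "emotion"/"cause" to the text of the predicted emotion clause.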
Guided-QA and show that the gaps are tiny. In other word, the two variants are almost equivalent on the tested datasets. As our QA models use the best answer span for each question, only one emotion, one cause, and one EC pair are predicted for each document which are appropriate for the ECPE dataset. We also aware that the prediction of spans should be multiple and we aim to address this limitation in future work by using multiple span extraction methods Nguyen et al. (2021); Fu et al. (2021). ### Discussion Given a document of \(n\) clauses, existing schemes such as ECPE-MLL Ding et al. (2020), RankCP Wei et al. (2020) and PairGCN Chen et al. (2020) attempt to reduce the \(O(n^{2})\) complexity of emotion-cause pair classification by using sliding window, transition graph techniques. However, these techniques may miss certain interaction between the emotion-cause pair and the full context in the document. BERT-based QA models with full attention between the question and the context mitigate this issue. Through QA models, the emotion-cause relationship between all clauses is implicitly learned and we can leverage the power of existing QA methods. ## 4 Experimental Settings DatasetsWe followed the 10-split ECPE dataset provided by Xia and Ding (2019) and the 20-split TransECPE variant Fan et al. (2020) to evaluate our methods. Each split is a random partition of the 1945 documents to train/dev/test sets with ratio 8:1:1, i.e., the train set, dev set and test set contain approximately 1556, 194 and 195 documents. On average, each document contains 14.8 clauses. Table 1 shows the distribution of documents with different number of emotion-cause pairs. Most of the documents have only one emotion-cause pairs. This fact makes the detection of emotion/cause clauses as well as emotion-cause pairs challenging. Evaluation metricsWe used the precision, recall, and F1 score Xia and Ding (2019) as evaluation metrics for all three tasks of ECPE: emotion extraction, cause extraction and emotion-cause pair extraction. Let \(T_{e}\) and \(P_{e}\) be the number of ground-truth and predicted emotion clauses respectively, the precision, recall and F1 score for emotion are as defined as follows. \[P_{e}=\frac{|T_{e}\cap P_{e}|}{|P_{e}|}\] \[R_{e}=\frac{|T_{e}\cap P_{e}|}{|T_{e}|}\] \[F1_{e}=\frac{2*P_{e}*R_{e}}{P_{e}+R_{e}}\] Metrics for cause clauses and emotion-cause pairs are defined similarly. Implementation detailsOur model was implemented using BERT classes provided by Huggingface Wolf et al. (2020). The model was trained in 5 epochs, with the learning rate of \(5e-5\), and the batch size of 16. We used BERT Devlin et al. (2018)1 and RoBERTa Liu et al. (2019)2 for Chinese. All models were trained on a Tesla P100 GPU. Footnote 1: [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese) Footnote 2: [https://huggingface.co/hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) ## 5 Results and Discussion Guided-QA: Emotion-first vs. Cause-firstWe first compare the two variants Emotion-first and Cause-first of the Guided-QA method. Table 2 shows that the two variants have almost equivalent performance on the tested datasets except the BERT-based results on 10-split ECPE. Also, the RoBERTa-based results are consistently better than the BERT-based, 1.1 to 2.0 points. In the next section, we pick the Emotion-first scores for comparing Guided-QA with other methods. Guided-QA vs. Indep-QAWe now compare Guided-QA and Indep-QA. 
For 10-split ECPE in the upper part of Table 3, the Guided-QA model is consistently better than Indep-QA for pair extraction. This is because Guided-QA takes into account the implicit interaction between emotion and cause clauses. For emotion or cause extraction, Indep-QA is competitive with Guided-QA, because the two models share the same formulation. The results in Table 4 show a similar trend. We also confirm the performance of our model using RoBERTa, for a more complete analysis. The results are consistent with the model using BERT, in which Guided-QA outputs better F-scores than the Indep-QA model. This also shows that our model can be improved further by using stronger PLMs. **Guided-QA vs. strong baselines** We compare our model with five strong methods for ECPE: ECPE-MLL (Ding et al., 2020), RankCP (Wei et al., 2020), PairGCN (Chen et al., 2020), UTOS (Cheng et al., 2021), and RSN (Chen et al., 2022). For 10-split, our model using BERT follows ECPE-MLL, RankCP, and RSN. It shows that, with a simple architecture, our model can output competitive results compared to complicated methods. For 20-split TransECPE in Table 4, the trend is consistent with Table 3, in which the Guided-QA model is competitive for both ECE and ECPE tasks. Footnote 3: [https://github.com/NUSTM/ECPE-MLL](https://github.com/NUSTM/ECPE-MLL) Footnote 4: [https://github.com/Determined22/Rank-Emotion-Cause/issues/3](https://github.com/Determined22/Rank-Emotion-Cause/issues/3) Moreover, as we observe from all the compared methods, the gaps between the reported pair-F1 scores for 10-split ECPE and 20-split TransECPE are 0.023 (=0.745-0.722) for ECPE-MLL, 0.042 for RankCP, 0.029 for UTOS, 0.003 for Indep-QA and 0.006 for Guided-QA, i.e., the largest gap for RankCP and the smallest gaps for our models. Across the two settings, our models seem more robust than the compared methods. **Reproducibility** For a fair comparison (Houghton et al., 2020), we also reran the publicly available source codes in their original settings. The reproduced results confirm the gaps between reproduction and original results. Compared to the reproduced results, Guided-QA using BERT is the best for EC pair extraction, and it remains better for both the ECE and ECPE tasks. This confirms our hypotheses stated in Section 1. Compared to the results of the strong baselines reported in their papers, the F-scores of Guided-QA are still competitive, showing that our simple model can output promising results compared to complicated ECPE methods (Ding et al., 2020; Wei et al., 2020; Chen et al., 2020; Cheng et al., 2021; Chen et al., 2022). The results from the original papers are given for reference only: several scholars have tried to reproduce them, and there appear to be gaps between the reproduced and the originally reported results. Footnote 6: [https://github.com/Determined22/Rank-Emotion-Cause/issues/3](https://github.com/Determined22/Rank-Emotion-Cause/issues/3) For 20-split TransECPE in Table 4, the trend is again consistent: Guided-QA is competitive for both the ECE and ECPE tasks, and the model using RoBERTa remains the best. After rerunning the source codes of the baselines, we found that PairGCN has the best reproducibility. 
By adopting the standardized pipeline of BERT-based question answering, our models inherit its simplicity and reproducib \begin{table} \begin{tabular}{|c|c|c|} \hline & Number & Percentage \\ \hline Documents with one emotion-cause pair & 1746 & 89.77\% \\ Documents with two emotion-cause pairs & 177 & 9.10\% \\ Documents with more than two emotion-cause pairs & 22 & 1.13\% \\ All & 1945 & 100\% \\ \hline \end{tabular} \end{table} Table 1: Histogram of the number of emotion-cause pairs per document. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Model** & \multicolumn{3}{c}{Emotion Extraction} & \multicolumn{3}{c}{Cause Extraction} & \multicolumn{3}{c}{EC Pair Extraction} \\ \cline{2-10} & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline **10-split ECPE** & & & & & & & & & \\ Emotion-first (BERT) & 0.847 & 0.908 & 0.876 & 0.719 & 0.792 & 0.754 & 0.771 & 0.692 & 0.729 \\ Cause-first (BERT) & 0.831 & 0.891 & 0.860 & 0.714 & 0.787 & 0.749 & 0.763 & 0.685 & 0.722 \\ Emotion-first (RoBERTa) & 0.854 & 0.916 & 0.884 & 0.732 & 0.806 & 0.767 & 0.786 & 0.706 & 0.744 \\ Cause-first (RoBERTa) & 0.843 & 0.904 & 0.873 & 0.733 & 0.807 & 0.768 & 0.784 & 0.704 & 0.742 \\ \hline **20-split TransECPE** & & & & & & & & & \\ Emotion-first (BERT) & 0.842 & 0.906 & 0.873 & 0.710 & 0.782 & 0.744 & 0.760 & 0.689 & 0.723 \\ Cause-first (BERT) & 0.833 & 0.897 & 0.864 & 0.713 & 0.785 & 0.747 & 0.761 & 0.690 & 0.724 \\ Emotion-first (RoBERTa) & 0.844 & 0.909 & 0.875 & 0.723 & 0.796 & 0.757 & 0.772 & 0.700 & 0.734 \\ Cause-first (RoBERTa) & 0.838 & 0.902 & 0.869 & 0.724 & 0.797 & 0.758 & 0.773 & 0.701 & 0.735 \\ \hline \end{tabular} \end{table} Table 2: Guided-QA Emotion-first vs. Cause-first on 10-split ECPE dataset and 20-split TransECPE dataset an issue in more complex methods like RankCP. Runtime comparisonWe also measured the running time of our model and the baselines. In Table 5, PairGCN which only uses BERT embeddings has the best running time. The other models take longer to run due to the fine-tuning of BERT models. Our model is the second best, which is much faster than ECPE-MLL. It shows that our model can balance between competitive accuracy and high speed. ## 6 Conclusion This paper introduces a paradigm shift for the ECPE task. Instead of treating the task as the conventional formulation, we formulate the extraction as a QA problem. Based on that, we design a model which takes into account the implicit interaction between emotion and cause clauses. Experimental results on a benchmark Chinese dataset show that using implicit interaction of emotions and causes can achieve competitive accuracy compared to strong baselines. Future work will consider explicit interaction between emotion and cause clauses. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Model** & \multicolumn{2}{c}{Emotion Extraction} & \multicolumn{4}{c}{Cause Extraction} & \multicolumn{4}{c}{EC Pair Extraction} \\ \cline{2-9} & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline Indep-QA (BERT) & 0.842 & 0.906 & 0.873 & 0.713 & 0.785 & **0.747** & 0.730 & 0.662 & 0.694 \\ Guided-QA (BERT) & 0.842 & 0.906 & 0.873 & 0.710 & 0.782 & 0.744 & 0.760 & 0.689 & **0.723** \\ \hline Indep-QA (RoBERTa) & 0.844 & 0.909 & 0.875 & 0.724 & 0.797 & 0.758 & 0.739 & 0.670 & 0.703 \\ Guided-QA (RoBERTa) & 0.844 & 0.909 & 0.875 & 0.723 & 0.796 & 0.757 & 0.772 & 0.700 & _0.734_ \\ \hline \hline ECPE-MLL (BERT) & 0.847 & 0.899 & 0.872 & 0.705 & 0.770 & 0.736 & 0.749 & 0.698 & 0.722 \\ RankCP (BERT) & 0.894 & 0.895 & 0.894 & 0.694 & 0.747 & 0.719 & 0.658 & 0.731 & 0.692 \\ UTOS (BERT) & 0.865 & 0.829 & 0.849 & 0.742 & 0.708 & 0.728 & 0.710 & 0.681 & 0.691 \\ \hline ECPE-MLL (BERT)* & — & — & — & — & — & — & 0.659 & 0.714 & 0.684 \\ RankCP (BERT)* & 0.896 & 0.897 & **0.896** & 0.694 & 0.749 & 0.720 & 0.657 & 0.731 & 0.691 \\ PairGCN (BERT)* & 0.804 & 0.878 & 0.839 & 0.689 & 0.770 & 0.727 & 0.677 & 0.746 & 0.709 \\ \hline \hline \end{tabular} \end{table} Table 4: Experimental results of different models on 20-split TransECPE dataset. * indicates reproduced results. The authors of PairGCN and RSN did not tested their models on TransECPE. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline **Model** & \multicolumn{2}{c}{Emotion Extraction} & \multicolumn{4}{c}{Cause Extraction} & \multicolumn{4}{c}{EC Pair Extraction} \\ \cline{2-9} & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline Indep-QA (BERT) & 0.847 & 0.908 & **0.876** & 0.714 & 0.787 & 0.749 & 0.736 & 0.661 & 0.697 \\ Guided-QA (BERT) & 0.847 & 0.908 & **0.876** & 0.719 & 0.792 & **0.754** & 0.771 & 0.692 & **0.729** \\ \hline Indep-QA (RoBERTa) & 0.854 & 0.916 & 0.884 & 0.733 & 0.807 & 0.768 & 0.761 & 0.683 & 0.720 \\ Guided-QA (RoBERTa) & 0.854 & 0.916 & 0.884 & 0.732 & 0.806 & 0.767 & 0.786 & 0.706 & _0.744_ \\ \hline \hline ECPE-MLL (BERT) & 0.861 & 0.919 & 0.889 & 0.738 & 0.791 & 0.763 & 0.770 & 0.724 & 0.745 \\ RankCP (BERT) & 0.912 & 0.900 & 0.906 & 0.746 & 0.779 & 0.762 & 0.712 & 0.763 & 0.736 \\ PairGCN (BERT) & 0.886 & 0.796 & 0.838 & 0.791 & 0.693 & 0.738 & 0.769 & 0.679 & 0.720 \\ UTOS (BERT) & 0.882 & 0.832 & 0.856 & 0.767 & 0.732 & 0.747 & 0.739 & 0.706 & 0.720 \\ RSN (BERT) & 0.861 & 0.892 & 0.876 & 0.773 & 0.740 & 0.755 & 0.760 & 0.722 & 0.739 \\ \hline ECPE-MLL (BERT)* & — & — & — & — & — & — & 0.688 & 0.752 & 0.718 \\ RankCP (BERT)* & 0.741 & 0.744 & 0.742 & 0.614 & 0.647 & 0.627 & 0.573 & 0.625 & 0.597 \\ PairGCN (BERT)* & 0.784 & 0.883 & 0.829 & 0.686 & 0.795 & 0.735 & 0.675 & 0.772 & 0.718 \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results of different models on 10-split ECPE dataset. * indicates reproduced results. \begin{table} \begin{tabular}{|l|r r|} \hline & ECPE & TransECPE \\ \hline ECPE-MLL & 8.5h & 17h \\ RankCP & 3h & 6h \\ PairGCN & 42min & 85 min \\ \hline Indep-QA & 2h30 & 5h \\ Guided-QA & 2h30 & 5h \\ \hline \hline \end{tabular} \end{table} Table 5: Running time (train and test) on Tesla P100.
2302.09656
Credal Bayesian Deep Learning
Uncertainty quantification and robustness to distribution shifts are important goals in machine learning and artificial intelligence. Although Bayesian Neural Networks (BNNs) allow for uncertainty in the predictions to be assessed, different sources of uncertainty are indistinguishable. We present Credal Bayesian Deep Learning (CBDL). Heuristically, CBDL allows to train an (uncountably) infinite ensemble of BNNs, using only finitely many elements. This is possible thanks to prior and likelihood finitely generated credal sets (FGCSs), a concept from the imprecise probability literature. Intuitively, convex combinations of a finite collection of prior-likelihood pairs are able to represent infinitely many such pairs. After training, CBDL outputs a set of posteriors on the parameters of the neural network. At inference time, such posterior set is used to derive a set of predictive distributions that is in turn utilized to distinguish between aleatoric and epistemic uncertainties, and to quantify them. The predictive set also produces either (i) a collection of outputs enjoying desirable probabilistic guarantees, or (ii) the single output that is deemed the best, that is, the one having the highest predictive lower probability -- another imprecise-probabilistic concept. CBDL is more robust than single BNNs to prior and likelihood misspecification, and to distribution shift. We show that CBDL is better at quantifying and disentangling different types of uncertainties than single BNNs, ensemble of BNNs, and Bayesian Model Averaging. In addition, we apply CBDL to two case studies to demonstrate its downstream tasks capabilities: one, for motion prediction in autonomous driving scenarios, and two, to model blood glucose and insulin dynamics for artificial pancreas control. We show that CBDL performs better when compared to an ensemble of BNNs baseline.
Michele Caprio, Souradeep Dutta, Kuk Jin Jang, Vivian Lin, Radoslav Ivanov, Oleg Sokolsky, Insup Lee
2023-02-19T19:03:26Z
http://arxiv.org/abs/2302.09656v4
# Imprecise Bayesian neural networks ###### Abstract. Uncertainty quantification and robustness to distribution shifts are important goals in machine learning and artificial intelligence. Although Bayesian neural networks (BNNs) allow for uncertainty in the predictions to be assessed, different sources of uncertainty are indistinguishable. We present imprecise Bayesian neural networks (IBNNs); they generalize and overcome some of the drawbacks of standard BNNs. These latter are trained using a single prior and likelihood distributions, whereas IBNNs are trained using credal prior and likelihood sets. They allow to distinguish between aleatoric and epistemic uncertainties, and to quantify them. In addition, IBNNs are robust in the sense of Bayesian sensitivity analysis, and are more robust than BNNs to distribution shift. They can also be used to compute sets of outcomes that enjoy PAC-like properties. We apply IBNNs to two case studies. One, to model blood glucose and insulin dynamics for artificial pancreas control, and two, for motion prediction in autonomous driving scenarios. We show that IBNNs performs better when compared to an ensemble of BNNs benchmark. Key words and phrases:Bayesian deep learning; imprecise probabilities; credal sets; epistemic and aleatory uncertainties; uncertainty quantification; machine learning robustness 2010 Mathematics Subject Classification: Primary: 68T37; Secondary: 68T05, 68W25 ## 1. Introduction One of the greatest virtues an individual can have is arguably being aware of their own ignorance, and acting cautiously as a consequence. Similarly, an autonomous system using neural networks (NNs) would greatly benefit from understanding the probabilistic properties of the NN's output (e.g., variance, robustness to distribution shift), in order to incorporate them into any further decision-making. In this paper, we present a generalization of Bayesian neural networks that allows us to give a machine such a desirable quality. In the last few years, there has been a proliferation of work on calibrating (classification) NNs, in order to estimate the confidence in their outputs [20] or to produce conformal sets that are guaranteed to contain the true label, in a probably approximately correct (PAC) sense [42]. While such methods are a promising first step, they require a calibration set (in addition to the original training set) and cannot be directly used on out-of-distribution data without further examples. Bayesian neural networks (BNNs) offer one approach to overcome the above limitations. The Bayesian paradigm provides a rigorous framework to analyze and train uncertainty-aware neural networks, and more generally to support the development of learning algorithms [26]. In addition, it overcomes some of the drawbacks of deep learning models, namely that they are prone to overfitting, which adversely affects their generalization capabilities, and that they tend to be overconfident about their predictions when they provide a confidence interval. BNNs, though, are trained using a single prior, which may still suffer from miscalibration and robustness issues [35]. In this work we introduce imprecise Bayesian neural networks (IBNNs). Unlike other techniques in the fields of artificial intelligence (AI) and machine learning (ML) involving imprecise probabilities - that typically only focus on classification problems - IBNNs can be used for classification, prediction, and regression. 
They capture the ambiguity the designer faces when selecting which prior to choose for the parameters of a neural network and which likelihood distribution to choose for the training data at hand. An IBNN can be defined as a NN trained using credal prior and likelihood sets. Credal sets are sets of probability measures (see Remark 1); we use them to train IBNNs in order to overcome some of the drawbacks of BNNs. In particular, they allow to counter the criticism to the practice in (standard) Bayesian statistics of (i) using a single, arbitrary prior to represent the initial state of ignorance of the agent, (ii) using non-informative priors to model ignorance, and (iii) using a single, arbitrary likelihood to represent the agent's knowledge about the sampling model. In addition, they allow to achieve robustness in the sense of Bayesian sensitivity analysis (see Section 2.2), and to quantify and distinguish between epistemic and aleatoric uncertainties. This is desirable in light of several areas of recent ML research, such as Bayesian deep learning [13, 27], adversarial example detection [45], and data augmentation in Bayesian classification In the supplementary material, we give philosophical and theoretical motivations for training a BNN using credal sets. We also draw a comparison between credal sets and hierarchical Bayesian models: we argue that the use of credal sets is better justified from both philosophical and theoretical standpoints. We summarize our contributions next: (1) We present IBNNs, and develop the theoretical tools and the procedure required to use them in practice. (2) We present theoretical results to show that IBNNs are robust in the sense of Bayesian sensitivity analysis and are more robust than BNNs to distribution shifts. We also prove how IBNNs can be used to specify sets of outcomes that enjoy PAC-like guarantees. (3) We apply IBNNs to model two safety critical systems. One, the human insulin and blood glucose dynamics for artificial pancreas control, and two, motion prediction for autonomous driving. We demonstrate improvements in both these settings. **Structure of the paper.** Section 2 presents the needed preliminary concepts, followed by section 3 that introduces some important theoretical results. We discuss the applied aspects of IBNNs in in section 4. We present our experimental results in section 5, and we examine the related work in section 6. Section 7 concludes our work. In the supplementary material, we give further theoretical and philosophical arguments and we prove our claims. ## 2. Background and Preliminaries ### Bayesian neural networks In line with the recent survey on BNNs [26], Bayes' theorem can be stated as \(P(H\mid D)=[P(D\mid H)P(H)]/P(D)=P(D,H)/\int_{H}P(D,H^{\prime})\mathrm{d}H^{ \prime}\), where \(H\) is a hypothesis about which the agent holds some prior beliefs, and \(D\) is the data the agent uses to update their initial opinion. Probability distribution \(P(D\mid H)\) represents how likely it is to observe data \(D\) if hypothesis \(H\) were to be true, and is called likelihood_, while probability distribution \(P(H)\) represents the agent's initial opinion around the plausibility of hypothesis \(H\), and is called _prior_. The _evidence_ available is encoded in \(P(D)=\int_{H}P(D,H^{\prime})\mathrm{d}H^{\prime}\), while _posterior_ probability \(P(H\mid D)\) represents the agent's updated opinion. 
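A minimal numerical sketch of this update, with two made-up hypotheses and an illustrative likelihood, could look as follows.

```python
import numpy as np

# Two competing hypotheses H with a prior P(H), and a likelihood P(D | H)
# for one observed data set D.  All numbers are illustrative.
prior      = np.array([0.5, 0.5])        # P(H1), P(H2)
likelihood = np.array([0.8, 0.3])        # P(D | H1), P(D | H2)

evidence  = np.sum(likelihood * prior)   # P(D) = sum_H P(D | H) P(H)
posterior = likelihood * prior / evidence

print("P(D)     =", evidence)
print("P(H | D) =", posterior)           # updated opinion about each hypothesis
```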
Using Bayes' theorem to train a predictor can be understood as learning from data \(D\): the Bayesian paradigm offers an established way of quantifying uncertainty in deep learning models. BNNs are stochastic artificial neural networks (ANNs) trained using a Bayesian approach [18, 26, 32, 47, 53]. The goal of ANNs is to represent an arbitrary function \(y=\Phi(x)\). Let \(\theta\) represent the parameters of the network, call \(\Theta\) the space \(\theta\) belongs to, and endow it with \(\sigma\)-algebra \(\mathcal{B}\). Stochastic neural networks are a type of ANN built by introducing stochastic components to the network. This is achieved by giving the network either a stochastic activation or stochastic weights to simulate multiple possible models with their associated probability distribution. This can be summarized as \(\theta\sim p(\theta)\), \(y=\Phi_{\theta}(x)+\varepsilon\), where \(\Phi\) depends on \(\theta\) to highlight the stochastic nature of the neural network, \(p\) is the density of a probability measure \(P\) on \(\Theta\) with respect to some \(\sigma\)-finite dominating measure \(\mu\), and \(\varepsilon\) represents random noise to account for the fact that function \(\Phi_{\theta}\) is just an approximation.1 Footnote 1: We can write \(p\) as the Radon-Nikodym derivative of \(P\) with respect to \(\mu\), that is, \(p=\mathrm{d}P/\mathrm{d}\mu\). To design a BNN, the first step is to choose a deep neural network _architecture_, that is, functional model \(\Phi_{\theta}\). Then, the agent specifies the _stochastic model_, that is, a prior distribution over the possible model parametrization \(p(\theta)\), and a prior confidence in the predictive power of the model \(p(y\mid x,\theta)\). Given the usual assumption that multiple data points from the training set are independent, the product \(\prod_{(x,y)\in D}p(y\mid x,\theta)\) represents the _likelihood_ of outputs \(y\in D_{y}\) given inputs \(x\in D_{x}\) and parameter \(\theta\), where (a) \(D=D_{x}\times D_{y}\) is the training set; (b) \(D_{x}=\{x_{i}\}_{i=1}^{n}\) is the collection of training inputs, which is a subset of the space \(\mathscr{D}_{\mathbf{x}}\) of inputs; (c) \(D_{y}=\{y_{i}\}_{i=1}^{n}\) is the collection of training outputs, which is a subset of the space \(\mathscr{D}_{\mathbf{y}}\) of outputs. In the remainder of the paper, we call "likelihood" both \(p(y\mid x,\theta)\) and \(\prod_{(x,y)\in D}p(y\mid x,\theta)\), as no confusion arises. The model parametrization can be considered to be hypothesis \(H\). Following [26], we assume independence between model parameters \(\theta\) and training inputs \(D_{x}\), \(D_{x}\perp\!\!\!\!\perp\theta\). Hence, Bayes' formula can be rewritten as \[p(\theta\mid D)=\frac{p(D_{y}\mid D_{x},\theta)p(\theta)}{\int_{\Theta}p(D_{y }\mid D_{x},\theta^{\prime})p(\theta^{\prime})\mathrm{d}\theta^{\prime}},\] which is proportional to \(p(D_{y}\mid D_{x},\theta)p(\theta)\); notice that the equality comes from having assumed \(D_{x}\perp\!\!\!\perp\theta\). Posterior density \(p(\theta\mid D)\) is high dimensional and highly nonconvex [24, 26], so computing it and sampling from it is a difficult task. The first issue is tackled using Variational Inference (VI) procedures, while Markov Chain Monte Carlo (MCMC) methods address the second challenge. Both are reviewed - in the context of machine learning - in [26, Section V]. 
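The following is a minimal sketch of how such a stochastic network turns (approximate) posterior weight samples into a predictive distribution by Monte Carlo averaging; the tiny architecture and the stand-in posterior sampler are assumptions for illustration, not the networks or inference schemes used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, theta):
    """A one-hidden-layer network y = Phi_theta(x); an illustrative stand-in."""
    w1, b1, w2, b2 = theta
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def sample_theta():
    """Stand-in for a draw from the posterior p(theta | D), e.g. produced by
    MCMC or a variational approximation; here we simply draw from a Gaussian."""
    return (rng.normal(0, 1, (1, 8)), rng.normal(0, 1, 8),
            rng.normal(0, 1, (8, 1)), rng.normal(0, 1, 1))

x_new = np.array([[0.3]])
draws = np.array([phi(x_new, sample_theta()) for _ in range(500)]).squeeze()

# Posterior predictive summarised by Monte Carlo: mean and spread of the
# network output under the (approximate) posterior over weights; the
# observation noise epsilon would be added on top of this spread.
print("predictive mean:", draws.mean())
print("predictive std :", draws.std())
```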
BNNs can be used for prediction, regression, and classification [26, Section II]; besides having a solid theoretical justification, there are practical benefits from using BNNs, as presented in [26, Section III]. ### Imprecise probabilities As IBNNs are rooted in the theory of imprecise probabilities (IPs), in this section we give a gentle introduction to the IP concepts we will use throughout the paper. IBNNs are based on the _Bayesian sensitivity analysis_ (BSA) approach to IPs, that in turn is grounded in the _dogma of ideal precision_ (DIP, [4], [52, Section 5.9]). The DIP posits that in any problem there is an _ideal probability model_ which is precise, but which may not be precisely known. We call this condition _ambiguity_[17]. Facing ambiguity can be represented mathematically by a set \(\mathcal{P}\) of priors and a set \(\mathcal{L}\) of likelihoods that seem "plausible" or "fit" to express the agent's beliefs on the parameters of interest and their knowledge of the data generating process (DGP). Generally speaking, the farther apart the "boundary elements" of the sets (i.e. their infimum and supremum), the higher the agent's ambiguity. Of course, if \(\mathcal{P}\) and \(\mathcal{L}\) are singletons we go back to the usual Bayesian paradigm. BSA robustness has to be understood as follows: in the presence of prior ignorance and indecisiveness about the sampling model, it is better to give answers in the form of intervals or sets, rather than arbitrarily select a prior and a likelihood, and then update. They allow to represent _indecision_, thus leading to less informative but more robust conclusions. **Remark 1**.: Throughout the paper, we denote by \(\Pi=\{P_{1},\ldots,P_{k}\}\), \(k\in\mathbb{N}\), a finite set of probabilities on a generic measurable space \((\Omega,\mathcal{F})\), such that for all \(j\in\{1,\ldots,k\}\), \(P_{j}\) cannot be written as a convex combination of the other \(k-1\) components of \(\Pi\). We denote by \(\Pi^{\prime}\) its convex hull \(\Pi^{\prime}\equiv\operatorname{Conv}\Pi\), i.e., the set of probabilities \(P\) on \((\Omega,\mathcal{F})\) that can be written as \(P(A)=\sum_{j=1}^{k}\alpha_{j}P_{j}(A)\), for all \(A\in\mathcal{F}\), where the \(\alpha_{j}\)'s are elements of \([0,1]\) that sum up to \(1\). In the literature, it is referred to as a _finitely generated credal set_[11, 36]. Notice then that the extrema of \(\Pi^{\prime}\) are the elements of \(\Pi\), \(\operatorname{ex}\Pi^{\prime}=\Pi\). We first introduce the concepts of _lower_ and _upper probabilities_. The lower probability \(\underline{P}\) associated with \(\Pi\) is given by \(\underline{P}(A)=\inf_{P\in\Pi}P(A)\), for all \(A\in\mathcal{F}\). The upper probability \(\overline{P}\) associated with \(\Pi\) is defined as the conjugate to \(\underline{P}\), that is, \(\overline{P}(A):=1-\underline{P}(A^{c})=\sup_{P^{\prime}\in\Pi}P^{\prime}(A)\), for all \(A\in\mathcal{F}\). These definitions hold even if \(\Pi\) is not finite. Then, we have the following important result. **Lemma 2**.: \(\overline{P}\) is the upper probability for \(\Pi\) if and only if it is also the upper probability for \(\Pi^{\prime}\). That is, \(\overline{P}(A)=\sup_{P\in\Pi}P(A)=\sup_{P^{\prime}\in\operatorname{ex}\Pi^ {\prime}}P^{\prime}(A)=\sup_{P^{\prime}\in\Pi^{\prime}}P^{\prime}(A)\), for all \(A\in\mathcal{F}\). ## 3. 
Theoretical results In this section, we provide the procedure to follow in order to compute the posterior credal sets in the context of IBNNs, and we show how IBNNs are able to capture both aleatoric and epistemic uncertainties associated with the analysis. The training method is presented in Section 3.1. We show that IBNNs are more robust to distribution shifts than regular BNNs as a result of Lemma 4. We first give the formal definition of an IBNN. **Definition 3** (IBNN).: An IBNN is a stochastic artificial neural network trained using finitely generated credal prior and likelihood sets. ### IBNN procedure Let us denote by \(P_{x,\theta}\) a probability distribution on the space of outputs \(\mathscr{D}_{\mathbf{y}}\) having density \(p(\cdot\mid x,\theta)\equiv L(x,\theta)\). Denote by \(\mathfrak{post}(P,P_{x,\theta})\) the act of computing the posterior from prior \(P\) and likelihood \(P_{x,\theta}\) using a regular BNN, and by \(\#\) the cardinality operator. The IBNN procedure follows. * Specify a _finite_ set \(\mathcal{P}\) of plausible prior probabilities on the parameters of the neural network and a _finite_ set \(\mathcal{L}_{x,\theta}\) of plausible likelihoods; * Compute posterior \(P_{D}=\mathfrak{post}(P,P_{x,\theta})\), for all \(P\in\mathcal{P}\) and all \(P_{x,\theta}\in\mathcal{L}_{x,\theta}\). Step **S2** performs an element-wise application of Bayes' rule for all the elements of \(\mathcal{P}\) and \(\mathcal{L}_{x,\theta}\). We obtain a finite set \(\mathcal{P}_{D}\) of posteriors whose cardinality is given by \(\#\mathcal{P}\times\#\mathcal{L}_{x,\theta}\). Its convex hull \(\mathrm{Conv}\mathcal{P}_{D}\) is the credal posterior set. By Lemma 2 we have that the upper and lower probabilities of \(\mathcal{P}_{D}\) and \(\mathrm{Conv}\mathcal{P}_{D}\) coincide. Notice that in the case that \(\mathcal{P}\) and \(\mathcal{L}_{x,\theta}\) are both not singletons, for all \(A\in\mathcal{B}\) the interval \([\underline{P}_{D}(A),\overline{P}_{D}(A)]\) is wider than the case when one or the other is a singleton. In the limiting case where both are singletons, we retrieve the usual Bayesian updating, so the interval shrinks down to a point. We follow this procedure to compute the posterior credal set for the parameters of our IBNN. Being trained using credal sets makes IBNNs more robust to distribution shifts than BNNs. To see this, we present the following general result, and then we apply it to our case. Call \(\Delta(\Omega,\mathcal{F})\) the set of all probability measures on a generic probability space \((\Omega,\mathcal{F})\). Let \(\mathcal{P}\subset\Delta(\Omega,\mathcal{F})\) be a generic set of probabilities, and consider \(P^{\prime}\in\Delta(\Omega,\mathcal{F})\) such that \(P^{\prime}\not\in\mathcal{P}\). **Lemma 4**.: Call \(d\) any metric and \(div\) any divergence on the space \(\Delta(\Omega,\mathcal{F})\). Let \(d(\mathcal{P},P^{\prime}):=\inf_{P\in\mathcal{P}}d(P,P^{\prime})\) and \(div(\mathcal{P}\|P^{\prime}):=\inf_{P\in\mathcal{P}}div(P\|P^{\prime})\). Then, for all \(P\in\mathcal{P}\), \(d(\mathcal{P},P^{\prime})\leq d(P,P^{\prime})\) and \(div(\mathcal{P}\|P^{\prime})\leq div(P\|P^{\prime})\). Lemma 4 holds if \(\mathcal{P}\) is any set of probabilities, not just a credal sets. In the supplementary material, we show that the above result still holds if the elements of \(\mathcal{P}\) and \(P^{\prime}\) are defined on Euclidean spaces having different dimensions [6]. Let us now apply Lemma 4 to IBNNs. 
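Before turning to that application, steps **S1**-**S2** can be sketched on a toy discrete parameter grid; the three priors, the two likelihood models, and the data below are illustrative choices. The sketch also reports the lower and upper posterior probabilities of an event, which by Lemma 2 coincide with those of the convex hull \(\mathrm{Conv}\mathcal{P}_{D}\).

```python
import numpy as np

# Toy version of steps S1-S2 on a discretised parameter space Theta and a
# binary output space; all densities below are illustrative choices.
theta = np.linspace(-2.0, 2.0, 81)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# S1: a finite set of priors on theta and a finite set of likelihood models.
priors = [normal_pdf(theta, 0.0, s) for s in (0.5, 1.0, 2.0)]
priors = [p / p.sum() for p in priors]               # normalise on the grid

def lik_logistic(y, th, scale):
    """P(y = 1 | theta) under a logistic link; 'scale' indexes the model."""
    p1 = 1.0 / (1.0 + np.exp(-th / scale))
    return p1 if y == 1 else 1.0 - p1

likelihood_set = [lambda y, th, s=s: lik_logistic(y, th, s) for s in (0.5, 1.5)]

data = [1, 1, 0, 1]                                   # observed outputs

# S2: element-wise Bayes' rule for every (prior, likelihood) pair.
posteriors = []
for prior in priors:
    for lik in likelihood_set:
        unnorm = prior.copy()
        for y in data:
            unnorm = unnorm * lik(y, theta)
        posteriors.append(unnorm / unnorm.sum())

# Lower/upper posterior probability of the event A = {theta > 0}; by Lemma 2
# the extrema over the finite set coincide with those over its convex hull.
A = theta > 0
probs_A = [post[A].sum() for post in posteriors]
print(f"#posteriors = {len(posteriors)}")             # = #P x #L
print(f"P_lower(A) = {min(probs_A):.3f},  P_upper(A) = {max(probs_A):.3f}")
```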
Let \((\mathscr{D}_{\mathbf{y}},\mathcal{A}_{y})\) and \((\mathscr{D}_{\mathbf{x}},\mathcal{A}_{x})\) denote the measurable spaces of outputs and inputs, respectively. Suppose that, when designing a regular BNN, an agent chooses likelihood \(\check{P}_{x,\theta}\), while when designing a more general IBNN, they start by specifying set \(\mathcal{L}_{x,\theta}=\{P_{1,x,\theta},\ldots,P_{k,x,\theta}\in\Delta(\mathscr{ D}_{\mathbf{y}},\mathcal{A}_{y}):k\in\mathbb{N},\,\theta\in\Theta,\,x\in\mathscr{D}_{ \mathbf{x}}\}\), and then let the induced credal set \(\mathrm{Conv}\mathcal{L}_{x,\theta}\) represent their uncertainty around the sampling model. Assume that \(\check{P}_{x,\theta}\in\mathcal{L}_{x,\theta}\) (this means that when designing the regular BNN, the agent chooses arbitrarily which of the elements of \(\mathcal{L}_{x,\theta}\) to use) and that the oracle data generating process \(P^{o}_{x,\theta}\neq\check{P}_{x,\theta}\), so that we are actually in the presence of distribution shift. Then, we have two cases. (1) If the true sampling model \(P^{o}_{x,\theta}\) belongs to \(\mathrm{Conv}\mathcal{L}_{x,\theta}\), then the distance - measured via a metric or a divergence - between \(\mathrm{Conv}\mathcal{L}_{x,\theta}\) and \(P^{o}_{x,\theta}\) is \(0\) while that between \(\check{P}_{x,\theta}\) and \(P^{o}_{x,\theta}\) is positive. (2) If \(P^{o}_{x,\theta}\not\in\mathrm{Conv}\mathcal{L}_{x,\theta}\), then the distance between \(\mathrm{Conv}\mathcal{L}_{x,\theta}\) and \(P^{o}_{x,\theta}\) is smaller than the distance between \(\check{P}_{x,\theta}\) and \(P^{o}_{x,\theta}\), no matter (i) which metric or distance we use (Lemma 4), (ii) whether or not \(P^{o}_{x,\theta}\) and the elements of \(\mathrm{Conv}\mathcal{L}_{x,\theta}\) are defined on the same Euclidean space (supplementary material, Lemma 14). A visual representation is given in Figure 1. A computational bottleneck of the IBNN procedure appears to be step **S2**, which is a combinatorial task. We have to calculate \(\#\mathcal{P}\times\#\mathcal{L}_{x,\theta}\) many posteriors, but this procedure allows us to forego any additional assumptions on the nature of the lower and upper probabilities.2 Clearly, the procedure is simplified if either \(\mathcal{P}\) or \(\mathcal{L}_{x,\theta}\) are singletons. As pointed out earlier, posteriors in \(\mathcal{P}_{D}\) are typically very high-dimensional and highly non-convex; we use well known techniques outlined in [26, Section V] to compute them in an efficient fashion. Footnote 2: If we are willing to make such assumptions, Theorem 7 in Appendix D shows how to compute the upper posterior using only upper prior and upper likelihood. ### Aleatoric-epistemic uncertainties In [23], the authors study uncertainty in the context of supervised learning. In particular, they extensively review the existing approaches to quantify aleatoric and epistemic uncertainty (AU and EU, respectively). The former refers to irreducible uncertainty, the variability in the outcome of an experiment which is due to inherently random effects. An example of AU is coin flipping: the data generating process in this experiment has a stochastic component that cannot be reduced by any additional source of information. EU, instead, corresponds to reducible uncertainty, caused by a lack of knowledge about the best model. 
Recall that, given a generic probability measure \(P\) on a measurable space \((\Omega,\mathcal{F})\), the (Shannon) entropy of \(P\) is defined as \(H(P):=\mathbb{E}[-\log p]=-\int_{\Omega}\log[p(\omega)]P(\mathrm{d}\omega)\) if \(\Omega\) is uncountable, where \(p=\mathrm{d}P/\mathrm{d}\mu\), for some \(\sigma\)-finite dominating measure \(\mu\). If \(\Omega\) is at most countable, we have that \(H(P)=-\sum_{\omega\in\Omega}P(\{\omega\})\log[P(\{\omega\})]\). The entropy primarily captures the shape of density \(p\), namely its "peakedness" or non-uniformity [14, 23], and hence informs about the predictability of the outcome of a random experiment: the higher its value, the lower the predictability. Now, consider a generic set of probabilities \(\mathcal{P}\) on \((\Omega,\mathcal{F})\). Then, we can define the imprecise versions of the Shannon entropy as proposed by [1, 23], \(\overline{H}(P):=\sup_{P\in\mathcal{P}}H(P)\) and \(\underline{H}(P):=\inf_{P\in\mathcal{P}}H(P)\), called the upper and lower Shannon entropy, respectively.3 The upper entropy is a measure of total uncertainty since it represents the minimum level of predictability associated with the elements of \(\mathcal{P}\). In [1, 23], the authors posit that it can be decomposed as a sum of aleatoric and epistemic uncertainties, and that this latter can be specified as the difference between upper and lower entropy, thus obtaining Footnote 3: In the supplementary material, we provide bounds to the values of upper and lower entropy. \[\underbrace{\overline{H}(P)}_{\text{total uncertainty}}=\underbrace{ \underline{H}(P)}_{\text{aleatoric uncertainty}}+\underbrace{\left[\overline{H}(P)-\underline{H}(P)\right]}_{\text{ epistemic uncertainty}}.\] We have the following lemma. **Lemma 5**.: Let \(\Pi,\Pi^{\prime}\) be sets of probability measures as the ones considered in Remark 1. Then, \(\overline{H}(P)\) is the upper Shannon entropy of \(\Pi\) if and only if it is the upper Shannon entropy of \(\Pi^{\prime}\). That is, \(\overline{H}(P)=\sup_{P\in\Pi}H(P)=\sup_{P^{\prime}\in\text{ex}\Pi^{\prime}}H(P ^{\prime})=\sup_{P^{\prime}\in\Pi^{\prime}}H(P^{\prime})\). Lemma 5 tells us that in the context of IBNNs, it is enough to compute the upper and lower entropy for the extreme elements of the prior, likelihood, and posterior sets - \(\mathcal{P}\), \(\mathcal{L}_{x,\theta}\), and \(\mathcal{P}_{D}\), respectively - to retrieve the upper and lower entropy for the whole credal sets \(\text{Conv}\mathcal{P}\), \(\text{Conv}\mathcal{L}_{x,\theta}\), and \(\text{Conv}\mathcal{P}_{D}\). ## 4. Practical aspects In this section, we first illustrate how to elicit credal prior and likelihood sets that are needed for step **S1** of the procedure in section 3.1. Then, we describe how IBNN are instrumental in specifying a set of outputs that enjoys PAC-like guarantees. Outlined in [26, Sections IV-B and IV-C1], for classification, the standard process for BNNs involves * a Normal prior with zero mean and diagonal covariance \(\sigma^{2}I\) on the coefficient of the network, that is, \(p(\theta)=\mathcal{N}(0,\sigma^{2}I)\). In the context of IBNNs, we could specify e.g. \(\mathcal{P}=\{P\in\Delta(\Theta,\mathcal{B}):\frac{\mathrm{d}P}{\mathrm{d} \mu}(\theta)=p(\theta)=\mathcal{N}(0,\sigma^{2}I),\,\sigma^{2}\in\{3,5,7\}\}\), to capture various levels of uncertainty around the "fatness" of the tails of the Normal distribution. 
We could also consider Normals centered at a positive, negative, and \(0\) vectors to capture the ideas of positive bias, negative bias, and no bias of the coefficients, respectively. * a categorical likelihood, \(p(y\mid x,\theta)=\text{Cat}(\Phi_{\theta}(x))\), whose parameter is given by the output of functional model \(\Phi_{\theta}\). In the context of IBNNs, we could specify \(\mathcal{L}_{x,\theta}=\{P_{x,\theta}\in\Delta(\mathscr{D}_{\mathbf{y}}, \mathcal{A}_{y}):\frac{\mathrm{d}P_{x,\theta}}{\mathrm{d}\nu}(y)=L_{x,\theta} (y)=\text{Cat}(\Phi_{i,\theta}(x)),i\in\mathcal{I},\theta\in\Theta,x\in\mathscr{ D}_{\mathbf{x}}\}\), where \(\mathcal{I}\subset\mathbb{N}\) is a generic index set, and \(\nu\) is some \(\sigma\)-finite dominating measure on \(\mathscr{D}_{\mathbf{y}}\). So we allow for different possible functional forms to capture the ambiguity around the true data generating process faced by the agent. More in general, we can use the priors and likelihoods that better fit the type of analysis we are performing. Let us denote by \(N:=\#\mathcal{P}\times\#\mathcal{L}_{x,\theta}\) the number of posteriors (more precisely, the number of extrema of \(\text{Conv}\mathcal{P}_{D}\)). These are probability distributions on the parameter space \(\Theta\); assume for simplicity that they all have density with respect to some (\(\sigma\)-finite) dominating measure. Every such posterior induces a distribution on the output space \(\mathscr{D}_{\mathbf{y}}\) via the posterior predictive distribution \(p(\tilde{y}\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})\) (see section F of the supplementary material). Let us denote the \(N\) "induced distributions" on \(\mathscr{D}_{\mathbf{y}}\) by \(\hat{\mathcal{P}}:=\{\hat{P}_{1},\ldots,\hat{P}_{N}\}\), and call their pdf's \(\{\hat{p}_{1},\ldots,\hat{p}_{N}\}\). Then, for all \(k\in\{1,\ldots,N\}\), compute the \(\alpha\)_-level Highest Density Region (HDR)_\(R(\hat{p}_{k}^{\alpha}):=\{y\in\mathscr{D}_{\mathbf{y}}:\hat{p}_{k}(y)\geq \hat{p}_{k}^{\alpha}\}\), where \(\alpha\in(0,1)\) and \(\hat{p}_{k}^{\alpha}\) is the largest constant such that \(\hat{P}_{k}[Y\in R(\hat{p}_{k}^{\alpha})]\geq 1-\alpha\). In dimension \(1\), \(R(\hat{p}_{k}^{\alpha})\) can be interpreted as the narrowest interval in which the value of the (true) output falls with probability of at least \(1-\alpha\) according to posterior \(\hat{P}_{k}\). Finally, compute the \(\alpha\)_-level Imprecise Highest Density Region (IHDR)_\(IR_{\alpha}:=\cup_{k=1}^{N}R(\hat{p}_{k}^{\alpha})\). By taking the union of the HDR's, we ensure that all the probability measures in the credal set \(\text{Conv}(\{\hat{P}_{1},\ldots,\hat{P}_{N}\})\) assign probability of at least \(1-\alpha\) to the event \(\{Y\in IR_{\alpha}\}\); this is a consequence of Lemma 2. In turn, this implies that \(\hat{P}(Y\in IR_{\alpha})=\inf_{k\in\{1,\ldots,N\}}\hat{P}_{k}(Y\in IR_{\alpha}) \geq 1-\alpha\). This can be interpreted as the event \(A=\{\)The (true) output belongs to \(IR_{\alpha}\}\) having lower probability of at least \(1-\alpha\), a PAC-like guarantee for the set of outputs \(IR_{\alpha}\) generated by our procedure.4 Footnote 4: The guarantee is PAC-like, and not exactly PAC: the approximation guarantee is captured by the lower probability of \(A\) being at least \(1-\alpha\). The correctness guarantee is not needed: \(IR_{\alpha}\) is only parametrized by \(\alpha\), so we do not incur in a potential failure when fitting a different parameter due to the randomness in the validation set [42]. 
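A grid-based sketch of the HDR/IHDR construction just described is given below; the Gaussian densities stand in for the \(N\) induced posterior predictive distributions, and the discretisation is an illustrative approximation rather than the implementation used in the experiments.

```python
import numpy as np

# Toy sketch of the alpha-level HDR / IHDR construction for N induced
# one-dimensional predictive densities; the Gaussians stand in for the
# posterior predictive distributions obtained from the N posteriors.
alpha = 0.1
y = np.linspace(-8, 8, 4001)
dy = y[1] - y[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

predictives = [normal_pdf(y, mu, s) for mu, s in [(0.0, 1.0), (0.6, 1.3), (-0.4, 0.8)]]

def hdr_mask(pdf, alpha):
    """Grid approximation of the alpha-level highest density region."""
    order = np.argsort(pdf)[::-1]                 # densest cells first
    cum = np.cumsum(pdf[order]) * dy
    keep = order[: np.searchsorted(cum, 1 - alpha) + 1]
    mask = np.zeros_like(pdf, dtype=bool)
    mask[keep] = True
    return mask

masks = [hdr_mask(p, alpha) for p in predictives]
ihdr = np.logical_or.reduce(masks)                # IR_alpha = union of the HDRs

# Every predictive in the finite set assigns mass >= 1 - alpha to IR_alpha,
# hence so does the lower probability of the induced credal set.
coverages = [float((p * ihdr * dy).sum()) for p in predictives]
print("per-model coverage of IR_alpha:", np.round(coverages, 3))
print("lower probability of IR_alpha :", round(min(coverages), 3))
```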
This method allows to quantify and control the _individual epistemic uncertainty_ of \(IR_{\alpha}\), given by the difference between its upper and lower probabilities. To see this, notice that \(\widehat{P}(Y\in IR_{\alpha})\leq 1\), so \(\widehat{P}(Y\in IR_{\alpha})-\hat{P}(Y\in IR_{\alpha})\leq\alpha\). This can be interpreted as the epistemic uncertainty regarding the true output being in \(IR_{\alpha}\), which is smaller than \(\alpha\). The aleatoric uncertainty (AU), instead, is linked to the size of \(IR_{\alpha}\): the larger it is, the higher the AU the agent faces. If we want to avoid to perform the procedure only to discover that \(IR_{\alpha}\) is "too big", then we can add an "AU check" at the beginning. This, together with computing \(IR_{\alpha}\) in a classification setting, are explored in sections G and H of the supplementary material. ## 5. Experiments In [16], the authors pursue the same heuristic endeavor, but take an ensemble route. They too consider different BNNs, but instead of keeping them separate and use them to build a posterior credal set, they average them out. Similar to theirs, we elicit the following procedure, that we call ensemble of BNNs (EBNN). Consider \(k\) different BNNs, and compute the posterior distribution on the parameters; they induce \(k\) distributions on the output space \(\mathscr{D}_{\mathbf{y}}\), each having mean \(\mu_{j}\) and variance \(\sigma_{j}^{2}\). We posit that the EBNN distribution on \(\mathscr{D}_{\mathbf{y}}\) is a Normal having mean \(\mu_{\text{ens}}=1/k\sum_{j=1}^{k}\mu_{j}\) and covariance matrix \(\sigma_{\text{ens}}^{2}I\), where \(\sigma_{\text{ens}}^{2}=1/k\sum_{j=1}^{k}\sigma_{j}^{2}+1/(k-1)\sum_{j=1}^{k}( \mu_{j}-\mu_{\text{ens}})^{2}\). We use the \(\alpha\)-level HDR associated with \(P_{\text{ens}}\) as a benchmark for our \(IR_{\alpha}\). Being rooted in IP theory - and given the PAC-like property enjoyed by \(IR_{\alpha}\) - IBNNs are a better justified procedure than EBNN from a theoretical standpoint. In this section, we show with two applications that their implementation performs as well as, and sometimes better than, EBNN. In Appendix I, we include details of experiments where we train IBNNs for image classification tasks for standard datasets like CIFAR10 [29], SVHN [40], Fashion-MNIST [55], and MNIST [34]. We discuss how IBNNs are better than EBNN at disentangling AU and EU, and at quantifying them. ### Artificial Pancreas Control. Overall Setup. In this case study we consider the problem of data-driven control of human blood glucose-insulin dynamics, using an artificial pancreas system, Figure 2. External insulin delivery is accomplished by using an insulin pump controlled by the artificial pancreas software, which attempts to regulate the blood-glucose (BG) level of the patient within the euglycemic range of \([70,180]mg/dl\)[31]. Levels below \(70mg/dl\) lead to hypoglycemia, which can lead to loss of consciousness, coma or even death. On the other hand, levels above \(300mg/dl\) lead to a condition called the ketoacidosis, where the body can break down fat due to lack of insulin, and lead to build up of ketones. In order to treat this situation, patients receive external insulin delivery through insulin pumps. Artificial Pancreas (AP) systems can remedy this situation by measuring the blood glucose level, and automatically injecting insulin into the blood stream. Thus, we define the _unsafe regions_ of the space as : \((G(t)>300)\vee(G(t)<70)\), where \(G(t)\) is the BG value at time \(t\). 
This is the shaded region in Figure 3.

Figure 2. The Bayesian neural networks predict a future blood glucose value. These individual predictions are combined to get a robust estimate of the true value as an interval. This is used by the MPC control algorithm to recommend insulin dosage for the patient. The patient block in our experiment is simulated using the virtual patient models from the UVa-Padova simulator.

Figure 3. Starting from an initial glucose value, the task of the artificial pancreas controller is to maintain the blood glucose value within safe operating limits using insulin as a mode of control.

**Neural Network Models and Controller.** Deep neural networks are effective in capturing the BG-insulin dynamics for personalized medical devices [31]. This allows for improved device performance. Even though standard feedforward neural networks can be used, Bayesian neural networks (BNN), and especially a collection of multiple BNNs, offer a better alternative towards uncertainty aware predictions. Here, we test the ramifications of these prediction sets when used inside an online receding horizon control scheme for insulin delivery. We use the standard MPC control scheme for this purpose, well known in the literature [15]. More formally, let \(G(t)\) and \(I(t)\) be the blood-glucose and insulin values at time \(t\). We denote the finite length trajectory of length \(H\) as the following \(\overleftarrow{G}_{H}(t):=[G(t-H+1),\ldots,G(t)]\), and \(\overleftarrow{I}_{H}(t):=[I(t-H+1),\ldots,I(t)]\). An uncertainty aware model \(M\) computes the following triplet \((G_{l}(t+l),G_{m}(t+l),G_{u}(t+l))=M(\overleftarrow{G}_{H}(t),\overleftarrow{I}_{H}(t))\), where \(G_{m}\) is the mean prediction output, and \(G_{l},G_{u}\) are the lower and upper predictions of the glucose value. By design, it is true that \(G_{l}\leq G_{m}\leq G_{u}\). An MPC control algorithm solves \(\arg\min_{I_{0},I_{1},\ldots,I_{k-1}}\sum_{i=0}^{k-1}J(M,\overleftarrow{G}_{H}(t+i),\overleftarrow{I}_{H}(t+i))\). After every time step, the control algorithm picks the first insulin input \(I_{0}\) as the insulin bolus for the patient, and discards the rest. The cost function \(J\) in the MPC control scheme takes into account three factors: (i) distance of the mean prediction level \(G_{m}\) at each time step from a target value of \(120mg/dl\), (ii) distance of the upper and lower predictions (\(G_{u}\) and \(G_{l}\)) from the unsafe regions of the state space \(G(t)>300\) and \(G(t)<70\), and (iii) total insulin injected \(\sum_{t=0}^{k-1}I_{t}\). Starting with some initial glucose value \(G(0)\), we measure the performance of the artificial pancreas controller as the fraction of time it spends in the unsafe regions, \(t_{\text{unsafe}}=1/L\sum_{t=1}^{L}\mathbb{1}((G(t)>300)\vee(G(t)<70))\), where \(\mathbb{1}(\cdot)\) denotes the indicator function. A lower value is more desirable. We compare EBNN and IBNNs as different realizations of the model \(M\).

**Distribution Shift using Meals.** A well known problem with learnt models is distribution shift. Bayesian neural networks can address this issue by apprising the end user of the increased uncertainty. For regression models of the type described above, this appears as larger prediction intervals \([G_{l},G_{u}]\). The artificial pancreas controller can run into this situation in the following way: the insulin-glucose time series data collected for training the data-driven model \(M\) can be without meals, while at test time the patient can have meals.
This creates a distribution shift between the training and test time data. Fortunately, the UVa-Padova simulator [12] allows us to create datasets with and without meal inputs. In this case study, the training data was obtained by randomly initializing the BG value in the range \([120,190]\), and simulating the patient for 720 minutes. The controller was executed at 5 minutes intervals. At test time the patient was supplied meals at specific time intervals (for details, see Appendix K). This creates a significant distribution shift since meals are effectively an unknown variable which can affect the system state. However, from the controller's perspective this is practical, since patients can have unannounced meals. **Results and Discussion.** To capture the difference in performance between EBNN and IBNN, we compute \(P_{\text{diff}}:=(t_{\text{unsafe}}^{EBNN}-t_{\text{unsafe}}^{IBNN})/t_{\text{ unsafe}}^{EBNN}\). Both \(t_{\text{unsafe}}^{EBNN}\) and \(t_{\text{unsafe}}^{IBNN}\) depend on interval \([G_{l},G_{u}]\); for EBNN, this latter corresponds to the \(\alpha\)-level HDR associated with \(P_{\text{ens}}\), while for IBNN it corresponds to \(IR_{\alpha}\). We consider one case in which an IBNN is trained using a credal prior set and only one likelihood (we choose different seeds which initialize the prior distributions but we keep the same architecture for the BNNs), and another case in which we do the opposite (we use the same seed and different architectures). \begin{table} \begin{tabular}{l l l l} \hline \hline \(1-\alpha\) & 0.9 & 0.95 & 0.99 \\ \hline \hline Varying Seeds & **2.3**\% & **3.5**\% & **5.2**\% \\ \hline Varying & & & \\ Architectures & **0.5**\% & -3.8**\% & **4.4**\% \\ \hline \hline \end{tabular} \end{table} Table 1: We report the performance improvements when using IBNNs as compared to EBNNs across 3 different values of \(\alpha\). Row 1 corresponds to the case where the individual BNNs are trained with different seeds for the prior distribution; and Row 2 is the case when the BNNs have different architectures. We report \(P_{\text{diff}}\), across different choices in Table 1. We observe that a more conservative estimate, as informed by the IBNN model as compared to the EBNN framework, results in controllers which respect the safety limits better. For more details on this case study, see Appendix K. ### Motion Prediction for Autonomous Racing In this next case study, we demonstrate the utility of IBNNs for motion prediction in autonomous driving scenarios. An important challenge in autonomous driving is understanding the intent of other agents and predicting their future trajectories to allow for safety-aware planning. In autonomous racing, where control is pushed to the dynamical limits, accurate and robust predictions are even more essential for outperforming opponent agents while assuring safety. Again, IBNNs provide a straightforward method for quantifying uncertainty and deriving robust prediction regions for anticipating an agent's behavior. We use the problem settings in [50] to define the problem of obtaining prediction sets for future positions of an autonomous racing agent. Our results show that the prediction regions have improved coverage when compared to EBNN. These results hold in both in-distribution and out-out-distribution settings, which are described below. 
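For concreteness, the safety metric \(t_{\text{unsafe}}\) and the relative improvement \(P_{\text{diff}}\) reported in Table 1 can be sketched as follows before turning to the motion-prediction setup; the closed-loop glucose traces below are synthetic stand-ins, not UVa-Padova simulator output.

```python
import numpy as np

rng = np.random.default_rng(1)

def t_unsafe(glucose):
    """Fraction of time steps spent outside the safe range (70, 300) mg/dl."""
    g = np.asarray(glucose)
    return np.mean((g > 300) | (g < 70))

# Synthetic closed-loop traces standing in for simulator output (mg/dl);
# the numbers are illustrative only.
t_grid = np.linspace(0, 6 * np.pi, 144)
trace_ebnn = 150 + 90 * np.sin(t_grid) + rng.normal(0, 15, 144)
trace_ibnn = 150 + 75 * np.sin(t_grid) + rng.normal(0, 15, 144)

t_e, t_i = t_unsafe(trace_ebnn), t_unsafe(trace_ibnn)
p_diff = (t_e - t_i) / t_e          # P_diff as defined in the text

print(f"t_unsafe (EBNN) = {t_e:.3f}")
print(f"t_unsafe (IBNN) = {t_i:.3f}")
print(f"P_diff          = {p_diff:.1%}")
```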
**Problem.** Let \(O^{i}(t,l)\equiv O^{i}=\{p_{t-l}^{i},\ldots,p_{t}^{i}\}\) denote the \(i\)-th trajectory instance of an agent at time \(t\), consisting of the observed positions from time \(t-l\) up to time \(t\). Let then \(C^{i}\) be a time-invariant context variable. Let also \(F^{i}(t,h)\equiv F^{i}=\{p_{t+1}^{i},\ldots,p_{t+h}^{i}\}\) be the collection of the next \(h\) future positions. We wish to obtain a model \(M\) that predicts region \(\mathcal{R}_{\alpha}\) with probabilistic guarantees. In particular, for EBNN \(\mathcal{R}_{\alpha}\) is the \(\alpha\)-level HDR of \(P_{\text{ens}}\), so that \(P_{\text{ens}}(F^{i}\in\mathcal{R}_{\alpha})\geq 1-\alpha\), while for IBNN \(\mathcal{R}_{\alpha}=IR_{\alpha}\), so that \(\underline{\hat{P}}(F^{i}\in\mathcal{R}_{\alpha})\geq 1-\alpha\). The dataset consists of instances of \((O^{i},F^{i})\) divided into a training set \(\mathcal{D}_{\text{train}}\) and a testing set \(\mathcal{D}_{\text{test}}\). We train an uncertainty aware model on \(\mathcal{D}_{\text{train}}\) that computes the triplet \((F_{l}^{i},F_{m}^{i},F_{u}^{i})=M(O^{i},C^{i})\) where \(F_{l}^{i}\), \(F_{u}^{i}\), \(F_{m}^{i}\) are the lower, upper, and mean predictions of the future positions. The dataset \(\mathcal{D}_{\text{all}}\) is created by collecting simulated trajectories of autonomous race cars in the F1Tenth-Gym [41] (details in [50]). As shown in Figure 4, different racing lines were utilized including the center, right, left, and optimal racing line for the Spielberg track. We denote these by \(\mathcal{D}_{\text{center}}\), \(\mathcal{D}_{\text{right}}\), \(\mathcal{D}_{\text{left}}\), and \(\mathcal{D}_{\text{race}}\), respectively. Position \(p\) is a vector \(p=(x,y,\theta,v)^{\top}\), where \(x\) and \(y\) are positions, and \(\theta\) and \(v\) are the heading and speed, respectively. In total, the \(\mathcal{D}_{\text{all}}\) consists of 34686 train instances, 4336 validation instances, and 4336 test instances. **In-distribution vs. Out-of-distribution.** We consider the prediction task to be in-distribution when \(\mathcal{D}_{\text{train}},\mathcal{D}_{\text{test}}\subset\mathcal{D}_{\text{ all}}\). It is out-of-distribution (OOD) when \(\mathcal{D}_{\text{train}}\subset\mathcal{D}_{\text{center}}\cup\mathcal{D}_{ \text{right}}\cup\mathcal{D}_{\text{left}}\) and \(\mathcal{D}_{\text{test}}\subset\mathcal{D}_{\text{race}}\). **Metrics.** We train the ensemble of BNNs (EBNN) \(M_{\text{ens}}\), and the IBNN model \(M_{IBNN}\) using the same architecture and different seeds. We compare the performance with respect to the test set by computing the single-step coverage, where each prediction time-step is treated independently, and the multi-step coverage, which considers the entire \(h\)-step prediction and is more strict. Figure 5 depicts a sample of the in-distribution evaluation for each of the models. For a given trajectory, the red boxes indicate when the prediction region did not cover the actual trajectory at that time-step. Qualitatively, \(M_{IBNN}\) has less missed timesteps when compared to \(M_{\mathrm{ens}}\). Table 2 shows that IBNNs perform better in terms of both one-step and multi-step coverage. Figure 4. Motion Prediction for F1Tenth-Gym Environment [41]. Data is collected by simulating various racing lines on the Spielberg Track. Figure 5. F1Tenth In-distribution Results. Given an input of past observations, IBNNs exhibit better coverage of the future target trajectory. 
Predictions which do not cover the target within the desired \(1-\alpha\) level are indicated with red.

Figure 6. F1Tenth Out-of-distribution (OOD) Results. Robust performance is exhibited by IBNNs when compared to EBNN in OOD settings.

Similar results can be observed for the OOD scenario. As all models were trained on racing lines which are predominantly parallel to the track curvature, when the test set consists of instances with higher curvatures, the overall coverage of all models degrades. This can be seen in Figure 6, where the prediction of the models (orange) tends to be straight while the actual trajectory is more curved (green). Despite this, the figure and the coverage metrics in Table 2 show how IBNN exhibits a more robust behavior.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{6}{c}{In-Distribution Results} \\ \hline & \multicolumn{3}{c}{Ensemble} & \multicolumn{3}{c}{IBNN} \\ \cline{2-7} \(1-\alpha\) & 0.9 & 0.95 & 0.99 & 0.9 & 0.95 & 0.99 \\ \hline One-step & 0.962 & 0.980 & 0.992 & 0.992 & 0.995 & 0.997 \\ Multi-step & 0.638 & 0.826 & 0.937 & 0.914 & 0.948 & 0.979 \\ \hline \multicolumn{6}{c}{Out-of-Distribution Results} \\ \hline & \multicolumn{3}{c}{Ensemble} & \multicolumn{3}{c}{IBNN} \\ \cline{2-7} \(1-\alpha\) & 0.9 & 0.95 & 0.99 & 0.9 & 0.95 & 0.99 \\ \hline One-step & 0.919 & 0.950 & 0.980 & 0.979 & 0.988 & 0.995 \\ Multi-step & 0.532 & 0.703 & 0.860 & 0.825 & 0.884 & 0.943 \\ \hline \hline \end{tabular} \end{table} Table 2. F1Tenth Coverage Results. We report one-step coverage and multi-step coverage across 3 different values of \(\alpha\). IBNNs exceed coverage of EBNNs in all settings.

## 6. Related Work

In [10], the authors introduce credal classifiers (CCs), as a generalization of classifiers based on Bayesian networks. Unlike CCs, IBNNs do not require independence assumptions between non-descendant, non-parent variables. In addition, IBNNs avoid NP-hard complexity issues of searching for optimal structure in the space of Bayesian networks [9]. In [37], an epistemic convolutional neural network (ECNN) is developed that explicitly models the epistemic uncertainty induced by training data of limited size and quality. A clear distinction is that ECNNs measure uncertainty in target-level representations whereas IBNNs identify the uncertainty measure on the parameter space \(\Theta\). Despite the merit of their work, we believe IBNNs achieve greater generality, since they are able to quantify aleatoric and epistemic uncertainty and are applicable to problems beyond classification. For a review of the state of the art concerning the distinction between EU and AU we refer to [23, 37]. More references can be found in Appendix L.

## 7. Conclusion

We presented IBNNs, a generalization of BNNs that allows to distinguish between AU and EU, and to quantify them. We showed how they can be used to specify a set of outputs that enjoys PAC-like guarantees, and we applied them to safety-critical settings. In the future we plan to apply them to continual learning (CL) to overcome the curse of dimensionality and to capture an agent's preference over the tasks to perform.

## Appendix A Why do we need IPs?

The main motivations for training an artificial stochastic neural network using credal sets are two. Let \((\Omega,\mathcal{F})\) be the measurable space of interest. 1. A single probability distribution does not suffice to represent ignorance in the sense of lack of knowledge; this is well documented in the literature, see e.g. [23] and references therein.
Consider the example of complete ignorance (CI) in the case of a finite state space \(\Omega\)[23, Section 3.3]. In standard Bayesian analysis, CI is modeled in terms of the uniform distribution \(\mathcal{U}(\Omega)\); this is justified by Laplace's "principle of indifference". Then, however, it is not possible to distinguish between precise probabilistic knowledge about a random event - called _prior indifference_; think of the tossing of a fair coin - and a complete lack of knowledge due to an incomplete description of the experiment - called _prior ignorance_. Another problem is given by the additive nature of probability distributions. Consider again the example of a uniform distribution. First, let us observe that it is not invariant under reparametrization. In addition, if we model the ignorance about the length \(x\) of the side of a cube in \(\mathbb{R}^{3}\) via a uniform measure on the interval \([l,u]\subset\mathbb{R}\), then this does not yield a uniform distribution of \(x^{3}\) on \([l^{3},u^{3}]\), which suggests some degree of informedness about the cube's volume. Finally, as pointed out in [52], if we ask a subject - even an expert - about their opinion regarding some events, it is much more likely that they will report interval of probabilities rather than single values. 2. Working with credal sets allows to achieve _robustness_ in the sense of BSA: realistically large sets \(\mathcal{P}\) of priors and \(\mathcal{L}\) of likelihoods are elicited. Using credal sets, the agent recognizes that prior beliefs and knowledge about the sampling model are limited and imprecise. Combining each pair of functions in \(\mathcal{P}\) and \(\mathcal{L}\) using Bayes' rule, a class of posterior distributions - reflecting the updated state of uncertainty - is formed. If the available information is not sufficient to identify a unique posterior distribution, or a set of posteriors whose diameter is small, credal sets allow to represent _indecision_, thus leading to a less informative but more robust conclusions.5 Footnote 5: Here “diameter” has to be understood as the distance between upper and lower probability of event \(A\), for all \(A\in\mathcal{F}\). ## Appendix B On the use of credal sets Let us address a critique raised against the use of credal sets. In their recent work [33], the author argues against the use of sets of probabilities to model an agent's prior beliefs and their knowledge of the sampling model, while debating in favor of using hierarchical Bayesian models. As reported in [23, Secton 4.6.2], the argument against credal sets that is more cogent for the machine learning literature is that modeling a lack of knowledge in a set-based manner may hamper the possibility of inductive inference, up to a point where learning from empirical data is not possible any more. With this, we mean the following. As [43] points out, the natural candidate for a class of priors to represent complete ignorance is the class \(\mathcal{P}_{all}\) of all distributions. When this class leads to non-vacuous and useful conclusions, these are quite compelling and uncontroversial. It turns out that the posterior probabilities obtained from this class are vacuous, that is, their lower and upper bounds are 0 and 1: no finite sample is enough to annihilate a sufficiently extreme prior belief. There is then a compromise to be made, and this is the compromise of _near-ignorance_. 
The near-ignorance class should be vacuous a priori in some respects, typically the ones that are the most important for the analysis at hand. This way of proceeding is labeled as arbitrary by [33], which instead advocates for the use of hierarchical Bayesian procedures. We find this critique not compelling, as during the analysis the job of the agent is to model reality: as pointed out in [23, Secton 5], statistical inference is not possible without underlying assumptions, and conclusions drawn from data are always conditional on those assumptions. If we were to work every time with the maximum level of generality, we would hardly be able to reach any conclusions. For example, in a statistical analysis we never consider the state \(\Omega\) of _apparently possible states_[52, section 2.1.2], that is, the one that contains all the states \(\omega\) that are logically consistent with the available information. If we consider a coin toss, we let the state space be \(\Omega=\{\text{heads, tails}\}\), certainly not \(\Omega=\{\text{heads, tails, coin landing on its edge, coin braking into pieces on landing, coin disappearing down a crack in the floor}\}\). The same holds for sets of probabilities: it makes much more sense to work with near-ignorance credal sets than to work with \(\mathcal{P}_{all}\). A final reason to rebut the point in [33] is that the problems pointed out in (i) in section A that make the use of the uniform prior distribution - often interpreted as representing epistemic uncertainty in standard Bayesian inference - at least debatable are inherited by hierarchical Bayesian modeling, as specified in [5, 23, 25]. ### A note on how to specify credal sets As pointed out in [10, Section 3.3], there is a way of obtaining credal sets starting from sets of probability intervals; in addition, standard algorithms can compute the extreme elements of a credal set for which a probability interval has been provided [3]. However, the resulting number of extrema is exponential in the size of the possibility space [46];6 for this reason we prefer to specify a finitely generated credal set instead. Footnote 6: Recall that the possibility space of a random variable is the space of the values it can take on. ## Appendix C A further IP concept: the core Because of the conjugacy property of upper and lower probabilities, let us focus on upper probabilities only. We say that upper probability \(\overline{P}\) is _concave_ if \(\overline{P}(A\cup B)\leq\overline{P}(A)+\overline{P}(B)-\overline{P}(A\cap B)\), for all \(A,B\in\mathcal{F}\). Recall that \(\Delta(\Omega,\mathcal{F})\) denotes the set of all probability measures on \((\Omega,\mathcal{F})\). Upper probability \(\overline{P}\) completely characterizes the convex set \[\operatorname{core}(\overline{P}): =\{P\in\Delta(\Omega,\mathcal{F}):P(A)\leq\overline{P}(A),\forall A \in\mathcal{F}\} \tag{1}\] \[=\{P\in\Delta(\Omega,\mathcal{F}):\overline{P}(A)\geq P(A)\geq \underline{P}(A),\forall A\in\mathcal{F}\}\] where the second equality is a characterization [8, Page 3389]. Notice that the core is convex [38, Section 2.2] and weak\({}^{\star}\)-compact [38, Proposition 3].7 By complete characterization, we mean that it is sufficient to know \(\overline{P}\) to be able to completely specify \(\mathrm{core}(\overline{P})\). To emphasize this aspect, some authors say that \(\overline{P}\) is _compatible_ with \(\mathrm{core}(\overline{P})\)[19]. 
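On a finite space, membership in \(\mathrm{core}(\overline{P})\) can be checked directly from definition (1) by enumerating events; the two base measures generating \(\overline{P}\) and the candidate distributions below are illustrative.

```python
import itertools
import numpy as np

# Core membership check on a three-element space; base measures and
# candidate distributions are illustrative.
omega = [0, 1, 2]
events = [set(s) for r in range(len(omega) + 1)
          for s in itertools.combinations(omega, r)]

base = [np.array([0.2, 0.3, 0.5]), np.array([0.4, 0.4, 0.2])]   # generate P-bar

def prob(p, A):
    return sum(p[w] for w in A)

def upper(A):
    return max(prob(p, A) for p in base)

def in_core(p):
    """True iff p(A) <= upper(A) for every event A, i.e. p lies in core(P-bar)."""
    return all(prob(p, A) <= upper(A) + 1e-12 for A in events)

candidates = {
    "convex mix of the two": 0.5 * base[0] + 0.5 * base[1],
    "outside the core":      np.array([0.7, 0.1, 0.2]),
}
for name, p in candidates.items():
    print(f"{name}: in core = {in_core(p)}")
```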
Since the core is convex, the set \(\mathrm{ex}[\mathrm{core}(\overline{P})]\) of extreme points of the core is well defined. It contains all the elements of the core that cannot be written as a convex combination of one another. We have the following important result [52, Theorem 3.6.2]. **Theorem 6**.: Suppose \(\mathrm{core}(\overline{P})\neq\emptyset\). Then, the following holds. 1. \(\mathrm{ex}[\mathrm{core}(\overline{P})]\neq\emptyset\). 2. \(\mathrm{core}(\overline{P})\) is the closure in the weak\({}^{\star}\) topology of the convex hull of \(\mathrm{ex}[\mathrm{core}(\overline{P})]\). 3. If \(\overline{P}(A)=\sup_{P\in\mathrm{core}(\overline{P})}P(A)\), for all \(A\in\mathcal{F}\), then \(\overline{P}(A)=\sup_{P\in\mathrm{ex}[\mathrm{core}(\overline{P})]}P(A)\), for all \(A\in\mathcal{F}\). So in order to define an upper probability \(\overline{P}\) that setwise dominates the elements of \(\mathrm{core}(\overline{P})\) it is enough to specify the extreme points of the core. ## Appendix D A new Bayes' theorem for IPs We present Theorem 7, a result that - although appealing - does not lend itself well to be used to train an IBNN. Call \(\Theta\) the parameter space of interest and assume it is Polish, that is, the topology for \(\Theta\) is complete, separable, and metrizable. This ensures that \(\Delta(\Theta,\mathcal{B})\) is Polish as well, where \(\mathcal{B}\) denotes the Borel \(\sigma\)-algebra for \(\Theta\). Let \(\mathcal{X}\) be the set of all bounded, non-negative, \(\mathcal{B}\)-measurable functionals on \(\Theta\). Call \(\mathscr{D}=\mathscr{D}_{\mathbf{x}}\times\mathscr{D}_{\mathbf{y}}\) the sample space endowed with the product \(\sigma\)-algebra \(\mathcal{A}=\mathcal{A}_{x}\times\mathcal{A}_{y}\), where \(\mathcal{A}_{x}\) is the \(\sigma\)-algebra endowed to \(\mathscr{D}_{\mathbf{x}}\) and \(\mathcal{A}_{y}\) is the \(\sigma\)-algebra endowed to \(\mathscr{D}_{\mathbf{y}}\). Let the agent elicit \(\mathcal{L}_{\theta}:=\{P_{\theta}\in\Delta(\mathscr{D},\mathcal{A}):\theta\in\Theta\}\). Assume that each \(P_{\theta}\in\mathcal{L}_{\theta}\) has density \(L(\theta)=p(D\mid\theta)\) with respect to some \(\sigma\)-finite dominating measure \(\nu\) on \((\mathscr{D},\mathcal{A})\); this represents the likelihood function for \(\theta\) having observed data \(D\subset\mathscr{D}\). We assume for now that \(L\in\mathcal{X}\), for all \(D\subset\mathscr{D}\). Let the agent specify a set \(\mathcal{P}\) of probabilities on \((\Theta,\mathcal{B})\). Then, compute \(\overline{P}\), and consider \(\mathcal{P}^{\mathrm{co}}:=\mathrm{core}(\overline{P})\); it represents the agent's initial beliefs.8 We assume that every \(P\in\mathcal{P}^{\mathrm{co}}\) has density \(p\) with respect to some \(\sigma\)-finite dominating measure \(\mu\) on \((\Theta,\mathcal{B})\), that is, \(p=\frac{\mathrm{d}P}{\mathrm{d}\mu}\). We require the agent's beliefs to be represented by the core for two main reasons. The first, mathematical, one is to ensure that the belief set can be completely characterized by the lower probability, and that lower probability \(\underline{P}\) is coherent [52, Sections 2.5 and 3.3.3]. The second, philosophical, one is the following [7]. A criticism brought forward by Walley in [52, Section 2.10.4.(c)] is that, given a lower probability \(\underline{P}\), there is no cogent reason for which the agent should choose a specific \(P_{T}\) that dominates \(\underline{P}\), or - for that matter - a collection of "plausible" probabilities. 
Because the core considers all (regular) probability measures that dominate \(\underline{P}\), it is the perfect instrument to reconcile Walley's behavioral and sensitivity analysis interpretations. Footnote 8: Superscript “co” stands for convex and (weak\({}^{\star}\)-)compact. Let the agent compute \(\overline{P}_{\theta}\), and consider \(\mathcal{L}_{\theta}^{\mathrm{co}}:=\mathrm{core}(\overline{P}_{\theta})\); it represents the set of plausible likelihoods. Let \[\mathscr{L}:=\left\{L=\frac{\mathrm{d}P_{\theta}}{\mathrm{d}\nu},\,P_{\theta} \in\mathcal{L}_{\theta}^{\mathrm{co}}\right\}, \tag{2}\] and denote by \(\overline{L}(\theta):=\sup_{L\in\mathscr{L}}L(\theta)\) and by \(\underline{L}(\theta):=\inf_{L\in\mathscr{L}}L(\theta)\), for all \(\theta\in\Theta\). Call \(\mathcal{P}_{D}^{\mathrm{co}}\) the following set \[\Bigg{\{}P_{D}\in\Delta(\Theta,\mathcal{B}):\frac{\mathrm{d}P_{D}}{\mathrm{d} \mu}=p(\theta\mid D)=\frac{L(\theta)p(\theta)}{\int_{\Theta}L(\theta)p(\theta )\mathrm{d}\theta},\,p=\frac{\mathrm{d}P}{\mathrm{d}\mu},\,P\in\mathcal{P}^{ \mathrm{co}},\,L=\frac{\mathrm{d}P_{\theta}}{\mathrm{d}\nu},\,P_{\theta}\in \mathcal{L}^{\mathrm{co}}_{\theta}\Bigg{\}},\] that is, the class of posterior probabilities when the prior is in \(\mathcal{P}^{\mathrm{co}}\) and the likelihood is in \(\mathcal{L}^{\mathrm{co}}_{\theta}\), and let \(\overline{P}_{D}(A)=\sup_{P_{D}\in\mathcal{P}^{\mathrm{co}}_{D}}P_{D}(A)\), for all \(A\in\mathcal{B}\). Then, the following is a generalization of Bayes' theorem in [54]. **Theorem 7**.: Suppose \(\mathcal{P}^{\mathrm{co}},\mathcal{L}^{\mathrm{co}}_{\theta}\neq\emptyset\). Then for all \(A\in\mathcal{B}\), \[\overline{P}_{D}(A)\leq\frac{\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta} \overline{L}(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{c}}, \tag{3}\] where \(\mathbf{c}:=\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline{L}( \theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)+\inf_{P\in\mathcal{P}^{ \mathrm{co}}}\int_{\Theta}\underline{L}(\theta)\mathbb{1}_{A^{c}}(\theta)P( \mathrm{d}\theta)\), provided that the ratio is well defined, where \(\mathbb{1}_{A}\) denotes the indicator function for \(A\in\mathcal{B}\). In addition, if \(\overline{P}\) is concave, then the inequality in (3) is an equality for all \(A\in\mathcal{B}\). This result is particularly appealing because, given some assumptions, it allows to perform a (generalized) Bayesian update of a prior upper probability (PUP) by carrying out only one operation, even when the likelihood is ill specified so that a set of likelihoods is needed. We also have the following. **Lemma 8**.: If \(\overline{P}\) is concave, then \(\overline{P}_{D}\) is concave too. This lemma is important because it tells us that the generalized Bayesian update of Theorem 7 preserves concavity, and so it can be applied to successive iterations. If at time \(t\) the PUP is concave, then the PUP at time \(t+1\) - that is, the posterior upper probability at time \(t\) - will be concave too. Necessary and sufficient conditions for a generic upper probability to be concave are given in [38, Section 5]. These results can be generalized to the case in which the elements of \(\mathcal{X}\) are unbounded using techniques in [49], and to the case in which the elements of \(\mathcal{X}\) are \(\mathbb{R}^{d}\)-valued, for some \(d\in\mathbb{N}\), since we never used specific properties of \(\mathbb{R}\) in our proofs (e.g. the fact that \(\mathbb{R}\) is a poset). 
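To make the bound in (3) concrete, here is a toy numerical sketch on a three-point parameter space with a finitely generated set of priors and two plausible likelihoods; all numbers are hypothetical. It computes an upper posterior probability of an event by brute force over the listed prior-likelihood pairs and checks it against the right-hand side of (3). Since the inequality in (3) holds for every such pair, checking the listed extreme points is enough for this illustration.

```python
import numpy as np

A = np.array([True, False, False])          # event A = {theta_1} in a 3-point parameter space

priors = np.array([[0.5, 0.3, 0.2],         # extreme points of the prior set (hypothetical)
                   [0.2, 0.5, 0.3],
                   [0.3, 0.2, 0.5]])
likelihoods = np.array([[0.9, 0.4, 0.1],    # two plausible likelihood functions L(theta)
                        [0.7, 0.5, 0.2]])

# brute-force upper posterior probability of A over all listed prior/likelihood pairs
exact_upper = max((L * p)[A].sum() / (L * p).sum()
                  for p in priors for L in likelihoods)

# right-hand side of (3): pointwise likelihood envelopes, then sup/inf over the priors
L_bar, L_under = likelihoods.max(axis=0), likelihoods.min(axis=0)
num = max((L_bar * p)[A].sum() for p in priors)
c = num + min((L_under * p)[~A].sum() for p in priors)

print(exact_upper, "<=", num / c)           # the inequality of Theorem 7 holds
```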
Despite being attractive, the generalized Bayesian update of Theorem 7 hinges upon two assumptions, namely that \(\mathcal{P}^{\mathrm{co}}\) and \(\mathcal{L}^{\mathrm{co}}_{\theta}\) are both cores of an upper probability, and that the prior upper probability \(\overline{P}\) is concave; as the proverb goes, there is no free lunch. Having to check these assumptions, together with computing a supremum, an infimum, and the integrals in (3) make Theorem 7 inadequate to be applied in the context of IBNNs. ## Appendix E Bounds on upper and lower entropy In some problems, calculating \(\overline{H}(P)\) and \(\underline{H}(P)\) may be computationally costly. In this section we give an upper bound for \(\overline{H}(P)\) and a lower bound for \(\underline{H}(P)\) that are more computationally friendly. We first need to introduce three new concepts. **Definition 9**.: Consider a set \(\mathcal{P}\) of probabilities on a generic measurable space \((\Omega,\mathcal{F})\). We say that lower probability \(\underline{P}\) is _convex_ if \(\underline{P}(A\cup B)\geq\underline{P}(A)+\underline{P}(B)-\underline{P}(A \cap B)\), for all \(A,B\in\mathcal{F}\). Then, let \(\mathfrak{P}\) be either an upper or a lower probability, and consider a generic bounded function \(f\) on \((\Omega,\mathcal{F})\), that is, \(f\in B(\Omega)\). We define the _Choquet integral_ of \(f\) with respect to \(\mathfrak{P}\) as follows \[\int_{\Omega}f(\omega)\mathfrak{P}(\mathrm{d}\omega):=\int_{0}^{\infty} \mathfrak{P}\left(\{\omega\in\Omega:f(\omega)\geq t\}\right)\mathrm{d}t+\int_ {-\infty}^{0}\left[\mathfrak{P}\left(\{\omega\in\Omega:f(\omega)\geq t\} \right)-\mathfrak{P}(\Omega)\right]\mathrm{d}t,\] where the right hand side integrals are (improper) Riemann integrals. If \(\mathfrak{P}\) is additive, then the Choquet integral reduces to the standard additive integral. Finally, if \(\Omega\) is uncountable, define for all \(\omega\in\Omega\) \[\underline{\pi}(\omega):=\inf_{P\in\mathcal{P}}\frac{\mathrm{d}P}{\mathrm{d} \mu}(\omega)\quad\text{and}\quad\overline{\pi}(\omega):=\sup_{P\in\mathcal{P}} \frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega).\] We call them _lower_ and _upper densities_, respectively. The following theorem gives the desired bounds. **Theorem 10**.: Consider a set \(\mathcal{P}\) of probabilities on a generic measurable space \((\Omega,\mathcal{F})\). If \(\Omega\) is uncountable, assume that every \(P\in\mathcal{P}\) is dominated by a \(\sigma\)-finite measure \(\mu\) and that the Radon-Nikodym derivatives \(\frac{\mathrm{d}P}{\mathrm{d}\mu}\) are continuous and bounded, for all \(P\in\mathcal{P}\). Define \[H(\underline{P}):=\begin{cases}-\int_{\Omega}\log\left[\overline{\pi}(\omega) \right]\underline{P}(\mathrm{d}\omega)&\text{if $\Omega$ is uncountable}\\ -\sum_{\omega\in\Omega}\underline{P}(\{\omega\})\log[\overline{P}(\{\omega\}) ]&\text{if $\Omega$ is at most countable}\end{cases}\] and similarly \[H(\overline{P}):=\begin{cases}-\int_{\Omega}\log\left[\underline{\pi}(\omega) \right]\overline{P}(\mathrm{d}\omega)&\text{if $\Omega$ is uncountable}\\ -\sum_{\omega\in\Omega}\overline{P}(\{\omega\})\log[\underline{P}(\{\omega\}) ]&\text{if $\Omega$ is at most countable}\end{cases},\] Then, \(\overline{H}(P)\leq H(\overline{P})\) and \(\underline{H}(P)\geq H(\underline{P})\). In addition, if \(\overline{P}\) is concave, the first bound is tighter, and if \(\underline{P}\) is convex, the second bound is tighter. 
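The next sketch illustrates Theorem 10 in the at-most-countable case: for a small, finitely generated set of probability vectors (hypothetical numbers), it compares the exact upper and lower entropies over the listed vectors with the bounds \(H(\overline{P})\) and \(H(\underline{P})\) computed from the singleton upper and lower probabilities.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete probability vector."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

# a finitely generated set of probabilities on a 4-element space (hypothetical numbers)
P = np.array([[0.40, 0.30, 0.20, 0.10],
              [0.25, 0.25, 0.25, 0.25],
              [0.10, 0.20, 0.30, 0.40]])

upper_H = max(entropy(p) for p in P)      # exact upper entropy over the listed vectors
lower_H = min(entropy(p) for p in P)      # exact lower entropy over the listed vectors

upper_prob = P.max(axis=0)                # upper probability of each singleton
lower_prob = P.min(axis=0)                # lower probability of each singleton

H_of_upper = -np.sum(upper_prob * np.log(lower_prob))   # bound on the upper entropy
H_of_lower = -np.sum(lower_prob * np.log(upper_prob))   # bound on the lower entropy

print(upper_H, "<=", H_of_upper)
print(lower_H, ">=", H_of_lower)
```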
**Remark 11**.: In [23, Section 4.6.1], the authors points out that [2] presents a generalization of the Hartley measure \(GH(P)\) that can be used to disaggregate the total uncertainty captured by \(\overline{H}(P)\) into aleatoric and epistemic uncertainties. We prefer not to introduce it in the present work because \(GH(P)\) is defined based on the mass function of a belief function [19, Definition 2.4]. This entails that the authors assume that lower probability \(\underline{P}\) associated with the set of probabilities of interest is a belief function, that is, for every collection \(\{A,A_{1},\ldots,A_{k}\}\) such that \(A\subset A_{i}\), the following holds \[\underline{P}(A)\geq\sum_{\emptyset\neq I\subset\{1,\ldots,k\}}(-1)^{\#I-1} \underline{P}(\cap_{i\in I}A_{i}),\] for all \(k\in\mathbb{N}\). As it is immediate to see, this is a very restrictive assumption that is not needed in the context of IBNNs. ## Appendix F How to derive a posterior predictive distribution Suppose we performed a Bayesian updating procedure so to obtain posterior pdf \(p(\theta\mid x_{1},y_{1},\ldots,x_{n},y_{n})\). Recall that \(\{(x_{i},y_{i})\}_{i=1}^{n}\in(\mathscr{D}_{\mathbf{x}}\times\mathscr{D}_{ \mathbf{y}})^{n}\) denotes the training set. We obtain the posterior predictive distribution \(p(\tilde{y}\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})\) on \(\mathscr{D}_{\mathbf{y}}\) as follows \[p(\tilde{y}\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n}) =\int_{\Theta}p(\tilde{y},\theta\mid\tilde{x},x_{1},y_{1},\ldots, x_{n},y_{n})\ \mathrm{d}\theta\] \[=\int_{\Theta}p(\tilde{y}\mid\theta,\tilde{x},x_{1},y_{1},\ldots, x_{n},y_{n})\cdot p(\theta\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})\ \mathrm{d}\theta\] \[=\int_{\Theta}p(\tilde{y}\mid\theta,\tilde{x})\cdot p(\theta\mid x _{1},y_{1},\ldots,x_{n},y_{n})\ \mathrm{d}\theta,\] where \(p(\tilde{y}\mid\theta,\tilde{x})\) is the likelihood used to derive the posterior. Notice that the last equality comes from output \(\tilde{y}\) only depending on input \(\tilde{x}\) and parameter \(\theta\), and from having assumed \(D_{x}\perp\!\!\!\perp\theta\) (see section 2.1). From an applied point of view, a sample from \(p(\tilde{y}\mid\tilde{x},x_{1},y_{1},\ldots,x_{n},y_{n})\) is obtained as follows: 1. specify input \(\tilde{x}\); 2. sample a parameter \(\tilde{\theta}\) from the posterior, \(\tilde{\theta}\sim p(\theta\mid x_{1},y_{1},\ldots,x_{n},y_{n})\); 3. plug \(\tilde{\theta}\) in the likelihood and sample \(\tilde{y}\sim p(y\mid\tilde{\theta},\tilde{x})\). ## Appendix G Aleatoric uncertainty check for \(\alpha\)-level Ihdr The aleatoric uncertainty (AU) in \(IR_{\alpha}\) is linked to its size: the larger its diameter, the higher the AU the agent faces.9 If we want to avoid to perform the procedure in section 4 only to discover that \(IR_{\alpha}\) is "too big", then we can add an "AU check". Footnote 9: Since the diameter is a metric concept, we assume that we can find a well-defined metric \(d_{\mathbf{y}}\) on \(\mathscr{D}_{\mathbf{y}}\). If that is not the case, we substitute the diameter with the notion of cardinality. At the beginning of the analysis, compute the lower entropy associated with collection \(\{\hat{P}_{1},\ldots,\hat{P}_{N}\}\), \(\underline{H}(\hat{P})=\inf_{k\in\{1,\ldots,N\}}H(\hat{P}_{k})\), which gives us the aleatoric uncertainty within \(\{\hat{P}_{k}\}_{k=1}^{N}\). We then verify whether the lower entropy \(\underline{H}(\hat{P})\) is "too high". 
That is, if \(\underline{H}(\hat{P})>\varphi\), for some \(\varphi>0\), we want our procedure to abstain. This means that if the aleatoric uncertainty in set \(\{\hat{P}_{k}\}_{k=1}^{N}\) is too high, then our procedure does not return any output set for input \(\tilde{x}\). The value of \(\varphi\) can be set equal to the entropy of the probability measures that are typically used in the context the agent works in. For example, in medical applications the agent may consider the entropy of a Normal distribution, while in financial applications the entropy of a distribution with fatter tails, such as a \(t\)-distribution or a Cauchy. We call these _reference \(\varphi\) values_. If we add this "AU check", the 2-tuple that the agent needs to specify at the beginning of the procedure is \(\langle\varphi,\alpha\rangle\).

## Appendix H \(\alpha\)-level IHDR in a Classification setting

In classification problems, BNNs compute the probability vector \[\hat{p}:=\frac{1}{\#\Theta}\sum_{\theta\in\Theta}\Phi_{\theta|D}(x),\] where we write \(\Phi_{\theta|D}\) to highlight the fact that \(\theta\) is sampled from posterior \(p(\theta\mid D)\), and then select the most likely class \(\hat{y}:=\arg\max_{i}\hat{p}_{i}\), where the \(\hat{p}_{i}\)'s are the elements of \(\hat{p}\). When applied to a classification setting, the general procedure introduced in section 4 becomes the following. Recall that we denote by \(N:=\#\mathcal{P}\times\#\mathcal{L}_{x,\theta}\) the number of distributions on \(\mathscr{D}_{\mathbf{y}}\) induced by the posteriors on the parameters of the neural network. Assume that \(\mathscr{D}_{\mathbf{y}}=\{y_{1},\ldots,y_{J}\}\), that is, there are \(J\in\mathbb{N}\) possible labels. Then, the induced distributions on \(\mathscr{D}_{\mathbf{y}}\) can be seen as \(N\) \(J\)-dimensional probability vectors \(\hat{p}^{1},\ldots,\hat{p}^{N}\), where \(\hat{p}^{k}=(\hat{p}^{k}_{1}=\hat{p}^{k}(y_{1}),\ldots,\hat{p}^{k}_{J}=\hat{p}^{k}(y_{J}))^{\top}\) for all \(k\), \(\hat{p}^{k}_{j}\in[0,1]\) for all \(k\) and all \(j\), and \(\sum_{j=1}^{J}\hat{p}^{k}_{j}=1\) for all \(k\). Now fix any \(k\in\{1,\ldots,N\}\), and define the partial order \(\preceq_{k}\) on \(\mathscr{D}_{\mathbf{y}}\) as \(y_{l}\preceq_{k}y_{i}\iff\hat{p}^{k}(y_{l})\geq\hat{p}^{k}(y_{i})\) and \(y_{l}\prec_{k}y_{i}\iff\hat{p}^{k}(y_{l})>\hat{p}^{k}(y_{i})\), where \(i,l\in\{1,\ldots,J\}\), \(i\neq l\). This means that we can order the labels according to the probability that \(\hat{p}^{k}\) assigns to them: the first label will be the one having highest probability according to \(\hat{p}^{k}\), the second label will have the second-highest probability according to \(\hat{p}^{k}\), and so on. Now order the label space \(\mathscr{D}_{\mathbf{y}}\) according to \(\preceq_{k}\) so as to obtain \[\mathscr{D}_{\mathbf{y}}^{\preceq_{k}}:=\{y_{1}^{\preceq_{k}},\ldots,y_{J}^{\preceq_{k}}\}.\] This means that \(y_{1}^{\preceq_{k}}\preceq_{k}y_{j}^{\preceq_{k}}\), for all \(j\in\{2,\ldots,J\}\), \(y_{2}^{\preceq_{k}}\preceq_{k}y_{j}^{\preceq_{k}}\), for all \(j\in\{3,\ldots,J\}\) (but \(y_{1}^{\preceq_{k}}\preceq_{k}y_{2}^{\preceq_{k}}\)), and so on. That is, we order the labels from the most to the least likely according to \(\hat{p}^{k}\).
Then, we call \(\alpha\)_-level credible set_ according to \(\hat{p}^{k}\), \(\alpha\in(0,1)\), the set \[\begin{split} CS_{\alpha}^{k}:=\bigg\{y_{1}^{\preceq_{k}},\ldots,y_{j}^{\preceq_{k}}:\sum_{i=1}^{j}\hat{p}^{k}(y_{i}^{\preceq_{k}})\in[1-\alpha,1-\alpha+\varepsilon],\,j\leq J,\\ \text{and }\nexists j^{\prime}<j:\sum_{i=1}^{j^{\prime}}\hat{p}^{k}(y_{i}^{\preceq_{k}})\in[1-\alpha,1-\alpha+\varepsilon]\bigg\},\end{split} \tag{4}\] for some \(\varepsilon>0\). It corresponds to the \(\alpha\)-level HDR. Notice that we require \(\sum_{i=1}^{j}\hat{p}^{k}(y_{i}^{\preceq_{k}})\in[1-\alpha,1-\alpha+\varepsilon]\) because we may need to go slightly above level \(1-\alpha\). Just as a toy example, we may have 7 labels, 3 of which would give a 0.945 coverage, while 4 would give a coverage of 0.953. If we are interested in the \(\alpha=0.05\)-level credible set, we ought to include the fourth label, thus yielding a coverage slightly higher than \(1-\alpha=0.95\). The interpretation of \(CS_{\alpha}^{k}\) is the following: it consists of the smallest collection of labels to which \(\hat{p}^{k}\) assigns probability of at least \(1-\alpha\) (that is, those having the highest probability of being the correct one given the input). Finally, we call \(\alpha\)_-level imprecise credible set_, \(\alpha\in(0,1)\), the set \[ICS_{\alpha}:=\bigcup_{k=1}^{N}CS_{\alpha}^{k}.\] In turn, we have that \(\hat{\underline{P}}(Y\in ICS_{\alpha})\geq 1-\alpha\). **Remark 12**.: Notice that if a credible set of level \(\approx\alpha\) is enough, then we can replace the left endpoint of the interval in (4) with \(1-(\alpha+\varepsilon_{k})\), for some \(\varepsilon_{k}>0\). Strictly speaking, in this case we obtain an \((\alpha+\varepsilon_{k})\)-level credible set, which we denote by \(CS_{\alpha_{k}}^{k}\), where \(\alpha_{k}:=\alpha+\varepsilon_{k}\), \(k\in\{1,\ldots,N\}\). Going back to our toy example, we will have a credible set with 3 labels that yields a coverage of \(1-(\alpha+\varepsilon_{k})=0.945\approx 0.95=1-\alpha\), so \(\varepsilon_{k}=0.005\). In turn, this implies that the imprecise credible set will have a coverage of \(1-(\alpha+\max_{k\in\{1,\ldots,N\}}\varepsilon_{k})\), that is, it will have level \(\alpha+\max_{k\in\{1,\ldots,N\}}\varepsilon_{k}\). We denote it by \(ICS_{\tilde{\alpha}}\), where \(\tilde{\alpha}:=\alpha+\max_{k\in\{1,\ldots,N\}}\varepsilon_{k}\).

## Appendix I Experiments: classification

Distribution shifts can introduce uncertainties in the system which can render the predictions meaningless. This can be due to naturally occurring corruptions as introduced in [21] for image datasets. The authors introduced 18 different noise types, which can be varied across 5 different severity levels. The intuition is that, in the current context, increasing the noise severity should generally result in higher uncertainty. We evaluate an IBNN on 4 standard image datasets: CIFAR10 [29], SVHN [40], Fashion-MNIST [55], and MNIST [34]. We use a slightly different set of perturbations introduced in [39] for gray-scale images like MNIST and Fashion-MNIST. Additionally, we perform cross domain testing for each dataset, where we expect the uncertainties to be higher. We implement and train a Resnet-20 Bayesian DNN model inside the library Bayesian-torch [28].

Figure 7: The different trends with increasing corruption severity for EBNN (baseline) versus IBNNs. IBNNs better inform the degree of shift as compared to EBNN.
For each dataset, we train 4 different networks initialized with different seeds on the prior and with the same architecture. We use a learning rate of 0.001, a batch size of 128, and train the networks using mean-field variational inference for 200 epochs. The inference is carried out by performing multiple forward passes through parameters drawn from the posterior distribution. We used 20 Monte-Carlo samples in the experiments. **Results.** Following [16], for EBNN we posit that \(1/k\sum_{j=1}^{k}\sigma_{j}^{2}\) captures the aleatoric uncertainty, and \(1/(k-1)\sum_{j=1}^{k}(\mu_{j}-\mu_{\text{ens}})^{2}\) captures the epistemic uncertainty; we use these values as baselines.10 We discuss the results for CIFAR10 presented in Table 3. Footnote 10: Notice that in this case we retain the assumption that the EBNN distribution on \(\mathscr{D}_{\mathbf{y}}\) has mean \(\mu_{\text{ens}}\) and covariance matrix \(\sigma_{\text{ens}}^{2}I\), but we do not require that it is a Normal. That is because in this case \(\mathscr{D}_{\mathbf{y}}\) is finite. Notice also that for IBNN the AU is given by the lower entropy \(\underline{H}(\hat{P}):=\inf_{\hat{P}\in\hat{\mathcal{P}}}H(\hat{P})\), where \(\hat{\mathcal{P}}\) was defined in section 4, while the EU is given by the difference \(\overline{H}(\hat{P})-\underline{H}(\hat{P})\), and \(\overline{H}(\hat{P}):=\sup_{\hat{P}\in\hat{\mathcal{P}}}H(\hat{P})\). We report the average uncertainties across all test samples as measured by the respective procedures. Note that we abstracted the severity levels to low (severity = 1), medium (severity = \(2,3\)), and high (severity = \(4,5\)) in this paper. For CIFAR10, we observe that IBNNs report lower levels of aleatoric uncertainty when subjected to low levels of corruption, which gradually increases as we move to higher levels. The epistemic uncertainty grows as well, but the change in the aleatoric component is more pronounced. However, for EBNN this is not true. Even though the epistemic uncertainty increases, the aleatoric shows the reverse trend, contrary to our intuition of aleatoric uncertainty. Using IBNNs, we achieve the highest uncertainties, both aleatoric and epistemic, when applied to entirely different test sets, namely MNIST, SVHN, and Fashion-MNIST. We report these numbers as well.
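For reference, the IBNN uncertainty decomposition just described (AU as the lower entropy over the \(N\) induced predictive distributions, EU as the gap between upper and lower entropy) can be computed as in the following minimal sketch; the array contents and helper names are hypothetical and only illustrate the computation for a single test input.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete probability vector."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def ibnn_uncertainties(prob_vectors):
    """prob_vectors: (N, J) array, one predictive distribution per prior/likelihood pair.
    Returns (aleatoric, epistemic) = (lower entropy, upper entropy - lower entropy)."""
    ents = np.array([entropy(p) for p in prob_vectors])
    return ents.min(), ents.max() - ents.min()

# hypothetical predictive vectors from N = 4 BNNs over J = 3 classes for one test input
probs = np.array([[0.70, 0.20, 0.10],
                  [0.60, 0.30, 0.10],
                  [0.80, 0.10, 0.10],
                  [0.50, 0.30, 0.20]])
au, eu = ibnn_uncertainties(probs)
print(au, eu)   # averaging these over the test set gives the values reported in the tables
```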
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{1}{c}{} & \multicolumn{4}{c||}{IBNN} & \multicolumn{4}{c||}{EBNN} \\ \cline{3-13} \multicolumn{1}{c}{} & \multicolumn{4}{c||}{Epistemic} & \multicolumn{4}{c||}{Aleatoric} & \multicolumn{4}{c|}{Epistemic} & \multicolumn{4}{c|}{Actaic} \\ \cline{3-13} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & Low & Med & High & Low & Med & High & Low & Med & High \\ \hline \multirow{11}{*}{CIFAR-10} & gaussian noise & 0.145 & 0.169 & **0.176** & 0.066 & 0.099 & **0.102** & 0.012 & **0.016** & **0.106** & **0.065** & 0.056 & 0.055 \\ \cline{2-13} & shot noise & 0.129 & 0.158 & **0.174** & 0.053 & 0.084 & **0.164** & 0.01 & 0.014 & **0.106** & **0.069** & 0.051 & 0.055 \\ \cline{2-13} & speckle noise & 0.128 & 0.158 & **0.174** & 0.053 & 0.081 & **0.102** & 0.01 & 0.014 & **0.016** & **0.069** & 0.061 & 0.055 \\ \cline{2-13} & impulse noise & 0.134 & 0.161 & **0.174** & 0.052 & 0.085 & **0.118** & 0.011 & 0.015 & **0.017** & **0.069** & 0.059 & 0.051 \\ \cline{2-13} & defocus blur & 0.105 & 0.129 & **0.176** & 0.032 & 0.048 & **0.095** & 0.008 & 0.010 & **0.016** & **0.076** & 0.071 & 0.057 \\ \cline{2-13} & gaussian blur & 0.105 & 0.154 & **0.186** & 0.032 & 0.069 & **0.11** & 0.008 & 0.013 & **0.017** & **0.076** & 0.064 & 0.053 \\ \cline{2-13} & motion blur & 0.137 & 0.166 & **0.175** & 0.055 & 0.085 & **0.1** & 0.011 & 0.015 & **0.016** & **0.068** & 0.066 & 0.056 \\ \cline{2-13} & norm blur & 0.148 & 0.161 & **0.176** & 0.063 & 0.077 & 0.1 & 0.012 & 0.014 & **0.016** & **0.066** & 0.062 & 0.056 \\ \cline{2-13} & snow & 0.131 & 0.157 & **0.163** & 0.055 & 0.077 & **0.086** & 0.01 & 0.014 & **0.015** & **0.07** & 0.062 & 0.059 \\ \cline{2-13} & fog & 0.104 & 0.122 & **0.154** & 0.033 & 0.045 & **0.09** & 0.008 & 0.009 & **0.013** & **0.076** & 0.072 & 0.06 \\ \cline{2-13} & brightness & 0.103 & 0.108 & **0.124** & 0.031 & 0.034 & **0.043** & 0.008 & 0.008 & **0.011** & **0.076** & 0.075 & 0.072 \\ \cline{2-13} & contrast & 0.107 & 0.145 & **0.165** & 0.035 & 0.066 & **0.148** & 0.008 & 0.002 & **0.017** & **0.075** & 0.065 & 0.046 \\ \cline{2-13} & elastic & 0.134 & 0.144 & **0.168** & 0.053 & 0.059 & **0.089** & 0.011 & 0.012 & **0.015** & **0.069** & 0.067 & 0.058 \\ \cline{2-13} & plastic & 0.116 & 0.135 & **0.162** & 0.042 & 0.065 & **0.096** & 0.009 & 0.011 & **0.015** & **0.073** & 0.066 & 0.058 \\ \cline{2-13} & jpeng & 0.13 & 0.147 & **0.156** & 0.049 & 0.064 & **0.075** & 0.01 & 0.012 & **0.013** & **0.07** & 0.066 & 0.063 \\ \cline{2-13} & spatter & 0.117 & 0.145 & **0.147** & 0.041 & **0.055** & 0.062 & 0.009 & **0.012** & **0.012** & **0.073** & 0.065 & 0.066 \\ \cline{2-13} & saturate & 0.112 & 0.113 & **0.131** & 0.041 & 0.039 & **0.046** & 0.008 & **0.009** & **0.011** & 0.073 & **0.074** & 0.07 \\ \cline{2-13} & frost & 0.124 & 0.133 & **0.166** & 0.047 & 0.075 & **0.099** & 0.01 & 0.013 & **0.015** & **0.071** & 0.063 & 0.056 \\ \hline \multirow{11}{*}{Dataset} & \multicolumn{4}{c||}{Epistemic} & \multicolumn{4}{c|}{Aleatoric} & \multicolumn{4}{c|}{Epistemic} & \multicolumn{4}{c|}{Actaic} \\ \cline{2-13} & MNIST & 0.184 & & 0.142 & & 0.021 & & 0.044 \\ \cline{1-1} \cline{2-13} & Fashion MNIST & 0.183 & & 0.147 & & 0.022 & & 0.042 \\ \cline{1-13} & SVHN & 0.183 & & 0.141 & & 0.019 & & 0.046 \\ \hline \end{tabular} \end{table} Table 3: The 4 BNNs trained have the following accuracy : \(90,89,90,89\) in percentage terms and rounded to the nearest whole number. 
These are the probabilities of the most likely label to be the correct one according to the 4 different BNNs. For different categories of corruptions, increasing severity leads to higher levels of aleatoric uncertainty for IBNNs. When exposed to completely unseen datasets, this reaches its peak. In contrast, EBNN shows the reverse trend. **Summary.** We summarize our results in Figure 7. With increasing corruption severity, the aleatoric uncertainty for IBNNs becomes more pronounced. This is expected since, for higher corruptions, it is hard to reconcile the performance gap by simple data augmentation without any additional knowledge. For EBNN, even though the epistemic uncertainty grows, the aleatoric part does not. Note that the absolute uncertainties differ because the quantities are fundamentally different: probability versus entropy. However, the relative trends are consistent for each dataset, demonstrating the utility of IBNNs. **Other datasets.** The results for MNIST, Fashion-MNIST, and SVHN are presented in Tables 4, 5, and 6, respectively. We observe trends similar to CIFAR10's: the aleatoric uncertainty increases with corruption severity, and is the highest for completely unknown datasets.

Table 6: The 4 BNNs trained have the following accuracies: 93, 92, 92, 92 in percentage terms and rounded to the nearest whole number. These are the probabilities of the most likely label to be the correct one according to the 4 different BNNs. For different categories of corruptions, increasing severity leads to higher levels of aleatoric uncertainty for IBNNs. When exposed to completely unseen datasets, this reaches its peak. The same is not true for EBNN. For the epistemic uncertainty as well, there is a clear trend with increasing corruption severity.

## Appendix J Distance between a set of distributions \(\mathcal{P}\) and a single distribution \(P^{\prime}\) having different dimensions

Many concepts in this section are derived from [6]. Let \(m,n\in\mathbb{N}\) such that \(m\leq n\), and let \(p\in[1,\infty]\). Call \(M^{p}(\mathbb{R}^{j})\) the set of probability measures on \(\mathbb{R}^{j}\) having finite \(p\)th moment, and \(M_{d}(\mathbb{R}^{j})\) the set of probability measures on \(\mathbb{R}^{j}\) having density with respect to some \(\sigma\)-finite dominating measure \(\mu\), \(j\in\{m,n\}\). Let \(O(m,n):=\{V\in\mathbb{R}^{m\times n}:VV^{\top}=I_{m}\}\), where \(I_{m}\) denotes the \(m\)-dimensional identity matrix, and for any \(V\in O(m,n)\) and any \(b\in\mathbb{R}^{m}\), define the following function \[\varphi_{V,b}:\mathbb{R}^{n}\to\mathbb{R}^{m},\quad x\mapsto\varphi_{V,b}(x):=Vx+b.\] Let \(\mathcal{B}(\mathbb{R}^{n})\) be the Borel \(\sigma\)-algebra on \(\mathbb{R}^{n}\), and for any \(Q\in\Delta(\mathbb{R}^{n},\mathcal{B}(\mathbb{R}^{n}))\), define \(\varphi_{V,b}(Q):=Q\circ\varphi_{V,b}^{-1}\), the pushforward measure.
Consider then two generic probability measures \(Q,S\) such that \(Q\in M^{p}(\mathbb{R}^{m})\) and \(S\in M^{p}(\mathbb{R}^{n})\), and call \[\Phi_{p}^{+}(Q,n) :=\{\alpha\in M^{p}(\mathbb{R}^{n}):\varphi_{V,b}(\alpha)=Q,\, \text{for some }V\in O(m,n),b\in\mathbb{R}^{m}\},\] \[\Phi_{d}^{+}(Q,n) :=\{\alpha\in M_{d}(\mathbb{R}^{n}):\varphi_{V,b}(\alpha)=Q,\, \text{for some }V\in O(m,n),b\in\mathbb{R}^{m}\},\] \[\Phi^{-}(S,m) :=\{\beta\in M(\mathbb{R}^{m}):\varphi_{V,b}(S)=\beta,\,\text{ for some }V\in O(m,n),b\in\mathbb{R}^{m}\}.\] Recall now the definition of \(p\)-Wasserstein metric between two generic distributions defined on the _same_ Euclidean space. Let \(P_{1},P_{2}\in M^{p}(\mathbb{R}^{n})\), for some \(n\in\mathbb{N}\) and some \(p\in[1,\infty]\). Then, the \(p\)-Wasserstein distance between them is defined as \[W_{p}(P_{1},P_{2}):=\left[\inf_{\gamma\in\Gamma(P_{1},P_{2})}\int_{\mathbb{R} ^{2n}}\|x-y\|_{2}^{p}\gamma(\mathrm{d}(x,y))\right]^{1/p},\] where \(\|\cdot\|_{2}\) denotes the Euclidean distance, \(p=\infty\) is interpreted as the essential supremum, and \(\Gamma(P_{1},P_{2}):=\{\gamma\in\Delta(\mathbb{R}^{2n},\mathcal{B}(\mathbb{R} ^{2n})):\pi_{1}^{n}(\gamma)=P_{2},\pi_{2}^{n}(\gamma)=P_{1}\}\) is the set of couplings between \(P_{1}\) and \(P_{2}\), where \(\pi_{1}^{n}\) is the projection onto the first \(n\) coordinates, and \(\pi_{2}^{n}\) is the projection to the last \(n\) coordinates. Recall then the definition of \(f\)-divergence between two generic distributions defined on the _same_ Euclidean space. Let \(P_{1},P_{2}\in\Delta(\mathbb{R}^{n},\mathcal{B}(\mathbb{R}^{n}))\), for some \(n\in\mathbb{N}\), and assume \(P_{1}\ll P_{2}\). Then, for any convex functional \(f\) on \(\mathbb{R}\) such that \(f(1)=0\), the \(f\)-divergence between \(P_{1}\) and \(P_{2}\) is defined as \[div_{f}(P_{1}\|P_{2}):=\int_{\mathbb{R}^{n}}f\left(\frac{\mathrm{d}P_{1}}{ \mathrm{d}P_{2}}(x)\right)P_{2}(\mathrm{d}x).\] Aside from the Renyi divergence, the \(f\)-divergence includes just about every known divergences as special case [6]. The following are the main results of this section. **Lemma 13**.: Let \(m,n\in\mathbb{N}\) such that \(m\leq n\), and let \(p\in[1,\infty]\) and \(f\) be any convex functional on \(\mathbb{R}\) such that \(f(0)=1\). Consider a generic \(\mathcal{P}\subset M^{p}(\mathbb{R}^{m})\) and \(P^{\prime}\in M^{p}(\mathbb{R}^{n})\). Let \(\Phi_{p}^{+}(\mathcal{P},n):=\cup_{P\in\mathcal{P}}\Phi_{p}^{+}(P,n)\) and \(\Phi_{d}^{+}(\mathcal{P},n):=\cup_{P\in\mathcal{P}}\Phi_{d}^{+}(P,n)\). Define * \(W_{p}^{+}(P,P^{\prime}):=\inf_{\alpha\in\Phi_{p}^{+}(P,n)}W_{p}(\alpha,P^{ \prime})\), for all \(P\in\mathcal{P}\); * \(div_{f}^{+}(P\|P^{\prime}):=\inf_{\alpha\in\Phi_{d}^{+}(P,n)}div_{f}(P\|P^{ \prime})\), for all \(P\in\mathcal{P}\); * \(W_{p}^{+}(\mathcal{P},P^{\prime}):=\inf_{\alpha\in\Phi_{p}^{+}(\mathcal{P},n)} W_{p}(\alpha,P^{\prime})\); * \(div_{f}^{+}(\mathcal{P}\|P^{\prime}):=\inf_{\alpha\in\Phi_{d}^{+}(\mathcal{P},n)} div_{f}(P\|P^{\prime})\); Then, for all \(P\in\mathcal{P}\) the following holds \[W_{p}^{+}(\mathcal{P},P^{\prime})\leq W_{p}^{+}(P,P^{\prime})\quad\text{and} \quad div_{f}^{+}(\mathcal{P}\|P^{\prime})\leq div_{f}^{+}(P\|P^{\prime}).\] **Lemma 14**.: Let \(m,n\in\mathbb{N}\) such that \(m\leq n\), and let \(p\in[1,\infty]\) and \(f\) be any convex functional on \(\mathbb{R}\) such that \(f(0)=1\). Consider a generic \(\mathcal{P}\subset M^{p}(\mathbb{R}^{n})\) and \(P^{\prime}\in M^{p}(\mathbb{R}^{m})\). 
Let \(\Phi^{-}(\mathcal{P},m):=\cup_{P\in\mathcal{P}}\Phi^{-}(P,m)\). Define * \(W_{p}^{-}(P,P^{\prime}):=\inf_{\alpha\in\Phi^{-}(P,m)}W_{p}(\alpha,P^{\prime})\), for all \(P\in\mathcal{P}\); * \(div_{f}^{-}(P\|P^{\prime}):=\inf_{\alpha\in\Phi^{-}(P,m)}div_{f}(P\|P^{\prime})\), for all \(P\in\mathcal{P}\); * \(W_{p}^{-}(\mathcal{P},P^{\prime}):=\inf_{\alpha\in\Phi^{-}(\mathcal{P},m)}W_{p }(\alpha,P^{\prime})\); * \(div_{f}^{-}(\mathcal{P}\|P^{\prime}):=\inf_{\alpha\in\Phi^{-}(\mathcal{P},m)} div_{f}(P\|P^{\prime})\); Then, for all \(P\in\mathcal{P}\) the following holds \[W_{p}^{-}(\mathcal{P},P^{\prime})\leq W_{p}^{-}(P,P^{\prime})\quad\text{and} \quad div_{f}^{-}(\mathcal{P}\|P^{\prime})\leq div_{f}^{-}(P\|P^{\prime}).\] A visual representation of the application of Lemma 14 to IBNNs is given in Figure 8.11 Footnote 11: Notice that in Figure 8 we have that \(\mathscr{L}_{x,\theta}\equiv\mathcal{L}_{x,\theta}\). ## Appendix K Details on Artificial Pancreas Example **Artificial Pancreas Model.** An important factor when designing the controller for an artificial pancreas is to adapt the insulin delivery algorithm to the particular details of the patient. This is because patients display a wide range of variability in their response to insulin, depending on age, BMI, and other physiological parameters. The Bayesian neural network models have 2 hidden layers, with 10 neurons each, for the case study with 4 different seeds. This choice was informed by the experiments in [15]. For the case study with different architectures, we trained BNNs with 4 different widths: \(10,20,30,40\). The horizon length is \(H=10\) time steps, and the prediction horizon is 5 steps into the future. The neural networks were trained for 200 time steps, with a learning rate of 0.001, and batch size of 128 using mean-field variational inference. The training dataset consisted of 28,400 training samples, recorded without meals. **Controller.** We implemented a simple model predictive controller, using an off the shelf implementation of the covariance matrix adaptation evolution strategy (CMA-ES). The MPC planning horizon was \(k=5\), with a fixed seed for the randomized solver. ## Appendix L Further related work Modeling uncertainty has been a longstanding goal of ML/AI research and a variety of approaches have been developed for doing so [20, 42, 26]. Recently, emphasis has been placed on discerning between aleatoric and epistemic uncertainties [44, 30, 27]. In [48], the authors present an IP-based neural network which uses a regression technique based on probability intervals. Contrary to IBNNs, their NN is rooted in the frequentist approach to imprecise probabilities [22]. [37, Sections 2.1, 2.3] focuses on belief functions-based classification methods; IBNNs are a generalization of these methods because they can be used for regression, prediction, and classification, and because they are trained using credal sets, a much less restrictive concept than belief functions. ## Appendix M Proofs Proof of Lemma 2.: If \(\overline{P}(A)=\sup_{P^{\prime}\in\Pi^{\prime}}P^{\prime}(A)\), for all \(A\in\mathcal{F}\), then it is immediate to see that \(\overline{P}(A)=\sup_{P\in\Pi}P(A)=\sup_{P^{\prime}\in\mathrm{ex}\Pi^{\prime }}P^{\prime}(A)=\sup_{P^{\prime}\in\Pi^{\prime}}P^{\prime}(A)\), for all \(A\in\mathcal{F}\), since \(\Pi\subset\Pi^{\prime}\). Suppose now that \(\overline{P}(A)=\sup_{P\in\Pi}P(A)\), for all \(A\in\mathcal{F}\). 
Then, we have that for all \(P\in\Pi\) and all \(A\in\mathcal{F}\), \(P(A)\leq\overline{P}(A)\). Pick now any \(P^{\prime}\in\Pi^{\prime}\). We can write it as \(P^{\prime}=\sum_{j=1}^{k}\alpha_{j}P_{j}\), where \(\alpha_{j}\in[0,1]\), for all \(j\in\{1,\ldots,k\}\), \(\sum_{j=1}^{k}\alpha_{j}=1\), and \(\{P_{j}\}_{j=1}^{k}=\Pi\). Pick then any \(A\in\mathcal{F}\); we have \[P^{\prime}(A)=\sum_{j=1}^{k}\alpha_{j}P_{j}(A)\leq\sum_{j=1}^{k}\alpha_{j} \overline{P}(A)=\overline{P}(A).\] So \(\overline{P}(A)\geq P^{\prime}(A)\). Because this holds for all \(P^{\prime}\in\Pi^{\prime}\) and all \(A\in\mathcal{F}\), the claim is proven. Proof of Lemma 4.: Pick any metric \(d\) on \(\Delta(\Omega,\mathcal{F})\) and any \(\check{P}\in\mathcal{P}\). Because \(\check{P}\) belongs to \(\mathcal{P}\), \(\inf_{P\in\mathcal{P}}d(P,P^{\prime})\) can only be either equal to or smaller than \(d(\check{P},P^{\prime})\). By the definition of \(d(\mathcal{P},P^{\prime})\), if \(\inf_{P\in\mathcal{P}}d(P,P^{\prime})=d(\check{P},P^{\prime})\), then \(d(\mathcal{P},P^{\prime})=d(\check{P},P^{\prime})\). If instead \(\inf_{P\in\mathcal{P}}d(P,P^{\prime})<d(\check{P},P^{\prime})\), then \(d(\mathcal{P},P^{\prime})<d(\check{P},P^{\prime})\). The proof is similar for a generic divergence \(div\) on \(\Delta(\Omega,\mathcal{F})\). Proof of Lemma 5.: Suppose that \(\overline{H}(P)\) is the upper Shannon entropy of \(\Pi^{\prime}\), that is, \(\overline{H}(P)=\sup_{P^{\prime}\in\Pi^{\prime}}H(P^{\prime})\). Then, \(\overline{H}(P)=\sup_{P\in\Pi}H(P)=\sup_{P^{\prime}\in\mathrm{ex}\Pi^{\prime} }H(P^{\prime})=\sup_{P^{\prime}\in\Pi^{\prime}}H(P^{\prime})\) since \(\Pi\subset\Pi^{\prime}\). Suppose now that \(\overline{H}(P)\) is the upper Shannon entropy of \(\Pi\), that is, \[\overline{H}(P)=\sup_{P\in\Pi}H(P)=\sup_{P\in\Pi}\left\{-\int_{\Omega}\log\left[ \frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right]P(\mathrm{d}\omega)\right\},\] where it is assumed that for all \(P_{j}\in\Pi=\{P_{j}\}_{j=1}^{k}\), \(k\in\mathbb{N}\), there exists a \(\sigma\)-finite dominating measure \(\mu\) such that \(p_{j}=\frac{\mathrm{d}P_{j}}{\mathrm{d}\mu}\). Pick now any element \(P^{\prime}\in\Pi^{\prime}\). By assumption, we have that there exists a collection \(\{\alpha_{j}\}_{j=1}^{k}\) of reals such that \(\alpha_{j}\in[0,1]\), for all \(j\in\{1,\ldots,k\}\), \(\sum_{j=1}^{k}\alpha_{j}=1\), and \(P^{\prime}=\sum_{j=1}^{k}\alpha_{j}P_{j}\). 
Then, by the linearity of Radon-Nikodym derivatives, we have that \[p^{\prime}=\frac{\mathrm{d}P^{\prime}}{\mathrm{d}\mu}=\frac{\mathrm{d}\sum_{j= 1}^{k}\alpha_{j}P_{j}}{\mathrm{d}\mu}=\sum_{j=1}^{k}\alpha_{j}\frac{\mathrm{d }P_{j}}{\mathrm{d}\mu}=\sum_{j=1}^{k}\alpha_{j}p_{j}.\] Then, the following holds \[H(P^{\prime}): =-\int_{\Omega}\log p^{\prime}(\omega)P^{\prime}(\mathrm{d} \omega)=\int_{\Omega}(-\log)p^{\prime}(\omega)P^{\prime}(\mathrm{d}\omega)\] \[=\int_{\Omega}(-\log)\left(\sum_{j=1}^{k}\alpha_{j}p_{j}(\omega) \right)\left(\sum_{j=1}^{k}\alpha_{j}P_{j}\right)(\mathrm{d}\omega)\] \[\leq\int_{\Omega}\left[\sum_{j=1}^{k}\alpha_{j}(-\log)p_{j}( \omega)\right]\left(\sum_{j=1}^{k}\alpha_{j}P_{j}\right)(\mathrm{d}\omega) \tag{5}\] \[=\sum_{j=1}^{k}\alpha_{j}\int_{\Omega}\left[\sum_{j=1}^{k}\alpha_ {j}(-\log)\frac{\mathrm{d}P_{j}}{\mathrm{d}\mu}(\omega)\right]P_{j}(\mathrm{d}\omega)\] (6) \[\leq\sup_{P\in\Pi}\sum_{j=1}^{k}\alpha_{j}\int_{\Omega}\left[\sum_ {j=1}^{k}\alpha_{j}(-\log)\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right]P( \mathrm{d}\omega)\] \[=\sup_{P\in\Pi}\int_{\Omega}(-\log)\frac{\mathrm{d}P}{\mathrm{d} \mu}(\omega)P(\mathrm{d}\omega)\] \[=\sup_{P\in\Pi}H(P)=\overline{H}(P),\] where (5) comes from the convexity of function \((-\log)\), while (6) comes from the \(P_{j}\)'s being finite measures (they are probability measures). This implies that \(H(P^{\prime})\leq\overline{H}(P)\), for all \(P^{\prime}\in\Pi^{\prime}\), which concludes the proof in the uncountable \(\Omega\) case. The proof for \(\Omega\) finite or countable is similar (and easier since we do not need to resort to probability density functions), so it is omitted. Proof of Theorem 7.: Assume \(\mathcal{P}^{\mathrm{co}},\mathcal{L}^{\mathrm{co}}_{\theta}\neq\emptyset\) and pick any \(A\in\mathcal{B}\). Recall that we can rewrite the usual Bayes' updating rule as \[P_{D}(A) =\frac{\int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d} \theta)}{\int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)+\int_ {\Theta}L(\theta)\mathbbm{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}\] \[=\frac{1}{1+\frac{\int_{\Theta}L(\theta)\mathbbm{1}_{A^{c}}( \theta)P(\mathrm{d}\theta)}{\int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)P( \mathrm{d}\theta)}},\] which is maximized when \[\frac{\int_{\Theta}L(\theta)\mathbbm{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{ \int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)}\] is minimized. But \[\frac{\int_{\Theta}L(\theta)\mathbbm{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{ \int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)}\geq\frac{ \inf_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\underline{L}(\theta) \mathbbm{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)}{\sup_{P\in\mathcal{P}^{ \mathrm{co}}}\int_{\Theta}\overline{L}(\theta)\mathbbm{1}_{A}(\theta)P( \mathrm{d}\theta)},\] which proves the inequality in (3). Assume now that \(\overline{P}\) is concave. By [54, Lemma 1], we have that there exists \(\check{P}\in\mathcal{P}^{\mathrm{co}}\) such that \[\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}L(\theta)\mathbbm{1}_{A}( \theta)P(\mathrm{d}\theta)=\int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)\check {P}(\mathrm{d}\theta), \tag{7}\] for all \(L\in\mathscr{L}\). 
In addition, by [54, Lemma 4], we have that for all \(X\in\mathcal{X}\) and all \(\epsilon>0\), there exists a non-negative, upper semi-continuous function \(h\leq X\) such that \[\begin{split}\left[\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{ \Theta}X(\theta)P(\mathrm{d}\theta)\right]-\epsilon&<\sup_{P\in \mathcal{P}^{\mathrm{co}}}\int_{\Theta}h(\theta)P(\mathrm{d}\theta)\\ &\leq\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}X(\theta)P( \mathrm{d}\theta).\end{split} \tag{8}\] Let now \(X=\overline{L}\mathbbm{1}_{A}\). Notice that since \(\mathcal{L}^{\mathrm{co}}_{\theta}\) is weak\({}^{\star}\)-compact, by (2) so is \(\mathscr{L}\). This implies that \(\underline{L},\overline{L}\in\mathscr{L}\), since a compact set always contains its boundary, so \(\overline{L}\in\mathcal{X}\) as well, and in turn \(\overline{L}\mathbbm{1}_{A}\in\mathcal{X}\). Fix then any \(L\in\mathscr{L}\) and put \(h=L\mathbbm{1}_{A}\). It is immediate to see that \(h\) is non-negative and upper semi-continuous. Then, by (8), we have that for all \(\epsilon>0\) \[\begin{split}&\left[\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{ \Theta}\overline{L}(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)\right]- \epsilon<\\ &\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}L(\theta) \mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)\leq\sup_{P\in\mathcal{P}^{\mathrm{co }}}\int_{\Theta}\overline{L}(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta). \end{split} \tag{9}\] Combining (7) and(9), we obtain \[\begin{split}&\left[\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{ \Theta}\overline{L}(\theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta)\right]- \epsilon\\ &<\int_{\Theta}L(\theta)\mathbbm{1}_{A}(\theta)\check{P}(\mathrm{ d}\theta)\leq\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline{L}( \theta)\mathbbm{1}_{A}(\theta)P(\mathrm{d}\theta),\end{split} \tag{10}\] for all \(L\in\mathscr{L}\). Pick now any \(\epsilon>0\) and put \[k :=\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline{L}( \theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)\] \[+\inf_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\underline{L}( \theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)>0.\] Choose any \(L\in\mathscr{L}\) and \(\delta\in(0,\epsilon k)\). By (10) we have that \([\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline{L}(\theta) \mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)]-\delta<\int_{\Theta}L(\theta) \mathbb{1}_{A}(\theta)\check{P}(\mathrm{d}\theta)\) and that \([\inf_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\underline{L}(\theta) \mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)]+\delta>\int_{\Theta}L(\theta) \mathbb{1}_{A^{c}}(\theta)\check{P}(\mathrm{d}\theta)\). Recall that \(\mathbf{c}:=\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline{L}( \theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)+\inf_{P\in\mathcal{P}^{\mathrm{ co}}}\int_{\Theta}\underline{L}(\theta)\mathbb{1}_{A^{c}}(\theta)P( \mathrm{d}\theta)\), and define \(\mathbf{d}:=\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)\check{P}(\mathrm{d} \theta)+\int_{\Theta}L(\theta)\mathbb{1}_{A^{c}}(\theta)\check{P}(\mathrm{d}\theta)\). 
Then, \[\check{P}_{D}(A) =\frac{\int_{\Theta}L(\theta)\mathbb{1}_{A}(\theta)\check{P}( \mathrm{d}\theta)}{\mathbf{d}}\] \[\geq\frac{\big{[}\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{ \Theta}\overline{L}(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)\big{]}- \delta}{\mathbf{c}+\delta-\delta}\] \[=\frac{\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline {L}(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{c}}-\frac{ \delta}{k}\] \[>\frac{\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline {L}(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{c}}-\epsilon.\] Since this holds for all \(\epsilon>0\), we have that \[\sup_{P_{D}\in\mathcal{P}^{\mathrm{co}}_{D}}P_{D}(A)=\frac{\sup_{P\in\mathcal{ P}^{\mathrm{co}}}\int_{\Theta}\overline{L}(\theta)\mathbb{1}_{A}(\theta)P( \mathrm{d}\theta)}{\mathbf{c}},\] concluding the proof. Proof of Lemma 8.: In their works [51, 54], the authors show that concave upper probabilities are closed with respect to the generalized Bayes' rule. In particular, this means that, if we let \(\mathbf{b}:=\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}L(\theta) \mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)+\inf_{P\in\mathcal{P}^{\mathrm{co}}} \int_{\Theta}L(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)\), for any fixed \(A\in\mathcal{B}\), if \(\overline{P}\) is concave, then for all \(L\in\mathscr{L}\) \[\overline{P}_{D}(A)=\frac{\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}L( \theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{b}} \tag{11}\] is concave. But since \(\mathcal{L}^{\mathrm{co}}_{\theta}\) is weak\({}^{\star}\)-compact, by (2) so is \(\mathscr{L}\). This implies that \(\underline{L},\overline{L}\in\mathscr{L}\), since a compact set always contains its boundary. Call then \(L^{\prime}=\overline{L}\mathbb{1}_{A}+\underline{L}\mathbb{1}_{A^{c}}\). It is immediate to see that \(L^{\prime}\in\mathscr{L}\). Then, by (11) we have that if we call \(\mathbf{b}^{\prime}:=\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}L^{\prime} (\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)+\inf_{P\in\mathcal{P}^{\mathrm{ co}}}\int_{\Theta}L^{\prime}(\theta)\mathbb{1}_{A^{c}}(\theta)P(\mathrm{d}\theta)\), it follows that \[\overline{P}_{D}(A) =\frac{\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}L^{\prime }(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{b}^{\prime}}\] \[=\frac{\sup_{P\in\mathcal{P}^{\mathrm{co}}}\int_{\Theta}\overline {L}(\theta)\mathbb{1}_{A}(\theta)P(\mathrm{d}\theta)}{\mathbf{c}}\] is concave, concluding the proof. Proof of Theorem 10.: Suppose \(\Omega\) is uncountable. First notice that, since we assumed \(\frac{\mathrm{d}P}{\mathrm{d}\mu}\) to be continuous and bounded for all \(P\in\mathcal{P}\), then so is \(\log\circ\frac{\mathrm{d}P}{\mathrm{d}\mu}\), for all \(P\in\mathcal{P}\), since composing a continuous function with a continuous and bounded one gives us a continuous and bounded function. This entails that the Choquet integrals of \(\log\circ\frac{\mathrm{d}P}{\mathrm{d}\mu}\) with respect to \(\underline{P}\) and \(\overline{P}\) are both well defined. In addition, being continuous and bounded, both \(\frac{\mathrm{d}P}{\mathrm{d}\mu}\) and \(\log\circ\frac{\mathrm{d}P}{\mathrm{d}\mu}\) attain their infima and suprema thanks to Weierstrass' extreme value theorem, for all \(P\in\mathcal{P}\). Hence, all the Choquet integrals used in this proof are well defined. 
Then, we have the following \[\overline{H}(P):=\sup_{P\in\mathcal{P}}H(P)=\sup_{P\in\mathcal{P}}\left(-\int_{\Omega}\log\left[\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right]P(\mathrm{d}\omega)\right)\] \[=\sup_{P\in\mathcal{P}}\int_{\Omega}(-\log)\left[\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right]P(\mathrm{d}\omega)\] \[\leq\sup_{P\in\mathcal{P}}\int_{\Omega}\sup_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}P(\mathrm{d}\omega) \tag{12}\] \[\leq\int_{\Omega}\sup_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}\overline{P}(\mathrm{d}\omega) \tag{13}\] \[=-\int_{\Omega}\inf_{P\in\mathcal{P}}\left\{\log\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}\overline{P}(\mathrm{d}\omega) \tag{14}\] \[=-\int_{\Omega}\log\left(\inf_{P\in\mathcal{P}}\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\overline{P}(\mathrm{d}\omega) \tag{15}\] \[=-\int_{\Omega}\log\left[\underline{\pi}(\omega)\right]\overline{P}(\mathrm{d}\omega)=H(\overline{P}).\] The inequality in (12) is true because for all \(\omega\in\Omega\), \[\sup_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}\geq(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right).\] The inequality in (13) is a property of Choquet integrals taken with respect to upper probabilities [38]. The equality in (14) is true because for a generic function \(f\), we have that \(\sup-f=-\inf f\). Finally, the equality in (15) is true because the logarithm is a strictly increasing function. By [38, Theorem 38], if \(\overline{P}\) is concave, then inequality (13) holds with an equality, and so the bound is tighter. The proof for \(\underline{H}(P)\geq H(\underline{P})\) is similar; we use the facts that

* \(\inf_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}\leq(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\), for all \(\omega\in\Omega\);
* by [38], \[\inf_{P\in\mathcal{P}}\int_{\Omega}\inf_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}P(\mathrm{d}\omega)\geq\int_{\Omega}\inf_{P\in\mathcal{P}}\left\{(-\log)\left(\frac{\mathrm{d}P}{\mathrm{d}\mu}(\omega)\right)\right\}\underline{P}(\mathrm{d}\omega); \tag{16}\]
* for a generic function \(f\), \(\inf-f=-\sup f\);
* by [38, Theorem 38], if \(\underline{P}\) is convex, then (16) holds with an equality.

Suppose now \(\Omega\) is at most countable; in this case, we do not need any assumptions to make the Choquet integrals well defined, since we will not deal with density functions.
The following holds \[\overline{H}(P): =\sup_{P\in\mathcal{P}}H(P)=\sup_{P\in\mathcal{P}}\left(-\sum_{ \omega\in\Omega}P(\{\omega\})\log\left[P(\{\omega\})\right]\right)\] \[=\sup_{P\in\mathcal{P}}\sum_{\omega\in\Omega}P(\{\omega\})(-\log) \left[P(\{\omega\})\right]\] \[\leq\sum_{\omega\in\Omega}\sup_{P\in\mathcal{P}}\left\{P(\{\omega \})(-\log)\left[P(\{\omega\})\right]\right\} \tag{17}\] \[\leq\sum_{\omega\in\Omega}\overline{P}(\{\omega\})\sup_{P\in \mathcal{P}}(-\log)\left[P(\{\omega\})\right]\] (18) \[=-\sum_{\omega\in\Omega}\overline{P}(\{\omega\})\inf_{P\in \mathcal{P}}\log\left[P(\{\omega\})\right]\] (19) \[=-\sum_{\omega\in\Omega}\overline{P}(\{\omega\})\log\left[\inf_{ P\in\mathcal{P}}P(\{\omega\})\right]\] (20) \[=-\sum_{\omega\in\Omega}\overline{P}(\{\omega\})\log\left[ \underline{P}(\{\omega\})\right]=H(\overline{P}).\] The inequality in (17) comes from the well known fact that the sum of the suprema is at least equal to the supremum of the sum. The inequality in (18) comes from the fact that for differentiable functions, the product of the suprema is at least equal to the supremum of the product. The equality in (19) is true because for a generic function \(f\), we have that \(\sup-f=-\inf f\). Finally, the equality in (15) is true because the logarithm is a strictly increasing function. By [38, Theorem 38], if \(\overline{P}\) is concave, then inequality (17) holds with an equality, and so the bound is tighter. The proof for \(\underline{H}(P)\geq H(\underline{P})\) is similar; we use the facts that * the sum of the infima is at most equal to the infimum of the sum; * for differentiable functions, the product of the infima is at most equal to the infimum of the product; * for a generic function \(f\), \(\inf-f=-\sup f\); * by [38, Theorem 38], if \(\underline{P}\) is convex, then \(\inf_{P\in\mathcal{P}}\sum_{\omega\in\Omega}P(\{\omega\})(-\log)\left[P(\{ \omega\})\right]=\sum_{\omega\in\Omega}\inf_{P\in\mathcal{P}}\left\{P(\{ \omega\})(-\log)\left[P(\{\omega\})\right]\right\}\). Proof of Lemma 13.: Fix any \(p\in[1,\infty]\) and pick any \(\check{P}\in\mathcal{P}\). Because \(\Phi_{p}^{+}(\check{P},n)\subset\Phi_{p}^{+}(\mathcal{P},n)\), then \(\inf_{\alpha\in\Phi_{p}^{+}(\mathcal{P},n)}W_{p}(\alpha,P^{\prime})\) can only be either equal or smaller than \(\inf_{\alpha\in\Phi_{p}^{+}(\check{P},n)}W_{p}(\alpha,P^{\prime})\). Now, if \(\inf_{\alpha\in\Phi_{p}^{+}(\mathcal{P},n)}W_{p}(\alpha,P^{\prime})=\inf_{ \alpha\in\Phi_{p}^{+}(\check{P},n)}W_{p}(\alpha,P^{\prime})\), then \(W_{p}^{+}(\mathcal{P},P^{\prime})=W_{p}^{+}(\check{P},P^{\prime})\). If instead \(\inf_{\alpha\in\Phi_{p}^{+}(\mathcal{P},n)}W_{p}(\alpha,P^{\prime})<\inf_{ \alpha\in\Phi_{p}^{+}(\check{P},n)}W_{p}(\alpha,P^{\prime})\), then \(W_{p}^{+}(\mathcal{P},P^{\prime})<W_{p}^{+}(\check{P},P^{\prime})\). This concludes the first part of the proof. Fix then any convex functional \(f\) on \(\mathbb{R}\) such that \(f(0)=1\); the proof is similar for \(f\)-divergences. Proof of Lemma 14.: The proof is very similar to that of Lemma 13.
2301.08018
System on Chip Rejuvenation in the Wake of Persistent Attacks
To cope with the ever increasing threats of dynamic and adaptive persistent attacks, Fault and Intrusion Tolerance (FIT) is being studied at the hardware level to increase critical systems resilience. Based on state-machine replication, FIT is known to be effective if replicas are compromised and fail independently. This requires different ways of diversification at the software and hardware levels. In this paper, we introduce the first hardware-based rejuvenation framework, we call Samsara, that allows for creating new computing cores (on which FIT replicas run) with diverse architectures. This is made possible by taking advantage of the programmable and reconfigurable features of MPSoC with an FPGA. A persistent attack that analyzes and exploits the vulnerability of a core will not be able to exploit it as rejuvenation to a different core architecture is made fast enough. We discuss the feasibility of this design, and we leave the empirical evaluations for future work.
Ahmad T Sheikh, Ali Shoker, Paulo Esteves-Verissimo
2023-01-19T11:41:28Z
http://arxiv.org/abs/2301.08018v1
# System on Chip Rejuvenation in the Wake of Persistent Attacks

###### Abstract

To cope with the ever increasing threats of dynamic and adaptive persistent attacks, Fault and Intrusion Tolerance (FIT) is being studied at the hardware level to increase critical systems resilience. Based on state-machine replication, FIT is known to be effective if replicas are compromised and fail independently. This requires different ways of diversification at the software and hardware levels. In this paper, we introduce the first hardware-based rejuvenation framework, we call _Samsara_, that allows for creating new computing cores (on which FIT replicas run) with diverse architectures. This is made possible by taking advantage of the programmable and reconfigurable features of MPSoC with an FPGA. A persistent attack that analyzes and exploits the vulnerability of a core will not be able to exploit it as rejuvenation to a different core architecture is made fast enough. We discuss the feasibility of this design, and we leave the empirical evaluations for future work.

Rejuvenation, MPSoC, Reconfigurable Computing, Fault and Intrusion Tolerance (FIT), Byzantine Agreement

## 1 Introduction

There is an ever increasing reliance on hardware accelerators including GPU, ASIC, DPU, GPGPU, and FPGA to boost the performance and security of vital embedded digital computing applications, e.g., in the Internet-of-Things services, Cyber-Physical Systems, and automation. More recently, there has been growing interest in accelerating cloud applications by deploying hundreds of cores in cloud FPGA instances [1]. While the performance gains are attributed to eliminating the software stack that underlies applications and to using multicore processing (e.g., as in image processing and AI applications), security is believed to be nearly unbeatable due to programmable immutability in the face of software attacks. This caused a leap both in domain-specific hardware secure elements with isolation and cryptographic capabilities like enclaves, hardware secure modules, vaults, etc. [2, 3, 4, 5], and in hardening modern complex embedded systems based on smaller--and thus easily verifiable--secure abstractions [6, 7]. Unfortunately, the blessing of immutability is lost with the advent of general-purpose programmable hardware computing like GPGPUs and FPGAs. This makes the corresponding systems and applications more vulnerable to intrusion attacks and vulnerabilities in the hardware design itself [8, 9, 10]. This calls for the introduction of new techniques to improve the security and resilience of these programmable hardware accelerators in the face of intrusion attacks and runtime errors. Being the last line of defense in the hardware/software stack, the security of hardware accelerators, both programmable and non-programmable, has been the focus of academia and industry [11, 12]. The approaches followed are mainly: (1) computing in isolation, (2) bus compartmentalization, (3) frequent DRAM refresh and randomization [13], and (4) bitstream encryption [14]. Although often secure and dependable, these techniques fall short of defending against (1) intrusion attacks, which are hard to detect in practice due to the huge design space of the fabric that allows for stealthy logic and kill switches [15], (2) glitches due to design mistakes or dust, aging, and overheating [16, 17], and (3) vulnerabilities in RTOSs [18]. Furthermore, the hardware verification process is a very daunting and costly task, which makes it unaffordable at large-scale production [15].
To circumvent the inefficiency of intrusion detection, a recent approach is to use intrusion masking [19, 20], inspired by state-machine replication (SMR) [21] in Distributed Computing. To tolerate up to \(t\) malicious or anomalous replicas, intrusion masking requires running a number (\(n\)) of concurrent replicas of a process, thus forming a _replicated state machine_, running on multiple processing cores in this case. The outcome of the state machine is the agreement result of a non-compromised \(n-t\) quorum of replicas (covering both the software and the underlying hardware cores). Agreement is achieved by running a variant of intrusion agreement protocols, commonly known as Byzantine Agreement (BA) [22]. Despite resilience to (unintentional) glitches and intrusions without the need to define or know them a priori, there are two noteworthy limitations in this approach. The first limitation is that replicas are assumed to be compromised or to fail independently, i.e., no common vulnerabilities or glitches exist among the replicas. If this assumption does not hold, common mode failures of \(t^{\prime}>t\) replicas can occur, violating the quorum invariant (since \(n-t^{\prime}<n-t\)). The second limitation is that these systems are fixed in size (\(n\)). This makes them not dynamically reactive to variable threat severity levels, where the system is prone to more than \(t\) simultaneous intrusions. Indeed, a persistent adversary that is given long enough time could lead to resource exhaustion [23] in the system, i.e., compromising more than \(t\) replicas.

In this paper, we introduce the first SoC rejuvenation framework, called _Samsara_, for fault and intrusion tolerance (FIT). _Samsara_ makes use of the partially reconfigurable region (PRR) features of an MPSoC's FPGA to spawn new CPU cores, called _tiles_, on which FIT replicas can run. This targets the two main limitations of FIT protocols: (1) it allows for hot-swappable scaling out/in of the number \(n\) of CPU cores (hosting the replicas), and thus adjusting the intrusion resilience, i.e., the number \(t\) of compromised replicas that can be tolerated; and (2) it improves independence of failures and resilience to advanced persistent attacks through the ability to spawn diverse CPU cores, made available by a library of diverse _software_ templates, e.g., offered by several vendors [24]. The idea of rejuvenation is not new in FIT, as it has been suggested in [25] to improve the resilience of software-based systems. Nevertheless, this concept has never been used at the embedded hardware level in a dynamic way. The reason is obvious: the architecture of non-programmable multicore hardware is pre-defined and fixed at the manufacturing phase, making it impossible to diversify these cores in production. Fortunately, reconfigurable hardware like FPGAs has become an enabler for diversifying cores, instantiated from different vendor templates. On the other hand, although the literature [26, 27] has introduced the ability to control thread-level parallelism in multicore systems, this was neither dynamic nor practical, as it required rebooting the entire system, contrary to _Samsara_, which spawns and restarts CPU cores and replicas at runtime. In this work, we introduce a preliminary architecture (in Fig. 3) of the _Samsara_ framework as part of a Xilinx (AMD) Zynq architecture [28]. Nevertheless, the proposed architecture is modularized in a generic way so that it can be integrated into similar MPSoC architectures with FPGAs.
The modularity also supports future extensions for further optimization, like diverse softcores and triggering policies, i.e., periodic, proactive, and reactive. Our future work includes an empirical evaluation to validate the behavior, security, and performance in practice. The rest of the paper is organized as follows: Section 2 discusses the background on Fault and Intrusion Tolerance (FIT) and hardware MPSoC accelerators. The proposed framework, including the threat model, architecture, and workflow, is described in Section 3. We then drive a feasibility discussion in Section 4, and finally conclude in Section 5.

## 2 Background

This section gives a gentle background on Multiprocessor System on Chip (MPSoC), e.g., the Zynq architecture [29], and Fault and Intrusion Tolerance (FIT) to make the understanding of the following concepts smoother.

### _MPSoC Architecture_

An MPSoC is a system on a chip (SoC) which includes multiple microprocessors, often used in embedded devices. A typical modern MPSoC [29] includes multiple fabricated processing cores (called _hardcores_) and programmable hardware, called a Field Programmable Gate Array (FPGA). The latter gives the ability to reconfigure the underlying fabric with arbitrary custom-hardware logic, even after fabrication. A common case is to spawn an emulated processing core, which we call a _tile_, from a _softcore_ template (a _bit file_). The cost of development and deployment is negligible for low-to-medium volumes; however, the per-device cost is high. Application Specific Integrated Circuits (ASICs) are chips with immutable logic circuits. They are preferred for large-scale manufacturing due to their cheaper per-device cost. FPGAs provide a suitable middle ground for design testing and verification before taking a design to the fabrication facility for tapeout. We give an overview of MPSoCs by demonstrating on the well-known Zynq MPSoC architecture proposed by Xilinx in 2016 [29]. The Zynq MPSoC is an updated version of the Zynq-7000 [30] class of devices with additional processing units and improved FPGA fabric. Other FPGA or MPSoC variants [31, 32] are similar to the main Zynq design, despite having some implementation differences.

#### 2.1.1 Zynq Architecture as a Case Study

Figure 1 shows a simplified architecture of the Zynq Ultrascale+ MPSoC device. The Zynq MPSoC devices consist of a Processing System (PS) and a Programmable Logic (PL) on the same chip. The PS consists of various processing elements and is responsible for managing the SoC functionality and security. One important component of the PS is the Application Processing Unit (APU) that houses the Arm processing cores. These cores can be programmed using the C/C++ language. The PS also includes a Graphics Processing Unit (GPU), a Real-Time Processing Unit (RPU), and interfaces to the external and internal peripherals. The Configuration Security Unit (CSU) consists of various blocks; in particular, the _Processor Configuration Access Port (PCAP)_ that can be used by the PS to program the PL dynamically at run-time or statically.

Fig. 1: Simplified Architecture of the Zynq Device [29, 33]

The PL consists of re-configurable FPGA fabric that can be programmed to accelerate arbitrary functions. The PL can be configured directly or by the PS. The program used to configure the PL fabric is called a bitstream or bitfile. Once a PL is programmed, it becomes immutable, i.e., the free and reconfigured regions of the PL can't be further changed without reconfiguring the whole fabric.
The PL consists of a sea of Configurable Logic Blocks (CLBs), DSP blocks, I/O pins, programmable interconnects, Block RAMs (BRAMs), etc. However, it has a limited number of resources compared to the PS. We, therefore, aim to utilize the PL resources frugally and envisage reconfiguring them dynamically through the PS as needed, i.e., via _Partial Reconfiguration (PR)_. PR has been rebranded as _Dynamic Function eXchange (DFX)_ [34] by Xilinx.

#### 2.1.2 Partial Reconfiguration (PR)

FPGA technology provides an opportunity for on-site programming and reconfiguration without sending the design to the fabrication facility for modification(s). PR takes this to the next level by introducing the capability to program certain regions of the PL, called Partially Reconfigurable Regions (PRRs), with partial bitstreams. While a full bitstream configures the FPGA device and sets it up for acceleration, partial bitstreams can be downloaded to the PL to modify only the PRRs without compromising the integrity of applications running on the remaining portions of the PL fabric. Figure 2 shows a simple example of PR, where the FPGA is divided into two dynamic regions. The partial bit files are shown in different colors, and using DFX they can be deployed in any dynamic region during run time. One of the biggest advantages of PR is the ability to time-multiplex the underlying silicon for varying tasks. This results in a smaller design area and lower power consumption - the two most important factors of any digital design. Note, however, that only accelerators orthogonal to each other can be converted to partial bit files for PR. The PL can be programmed either through the PS or internally by the PL. From the PS, the PL is programmed through the PCAP interface, which transfers the full or partial bitstreams from the external DDR memory using the DMA controller. The PL can also be programmed through the native Internal Configuration Access Port (ICAP). The ICAP has a much higher reconfiguration speed than the PCAP.

Fig. 2: Basic PR concept

### _Fault and Intrusion Tolerance_

The concept of Fault and Intrusion Tolerance (FIT) [19] has been proposed and studied thoroughly in the Distributed Systems area to improve a system's state integrity when arbitrary faults (commonly known as Byzantine faults [35]) or intentional intrusions exist. The idea is to replicate a system process in such a way that concurrent replicas form a single _deterministic state machine_ [21] that ensures agreement on a unique final state, i.e., using a Byzantine Agreement protocol [36]. The correctness of FIT hinges on a quorum of correct (i.e., not faulty or malicious) nodes that have the ability to reach agreement/consensus, to maintain a total order of operations. To tolerate a number \(t\) of compromised replicas, a total number of system replicas \(n\), typically \(n=3t+1\), is required [36, 37]. The underlying assumption in FIT protocols is that replicas fail independently, i.e., there are no common vulnerabilities among the replica nodes; if this assumption does not hold, common mode failures can occur. Fault and intrusion independence can be achieved by diversifying the replicas at different levels of abstraction, e.g., using N-version programming techniques [38, 39] to develop structurally different but functionally equivalent pieces of code, or using different combinations of operating systems having non-overlapping fault vulnerabilities [40].
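To make the replica-sizing arithmetic above concrete, the following is a minimal Python sketch; the helper names are illustrative and do not belong to any specific FIT protocol or library.

```python
# Minimal sketch of the FIT replica-sizing arithmetic described above.
# Helper names are illustrative only.

def replicas_needed(t: int, hybrid: bool = False) -> int:
    """n = 3t + 1 in the classical setting; protocols relying on
    trusted-trustworthy (hybrid) components can reduce this to 2t + 1."""
    return 2 * t + 1 if hybrid else 3 * t + 1

def quorum_size(n: int, t: int) -> int:
    """A correct quorum is formed by the n - t replicas that are not compromised."""
    return n - t

def quorum_invariant_holds(t: int, actually_compromised: int) -> bool:
    """Violated by common mode failures, i.e., when more than t replicas fail together."""
    return actually_compromised <= t

if __name__ == "__main__":
    t = 1
    n = replicas_needed(t)                  # 4 replicas tolerate 1 intrusion
    print(n, quorum_size(n, t))             # -> 4 3
    print(quorum_invariant_holds(t, 2))     # 2 > t replicas compromised -> False
```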
## 3 Samsara Fault & Intrusion Tolerance Framework

### _System and Threat Models_

We consider a System on Chip composed of a static processing section, called the _Processing System_ (PS), a reconfigurable section, i.e., an _FPGA Programmable Logic_ (PL), and an application-specific integrated circuit (_ASIC_) chip section. All sections are connected via reliable hardware on the same chip. For better security, _Samsara_'s main logic resides in the ASIC section, and is thus immutable, whereas other optimization modules reside and run in the PS. The PL section is however mutable, and is used to spawn new _tiles_ (from softcore templates) that can host FIT replicas. The number of tiles to be spawned depends on the FIT protocol used [26, 36, 7]. We do not assume a particular FIT protocol in this draft. Tiles inside the PL are connected through a reliable bus that can deliver messages to their destination. (In harsh environments, one may assume an unreliable hardware bus that may drop or modify exchanged messages, e.g., due to external factors; however, we avoid this for simplicity.) The communication between tiles could be synchronous or _partially-synchronous_ [41] (i.e., messages are delivered within an unknown bound). We assume that tiles have a decent level of containment or isolation, such that an anomaly or vulnerability may not affect other cores directly. In addition, we assume a strong and advanced persistent adversary that can attack the entire SoC with the aim of breaking the integrity of the system state (protected by the FIT protocol). Therefore, the adversary may exploit a vulnerability in the tiles (in the PL) or in the software stack above them (usually small operations without an OS stack). Since our focus is on hardware diversity, we assume that the system has some level of software diversity [25]. Hardware diversity is, however, provided by _Samsara_ through creating diverse tiles from different softcore templates (e.g., from different vendors). However, we assume that no more than \(t\) cores can be compromised during the reference time \(T_{a}\). Within this time frame, _Samsara_ is expected to rejuvenate cores. The _Samsara_ modules in the PS are assumed to be encrypted and run in a secure way, e.g., in a TEE enclave (like the _ARM TrustZone_ [3]). The aim is to protect the code from modification (both at-reset and under execution). On the other hand, Samsara's main logic is immutable, being implemented in an _ASIC_. This assumption is reasonable as long as we keep _Samsara's_ code footprint small. Finally, we assume that the adversary can compromise the cryptographic authentication keys of replicas, but cannot break the cryptographic abstractions (e.g., signatures and hash functions) using brute force.

### _Concept_

The idea of rejuvenation in _Samsara_ is based on relaunching diverse computing instances, i.e., tiles, in an FPGA, from available _software bit code_ templates of different internal architectures. The goal is to boost the effectiveness of hardware-based Fault and Intrusion Tolerance (FIT) in embedded systems through diversity and adaptability to different threat levels. However, an FIT protocol is only as effective as its replicas are diverse, i.e., replicas should have different implementations or designs internally, but identical specifications so as to maintain the same functionality and behavior externally.
If replicas (both the hardware and software layers) are identical, there is a high probability that they err together, either (1) because of hitting a common anomaly, or (2) due to a common vulnerability exploited at many replicas at once--if a persistent adversary managed to gain access to these replicas. _Samsara_ focuses on rejuvenation at the hardware level. It helps instantiate diverse tiles from different templates, in a dynamic and hot-swappable way, to increase the diversity of the computing unit even under attacks. In particular, the _Samsara_ framework has the following two main objectives:

1. the ability to launch _same-function-different-build_ computing cores (tiles) to ensure diversity, and
2. the ability to modify the number of tiles at runtime.

### _Architecture_

We present a high-level architecture of _Samsara_ in Figure 3. _Samsara_ is composed of three main parts: (1) the Core Logic (CL), which is an ASIC fabric; (2) the Processing System (PS), which represents a hardcore processing unit; and (3) the Programmable Logic (PL), representing an FPGA. The PS and PL are typical in most MPSoCs, e.g., the Zynq-7000 fabric, whereas the CL is a new ASIC component to be integrated into the fabric. The CL controls the main rejuvenation logic and runs the FIT protocol. Being critical, the CL is proposed to be implemented on an ASIC hardware circuit fabric so as to be immutable, and thus immune to attacks on the reconfigurable section (in the PL). This is possible as long as the CL has a small footprint. The CL has access to the _Softcore lib_, which represents a set of diverse CPU core templates in the form of bit files stored securely in external memory. In addition, the CL has access to the _Monitor_ and _Trigger_ auxiliary modules in the PS (explained next in detail). These modules are used to extend _Samsara_'s responsiveness capabilities in a modular way. The PL is the _workplace_ section hosting the created tiles, in which embedded operations are executed, and thus where rejuvenation will take place. In particular, the PL is divided into _static_ and _dynamic_ regions. The _static_ region holds immutable logic with APIs that glue the PS and PL together. The _dynamic_ region consists of reconfigurable blocks (pblocks) that can be reprogrammed during run time.

### _Workflow_

A typical execution workflow in _Samsara_ occurs as follows. An embedded application (whose state integrity is critical) invokes an operation by calling the CL. The latter runs the operation in a replicated way using the implemented FIT protocol. According to the number of replicas \(n\) the FIT protocol needs, the CL instantiates \(n\) corresponding tiles in the PL. This is done by selecting random (and thus diverse) softcore templates from the Softcore lib. A simple FIT protocol can have the CL send each operation to the different tiles for concurrent execution, and then collect their outcomes for comparison. The agreed (matching) outcome is finally returned to the calling application. Different FIT protocols with different complexities and guarantees can be used; however, a specific FIT protocol is out of the scope of this paper. To diversify cores, the CL selects a running tile at random to rejuvenate. The default rejuvenation frequency \(\mu\in[0,T]\) is random, in order to obfuscate the pattern from the adversary. The rejuvenation protocol is inspired by software rejuvenation in [42, 43]. However, we envision possible optimizations in such a hardware-based setting, which we plan to address in the future.
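As an illustration of this workflow, the sketch below simulates the dispatch-and-compare behavior of the CL in software. All names (SoftcoreTemplate, Tile, CoreLogic) are our own assumptions and not _Samsara_'s actual interfaces; tile execution is simulated rather than run on spawned softcores in the PL.

```python
# Minimal sketch of the workflow described above; names are illustrative only.
import random
from collections import Counter
from dataclasses import dataclass

@dataclass
class SoftcoreTemplate:
    name: str      # e.g., a vendor-provided softcore identifier
    bitfile: bytes # partial bitstream that would configure one PRR

@dataclass
class Tile:
    prr_slot: int
    template: SoftcoreTemplate

class CoreLogic:
    """Illustrative Core Logic: spawns n diverse tiles and runs operations replicated."""

    def __init__(self, softcore_lib, n, t):
        self.lib, self.n, self.t = list(softcore_lib), n, t
        # Spawn n tiles from randomly chosen (and thus diverse) templates.
        self.tiles = [Tile(i, random.choice(self.lib)) for i in range(n)]

    def _run_on_tile(self, tile, operation, payload):
        return operation(payload)   # placeholder for execution on the softcore

    def invoke(self, operation, payload):
        outcomes = [self._run_on_tile(tile, operation, payload) for tile in self.tiles]
        value, votes = Counter(outcomes).most_common(1)[0]
        if votes >= self.n - self.t:   # agreed outcome backed by an n - t quorum
            return value
        raise RuntimeError("no matching quorum among replicas")

# Usage: tolerate t = 1 compromised tile with n = 4 diverse tiles.
lib = [SoftcoreTemplate(f"softcore_{i}", b"") for i in range(4)]
cl = CoreLogic(lib, n=4, t=1)
print(cl.invoke(lambda x: x * 2, 21))   # -> 42
```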
Fig. 3: The architecture of the _Samsara_ Rejuvenation Fault and Intrusion Tolerant Framework on a general purpose _Zynq_ board architecture [28]. The four tiles are spawned from four known softcores [24] to demonstrate diversity.

In a nutshell, a rejuvenation protocol includes the following steps:

* spawning a new tile in the PL using a random softcore template from the _Softcore lib_;
* performing a state transfer from other tiles to initiate the new one;
* destroying the _retired_ tile that is subject to rejuvenation; and
* launching the newly created tile that replaces the retired one.

### _Optimizations_

The default policies for choosing a tile to retire and a softcore to use in spawning a new tile are kept random for simplicity and to maintain a small code footprint, and are thus implemented in the CL on the ASIC fabric. However, it is possible to follow more sophisticated policies specified in the _Trigger_ module and with the use of the _Monitor_ module, both encrypted and executed in the PS section. The monitor plays the role of a _watchdog_: it uses heuristics and/or other systems to detect faulty, malicious, or abnormal behaviors, upon which rejuvenation is triggered. The following triggering policies are of interest:

* Periodic: targets are selected in a periodic manner.
* Reactive: this is an event-based policy that is triggered by the _Monitor_ module, e.g., if an attack or anomaly is detected.
* Proactive: this is the most advanced and sophisticated policy, which triggers rejuvenation even before a bad event (e.g., an attack) happens. This may use heuristics or AI-based intelligence.

Another situation where the monitor can be useful is for increasing the number \(n\) of tiles in an FIT protocol when a higher threat level is detected, i.e., when \(t^{\prime}>t\) compromised replicas need to be tolerated. A higher threat can be caused by an attacker that may have access to the softcore code (sometimes open-sourced). With enough time and resources, the (persistent) attacker can possibly identify softcore vulnerabilities to be exploited when the softcore is instantiated as a tile. In this case, _Samsara_ can spawn new tiles in the PL section without retiring any other tile. Again, the newly created cores are cloned from diverse templates in the Softcore lib. We will define these details in future work.

## 4 Feasibility Discussion

Fault and Intrusion Tolerance is being increasingly used to build resilient systems, especially with the advent of _Blockchain_. It was believed that FIT SMR protocols are overkill at the hardware level [26]. Indeed, FIT protocols are commonly known to be computationally demanding due to (1) the quadratic (\(O(n^{2})\)) number of messages exchanged between the replicas to reach consensus, and (2) the extensive use of cryptography. However, with the advent of powerful hardware-based components that allow the use of trusted-trustworthy abstractions (i.e., hybrids) [6, 7], cryptographic computation is becoming more efficient, and the spatial complexity can be reduced to \(n=2t+1\), eventually requiring a fairly simpler message exchange between replicas [7, 20]. Consequently, FIT is being increasingly explored and studied in multi-core on-chip embedded systems, as we do here. Similarly, rejuvenation has been proposed to diversify software as part of FIT SMR protocols [42]. However, it was not possible for FIT protocols to benefit from hardware rejuvenation [26] in classical multi-core architectures.
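The rejuvenation steps and triggering policies described in Section 3 can be summarized in the following sketch. It reuses the illustrative CoreLogic/Tile names of the earlier workflow sketch, and the stubs stand in for the actual partial-reconfiguration and state-transfer machinery, which is not specified here.

```python
# Illustrative rejuvenation loop for the steps listed above (spawn, state
# transfer, destroy, launch); names and stubs are assumptions, not Samsara's API.
import random

def rejuvenate_one(cl: "CoreLogic") -> None:
    victim_idx = random.randrange(len(cl.tiles))                 # default: random target
    new_tile = Tile(cl.tiles[victim_idx].prr_slot,
                    random.choice(cl.lib))                        # 1) spawn from a random template
    state = transfer_state(cl.tiles, exclude=victim_idx)          # 2) state transfer from other tiles
    destroy(cl.tiles[victim_idx])                                 # 3) destroy the retired tile
    launch(new_tile, state)                                       # 4) launch the replacement
    cl.tiles[victim_idx] = new_tile

def transfer_state(tiles, exclude):
    # Stub: would collect agreed state from the non-retired tiles.
    return {}

def destroy(tile):
    # Stub: would tear down the PRR hosting the tile.
    pass

def launch(tile, state):
    # Stub: would configure the PRR with the new partial bitstream and load the state.
    pass

def should_rejuvenate(now, last, mu, monitor_alert=False):
    """Triggering policies: periodic/random interval mu, or reactive/proactive
    when the Monitor module raises an alert."""
    return monitor_alert or (now - last) >= mu
```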
Thanks to reconfigurable/reprogrammable hardware (e.g., brought by MPSoCs with FPGAs [29]), hardware rejuvenation has become practical. Beyond that, FPGAs provide fine-grained control for restarting or rejuvenating a given core in a hot-swappable fashion, i.e., without the need to restart the entire system. This is subject to the availability of different core templates that are compatible with a single MPSoC platform. One of the security challenges of _Samsara_ is that both the controller side (Core Logic) and the programmable side are on the same chip. This could allow an adversary to compromise the controller to hamper rejuvenation, and then attack the tiles easily. For this, _Samsara's_ Core Logic (CL) must be highly protected, using an on-chip ASIC hardware implementation. Our challenge is to keep the CL small so that it can be implemented in hardware efficiently. For this, our design separates out the optimization modules (Monitor and Trigger) due to their overhead. In the worst case, the CL itself is protected and can use the default rejuvenation configurations to make the tiles more resilient to intrusion attacks. Finally, this conceptual analysis requires empirical experimentation to validate the envisioned correctness and performance. We are currently implementing a _Samsara_ Proof-of-Concept prototype on a Xilinx Zynq board [29]. Nevertheless, our goal is to keep the _Samsara_ architecture generic, and thus support other SoCs. Among the interesting metrics to measure are code footprint, processing overhead, FIT throughput and latency, and rejuvenation time. We also aim at adjusting the protocols used and providing corresponding correctness and security proofs.

## 5 Conclusion

We have introduced _Samsara_, the first SoC rejuvenation framework for Fault and Intrusion Tolerance (FIT) diversification. _Samsara_ leverages the programmable computational resources of an MPSoC to spawn new diverse computing cores (tiles) on which FIT replicas can run. Spawned tiles are made diverse by instantiating them from different softcore templates, likely provided by different vendors. The Core Logic of the framework is immutable and protected by hardware, whereas the tiles make use of rejuvenation to diversify the computing logic underlying the operations. We provided a conceptual analysis showing that, based on our preliminary framework, the implementation of rejuvenation on hardware is feasible. In the meantime, we are conducting an implementation and empirical evaluation to validate our analysis in future papers.
2302.11019
Using Semantic Information for Defining and Detecting OOD Inputs
As machine learning models continue to achieve impressive performance across different tasks, the importance of effective anomaly detection for such models has increased as well. It is common knowledge that even well-trained models lose their ability to function effectively on out-of-distribution inputs. Thus, out-of-distribution (OOD) detection has received some attention recently. In the vast majority of cases, it uses the distribution estimated by the training dataset for OOD detection. We demonstrate that the current detectors inherit the biases in the training dataset, unfortunately. This is a serious impediment, and can potentially restrict the utility of the trained model. This can render the current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information (e.g. training class labels). To remedy this situation, we begin by defining what should ideally be treated as an OOD, by connecting inputs with their semantic information content. We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets and show that it not only reduces false alarms but also significantly improves the detection of OOD inputs with spurious features from the training data.
Ramneet Kaur, Xiayan Ji, Souradeep Dutta, Michele Caprio, Yahan Yang, Elena Bernardis, Oleg Sokolsky, Insup Lee
2023-02-21T21:31:20Z
http://arxiv.org/abs/2302.11019v1
# Using Semantic Information for Defining and Detecting OOD Inputs

###### Abstract

As machine learning models continue to achieve impressive performance across different tasks, the importance of effective anomaly detection for such models has increased as well. It is common knowledge that even well-trained models lose their ability to function effectively on out-of-distribution inputs. Thus, out-of-distribution (OOD) detection has received some attention recently. In the vast majority of cases, it uses the distribution estimated by the training dataset for OOD detection. We demonstrate that the current detectors inherit the biases in the training dataset, unfortunately. This is a serious impediment, and can potentially restrict the utility of the trained model. This can render the current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information (e.g. training class labels). To remedy this situation, we begin by defining what should ideally be treated as an OOD, by connecting inputs with their semantic information content. We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets and show that it not only reduces false alarms but also significantly improves the detection of OOD inputs with spurious features from the training data.

## 1 Introduction

Machine learning models have achieved remarkable success in accomplishing different tasks across modalities such as image classification (Gkioxari et al., 2015), speech recognition (Hannun et al., 2014), and natural language processing (Majumder et al., 2017). It is, however, known that these models are unreliable on samples that are less likely to occur according to the model's _in-distribution_ estimated from its training data (Hendrycks and Gimpel, 2016). Detection of these _out-of-distribution (OOD)_ inputs is important for the deployment of machine learning models in safety-critical domains such as autonomous driving (Bojarski et al., 2016) and medical diagnosis (De Fauw et al., 2018). OOD detection has, therefore, gained a lot of attention recently (Kaur et al., 2022; Liang et al., 2017; Lee et al., 2018; Hendrycks et al., 2019; Kaur et al., 2021). Even though there is sufficient interest in OOD detection, to the best of our knowledge, it is unclear what precisely constitutes an OOD input. Existing detectors estimate a distribution that is tied to the training dataset, and flag inputs as OOD when the assigned probability according to the estimated distribution is low. The standard drill involves treating a set of inputs drawn from a dataset such as CIFAR10 as in-distribution, and detecting as OOD those inputs that are drawn from a different dataset such as SVHN (Hendrycks and Gimpel, 2016; Kaur et al., 2021; Lee et al., 2018). Such external inputs (from SVHN) have class labels that do not overlap with the training class labels (from CIFAR10). With this in mind, we propose to treat the _intended distribution_ of images as the in-distribution, i.e., images containing semantic information relevant to the training classes irrespective of the background (or spurious) information.1 Inputs deficient in semantic information w.r.t. any training class should be detected as OOD.

Footnote 1: We will be using the terms “in-distribution” and “intended distribution” interchangeably in the paper.

Domain generalization or robustness to spurious features in an input is a desired property and a requirement for machine learning models to be put to use (Wan et al., 2022; Liu et al., 2022).
For instance, as shown in Figure 1, a classifier trained to classify birds into {sitting birds, flying birds} is expected to generalize well beyond the training data of birds sitting on trees and birds flying in the sky. Inputs from the intended distribution of sitting birds refer to birds sitting on trees, snow, or water. Performing OOD detection on inputs with a class label in the training classes but outside the training distribution, such as birds sitting on snow, restricts the utility of the model. Ming et al. [2022] show that the existing detectors are unfortunately tied to the sampling bias of the training dataset. This results in low detection on OOD inputs with spurious features such as background, color, etc. from the training data. The authors report low detection performance of existing detectors on two datasets: 1) Birds [Sagawa et al., 2019] with class labels in {waterbirds, landbirds}, and 2) CelebA [Liu et al., 2015] with class labels in {grey hair, non-grey hair}. Table 1 shows these results for OOD images without birds but containing water (or land) as a spurious feature for waterbirds (or landbirds), and OOD images of bald males with male as a spurious feature for grey hair; examples of these images are shown in Figure 2. This means that even though the classifier might be able to generalize better, OOD detectors themselves can stifle its utility.

The contributions of this paper can be summarized as:

**1. Demystifying OOD Inputs:** Even though there is sufficient interest in OOD detection, to the best of our knowledge, it is unclear what precisely constitutes an OOD input. We propose to model the in-distribution for machine learning classifiers as the _intended set of images_ containing semantic information relevant to the training classes. As a consequence, we define as OOD those inputs whose semantically relevant part is given low probability by the intended distribution.

**2. OOD Detection based on the Intended Set of Inputs:** We propose two distinct ways of estimating the intended set of images for modeling the in-distribution for a classifier. The first one leverages a machine learning model in the presence of a large amount of labeled training data, while the second one utilizes the available expert guidance. We propose two OOD detection algorithms based on the two ways of estimating the intended distribution.

**3. Experimental Evaluation:** (a) Table 1 shows that we achieve a significant improvement of \(57.22\%\) and \(45.64\%\) on OOD detection for Birds and CelebA, respectively, with the proposed OOD detection Algorithm 2 that uses a machine learning model for estimating the intended set. (b) Our experiments on the COCO Lin et al. [2014] and MNIST LeCun et al. [1998] datasets show that the existing detectors overfit to the training data for estimating the in-distribution, resulting in (i) false OOD detection on inputs with the same (training class) labels but from a different dataset, and (ii) low OOD detection on inputs whose classes are absent from the set of training classes. This low detection is due to the sensitivity of existing detectors to the spurious features from the training data. The proposed algorithms not only significantly reduce false alarms, but they also improve OOD detection (\(\geq 20\%\)) on inputs with spurious features from the training data.

**Related Work.** OOD detection has been extensively studied, and detectors with OOD scores based on the difference in statistical, geometrical, or topological properties of in-distribution and OOD inputs have been proposed.
These detectors can be classified into three categories: supervised [Lee et al., 2018, Kaur et al., 2021a], self-supervised [Hendrycks et al., 2019, Kaur et al., 2022a], and unsupervised [Hendrycks and Gimpel, 2016, Liang et al., 2017]. Unsupervised approaches can function without an OOD dataset for training the detector, while supervised approaches do require one.

Figure 1: The intended distribution has a much higher variability in terms of the samples it covers, when compared to the training distribution. The classifier trained to classify birds in {sitting birds, flying birds} is expected to generalize well for the intended distribution, which has birds sitting on trees, snow or water. OOD inputs are the ones which are unlikely to occur from the point of the intended distribution \(\mathcal{D}_{I}\); e.g. images without birds in them such as an image of a dog or a tree without any bird on it, and poor quality images such as blurry or dark which are difficult to label.

Self-supervised approaches require a self-labeled dataset for training the detector. This dataset is created by applying transformations to the training data and labeling the transformed data with the applied transformation. The proposed OOD detection algorithms in this paper are unsupervised in nature. Ming et al. (2022) show that the existing detectors perform poorly on OOD inputs with spurious features from the training data. They, however, do not propose a solution for fixing the existing detectors. Domain generalization Zhou et al. (2022) is an active research area where efforts are made towards the generalizability of a machine learning classifier to its classes beyond the training data. As shown in Figure 1, it asks the question of whether a classifier trained on images of birds on trees would work on images of birds on water. Domain-invariant representation learning Li et al. (2018), training data augmentation with higher variability Zhou et al. (2020), etc. have been proposed to solve this problem. With the intended distribution of images containing (training) class-specific information for a classifier, we propose to treat inputs that do not contain this information as OOD. There has been great interest in making use of semantic segmentation networks in the scene understanding problem Mo et al. (2022), one of the core problems in computer vision with applications, e.g., to autonomous driving, video surveillance, and robot perception Garcia-Garcia et al. (2017). Recently, the use of segmentation networks was proposed to train machine learning classifiers with a handful of training examples Mojab et al. (2021). We make use of segmentation networks as the machine learning model for estimating the intended distribution for OOD detection.

## 2 Problem Formulation and Methodology

### Problem Formulation

Let \((\mathcal{X},\mathcal{A}_{\mathcal{X}})\) be the measurable space from which images are sampled. We assume that \(\mathcal{X}\) is an at most countable subset of a (possibly very high-dimensional) Euclidean space \(\mathbb{R}^{h\times w\times 3}\) whose dimension depends on the size of the images. Here, \(h,w\) refer to the height and width of the image, and \(3\) stands for the red, green, and blue channels, making the elements of \(\mathcal{X}\) colored images.
Let \(\Delta(\mathcal{X},\mathcal{A}_{\mathcal{X}})\) denote the space of probability measures on \((\mathcal{X},\mathcal{A}_{\mathcal{X}})\), and we consider a candidate distribution \(\mathcal{D}\in\Delta(\mathcal{X},\mathcal{A}_{\mathcal{X}})\).2 Let \(X_{1},\dots,X_{n}\sim\mathcal{D}\) be iid, whose realizations \(x_{1},\dots,x_{n}\) form the training set \(\mathcal{S}\) for a machine learning classifier; \(\mathcal{S}:=\{x_{1},\dots,x_{n}\}\). We assume that the support of \(\mathcal{D}\), written \(\text{supp}(\mathcal{D})\), is a proper subset of \(\mathcal{X}\), that is, \(\text{supp}(\mathcal{D})\subsetneq\mathcal{X}\).

Footnote 2: As no confusion arises, we do not distinguish between probability measure and probability distribution.

Now, we introduce what we call the _intended distribution_, i.e. a probability measure \(\mathcal{D}_{I}\in\Delta(\mathcal{X},\mathcal{A}_{\mathcal{X}})\) whose support \(\text{supp}(\mathcal{D}_{I})\) is a proper superset of \(\text{supp}(\mathcal{D})\). We can write \(\text{supp}(\mathcal{D})\subsetneq\text{supp}(\mathcal{D}_{I})\subset \mathcal{X}\). The intended distribution \(\mathcal{D}_{I}\) is needed because it assigns non-zero probability to the set of images which are likely to be seen by the classifier in the real world. For instance, in the case of the standard birds dataset (Figure 1), the training distribution \(\mathcal{D}\) captures images of birds on trees, but the intended distribution \(\mathcal{D}_{I}\) for the classifier can refer to birds on trees, water, or snow. We define the _intended set of inputs_ for a classifier as:

\[\mathcal{X}_{I}:=\{x\in\mathcal{X}:\mathcal{D}_{I}(x)>\epsilon\}\subset\text{ supp}(\mathcal{D}_{I}),\]

for some \(\epsilon>0\). The measurable space of (class) labels \((\mathcal{Y},\mathcal{A}_{\mathcal{Y}})\) is assumed to be at most countable. We ask the following OOD detection question: _Given the training set \(\mathcal{S}\) for a machine learning classifier, can we build an OOD detector that is able to detect inputs that lie far from the ones in \(\mathcal{X}_{I}\)_?

\begin{table} \begin{tabular}{c|c|c} \hline Detector & OOD for Birds & OOD for CelebA \\ \hline Baseline (2016) & 25.32 & 16.30 \\ ODIN (2017) & 22.75 & 18.93 \\ Mahala (2018) & 30.65 & 21.25 \\ Energy (2020) & 25.78 & 28.72 \\ Gram (2020) & 41.75 & 18.79 \\ Ours & **98.97** & **74.36** \\ \hline \end{tabular} \end{table}

Table 1: Low OOD detection by existing detectors on OOD inputs with spurious features from the training data of the Birds dataset with class labels in {waterbirds, landbirds}, and the CelebA dataset with class labels in {grey hair, non-grey hair} Ming et al. (2022). Some examples of in-distribution and spurious OOD images for these datasets are shown in Figure 2. **Our Algorithm 2 significantly improves detection on these OOD inputs deficient in semantic information relevant to any training class: birds for the Birds dataset and hair for the CelebA dataset.**

Figure 2: Images from the Birds and CelebA datasets (left). OOD for the Birds dataset are images without birds, and OOD for the CelebA dataset are images of people without hair (right).

### Methodology

For any image \(x\in\mathcal{X}\), we use \(\text{rel}(x)\) to denote its semantically relevant part. We propose two approaches to answer the OOD detection question.
The first one estimates the intended distribution \(\mathcal{D}_{I}\) with an empirical distribution \(\hat{\mathcal{D}}_{I}\); then, if for a given input \(t\in\mathcal{X}\) we have that \(\hat{\mathcal{D}}_{I}(\text{rel}(t))\) is "too low", we say that \(t\) is OOD. The second one is to build a function that measures the similarity between \(\text{rel}(t)\) and the elements of \(\mathcal{X}_{I}\); then, if the similarity is "too low", we say that \(t\) is OOD. Both these methods aim to detect as OOD an image whose (associated class) label does not belong to \(\mathcal{Y}\). Algorithm 1 subsumes the two approaches in the generic function \(OOD\_Detection_{\hat{\mathcal{X}}_{I}}\) that depends on the approximation \(\hat{\mathcal{X}}_{I}\) of \(\mathcal{X}_{I}\) via the training set \(\mathcal{S}\). In the next section, we delve into the details of the two approaches.

``` Input: Test datapoint \(t\in\mathcal{X}\) Parameters: Training set \(\mathcal{S}\), detection threshold \(\epsilon\) Output: "1" if \(t\) is detected as OOD; "0" otherwise \(\hat{\mathcal{X}}_{I}\) = estimated \(\mathcal{X}_{I}\) from \(\mathcal{S}\) \(OOD\_Detection_{\hat{\mathcal{X}}_{I}}:\mathcal{X}\rightarrow\mathbb{R}_{+}\) {Builds the intended distribution} Return 1 if \(OOD\_Detection_{\hat{\mathcal{X}}_{I}}(t)<\epsilon\), 0 otherwise ```

**Algorithm 1** Detecting OOD Inputs as Out-of-Intended Distribution Inputs

## 3 Using semantically relevant information for OOD detection

In this section, we explore the two methods described above. We first present our estimate of \(\mathcal{X}_{I}\),

\[\hat{\mathcal{X}}_{I}:=\{x^{\prime}\in\mathbb{R}^{h\times w\times 3}:x^{\prime}= \mathcal{N}(x),x\in\mathcal{S}\}, \tag{1}\]

where \(\mathcal{N}\) is a generic segmentation map. \(\hat{\mathcal{X}}_{I}\) is a good estimator of \(\mathcal{X}_{I}\) because it preserves the semantically relevant information that is required for classification.

### Out-of-Intended Distribution with Machine Learning Model

Let the _oracle classifier_ be a map \(C:\mathcal{X}\rightarrow\mathcal{Y},\;x\mapsto C(x)=y\in\mathcal{Y}\), which produces the ground truth labels. Let

\[\mathscr{F}:=\{F:\mathcal{X}\rightarrow\mathcal{X}_{I}\cup\{\bot\}\mid x\mapsto F(x)=s,\;C(x)=C(s)\},\]

where \(\bot\) denotes an empty image, and \(s\) corresponds to the relevant part \(\text{rel}(x)\) of an input image \(x\). The elements of \(\mathscr{F}\) are the maps that extract the relevant parts of an image while preserving the label assigned by the oracle classifier \(C\) to the original image. We require the set \(\mathscr{F}\) to satisfy the following two assumptions. Our **first assumption** is that \(\mathscr{F}\neq\emptyset\). This is reasonable since it is almost always the case that we can find a map that extracts the relevant part of an image without losing its label. Now, for any \(F\in\mathscr{F}\), let us compute \(F(X_{1}),\ldots,F(X_{n})\), where \(X_{1},\ldots,X_{n}\sim\mathcal{D}\) (the training distribution) are iid. \(F(X_{1}),\ldots,F(X_{n})\) are iid random variables distributed according to some distribution \(\mathcal{D}^{\prime}\) on \(\mathcal{X}_{I}\cup\{\bot\}\).3

Footnote 3: We tacitly assume that \(F\) does not induce correlation; this will always be the case.

Our **second assumption** is that
\[d_{K}(\mathcal{D}^{\prime},\mathcal{D}_{I})\leq\delta,\quad\text{ for some }\delta\geq 0, \tag{2}\]

where \(d_{K}(\mathcal{D}^{\prime},\mathcal{D}_{I}):=\sup_{x\in\mathcal{X}_{I}}| \mathcal{D}^{\prime}(x)-\mathcal{D}_{I}(x)|\).4 The second assumption is equivalent to saying that we can find an \(F\in\mathscr{F}\) such that \(F(X_{1}),\ldots,F(X_{n})\sim\mathcal{D}^{\prime}\) are iid, and (2) holds. This too is reasonable: it states that first sampling an image from \(\mathcal{D}\) and then extracting its intended part via \(F\) is "sufficiently similar" to directly sampling an image from the intended distribution. Then, we have the following.

Footnote 4: Here, for simplicity, let \(\text{supp}(\mathcal{D}_{I})=\mathcal{X}_{I}\cup\{\bot\}\).

**Theorem 1**.: _Let \(\mathcal{D}_{I}\) be defined as above. Then, there exists an estimator \(\hat{\mathcal{D}}_{I}\equiv\hat{\mathcal{D}}_{I}(n)\) of \(\mathcal{D}_{I}\) depending on the size \(n\) of the training set \(\mathcal{S}\) such that the following holds almost surely_

\[\lim_{n\rightarrow\infty}d_{K}(\hat{\mathcal{D}}_{I}(n),\mathcal{D}_{I})\leq\delta.\]

We provide the proof of the theorem in the supplementary material, where we show that \(\hat{\mathcal{D}}_{I}\) is the empirical measure for \(F(X_{1}),\ldots,F(X_{n})\). Theorem 1 states that as the size of the training set increases, the distance between the estimated intended distribution \(\hat{\mathcal{D}}_{I}\) and the true intended distribution \(\mathcal{D}_{I}\) converges to a scalar that is bounded by \(\delta\). If the sampling process for the training data were perfect, then first sampling according to \(\mathcal{D}\), and then extracting the intended part via \(F\), would give exactly the same result as sampling directly from \(\mathcal{D}_{I}\), and \(\delta\) would be equal to \(0\). The fact that \(\delta\) is positive accounts for the error due to the shortfall of the algorithm which estimates the intended distribution from the training data. In light of Theorem 1, which shows that - under two natural assumptions - the distance between \(\mathcal{D}_{I}\) and \(\hat{\mathcal{D}}_{I}\) is bounded, we propose to perform OOD detection using \(\hat{\mathcal{D}}_{I}\). In scenarios with a large amount of labeled data available, we propose to use semantic segmentation networks as \(\mathcal{N}\) in (1); we denote semantic segmentation networks by \(\mathcal{N}_{s}\) to distinguish them from the expert-guided procedure that we introduce in section 3.2. The output of a segmentation network \(\mathcal{N}_{s}(x)\), called the _segmentation map_, is the classification of each pixel in the image into either background (semantically irrelevant information) or one of the class labels in \(\mathcal{Y}\). We propose \(\hat{\mathcal{X}}_{I}\) as the set of segmentation maps on (the elements of) \(\mathcal{S}\), where class information is labeled by the segmentation network, i.e., \(\hat{\mathcal{X}}_{I}:=\{x^{\prime}:x^{\prime}=\mathcal{N}_{s}(x),x\in \mathcal{S}\}\). Here, we call the relevant part of an image the _foreground segment_: the set of pixels in a segmentation map labeled with a class in \(\mathcal{Y}\) by \(\mathcal{N}_{s}\). The segmentation algorithm \(\mathcal{N}_{s}\) filters the input with the class-specific semantic information in \(\mathcal{Y}\); since it extracts the relevant part from an input image \(x\), we can see \(\mathcal{N}_{s}\) as a map \(F\in\mathscr{F}\).
Since \(\mathcal{N}_{s}\) extracts class-specific information from an input image without losing its class label [1], we see how the first assumption is satisfied. Then, recall that \(X_{1},\ldots,X_{n}\sim\mathcal{D}\) iid, and so \(\mathcal{N}_{s}(X_{1}),\ldots,\mathcal{N}_{s}(X_{n})\sim\mathcal{D}^{\prime}\) iid. Distribution \(\mathcal{D}^{\prime}\) satisfies the second assumption, as it has been shown that segmentation networks are quite effective at extracting the intended or class-specific foreground data from the training dataset [1]. More rigorously, it is a function \(\mathcal{N}_{s}:\mathcal{X}\rightarrow\mathbb{R}^{h\times w\times(|\mathcal{Y }|+1)}\). It only keeps the height and width of the image, losing the color information; the third dimension is given by a vector of dimension \(|\mathcal{Y}|+1\), where \(|\mathcal{Y}|\) is the number of (class) labels, and the extra dimension captures an "extra label" associated with the background. Its entries are real numbers between \(0\) and \(1\) that sum up to \(1\); they represent the probability of each pixel in an image belonging to a (class) label \(y\in\mathcal{Y}\) or to the "extra label".

OOD Detection Scores: Classification-based detection scores [1, 10] can be put to use in order to perform OOD detection on the foreground segment of an input image. Similar to the baseline detector [1] - which uses the softmax score of the predicted class by a classification network for detection - we propose to use softmax scores for the predicted class of the foreground segment for detection. Since the detection score must be a single value, we take the average of the softmax scores for the pixels in the foreground segment. We formalize this score as follows. Recall that \(h\) and \(w\) denote the height and width of an image \(x\in\mathcal{X}\). Let \(H:=\{1,\ldots,h\}\), and \(W:=\{1,\ldots,w\}\). For a generic vector \(a\), we use \(a_{i}\) to denote its \(i\)-th entry, while for a generic element \(r\) of \(\mathbb{R}^{h\times w\times(|\mathcal{Y}|+1)}\), we use \(r_{i,j}\) to denote the \((|\mathcal{Y}|+1)\)-dimensional vector that we obtain if we "slice" \(r\) at the first coordinate \(i\) and the second coordinate \(j\). For \(N=|\mathcal{Y}|\) and \(q=(q_{1},\ldots,q_{N},q_{N+1})^{\top}\in\mathbb{R}^{N+1}\), we define the function \(V\) as follows.

\[q\mapsto V(q):=\begin{cases}\max_{i}q_{i}&\text{if }\operatorname{arg\,max}_{i}q_{i} \in\{1,\ldots,N\}\\ 0&\text{otherwise}\end{cases}.\]

Definition 1: For any \(x\in\mathcal{X}\), we define the baseline score (BLS) as the average of the softmax scores for pixels in the foreground segment of \(x\):

\[BLS(x):=\frac{\sum_{i\in H}\sum_{j\in W}V\big{(}\mathcal{N}_{s}(x)_{i,j}\big{)} }{\sum_{i\in H}\sum_{j\in W}\mathbb{1}\big{(}V\big{(}\mathcal{N}_{s}(x)_{i,j} \big{)}\neq 0\big{)}} \tag{3}\]

We can also use the classification-based score used by the ODIN detector [10]. ODIN is an enhanced version of the baseline detector where the temperature-scaled softmax score of the preprocessed input is used for detection.
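For intuition, the BLS of Definition 1 can be computed from the per-pixel softmax map as in the sketch below. This is a numpy-based illustration under our own assumptions (the segmentation output has shape (h, w, N+1) with the background as the last channel), not the authors' implementation; the ODIN-based variant follows in Definition 2.

```python
# Minimal numpy sketch of the BLS score in Eq. (3); illustrative only.
import numpy as np

def baseline_score(softmax_map: np.ndarray) -> float:
    """Average max-softmax over foreground pixels (pixels whose argmax is a class in Y)."""
    n_classes = softmax_map.shape[-1] - 1       # last channel = background ("extra label")
    argmax = softmax_map.argmax(axis=-1)        # (h, w) predicted label per pixel
    foreground = argmax < n_classes             # exactly the pixels with V(q) != 0
    if not foreground.any():
        return 0.0                              # no foreground: lowest possible score
    return float(softmax_map.max(axis=-1)[foreground].mean())

# Example: a 2x2 "image" with one confident foreground pixel, three background pixels.
toy = np.zeros((2, 2, 3))
toy[..., 2] = 1.0                # background everywhere...
toy[0, 0] = [0.9, 0.05, 0.05]    # ...except one pixel predicted as class 0
print(baseline_score(toy))       # -> 0.9
```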
The input \(x\) is preprocessed by adding small perturbations:

\[\widetilde{x}:=x-\zeta\text{ sign}(-\nabla_{x}\text{log}\beta^{\prime}_{\star}(x,T)).\]

Here \(\zeta>0\) is the perturbation magnitude, sign denotes the sign function, \(T\in\mathbb{R}_{>0}\) is the temperature scaling parameter, and \(\beta^{\prime}(x,T)\) is an \(N\)-dimensional vector whose \(i\)-th entry \(\beta^{\prime}_{i}(x,T)=\frac{\exp(f_{i}(x)/T)}{\sum_{j=1}^{N}\exp(f_{j}(x)/T)}\) is given by the temperature-scaled softmax score of the \(i\)-th class predicted by the classification network \(\mathbf{f}=(f_{1},\ldots,f_{N})\) that is trained to classify the \(N\) classes in \(\mathcal{Y}\), and \(\beta^{\prime}_{\star}(x,T)=\max_{i}\beta^{\prime}_{i}(x,T)\).

Definition 2: For any \(x\in\mathcal{X}\), we define the ODIN score (ODS) as the average of the softmax scores for pixels in the foreground segment of \(\widetilde{x}\):

\[ODS(x):=\frac{\sum_{i\in H}\sum_{j\in W}V(\mathcal{N}_{s}(\widetilde{x})_{i,j}) }{\sum_{i\in H}\sum_{j\in W}\mathbb{1}\big{(}V\big{(}\mathcal{N}_{s}(\widetilde{x })_{i,j}\big{)}\neq 0\big{)}}. \tag{4}\]

Then, given an input image \(t\in\mathcal{X}\), we can view \(ds(t)\), \(ds\in\{BLS,ODS\}\), as the estimated intended distribution \(\hat{\mathcal{D}}_{I}\) of Theorem 1 evaluated at the relevant part of \(t\), that is, \(ds(t)=\hat{\mathcal{D}}_{I}(\text{rel}(t))\). We propose Algorithm 2 for OOD detection when \(\hat{\mathcal{X}}_{I}\) is computed according to \(\mathcal{N}_{s}\), and \(\mathcal{D}_{I}\) is estimated by \(ds\in\{BLS,ODS\}\).

``` Input: Test input \(t\in\mathcal{X}\) Parameters: Semantic segmentation network \(\mathcal{N}_{s}\) trained on \(\mathcal{S}\), detection score \(ds\in\{BLS,ODS\}\), detection threshold \(\epsilon\) Output: "\(1\)" if \(t\) is detected as OOD; "0" otherwise Return 1 if \(ds(t)<\epsilon\), 0 otherwise ```

**Algorithm 2** Out-of-Intended Distribution Detection with Semantic Segmentation Network

### Out-of-Intended Distribution Detection with Expert Guidance

Datasets such as MNIST, with a history of expert-feature engineering techniques, e.g. shape context Belongie et al. (2000), allow semantically relevant pixels to be derived easily. The generation of the segmentation map, which we denote by \(\mathcal{N}_{r}:\mathcal{X}\rightarrow\mathbb{R}^{h\times w\times 3}\) to distinguish it from \(\mathcal{N}_{s}\) in section 3.1, follows a two-step expert-guided process. First, it uses a standard segmentation algorithm to define super (or semantically relevant) pixels of an image. Next, it removes the segments which can be regarded as irrelevant (or background) information. This creates an image out of the two components by setting different colors for the semantically relevant and the irrelevant pieces. We leave the details, with examples (Fig. 2), to the supplementary material. Following (1), we have \(\hat{\mathcal{X}}_{I}=\{x^{\prime}\in\mathbb{R}^{h\times w\times 3}:x^{\prime}= \mathcal{N}_{r}(x),\;x\in\mathcal{S}\}\).
Next, we define a reference set \(\mathcal{R}\subset\hat{\mathcal{X}}_{I}\) as a set of size \(|\mathcal{Y}|\) containing one representative of each class \(y\in\mathcal{Y}\):

\[\mathcal{R}:=\{x\in\hat{\mathcal{X}}_{I}:x\sim\text{Unif}(C^{-1}(y)),\,\forall y \in\mathcal{Y}\}\subset\hat{\mathcal{X}}_{I}.\]

More sophisticated algorithms can be used to replace this simple choice for creating \(\mathcal{R}\), such as the ones proposed in Yang et al. (2022), Dutta et al. (2022). Nevertheless, we find this simple procedure well-suited for this context.

**OOD Detection Score:** Here, we use the Structural Similarity Index Measure (SSIM) Wang et al. (2004) as the OOD detection score. SSIM is a well-known index to compute the statistical similarity between two images. It is calculated as:

\[\begin{split} SSIM(x_{1},x_{2})&=S_{1}(x_{1},x_{2})S_{2 }(x_{1},x_{2}),\;\text{where}\\ S_{1}(x_{1},x_{2})&=lum(x_{1},x_{2}),\text{and}\\ S_{2}(x_{1},x_{2})&=con(x_{1},x_{2})corr(x_{1},x_{2 })\end{split} \tag{5}\]

The functions \(lum\), \(con\) and \(corr\) compare the luminosity, contrast, and correlation between the two image inputs \(x_{1}\) and \(x_{2}\). The details of its implementation can be found in Wang et al. (2004), Brunet et al. (2012), and it permits a fast GPU-based implementation. OOD detection for a test input \(t\in\mathcal{X}\) is performed by measuring the SSIM between \(\mathcal{N}_{r}(t)\) and its nearest neighbor in the reference set \(\mathcal{R}\).

**Algorithm**: Algorithm 3 combines these pieces together. We compute the SSIM of the relevant part \(\text{rel}(t)=\mathcal{N}_{r}(t)\) of an input image \(t\in\mathcal{X}\) with respect to all images in \(\mathcal{R}\), and use the maximum value for detection. In other words, if the similarity value of the relevant part \(\text{rel}(t)\) of \(t\) with its nearest neighbor in \(\mathcal{R}\) is below the detection threshold \(\epsilon\), we declare \(t\) as OOD.

``` Input: Test input \(t\in\mathcal{X}\) Parameters: Segmentation algorithm \(\mathcal{N}_{r}\), reference set \(\mathcal{R}\), detection threshold \(\epsilon\) Output: "\(1\)" if \(t\) is detected as OOD; "\(0\)" otherwise \(v_{k}=SSIM(r_{k},\mathcal{N}_{r}(t))\), for all \(r_{k}\in\mathcal{R}\) Return 1 if \(\max_{k}(v_{k})<\epsilon\), 0 otherwise ```

**Algorithm 3** Out-of-Intended Distribution Detection with Reference Set

## 4 Experiments

We perform experiments with the existing state-of-the-art (SOTA) detectors from all three categories of supervised, unsupervised, and self-supervised OOD detection techniques.

**Unsupervised:** The Baseline detector (Hendrycks and Gimpel, 2016) is the SOTA unsupervised detector. It uses the softmax score of a classifier for the predicted class. ODIN (Liang et al., 2017) is an enhanced version of the baseline detector that uses the temperature-scaled softmax score of a perturbed input for detection. Details are in section 3.1.

**Supervised:** The Mahalanobis detector (Mahala) is the SOTA supervised detector, which uses the Mahalanobis distance (Mahalanobis, 1936) of the input in the training feature space of the classifier for detection.

**Self-supervised:** Aux (Hendrycks et al., 2019) is the SOTA self-supervised detector, which uses the error in the prediction of the applied transformation on the input for detection. It trains a classifier with an auxiliary task of predicting the applied rotation, vertical and horizontal translations. The sum of the errors in the three predictions and the classification error is used for detection.
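For concreteness, below is a minimal sketch of the nearest-neighbour SSIM test in Algorithm 3 above, assuming scikit-image's structural_similarity and grayscale \(\mathcal{N}_{r}\) outputs with values in \([0,1]\); the segment_relevant callable stands in for \(\mathcal{N}_{r}\) and is purely illustrative.

```python
# Sketch of Algorithm 3: flag a test input as OOD when the SSIM between N_r(t)
# and its nearest neighbour in the reference set R falls below epsilon.
import numpy as np
from skimage.metrics import structural_similarity

def is_ood(t: np.ndarray, reference_set, segment_relevant, epsilon: float) -> int:
    rel_t = segment_relevant(t)                                   # rel(t) = N_r(t)
    scores = [structural_similarity(r, rel_t, data_range=1.0) for r in reference_set]
    return 1 if max(scores) < epsilon else 0                      # 1 = OOD, 0 = in-distribution
```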
**Evaluation Metrics:** We call in-distribution inputs positives and OOD inputs negatives. We report the Receiver Operating Characteristic curve (ROC), the Area under the ROC (AUROC), and the True Negative Rate (TNR) at \(95\%\) True Positive Rate (TPR) for evaluation. These are the standard metrics used in OOD detection (Hendrycks et al., 2019; Liang et al., 2017; Lee et al., 2018).

### Case Study I: OOD Detection with Semantic Segmentation Network

#### 4.1.1 Dataset and Motivation

The Common Objects in Context-Stuff (COCO) dataset (Caesar et al., 2018) is a large-scale vision dataset created for the purpose of training machine learning models for object detection, segmentation, and captioning with \(182\) object classes. We use the subset (training and test) of COCO which can be classified with the class labels from the set \(\mathcal{Y}=\{\text{cup},\text{umbrella},\text{orange},\text{toaster},\text{broccoli},\text{banana},\text{vase},\text{zebra},\text{kite}\}\). These classes share the same label space with another dataset, Vizwiz [14]. Vizwiz is a real-world dataset captured by visually impaired people, with the purpose of developing algorithms for assistive technologies, where the quality of the captured images can be an issue. So, images in Vizwiz are labeled either with "no issues", or with issues such as "blurry", "too bright", "too dark", "camera obstructed", etc. We call the images with the "no issues" label in the Vizwiz dataset the _clear Vizwiz_. We train the ResNet18 [13] model to classify the training set of the COCO dataset. With the model's accuracy of \(74.33\%\) on test COCO, it achieves a comparable accuracy of \(68.14\%\) on the clear Vizwiz. Detecting inputs from clear Vizwiz as OOD by the existing detectors restricts the generalizability of classifiers from the training distribution \(\mathcal{D}\) to the intended distribution \(\mathcal{D}_{I}\).

#### 4.1.2 Semantic Segmentation Network \(\mathcal{N}_{s}\) for Algorithm 2 and Classifier for Detection by Existing Detectors

As recommended by the authors of the COCO dataset [11], we train the DeepLab version 2 (v2) segmentation network [3] on the training set of COCO. DeepLab v2 uses ResNet101 [13] as the backbone model. For a fair comparison with the existing detectors, we train the ResNet101 classifier on the training set of COCO. We use the trained classifier for OOD detection by the existing SOTA unsupervised and supervised detectors. The accuracy of the classifier on the test COCO set is \(68.64\%\). The COCO dataset is commonly used for object detection and segmentation. The classification accuracy of \(68.64\%\) is comparable with the SOTA detection accuracy (in terms of mean average precision) of \(64.2\%\) on COCO [20]. For the self-supervised detector Aux, we train the ResNet101 classifier with the auxiliary losses of rotations and translations. Its classification accuracy on the test COCO set is \(74.11\%\).

#### 4.1.3 Test Cases and Results

We consider the following three test cases:

**(a) In-Distribution from Clear Vizwiz:** Inputs with the class labels in \(\mathcal{Y}\) but from clear Vizwiz.

**(b) OOD from Vizwiz:** Inputs with blurry, too bright, too dark, and obstructed issues from Vizwiz. Due to the quality issues of these images, this dataset cannot be labeled with any labels in \(\mathcal{Y}\).

**(c) OOD from COCO:** Inputs from the test COCO dataset with class labels not in \(\mathcal{Y}\). Here, we filter the subset of test COCO that can be classified with class labels from the set {traffic light, stop sign, parking meter, fire hydrant}.
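Before discussing the results, the evaluation metrics described above (AUROC and TNR at \(95\%\) TPR) can be computed as in the following sketch, assuming scikit-learn's ROC utilities; labels mark in-distribution inputs as positives, and higher detection scores mean "more in-distribution".

```python
# Sketch of the reported metrics: AUROC and TNR at 95% TPR (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auroc_and_tnr_at_95tpr(labels: np.ndarray, scores: np.ndarray):
    auroc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    idx = np.searchsorted(tpr, 0.95)              # first operating point with TPR >= 95%
    tnr_at_95tpr = 1.0 - fpr[min(idx, len(fpr) - 1)]
    return auroc, tnr_at_95tpr
```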
Figure 3 shows some examples of the images from the three test cases. Figure 4 compares the ROC and AUROC results of the existing detectors with Algorithm 2 on these cases: **(a) In-Distribution from Clear Vizwiz (Fig. 4(a))**: AUROC less than \(50\%\) by our approach (with both the baseline and ODIN scores) implies that the proposed detector is not able to distinguish between the test COCO and clear Vizwiz datasets. AUROC greater than \(50\%\) by the existing detectors implies that these detectors distinguish clear Vizwiz from test COCO by assigning higher OOD detection scores to clear Vizwiz. **(b) OOD from Vizwiz (Fig. 4(b))**: With these images as OOD for COCO, we require the AUROC to be as close to one as possible. We achieve the best AUROC of \(98.52\) with ODS and the second best AUROC of \(96.75\) with BLS. **(c) OOD from COCO (Fig. 4(c))**: Significantly higher (\(\geq 28.86\%\)) AUROC by Algorithm 2 (with both scores) than the existing ones indicates that the proposed detector performs OOD detection on these inputs with spurious features from the training data better than the existing ones. We also perform additional experiments on existing benchmarks where OOD datasets such as SVHN [21], Imagenet [15], and LSUN [22] are considered. These results are included in the supplementary material. Here also, we perform the best (in terms of AUROC) for detection on OOD inputs from Imagenet and LSUN. For SVHN, Mahala (supervised detector) performs the best with \(99.04\%\) AUROC, and the result of Algorithm 2 (unsupervised detection) is \(97.25\%\) AUROC. ### Case Study II: OOD Detection with a Reference Set #### 4.2.1 Dataset and Motivation We use a mixture of MNIST-M [15] and Background-Colored-MNIST (BC-MNIST) [16] datasets. Both MNIST-M and BC-MNIST are modified versions of the MNIST [15] dataset. MNIST-M is MNIST with its digits blended over patches of colored images. BC-MNIST is the colored version of MNIST where both digits and background are colored. We use a mixture dataset of \(100\%\) of the data from MNIST-M and \(50\%\) of the data from BC-MNIST. We call this dataset _Mix-MNIST_. With \(60,000\) training images in MNIST-M and \(4000\) training images in BC-MNIST, \(96.77\%\) of the training data in Mix-MNIST comes from MNIST-M and the remaining \(3.23\%\) from BC-MNIST. We train the LeNet5 [16] classifier on Mix-MNIST. The classifier achieves comparable accuracies of \(90\%\) and \(91\%\) on the test MNIST-M and test BC-MNIST datasets respectively. Therefore, with the classifier's ability to generalize on BC-MNIST with only \(3.23\%\) of BC-MNIST as the training data, detecting inputs from BC-MNIST as OOD by the existing detectors (Fig. 5) limits the applicability of the classifier. #### 4.2.2 Experimental Details and Results For the existing detectors, we use the trained LeNet5 model with its accuracy of \(91.91\%\) on the test set of Mix-MNIST. Figure 5 compares the ROC, AUROC, and TNR results of the existing detectors with Algorithm 3 on the test set of BC-MNIST. AUROC higher than \(50\%\) by the existing detectors implies that the existing detectors distinguish the test data of Mix-MNIST from the test set of BC-MNIST with higher OOD detection scores assigned to BC-MNIST. AUROC less than \(50\%\) by the proposed Algorithm 3 shows that it does not distinguish between the test sets of Mix-MNIST and BC-MNIST. We also achieve the lowest false alarm rate of \(2.95\%\) here. We perform additional experiments for Mix-MNIST with OOD datasets from (low quality) Vizwiz and Fashion-MNIST. 
Details and results on these experiments are included in the supplementary material. Figure 4: AUROC less than \(50\%\) on in-distribution inputs from clear Vizwiz (a), and highest AUROC on OOD inputs from Vizwiz (b) as well as on OOD inputs from COCO (c) by our Algorithm 2 shows that the proposed detection not only significantly reduces false alarms but also improves on OOD detection on inputs with spurious features from the training set. Figure 5: AUROC less than \(50\%\) on in-distribution inputs from BC-MNIST by Algorithm 3 shows that the proposed algorithm significantly reduces false alarms (by the existing detectors) for the training set of Mix-MNIST. Figure 3: Examples of images from the three test cases for COCO. Conclusion In this paper, we make use of the training class-specific semantic information for explicitly defining and detecting OOD inputs to a machine learning classifier. We show that including more nuanced semantic information about the content of images can improve OOD detection significantly. This, to the best of our knowledge, is one of the first approaches which differentiates between training distribution and intended distribution. ## 6 Appendix ### Proof of Theorem 1 Proof.: Call \(\mathcal{D}^{\prime}_{n}\) the empirical measure for \(F(X_{1}),\ldots,F(X_{n})\). We have that \[\lim_{n\rightarrow\infty}d_{K}(\mathcal{D}^{\prime}_{n},\mathcal{D}_{I}) \leq\lim_{n\rightarrow\infty}\left[d_{K}(\mathcal{D}^{\prime}_{n},\mathcal{D}^{\prime})+d_{K}(\mathcal{D}^{\prime},\mathcal{D}_{I})\right] \tag{6}\] \[=\lim_{n\rightarrow\infty}d_{K}(\mathcal{D}^{\prime}_{n},\mathcal{ D}^{\prime})+\lim_{n\rightarrow\infty}d_{K}(\mathcal{D}^{\prime},\mathcal{D}_{I})\] (7) \[=\lim_{n\rightarrow\infty}d_{K}(\mathcal{D}^{\prime}_{n}, \mathcal{D}^{\prime})+d_{K}(\mathcal{D}^{\prime},\mathcal{D}_{I})\] \[\leq 0+\delta=\delta \tag{8}\] almost surely, where (6) comes from the triangular inequality, (7) comes from the linearity of the limit operator, and (8) holds because \(\lim_{n\rightarrow\infty}d_{K}(\mathcal{D}^{\prime}_{n},\mathcal{D}^{\prime})=0\) almost surely by Glivenko-Cantelli's Theorem, and \(d_{K}(\mathcal{D}^{\prime},\mathcal{D}_{I})\leq\delta\) by our second assumption. The proof is completed by putting \(\hat{\mathcal{D}}_{I}(n)=\mathcal{D}^{\prime}_{n}\). ### Example Images from Coco Segmented With Class-Specific Relevant Information Figure 6 shows some examples of images sampled from COCO and clear Vizwiz datasets on the left and corresponding output of the trained semantic segmentation network \(\mathcal{N}_{s}\) from section 4.1.2 on the right. ### Additional Results on Existing Benchmarks ### Details About the Two-Step Segmentation Algorithm \(\mathcal{N}_{r}\) Used in Algorithm 3 Detecting semantically relevant pixels is the first step in this algorithm. In order to separate the semantically relevant pixels, we first partition the image into meaningful segments using Felzenszwalb's Algorithm Felzenszwalb and Huttenlocher (2004). Next we mark the segments placed away from the center as being semantically irrelevant. Whatever remains closely maps to semantically relevant information. We binarize the result in the previous step, to obtain a black and white version of the image. Figure 7 shows some examples of images sampled from Mix-MNIST on the left and corresponding output of the segmentation algorithm \(\mathcal{N}_{r}\) from section 4.2.2 on the right. 
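A minimal sketch of the two-step segmentation algorithm \(\mathcal{N}_{r}\) described above is given below: Felzenszwalb segmentation, discarding segments whose centroids lie far from the image centre, and binarization of the result. The `scale` and `max_center_dist` values are illustrative assumptions, not the settings used in the experiments.

```
# Sketch of the two-step segmentation N_r used in Algorithm 3.
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_relevant(img, scale=100, max_center_dist=0.35):
    segments = felzenszwalb(img, scale=scale)          # integer label map
    h, w = segments.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    mask = np.zeros((h, w), dtype=bool)
    for label in np.unique(segments):
        ys, xs = np.nonzero(segments == label)
        # Keep a segment only if its centroid lies close to the image centre.
        if np.hypot(ys.mean() - cy, xs.mean() - cx) / max(h, w) <= max_center_dist:
            mask |= segments == label
    return mask.astype(np.uint8)                       # black-and-white output
```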
### Experimental Details and Additional Experiments on Mix-Mnist #### 6.5.1 Experimental Details Given two binarized versions of an image pair by the segmentation algorithm \(\mathcal{N}_{r}\) described in Appendix 6.4, we compute the SSIM value between these images. We restrict ourselves to a non-negative version of the SSIM metric in this paper. To estimate whether an image contains digit, we maintain a reference set for digits zero to nine. Figure 8 shows the reference set used in experiments. For a given test image, we compute the SSIM between the binary version of the image and each digit image in the reference set. If the test image does not resemble any digit in the reference set, we declare it to be OOD. Figure 6: Example images from COCO and clear Vizwiz on the left and output of the trained semantic segmentation network \(\mathcal{N}_{s}\) on these images on the right. Yellow color in the segmented images represents the class-specific semantically relevant information, and the purple color represents the semantically irrelevant or background information. #### 6.5.2 Additional Experiments We conduct additional experiments on Mix-MNIST with the following two test cases: **(a) OOD from Vizwiz:** Images with the blurry, too dark, and obstructed quality issues from Vizwiz. **(b) OOD from Fashion-MNIST:** Images from Fashion-MNIST [21] dataset with class labels from fashion objects such as trousers, shoe etc. The results are as follows: Figure 9 compares the ROC and AUROC results of the existing detectors with the proposed OOD detection Algorithm 3. Table 3 shows these results on TNR (at 95% TPR) on these test cases: **(a) OOD from Vizwiz (Fig. 9(a))**: With failure to assign any labels to this dataset due to quality issues, these images are OOD for the Mix-MNIST dataset and here we require the AUROC to be as close to one as possible. The existing detector ODIN achieves the best AUROC of \(94.95\%\) and our result is \(90.93\%\). We achieve the best TNR (\(@95\%\)TPR) detection of \(67.15\%\) here. **(b) OOD from Fashion-MNIST (Fig. 9(b))**: With the class labels of Fashion-MNIST disjoint from the classes in Mix-MNIST, images from Fashion-MNIST are OOD for Mix-MNIST. The existing supervised detector Mahala achieves the best AUROC of \(86.03\%\) and our (unsupervised) results are comparable at \(84.27\%\). Mahala achieves the best TNR (\(@95\%\)TPR) detection of \(56.86\%\) and ours is second best at \(44.67\%\). ### Details about the Experiments on Birds and CelebA Dataset We compare TNR (at \(95\%\) TPR) for existing detectors on Birds and CelebA datasets for OOD detection on Spurious OOD test set, as reported by [19]. #### 6.6.1 CelebA We use the semantic segmentation network \(\mathcal{N}_{s}\) by Lee et al. [2020] on CelebA dataset. The network segments faces into different parts including nose, hair, mouth, etc. In this experiment, we ran Algorithm 2 with the Baseline score. #### 6.6.2 Birds We use the semantic segmentation network \(\mathcal{N}_{s}\) by Iakubovskii [2019]. Here, we select Feature Pyramid Network (FPN) [17] with ResNet50 [14] as its backbone architecture. It segments the images into two parts: bird and background. In this experiment, we ran Algorithm 2 with the Baseline score. 
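For the Birds setup above, the segmentation network can be instantiated with the segmentation_models_pytorch package of Iakubovskii (2019); a short sketch follows, in which the pretraining and channel settings are assumptions rather than the exact configuration used.

```
# Sketch: FPN with a ResNet50 backbone for bird-vs-background segmentation.
import segmentation_models_pytorch as smp

model = smp.FPN(
    encoder_name="resnet50",      # ResNet50 backbone, as described above
    encoder_weights="imagenet",   # assumed ImageNet pretraining
    in_channels=3,
    classes=2,                    # bird vs. background
)
```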
\begin{table} \begin{tabular}{c|c c c c c} \hline \hline OOD & Baseline & ODIN & Mahala & Aux & Ours (BLS) \\ \hline SVHN & 89.08 & 94.23 & **99.04** & 98.79 & 97.25 \\ Imagenet & 86.02 & 91.59 & 95.68 & 95.31 & **97.31** \\ LSUN & 83.24 & 91.24 & 94.28 & 96.21 & **99.57** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of Algorithm 2 with SOTA detectors on the existing benchmarks. Figure 7: Example images from Mix-MNIST on the left and the output of the segmentation algorithm \(\mathcal{N}_{r}\) on these images on the right. White color in the segmented images represents the class-specific semantically relevant information, and the black color represents the semantically irrelevant information.
2308.04329
Predicting gravitational waves from jittering-jets-driven core collapse supernovae
I estimate the frequencies of gravitational waves from jittering jets that explode core collapse supernovae (CCSNe) to crudely be 5-30 Hz, and with strains that might allow detection of Galactic CCSNe. The jittering jets explosion mechanism (JJEM) asserts that most CCSNe are exploded by jittering jets that the newly born neutron star (NS) launches within few seconds. According to the JJEM, instabilities in the accreted gas lead to the formation of intermittent accretion disks that launch the jittering jets. Earlier studies that did not include jets calculated the gravitational frequencies that instabilities around the NS emit to have a peak in the crude frequency range of 100-2000 Hz. Based on a recent study, I take the source of the gravitational waves of jittering jets to be the turbulent bubbles (cocoons) that the jets inflate as they interact with the outer layers of the core of the star at thousands of kilometres from the NS. The lower frequencies and larger strain than those of gravitational waves from instabilities in CCSNe allow future, and maybe present, detectors to identify the gravitational wave signals of jittering jets. Detection of gravitational waves from local CCSNe might distinguish between the neutrino-driven explosion mechanism and the JJEM.
Noam Soker
2023-08-08T15:19:33Z
http://arxiv.org/abs/2308.04329v1
# Predicting gravitational waves from jittering-jets-driven core collapse supernovae ###### Abstract I estimate the frequencies of gravitational waves from jittering jets that explode core collapse supernovae (CCSNe) to crudely be 5-30 Hz, and with strains that might allow detection of Galactic CCSNe. The jittering jets explosion mechanism (JJEM) asserts that most CCSNe are exploded by jittering jets that the newly born neutron star (NS) launches within a few seconds. According to the JJEM, instabilities in the accreted gas lead to the formation of intermittent accretion disks that launch the jittering jets. Earlier studies that did not include jets calculated the gravitational frequencies that instabilities around the NS emit to have a peak in the crude frequency range of 100-2000 Hz. Based on a recent study, I take the source of the gravitational waves of jittering jets to be the turbulent bubbles (cocoons) that the jets inflate as they interact with the outer layers of the core of the star at thousands of kilometres from the NS. The lower frequencies and larger strain than those of gravitational waves from instabilities in CCSNe allow future, and maybe present, detectors to identify the gravitational wave signals of jittering jets. Detection of gravitational waves from local CCSNe might distinguish between the neutrino-driven explosion mechanism and the JJEM. gravitational waves - stars: neutron - black holes - supernovae: general - stars: jets ## 1 Introduction In the JJEM the newly born NS launches the jets as it accretes mass through an accretion disk. There are two sources of the angular momentum of the accretion disk (e.g., Soker 2023b). These are pre-collapse core rotation that has a fixed angular momentum axis, and the convective motion in the pre-collapse core (e.g., Papish & Soker 2014b; Gilkis & Soker 2015; Soker 2019a; Shishkin & Soker 2022) or envelope (e.g., Quataert et al. 2019; Antoni & Quataert 2022, 2023) that has a stochastically varying angular momentum axis. When the pre-collapse core angular momentum is low the accretion disk has a rapidly varying axis direction. Each accretion episode through a given accretion disk lasts for a limited period of time and leads to one jet-launching episode of two opposite jets. The convective fluctuations serve as seed perturbations that are amplified by instabilities behind the stalled shock, which is at \(\simeq 100-150\:{\rm km}\) from the newly born NS. Namely, the same instabilities that give rise to gravitational waves in the frame of the neutrino-driven explosion mechanism (e.g., Mezzacappa et al. 2020), which does not include jets, exist also in the JJEM. The JJEM has in addition the jittering jets that inflate turbulent bubbles (cocoons) that might emit gravitational waves according to the new results of Gottlieb et al. (2023). In the present, still exploratory, study I present the first prediction, although very crude, for gravitational waves in the frame of the JJEM. I do this by appropriately scaling the recent results that Gottlieb et al. (2023) obtained for gravitational waves from much more energetic jets than the jittering jets (section 2). I then present the general characteristics of the strain of JJEM-driven CCSNe (section 3). I summarize the results (section 4) and strongly encourage simulations of gravitational waves from jittering jets in CCSNe. ## 2 Estimating gravitational waves from jittering jets The calculation of gravitational waves by CCSNe as expected in the JJEM requires very demanding three-dimensional hydrodynamical simulations. 
In this preliminary study I make crude estimates by scaling the results of Gottlieb et al. (2023), who conduct simulations of long-lived relativistic jets with energies of \(\simeq 10^{52}-10^{53}\:{\rm erg}\). In the JJEM the jets are relatively short-lived and have a typical velocity of \(0.3-0.5c\) (e.g., Papish & Soker 2014a; indeed, Guetta et al. 2020 claim that neutrino observations limit the jets in most cases to be non-relativistic). In an explosion process there are \(\approx{\rm few}-30\) jet-launching episodes, with a typical activity time of each episode of \(\simeq 0.01-0.1\:{\rm s}\), and a typical energy of the two jets of \(\approx 10^{50}\:{\rm erg}\) to \({\rm few}\times 10^{50}\:{\rm erg}\) (Papish & Soker 2014a). Gottlieb et al. (2023) estimate the range of frequencies of the gravitational waves when the jets' axis is at a large angle to the line of sight (off-axis) to be between \(f_{\rm min}\simeq 1/\Delta t_{\rm jc}\) and \(f_{\rm max}\simeq c_{s}/\Delta r_{\rm sh}\), where \(\Delta t_{\rm jc}\) is the time the jets energize the cocoons, \(c_{s}\) is the sound speed, and \(\Delta r_{\rm sh}\) is the width of the shell formed by the shock. For their simulations this range is \(\approx 0.1-2000\:{\rm Hz}\). The on-axis emission, i.e., when the jets' axis is at a very small angle to the line of sight, has a strain amplitude that is more than an order of magnitude smaller than for the off-axis emission, and the strain amplitude peaks at frequencies of \(10-100\:{\rm Hz}\). To scale for one pair of jittering jets I consider the three-dimensional simulations by Papish & Soker (2014b). They simulated three pairs of jittering jets that have their axes on the same plane, each jet-launching episode lasting for \(0.05\:{\rm s}\). In Figure 1 I present the density and temperature maps in the jittering plane of these jets. In each jet-launching episode the two opposite jets are seen as two opposite high density (red color on the left column) strips touching the center. While the first jet-pair inflates axisymmetric cocoons (bubbles), the second and third jet-pairs inflate non-axisymmetric bubbles. This is seen from the compressed gas at the head of the cocoon (bubble) that I point at with the double-lined arrows. From the density maps of Figure 1 I estimate the width of the shells of the bubbles/cocoons that the jets inflate to be \(\Delta r_{\rm sh}\simeq 200-500\:{\rm km}\), and from the temperature maps the sound speed is \(c_{s}\simeq 5000\:{\rm km}\:{\rm s}^{-1}\). This gives, with the definition of Gottlieb et al. (2023), \(f_{\rm max}\simeq c_{s}/\Delta r_{\rm sh}\approx 10-25\:{\rm Hz}\). In the JJEM the typical duration of a jet-launching episode is \(\Delta t_{\rm j}\approx 0.01-0.1\:{\rm s}\). Even if the jets last for \(\simeq 0.01\:{\rm s}\), the interaction with the core material lasts longer. Therefore, the interaction time is more likely to be \(\Delta t_{\rm jc}\approx 0.05-0.2\:{\rm s}\), which gives, with the definition of Gottlieb et al. (2023), \(f_{\rm min}\simeq 1/\Delta t_{\rm jc}\approx 5-20\:{\rm Hz}\) for typical jittering jets, but with large uncertainties. The short-duration jets will have small energy and therefore small strain amplitude. The longer-duration jets have more energy. Therefore, waves with lower frequency are more likely to be detected, i.e., \(f_{\rm min}\simeq 5-10\:{\rm Hz}\). 
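These frequency estimates amount to a short numerical restatement; the sketch below uses only the numbers quoted in the text and performs no simulation.

```
# Back-of-the-envelope check of f_max ~ c_s / dr_sh and f_min ~ 1 / dt_jc.
c_s = 5.0e3                  # km/s, sound speed read off the temperature maps
dr_shell = (200.0, 500.0)    # km, width of the cocoon/bubble shell
dt_jc = (0.05, 0.2)          # s, jet-cocoon interaction time

f_max = sorted(c_s / dr for dr in dr_shell)   # ~10-25 Hz
f_min = sorted(1.0 / dt for dt in dt_jc)      # ~5-20 Hz
print(f"f_max ~ {f_max[0]:.0f}-{f_max[1]:.0f} Hz, "
      f"f_min ~ {f_min[0]:.0f}-{f_min[1]:.0f} Hz")
```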
The relatively small ratio of \(f_{\rm max}/f_{\rm min}\approx 1-5\) that I find here shows that the typical spectrum of the gravitational waves of jittering jets is qualitatively different from the case that Gottlieb et al. (2023) study. In the case of the JJEM I expect the spectrum to be in the narrow range of \[f_{\rm JJEM}\approx 5-30\:{\rm Hz}. \tag{1}\] As seen in Fig. 1 the size of the cocoon is smaller than the typical wavelength of \(\approx 20,000\:{\rm km}\), which makes phase cancellation very small. Scaling equation (2) of Gottlieb et al. (2023) for the strain amplitude of one pair of jets out of many pairs in the JJEM gives \[h\approx 4\times 10^{-22}\left(\frac{D}{10\;\mathrm{kpc}}\right)^{-1}\left(\frac{E_{\mathrm{2j}}}{10^{50}\;\mathrm{erg}}\right). \tag{2}\] I also consider the following quantity that is used in the study of gravitational waves from CCSNe \[\begin{split}\frac{h}{\sqrt{f}}&\approx 10^{-22}\left(\frac{D}{10\;\mathrm{kpc}}\right)^{-1}\left(\frac{E_{\mathrm{2j}}}{10^{50}\;\mathrm{erg}}\right)\\ &\times\left(\frac{f_{\mathrm{JJEM}}}{15\;\mathrm{Hz}}\right)^{-1/2}\mathrm{Hz}^{-1/2},\end{split} \tag{3}\] where I scaled with the expected frequency range for jittering jets from equation (1). I note that equations (2) and (3) treat each jet-launching episode as an independent event. If several episodes are considered to inflate only two opposite large bubbles (lower panel of Figure 1) then the energy in the scaling of the equations should be the sum of several jet-launching episodes. Namely, the scaling energy should be \(\simeq\mathrm{few}\times 10^{50}\), leading to a strain larger by a factor of a few. ## 3 Identification of gravitational waves from jittering jets Several papers calculated the gravitational wave properties of CCSNe when jets are not included (e.g., Radice et al. 2019; Andresen, Glas, & Janka 2021; Mezzacappa et al. 2023), i.e., in the frame of the delayed neutrino explosion mechanism (e.g., Bethe & Wilson 1985; Heger et al. 2003; Janka 2012; Nordhaus et al. 2012; Muller et al. 2019; Fujibayashi et al. 2021; Boccioli et al. 2022; Nakamura, Takiwaki, & Kotake 2022; Olejak et al. 2022). Mezzacappa et al. (2020), for example, find that low-frequency emission, \(\lesssim 200\;\mathrm{Hz}\), is emitted by the neutrino-driven convection and the standing accretion shock instability in the gain layer behind the stalled shock, while high-frequency emission, \(\gtrsim 200\;\mathrm{Hz}\), is emitted by convection in the proto-NS. These studies find that the emission is mainly at frequencies of \(\approx 10-2000\;\mathrm{Hz}\) with larger strain amplitudes at frequencies of \(\approx 100-1000\;\mathrm{Hz}\) (e.g., Srivastava et al. 2019). The gain region and the convection in the proto-NS exist also in the JJEM. Neutrino heating plays a role also in the JJEM (Soker 2022). Therefore, the contributions of the gain region and the proto-NS to gravitational waves in the JJEM are similar to those in the delayed neutrino explosion mechanism. In the JJEM there is the additional contribution of the cocoons that the jets inflate in the core and envelope of the exploding star. In section 2 I crudely estimated this contribution for jittering jets interacting with the core of the exploding star. Figure 1: Density (left column, with colour coding in logarithmic scale and units of \(\;\mathrm{g}\;\mathrm{cm}^{-3}\)) and temperature (right column, in log scale in units of K) maps at three times of a three-dimensional hydrodynamical simulation of jittering jets taken from Papish & Soker (2014b). There are three jet-launching episodes, each composed of two opposite jets, one episode after the other, with activity times of \(0-0.05\) s (direction 1 in the lower panel), \(0.05-0.1\) s (direction 2), and \(0.1-0.15\) s (direction 3). I added double-lined arrows to point at the two opposite masses at the cocoon (bubble) head. While the first jet-pair inflates axisymmetric cocoons, the following cocoons largely deviate from axisymmetry. Velocity is proportional to the arrow length on the right column, with an inset showing an arrow for \(30,000\;\mathrm{km}\;\mathrm{s}^{-1}\). In Figure 2 I present results from Mezzacappa et al. (2023). The result is the characteristic gravitational wave strain from a CCSN of a \(15M_{\odot}\) stellar model in the frame of the delayed neutrino explosion mechanism. I added my crude estimate of a typical contribution of jittering jets (the horseshoe-shaped yellow region on the graph). The peak of the contribution of the jittering jets is at much lower frequencies than the peak of the other components of CCSNe. In addition, there will be variations with time as the jittering jets are active intermittently. As said, simulations of the JJEM are highly demanding because for the calculation of gravitational waves high-resolution simulations are needed to resolve the convection in the cocoon and the head of the jet-core interaction. At this point I only present the possible schematic behavior of the strain as a function of time due to the contribution of jittering jets. In the upper panel of Figure 3 I schematically present such a gravitational wave signal due only to jittering jets. I describe the distance times the strain of four jet-launching episodes (but more are expected at later times until the star explodes). Over the time period \(0.2\:\mathrm{s}-0.7\:\mathrm{s}\) the average frequency is \(16\:\mathrm{Hz}\). As commonly done, I take \(t=0\) at the bounce of the shock wave from the newly born NS. There is some time delay until instabilities start to feed the intermittent accretion disks that launch the jets. These instabilities give rise to high-frequency gravitational waves (e.g., Radice et al., 2019; Andresen, Glas, & Janka, 2021). In the lower panel of Figure 3 I present one figure from Mezzacappa et al. (2023) that shows their calculation for the gravitational wave of a CCSN of a \(15M_{\odot}\) stellar model. The expected signal is the sum of all contributions. My crude estimate of gravitational waves from jittering jets shows that their signal is qualitatively different from that of the other components that are close to the NS, \(\lesssim 100\:\mathrm{km}\). The jittering jets add long-period modulations to the short-period waves from the other components. For a nearby CCSN even the present Advanced LIGO detector might separate the signal of the jittering jets from the other components. This depends on the signal-to-noise ratio that should be calculated with future simulations of jittering jets. Future detectors will be able to do so for CCSNe in the local group. ## 4 Summary Based on the very recent results by Gottlieb et al. (2023), which I scaled from long-lasting energetic relativistic jets in super-energetic CCSNe to short-lived low-energy non-relativistic jets in common CCSNe, I concluded that jittering jets lead to detectable gravitational wave signals. The source of the gravitational waves is the turbulence in the cocoons that the jets inflate (Figure 1). 
Whether present detectors can reveal the gravitational wave signals of jittering jets depends on the signal-to-noise ratio that simulations of jittering jets should calculate, and of course on the distance to the CCSN. Future detectors will be able to reveal the jittering jets signal from CCSNe in the local group (Fig. 2). The frequencies of the expected gravitational wave signals from jittering jets are lower than those of the other components of CCSNe, as I mark with the yellow horseshoe-shaped region in Figure 2. I schematically present a gravitational wave signal from jittering jets in the upper panel of Figure 3, and compare it with calculations from a CCSN simulation that includes no jets from Mezzacappa et al. (2023). The signal from jittering jets can be clearly distinguished from the other gravitational wave sources in CCSNe (depending on the signal-to-noise ratio and the distance of the CCSN). This, still exploratory, study calls for the performance of highly-demanding simulations of jittering jets and the calculation of their gravitational wave signals. The simulations must be of very high resolution so as to resolve the turbulence in the cocoon. Because I expect jittering jets to explode most CCSNe, my prediction for the gravitational wave signals from nearby CCSNe differs from the prediction of studies that include no jets. Figure 2: A figure from Mezzacappa et al. (2023) to which I added a crude estimate of the characteristic spectrum of \(hf^{-1/2}\) from jittering jets in a CCSN at a distance of \(D=10\:\mathrm{kpc}\) (the horseshoe-shaped yellow zone). The signal in yellow is for one jet-launching episode. If several jet-launching episodes are considered to inflate only two opposite large bubbles (lower panel of Figure 1) then the strain will be larger, as it is about the sum of these episodes. Other marks are as in the original figure. The blue line is the calculation by Mezzacappa et al. (2023) of the characteristic gravitational wave strain from a CCSN of a \(15M_{\odot}\) stellar model. The five other lines represent the sensitivity curves for gravitational wave detectors: the Advanced Laser Interferometer Gravitational-Wave Observatory (AdvLIGO), Advanced VIRGO, and the Kamioka Gravitational Wave Detector (KAGRA), which are current-generation gravitational wave detectors, and the more sensitive next-generation detectors, Cosmic Explorer and Einstein Telescope. The predicted full gravitational wave spectrum includes both the contributions from the regions near the NS that exist both in the JJEM and in the neutrino-driven explosion mechanism (blue line), and the contribution of the jittering jets.
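As a numerical companion to the strain scalings of equations (2) and (3) above, the following sketch evaluates them for a single jet-launching episode; values other than the fiducial ones (\(D=10\) kpc, \(E_{2j}=10^{50}\) erg, \(f=15\) Hz) are illustrative inputs, not results of the paper.

```
# Restatement of the strain scalings in equations (2) and (3).
def strain(D_kpc=10.0, E2j_1e50erg=1.0):
    """Equation (2): strain amplitude of one pair of jittering jets."""
    return 4e-22 * (10.0 / D_kpc) * E2j_1e50erg

def strain_per_sqrt_hz(D_kpc=10.0, E2j_1e50erg=1.0, f_hz=15.0):
    """Equation (3): h / sqrt(f), in units of Hz^-1/2."""
    return 1e-22 * (10.0 / D_kpc) * E2j_1e50erg * (15.0 / f_hz) ** 0.5

print(strain(), strain_per_sqrt_hz())   # ~4e-22 and ~1e-22 for a Galactic CCSN
```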
2303.12611
Long-range interactions and disorder facilitate pattern formation in spatial complex systems
Complex systems with global interactions tend to be stable if interactions between components are sufficiently homogeneous. In biological systems, which often have small copy numbers and interactions mediated by diffusing agents, noise and non-locality may affect stability. Here, we derive stability criteria for spatial complex systems with local and non-local interactions from a coarse-grained field theory with multiplicative noise. We show that long-range interactions give rise to a transition between regimes exhibiting giant density fluctuations and pattern formation. This instability is suppressed by non-reciprocity in interactions.
Fabrizio Olmeda, Steffen Rulands
2023-03-22T14:50:11Z
http://arxiv.org/abs/2303.12611v2
# Long-range interactions and disorder facilitate pattern formation ###### Abstract Complex systems with global interactions tend to be stable if interactions between components are sufficiently homogeneous. In biological systems, which often have small copy numbers and interactions mediated by diffusing agents, noise and non-locality may affect stability. Here, we derive stability criteria for spatial complex systems with local and non-local interactions from a coarse-grained field theory with multiplicative noise. We show that long-range interactions give rise to a transition between regimes exhibiting giant density fluctuations and pattern formation. This instability is suppressed by non-reciprocity in interactions. In his seminal work [1], May studied the time evolution of a well-mixed complex system, where the rate of change of the concentration of each component depends non-linearly on all other components. Such systems are almost certainly stable if interactions are symmetric and the standard deviation of the Jacobian of the linearized system around such states, \(\sigma\), is smaller than the inverse square root of the number of components, \(L\) (May bound) [1; 2]. This seminal work has been extended to include non-symmetric interactions [3; 4] and dispersion in spatially extended systems, which has a destabilizing effect [5]. Works on models inspired by ecosystems considered stochasticity and showed that the stable regime can exhibit spin-glass behavior and marginally stable states [6; 7]. Global stability is also influenced by the motif structure of interaction networks [8; 9] and kernels [10]. Conditions for stability are particularly relevant in the context of biological systems. For example, in complex ecosystems, the loss of stability can lead to the extinction of species [11]. Further, the integrity of adult tissues relies on stable interactions between cell populations. As complex biological systems often comprise small copy numbers they exhibit strong fluctuations [12]. The strength of these fluctuations typically is concentration-dependent, termed multiplicative noise. Even more than additive noise [13; 14], multiplicative noise can dominate the behavior of stochastic systems [15]. Since interactions in biological systems are often mediated by diffusing agents, they are inherently non-local. For example, in cellular systems diffusing signaling molecules give rise to interactions between cells on a length scale determined by the square root of the product of the diffusion coefficient and the degradation rate [16; 17]. Here, we extend these theoretical works to include these key characteristics of biological systems: non-local interactions and multiplicative noise. Specifically, we study the stability of spatially extended, stochastic complex systems with local and non-local interactions. In order to take into account the role of different noise sources, we start from a general microscopic model and derive the distribution of particle densities stemming from multiplicative noise in a field theoretical framework. Based on this, we derive conditions for stability and characterize the emergence of spatial patterns in the stable regime. We find that non-local interactions destabilize the system by facilitating pattern formation close to the boundary of stability. Multiplicative noise does not alter the conditions for stability on the mean-field level but induces giant density fluctuations in the stable regime, where pattern formation does not occur. 
We consider a stochastic many-particle system of \(N\) particles, where each particle, indexed by \(n\), is characterized by a categorical variable \(i\in\{1,\ldots,L\}\) termed component and a position \(\vec{z}\) [Fig. 1(a)]. In the context of ecosystems, this describes \(N\) individuals belonging to \(L\) species which share a habitat. We consider general interactions between particles that can be local or non-local in \(\vec{z}\) and in \(i\) [Fig. 1(b)] and that may be classified based on whether they conserve global particle numbers or not [Fig. 1(c)]. Within each of these classes, we consider a minimal set of microscopic rules which contribute at most quadratically to the field theory. Non-conservative interactions comprise a birth-death process with rates \(\beta_{i}\) and \(\delta_{i}\), respectively. Since interactions are non-local, these rates are functions of the positions of all other particles across components. As a second-order process, we consider Lotka-Volterra type interactions, whereupon a local or non-local interaction between a pair of particles one particle is substituted by a copy of the other. We denote the rate of such interactions between particles of components \(i\) and \(j\) by \(K_{ij}\). In the context of ecosystems, such interactions mimic predator-prey, mutualistic, or competitive interactions between species [18; 19]. Conservative processes globally maintain the number of particles in each component. These comprise particle diffusion with diffusivity \(D_{i}\) and, as a minimal process of second order, particles may move along gradients of a two-body potential \(V_{ij}(|\vec{z}_{i}-\vec{z}_{j}|)\). We here consider scalar fields, such that processes like rota tional diffusion [20] do not contribute to the field theory. In order to derive a description in terms of a field theory we first define the density of particles at position \(\vec{z}\) as \(\rho_{i}(\vec{z}\,)=\sum_{n=1}^{N}\delta(\vec{z}_{i}^{n}-\vec{z})\). In the mean-field limit the time evolution of \(\rho(t)\) is then given by a differential equation of the form \(\partial_{t}\mathbf{\rho}(\vec{z},t)=\mathbf{f}[\mathbf{\rho}(\vec{z}^{\prime},t)]\,,\) where bold symbols represent vectors in \(i\). In the following, we will derive expressions for conservative and non-conservative contributions to \(\mathbf{f}[\mathbf{\rho}]\) and effective noise terms from the microscopic processes defining our model. We begin the analysis by deriving the contributions stemming from non-conservative processes, which are described in the framework of Master equations. Following standard steps [21], we expand the time evolution of the probability distribution in terms of the inverse system size and find that the non-local birth-death process contributes \(\mathbf{h}\left[\mathbf{\rho},\vec{z}\right]\circ\mathbf{\rho}(\vec{z}\,)+\mathbf{\eta}\), where \(\circ\) denotes the component-wise product and \(\mathbf{\eta}\) are multiplicative Gaussian white noise with correlations \(\langle\eta_{i}(\vec{z},t)\eta_{j}(\vec{z}^{\prime},t^{\prime})\rangle=(\beta _{i}+\delta_{i})\rho_{i}(\vec{z},t)\delta(t-t^{\prime})\delta(\vec{z}-\vec{z} ^{\prime})\delta_{i,j}\) and \(h_{i}=\beta_{i}-\delta_{i}\)[21, 22]. We further make the simplifying assumption that interactions decay exponentially in space with a characteristic length scale denoted by \(\zeta_{i}\). 
With this, we consider a kernel \(\mathbf{h}\) of the form, \[\mathbf{h}[\mathbf{\rho},\vec{z}]\approx\mathbf{h}^{0}-\mathbf{h}^{1}\circ\int \mathrm{d}\vec{y}\,e^{-2|\vec{z}-\vec{y}|/\mathsf{\zeta}}\circ\mathbf{\rho}(\vec{y })\,, \tag{1}\] such that the rates \(\mathbf{h}^{0}\) and \(\mathbf{h}^{1}\) quantify the local and non-local excess rate of birth processes compared to death processes, respectively. With this, we can then express Eq. (1) as a solution of a differential equation for an auxiliary field \(\phi(\vec{z},t)\)[23], \[\phi_{i}-\zeta_{i}^{2}\nabla^{2}\phi_{i}=h_{i}^{0}-h_{i}^{1}\zeta_{i}\rho_{i}\,. \tag{2}\] Intuitively, this equation describes the steady state of a field consumed locally by particles and subject to diffusion and degradation. Finally, following Ref. [24] yields that non-conservative two-particle interactions between different components contribute a term \(K_{ij}\rho_{i}\rho_{j}\). In order to derive the contribution of conservative processes we express them in the form of a Langevin equation, which describes stochastic trajectories \(\vec{z}_{i}^{n}(t)\) of individual particles, \[\partial_{t}\vec{z}_{i}^{n}=-\sum_{j=1}^{L}\sum_{m=1}^{N_{j}}\mathbf{\nabla}V_{ij} (|\vec{z}_{i}^{n}-\vec{z}_{j}^{m}|)+\sqrt{2D_{i}}\vec{\xi}_{i}^{n}(t)\,. \tag{3}\] \(\vec{\xi}_{i}^{n}(t)\) is Gaussian white noise with correlator \(\langle\vec{\xi}_{i}^{n}(t)\vec{\xi}_{j}^{m}(t^{\prime})\rangle=\delta(t-t^{ \prime})\delta_{n,m}\delta_{i,j}\) and \(N_{j}\) are the number of particles indexed by \(j\). The time evolution of the density then follows from Eq. (3) by following standard procedures [25, 26, 27]. Taken together, the time evolution of the density \(\rho_{i}(\vec{z})\) follows a stochastic partial differential equation of the form \[\begin{split}\partial_{t}\rho_{i}&=\mathcal{L} \left[\rho_{i}\right]+\sum_{j\neq i}K_{ji}\rho_{j}\rho_{i}\\ &+\mathbf{\nabla}\mathbf{\cdot}\left[\rho_{i}\int\mathrm{d}\vec{y}\,\rho _{i}(\vec{y})\mathbf{\nabla}V(\vec{z}-\vec{y})\right]+\eta_{i}+\mathbf{\nabla}\mathbf{ \cdot}\vec{\xi}_{i}.\end{split} \tag{4}\] Here, we defined the operator \(\mathcal{L}\left[\rho_{i}\right]\equiv\phi_{i}\rho_{i}+D_{i}\Delta\rho_{i}\). \(\vec{\xi}_{i}\) is Gaussian white noise stemming from the stochastic movement of particles, Eq. (3). It has correlations \(\langle\vec{\xi}_{i}(\vec{z},t)\vec{\xi}_{j}(\vec{z}^{\prime},t^{\prime}) \rangle=2D_{i}\rho_{i}(\vec{z},t)\delta(t-t^{\prime})\delta(\vec{z}-\vec{z}^{ \prime})\delta_{i,j}\). We begin our analysis of the stability conditions of Eq. (4) by considering a system composed of a single component, \(\rho\), and a constant potential \(V\). We will later discuss the role of non-constant local potentials in conservative processes. Eq. (4) then admits two spatially homogeneous stationary solutions: \((\rho_{1}^{*},\phi_{1}^{*})=\left(0,h^{0}\right)\) is stable if the local birth-death process is subcritical, \(h^{0}<0\), and unstable otherwise. The fixed point \((\rho_{2}^{*},\phi_{2}^{*})=\left(h^{0}/(\zeta h^{1}),0\right)\) exists only if \(h^{0}>0\) and it is stable [Fig. 2(a)]. The feedback with the field \(\phi\) prevents the unbounded growth of the density even for supercritical birth-death processes. The stability of these states with respect to spatiotemporal perturbations of the density \(\rho\) can be assessed within the framework of linear stability analysis. This implies linearizing Eq. 
(4) around the stationary states, \(\rho(\vec{z},t)=\rho_{1,2}^{*}+\delta\rho(\vec{z},t)\), and studying the response of the linearized system to spatially inhomogeneous perturbations. This procedure yields a dispersion relation between the growth rate \(\omega\) and the wave vector \(\vec{k}\) of a spatially periodic perturbation. If the maximum of \(\omega\) is Figure 1: Schematics representing (a) the spatial and categorical degrees of freedom, (b) the possible ranges of interactions, and (c) the processes defining the microscopic model. positive and occurs at a finite value of \(|\vec{k}|\), linear stability analysis predicts the emergence of a pattern with a finite length scale. For single-component systems, \(w\) is never positive for finite values of \(|\vec{k}|\), and such systems therefore do not show pattern formation. These results are consistent with a structurally similar model that has been studied in the context of stem cells in spermatogenesis [22]. For systems comprising multiple components, the stability of stationary states may be altered by interactions between different components. In order to investigate the stability of the multi-component system, Eq. (4), we will first derive a phase diagram for the stability of the homogeneous stationary solutions to global perturbations. In the second step, we will then study spatiotemporal patterns in each regime. To this end, following Ref. [1], we take \(K_{ij}\) to be Gaussian distributed with mean \(0\), variance \(\sigma^{2}/L\) and covariance \(\epsilon\sigma^{2}/L\)[28]. Following Ref. [29, 30, 31] we use a path integral representation of Eq. (4) and perform the average over all realizations of \(K_{ij}\). Within this formulation, in the limit of large \(L\), we express the interaction between components in terms of a coupling to an effective response function, \(\chi_{i}(t,t^{\prime},\vec{z},\vec{z^{\prime}})\), and Gaussian colored noise, \(W_{i}\). This reduces the \(2L\) coupled equations, Eq. (4), to \(L\) uncoupled equations, \[\partial_{t}\rho_{i}= \mathcal{L}\left[\rho_{i}\right]+\epsilon\sigma^{2}\rho_{i}\int_ {0}^{t}\mathrm{d}t^{\prime}\int\mathrm{d}\vec{z}^{\prime}\chi_{i}(t,t^{\prime },\vec{z},\vec{z}^{\prime})\rho_{i}(\vec{z}^{\prime},t^{\prime})\] \[+W_{i}\rho_{i}+\eta_{i}+\mathbf{\nabla\cdot\vec{\xi}_{i}}\,. \tag{5}\] In Eq. (5), \(W_{i}\) has zero mean and its correlations are proportional to the correlations of the fields (\(C_{i,j}(\vec{z},\vec{z}^{\prime},t,t^{\prime})=\langle\rho_{i}(\vec{z},t)\rho_ {j}(\vec{z}^{\prime},t^{\prime})\rangle\)) such that \(\langle W_{i}(\vec{z},t)W_{j}(\vec{z}^{\prime},t^{\prime})\rangle=\sigma^{2} \delta(t-t^{\prime})\delta_{i,j}C_{i,j}(\vec{z},\vec{z}^{\prime},t,t^{\prime})\) and the response function \(\chi_{i}(\vec{z},\vec{z}^{\prime},t,t^{\prime})=\langle\partial\rho_{i}(\vec{ z},t)/\partial W_{i}(\vec{z}^{\prime},t^{\prime})|_{W_{i}=0}\rangle\)[29, 30]. In a spatially homogeneous stationary state of Eq. (5), \(\rho_{i}^{*}\), the response and correlation functions are time-independent, such that we define \(\int\mathrm{d}t^{\prime}\,\chi_{i}(t,t^{\prime})\equiv\chi_{i}^{*}\) and \(C_{i}(t,t^{\prime})=\langle\rho_{i}^{*2}\rangle\equiv c_{i}\). The stationary value of the noise is \(W_{i}^{*}=\sqrt{\sigma^{2}c_{i}}w\)[32, 33], where \(w\) is a Gaussian random variable with unitary variance and zero mean. In order to find stationary solutions of Eq. 
(5) we take the mean-field limit where multiplicative noise contributions coming from density fluctuations and birth-death processes are negligible, which in our system is not expected to change the stationary states [7]. As \(W_{i}\) describes the effect of deterministic interactions between different components in Eq. (4) it can not be neglected in the mean-field limit. There are two homogeneous solutions of Eq. (5), given by the random variables \((\rho_{1}^{*},\phi_{1}^{*})=(0,h^{0}/\alpha)\) and \[\rho_{2}^{*}=\frac{w\sigma\sqrt{c}+h^{0}}{\Delta(\chi^{*})}\Theta\left(\ \frac{w\sigma\sqrt{c}+h^{0}}{\Delta(\chi^{*})}\right) \tag{6}\] where we assumed equal parameter values for all components and consequently dropped the index \(i\). \(\Theta(x)\) is the Heaviside step function and we defined \(\Delta(\chi^{*})\equiv h^{1}\zeta-\epsilon\sigma^{2}\chi^{*}\). With \(\rho_{2}^{*}\) defined the value of \(\phi_{2}^{*}\) then follows from Eq. (2). As \(w\) is a Gaussian random variable the density in the stationary state, \(\rho_{2}^{*}\), follows a Gaussian distribution truncated at \(0\). Therefore, as expected, the strength of particle production processes sets the mean of the stationary particle density distributions and the variance of the distribution is proportional to the variance of interactions between components. In order to obtain an expression for the parameters of the distribution of \(\rho_{2}^{*}\), Eq. (6), we derive self-consistency equations for its moments: the stationary value of the fraction of surviving components, \(f_{s}\), the average density Figure 2: (a) Phase portrait of the single-component model, Eq. (4). Streamlines depict the time evolution of the homogeneous states \(\rho\) and \(\phi\). Black dots signify fixed points. (b) Phase diagram of Eq. (5) as a function of the variance, range, and asymmetry (inlay) of interactions. It is valid for critical or supercritical birth-death processes, \(h^{0}\geq 0\). The solid line depicts the condition for instability, Eq. (9). The dashed line represents the criterion for pattern instability. Region I exhibits giant fluctuations, II is the pattern-forming regime, and the dynamics in III exhibit unstable growth. (c) Dispersion relation, Eq. (10), in each of the three regimes of (b). (d) Distribution of particle densities in regime I obtained by numerical solutions of Eq. (5) using the Euler-Mayorana algorithm for \(d=2\) and finite central difference with integration steps \(dt=10^{-3}\) and \(dx=1\) (\(L=50\) and \(64\) sites per dimension). Shared parameters for all panels are \(h^{0}=h^{1}\zeta=\epsilon=1\), \(D=1\), \(\zeta^{2}=10\). of particles, \(M^{*}=\langle\rho^{*}\rangle\), the response function \(\chi^{*}\), and the correlation function, \(c\). The self-consistency equations read \[f_{s}=\int_{-\kappa}^{\infty}\mathrm{D}w\,,\;M^{*}=\frac{\alpha \sigma\sqrt{c}}{\Delta(\chi^{*})}\int_{-\kappa}^{\infty}\mathrm{D}w\left(w+ \kappa\right)\,,\] \[\chi^{*}=\frac{\alpha}{\Delta(\chi^{*})}\int_{-\kappa}^{\infty} \mathrm{D}w,\;1=\frac{\alpha^{2}\sigma^{2}}{\Delta(\chi^{*})^{2}}\int_{-\kappa }^{\infty}\mathrm{D}w\left(w+\kappa\right)^{2}. \tag{7}\] Here we defined \(\mathrm{D}w\equiv\mathrm{d}w\,e^{-w^{2}/2}/\sqrt{2\pi}\) and \(\kappa\equiv h^{0}/\left(\sigma\sqrt{c}\right)\) is a threshold value of \(w\) above which the density \(\rho_{2}^{*}\) is a possible solution of Eq. (5). Eq. (7) can be solved numerically. 
Global instability of the stationary states is associated with diverging correlations in density fluctuations, \(\tilde{C}\equiv\langle\delta\rho(\vec{z},t)\delta\rho(\vec{z}^{\,\prime},t^{ \prime})\rangle\) where \(\delta\rho=\rho-\rho_{2}^{*}\)[31; 32]. In order to determine the conditions under which correlation functions diverge, we linearize Eq. (5) around the homogeneous stationary state \(\rho_{2}^{*}\) and express two-point spatiotemporal correlation functions in Fourier space, \[\tilde{C}(\vec{k},\vec{k}^{\prime},\omega,\omega^{\prime})=\frac{\Lambda(\vec {k})\delta(\vec{k}+\vec{k}^{\prime})\delta(\omega+\omega^{\prime})}{\left\langle \left|\frac{i\omega}{\rho^{*}}-\Omega(\vec{k},\omega)\right|^{-2}\right\rangle_ {+}^{-1}-f_{s}\sigma^{2}}\,, \tag{8}\] where we defined \(\Lambda(\vec{k})=f_{s}(h^{0}/\alpha+2Dk^{2})\langle\rho^{*}\rangle_{+}\) and \(\Omega(\vec{k},\omega)=-\left(\frac{D}{\rho^{*}}\vec{k}^{2}+\frac{h^{1}\zeta}{ 1+\zeta^{2}\vec{k}^{2}}\right)+\epsilon\sigma^{2}\chi(\vec{k},\omega)\cdot \langle\ldots\rangle_{+}\) denotes the average over the positive fraction of surviving components \(f_{s}\)[32]. The correlation functions Eq. (8) of the stationary homogeneous state, \(\omega\to 0\) and \(|\vec{k}|\to 0\), diverge if \(\Omega(0,0)^{2}=f_{s}\sigma^{2}\). This condition can be satisfied only if \(h_{0}\geq 0\). Substituting \(\Omega(0,0)^{2}=f_{s}\sigma^{2}\) into Eq. (7) gives the critical value of the variance of interaction strengths below which the system is stable with respect to homogeneous perturbations [dashed line in Fig. 2(b)], \[\sigma_{c}=\sqrt{2}h^{1}\zeta/(1+\epsilon)\,. \tag{9}\] Therefore, non-local birth-death processes globally stabilize the system. By contrast, the degree of symmetry in interactions between components, \(\epsilon\), destabilizes the homogeneous state. For anti-symmetric interactions, \(\epsilon=-1\), the system is always stable. Therefore, predator-prey interactions stabilize ecosystems against global perturbations. If \(h^{0}<0\) extinction is the only stable fixed point such that interactions between components do not affect the stability to linear order. In order to further characterize the instability we now investigate whether the stable, homogeneous solutions can be destabilized by spatial perturbations in the regime where \(\sigma<\sigma_{c}\). For stochastic systems, pattern formation is reflected in a finite-wavelength peak of the spectral density functions in the long-term limit, \(\langle|\delta\rho(\vec{k},0)|^{2}\rangle\)[34; 35]. The onset of a pattern instability follows from solving \(\mathbf{\nabla}\cdot C(\vec{k},0)|_{\vec{k}^{\prime}}=0\) for \(\vec{k}\) and \(C(\vec{k},0)|_{\vec{k}^{\prime}}=0\). Analytical solutions to this equation are feasible in the long-term limit, \(\omega\to 0\), and using the approximation that averages of functions of the fields equal the functions of averages. Following Refs. [32; 36] then yields a criterion the instability of Eq. (4) with respect to spatial perturbations. The condition for pattern instability also follows from random-matrix theory [37; 31], requiring that the largest eigenvalue describing the relaxation of a perturbation of Eq. (4) around a stable homogeneous state is positive at a finite value of \(|\vec{k}|\). This yields \[-h^{1}\zeta/(1+\zeta^{2}\vec{k}^{2})-D\vec{k}^{2}/M^{*}+\sqrt{f_{s}}\sigma(1+ \epsilon)>0\,, \tag{10}\] and it is convex at that value [Fig. 2(c)]. 
We find that a pattern instability is possible if \(\sigma_{p}<\sigma<\sigma_{c}\) with a critical threshold \(\sigma_{p}\) given by \(\sigma_{p}=\sigma_{c}/\sqrt{2}\). Notably, the regime exhibiting pattern formation is fully determined by \(\sigma_{c}\) and otherwise independent of the model parameters. \(\sigma_{c}\) and \(\sigma_{p}\) could be decoupled for models involving component-dependent dispersal [5] or higher order interactions [38]. Figure 3 shows numerical solutions of Eq. (4) in the pattern forming regime. Although individual components exhibit different patterns, they occur with an identical length scale. Indeed, Eq. (5) implies that the dynamics of individual components are effectively coupled via a shared response function. Although the system is stable against homogeneous and spatial perturbations for \(\sigma<\sigma_{p}\) the presence of intrinsic noise can lead to the extinction of all components. Figure 3: Snapshots of numerical simulations of Eq. (5) showing a pattern instability across different components (top, \(\sigma=0.65\)) and giant density fluctuations (bottom, \(\sigma=0.4\)). Parameters are \(\zeta^{2}=D=10\), \(h^{0}=0\), \(h^{1}\zeta=1\), \(L=100\), \(dt=1e-3\), \(dx=1\). Snapshots were taken at \(1e5\) time steps. In order to characterize the risk of extinction, we investigate fluctuations around the stationary state \((\rho_{2}^{*},\phi_{2}^{*})\) in this regime. In the limit of \(|\vec{k}|\rightarrow\infty\) and \(\omega\to 0\), Eq (8) shows an algebraic decay following \(|\vec{k}|^{-2}\). This is a hallmark of giant-density fluctuations [39, 40], whose strength in a given subsystem scales stronger than predicted by the central limit theorem. These giant-density fluctuations are also reflected in finite difference simulations of Eq. (4), cf. Fig. 2(d) and Fig. 3. Therefore, multiplicative noise from microscopic processes leads to strong fluctuations and an increased risk of extinction in the stable regime. Taken together, we generalized the May bound for the stability of complex systems [1] to features characteristic of biological systems: strong, multiplicative noise stemming from small copy numbers and non-local interactions, which are often mediated by diffusing signaling factors. Starting from a microscopic model definition we derived an effective field theory. We showed that non-local interactions and multiplicative noise both alter the conditions for stability with respect to the May bound. In particular, multiplicative noise stemming from non-conservative processes gives rise to giant density fluctuations in stable stationary states whilst conservative processes could potentially destabilize the stable region of the phase space giving rise to the formation of spatial patterns. By extending the theory by non-local conservative terms or vectorial fields our work can be applied to other systems such as multi-component phase separation or active matter. ## Acknowledgements We thank R. Mukhamadiarov and I. Di Terlizzi for helpful feedback on the manuscript. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement no. 950349). \({}^{\dagger}\) [email protected] \({}^{*}\) [email protected]
2302.05044
Toward Degree Bias in Embedding-Based Knowledge Graph Completion
A fundamental task for knowledge graphs (KGs) is knowledge graph completion (KGC). It aims to predict unseen edges by learning representations for all the entities and relations in a KG. A common concern when learning representations on traditional graphs is degree bias. It can affect graph algorithms by learning poor representations for lower-degree nodes, often leading to low performance on such nodes. However, there has been limited research on whether there exists degree bias for embedding-based KGC and how such bias affects the performance of KGC. In this paper, we validate the existence of degree bias in embedding-based KGC and identify the key factor to degree bias. We then introduce a novel data augmentation method, KG-Mixup, to generate synthetic triples to mitigate such bias. Extensive experiments have demonstrated that our method can improve various embedding-based KGC methods and outperform other methods tackling the bias problem on multiple benchmark datasets.
Harry Shomer, Wei Jin, Wentao Wang, Jiliang Tang
2023-02-10T04:14:45Z
http://arxiv.org/abs/2302.05044v1
# Toward Degree Bias in Embedding-Based Knowledge Graph Completion ###### Abstract. A fundamental task for knowledge graphs (KGs) is knowledge graph completion (KGC). It aims to predict unseen edges by learning representations for all the entities and relations in a KG. A common concern when learning representations on traditional graphs is degree bias. It can affect graph algorithms by learning poor representations for lower-degree nodes, often leading to low performance on such nodes. However, there has been limited research on whether there exists degree bias for embedding-based KGC and how such bias affects the performance of KGC. In this paper, we validate the existence of degree bias in embedding-based KGC and identify the key factor to degree bias. We then introduce a novel data augmentation method, KG-Mixup, to generate synthetic triples to mitigate such bias. Extensive experiments have demonstrated that our method can improve various embedding-based KGC methods and outperform other methods tackling the bias problem on multiple benchmark datasets. 1 Footnote 1: The code is available at [https://github.com/HarryShomer/KG-Mixup](https://github.com/HarryShomer/KG-Mixup) Knowledge Graphs, Link Prediction, Degree Bias 
## 1. Introduction

and the relation \(r\) co-occur as the tail and relation (Eq. (2)). An example in Figure 1 is the tail-relation pair (_Germany_, _Has Country_). Since the pair only co-occurs as a relation and tail in one triple, their tail-relation degree is 1. Our preliminary studies (Section 3) suggest that when predicting the tail entity \(t\), the in-degree of \(t\) and especially the tail-relation degree of \((t,r)\) play a vital role. That is, when predicting the tail for a triple \((h,r,t)\), the number of triples where the entity \(t\) and relation \(r\) co-occur as an entity-relation pair correlates significantly with performance during KGC. Going back to our example, since _Germany_ and _Has Country_ only co-occur as a relation and tail in one triple, their tail-relation degree is low, thus making it difficult to predict _Germany_ for the query (_Europe_, _Has Country_, ?).

Given the existence of degree bias in KGC, we aim to alleviate the negative effect brought by degree bias. Specifically, we are tasked with improving the performance of triples with low tail-relation degrees while maintaining the performance of other triples with a higher tail-relation degree. Essentially, it is desired to promote the engagement of triples with low tail-relation degrees during training so as to learn better embeddings. To address this challenge, we propose a novel data augmentation framework. Our method works by augmenting entity-relation pairs that have low tail-relation degrees with synthetic triples. We generate the synthetic triples by extending the popular Mixup (Mikup, 2017) strategy to KGs. Our contributions can be summarized as follows:

* Through empirical study, we identify the degree bias problem in the context of KGC. To the best of our knowledge, _no previous work has studied the problem of degree bias from the perspective of entity-relation pairs_.
* We propose a simple yet effective data augmentation method, KG-Mixup, to alleviate the degree bias problem in KG embeddings.
* Through empirical analysis, we show that our proposed method can be formulated as a form of regularization on the low tail-relation degree samples.
* Extensive experiments have demonstrated that our proposed method can improve the performance of lower tail-relation degree triples on multiple benchmark datasets without compromising the performance on triples of higher degree.

## 2. Related Work

**KG Embedding**: TransE (Kirk et al., 2017) models the embeddings of a single triple as a translation in the embedding space. Multiple works model the triples as a tensor factorization problem, including (Bohner et al., 2015; Kriz et al., 2016; Wang et al., 2017; Wang et al., 2018). ConvE (Chen et al., 2017) learns the embeddings by modeling the interaction of a single triple via a convolutional neural network.
Other methods like R-GCN (Wang et al., 2017) modify GCN (Ghezhi et al., 2017) for relational graphs.

**Imbalanced/Long-Tail Learning**: Imbalanced/Long-Tail Learning considers the problem of model learning when the class distribution is highly uneven. SMOTE (Chen et al., 2017), a classic technique, attempts to produce new synthetic samples for the minority class. Recent work has focused on tackling imbalance problems on deeper models. Works such as (Kriz et al., 2016; Wang et al., 2018; Wang et al., 2018) address this problem by modifying the loss for different samples. Another branch of work tries to tackle this issue by utilizing ensemble modeling (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018).

**Degree Bias**: Mohamed et al. (Mohamed et al., 2017) demonstrate the existence of popularity bias in popular KG datasets, which causes models to inflate the score of entities with a high degree. Bonner et al. (Bonner et al., 2015) show the existence of entity degree bias in biomedical KGs. Rossi et al. (Rossi et al., 2018) demonstrate that the performance is positively correlated with the number of source peers and negatively with the number of target peers. Kojaku et al. (Kojaku et al., 2018) analyze the degree bias of random walks. To alleviate this issue, they propose a debiasing method that utilizes random graphs. In addition, many studies have focused on allaying the effect of degree bias for the task of node classification, including (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). However, there is no work that focuses on how the intersection of entity and relation degree bias affects embedding-based KGC.

**Data Augmentation for Graphs**: There is a line of work studying data augmentation for homogeneous graphs (Wang et al., 2018; Wang et al., 2018). A few of these works study the link prediction problem (Wang et al., 2018), but they do not address the issues in KGs. To augment KGs, Tang et al. (Tang et al., 2018) generate synthetic triples via adversarial learning; Wang et al. (Wang et al., 2018) use a GAN to create stronger negative samples; Li et al. (Li et al., 2018) use rule mining to identify new triples to augment the graph with. However, none of these methods perform augmentation with degree bias in KGs in mind, and hence they are not applicable to the problem this paper studies.

## 3. Preliminary Study

In this section we empirically study the degree bias problem in KGC. We focus on two representative embedding-based methods, ConvE (Chen et al., 2017) and TuckER (Kriz et al., 2016). We first introduce some notation. We denote \(\mathcal{G}=\{\mathcal{V},\mathcal{R},\mathcal{E}\}\) as a KG with entities \(\mathcal{V}\), relations \(\mathcal{R}\), and edges \(\mathcal{E}\). Each edge represents two entities connected by a single relation. We refer to an edge as a triple and denote it as \((h,r,t)\), where \(h\) is referred to as the head entity, \(t\) the tail entity, and \(r\) the relation. Each entity and relation is represented by an embedding. We represent the embedding for a single entity \(o\) as \(\mathbf{x}_{o}\in\mathbb{R}^{n_{o}}\) and the embedding for a relation \(r\) as \(\mathbf{x}_{r}\in\mathbb{R}^{n_{r}}\), where \(n_{o}\) and \(n_{r}\) are the dimensions of the entity and relation embeddings, respectively. We further define the degree of an entity \(o\) as \(d_{o}\), and the in-degree \(\left(d_{o}^{(in)}\right)\) and out-degree \(\left(d_{o}^{(out)}\right)\) as the number of triples where \(o\) is the tail and head entity, respectively.
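To make this notation concrete, the following is a minimal sketch (not the authors' implementation) of how these degree statistics can be tabulated from a list of triples; the toy triples reuse the entities of Figure 1 and are purely illustrative.

```python
from collections import Counter

# Hypothetical toy KG; each triple is (head, relation, tail).
triples = [
    ("Europe", "Has Country", "Germany"),
    ("Belgium", "Borders", "Germany"),
    ("Europe", "Has Country", "Sweden"),
]

in_degree = Counter(t for _, _, t in triples)    # d_o^(in): number of triples with o as tail
out_degree = Counter(h for h, _, _ in triples)   # d_o^(out): number of triples with o as head
degree = in_degree + out_degree                  # d_o = d_o^(in) + d_o^(out)

print(in_degree["Germany"], out_degree["Europe"], degree["Belgium"])  # 2 2 1
```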
Lastly, KGC attempts to predict new facts that are not found in the original KG. This involves predicting the tail entities that satisfy \((h,r,?)\) and the head entities that satisfy \((?,r,t)\). Following (Chen et al., 2017), we augment all triples \((h,r,t)\) with its inverse \((t,r^{-1},h)\). As such, predicting the head entity of \((h,r,t)\) is analogous to predicting the tail entity for \((t,r^{-1},?)\). Under such a setting, KGC can be formulated as always predicting the tail entity. _Therefore, in the remainder of this work, we only consider KGC as predicting the tail entities that satisfy \((h,r,?)\)._

In the following subsections, we will explore the following questions: (1) Does degree bias exist in typical KG embedding models? and (2) Which factor in a triple is related to such bias? To answer these questions, we first study how the head and tail entity degree affect KGC performance in Section 3.1. Then, we investigate the impact of the frequency of entity-relation pairs co-occurring on KGC performance in Section 3.2.

### Entity Degree Analysis

We first examine the effect that the degrees of both the head and tail entities have on KGC performance. We perform our analysis on the FB15K-237 dataset (Wang et al., 2018), a commonly used benchmark in KGC. Since a KG is a directed graph, we postulate that the direction of an entity's edges matters. We therefore split the degree of each entity into its in-degree and out-degree. We measure the performance using the mean reciprocal rank (MRR). Note that the degree metrics are calculated using only the training set.

Figure 2(a) displays the results of TuckER (see Section 6.1.4 for more details) on FB15K-237 split by both entities and degree type. From Figure 2(a) we observe that when varying the tail entity degree value, the resulting change in test MRR is significantly larger than when varying the degree of head entities. Furthermore, the MRR increases drastically with the increase of tail in-degree (blue line), while there is a parabolic-like relationship when varying the tail out-degree (orange line). From these observations we can conclude: (1) the degree of the tail entity (i.e. the entity we are trying to predict) has a larger impact on test performance than the degree of the head entity; (2) the tail in-degree features a more distinguishing and apparent relationship with performance than the tail out-degree. Due to the page limitation, the results of ConvE are shown in Appendix A.4, where we have similar observations. These results suggest that KGC displays a degree bias in regards to the in-degree. Next, we will examine which factors of a triple majorly contribute to such degree bias.

### Entity-Relation Degree Analysis

In the previous subsection, we have demonstrated the relationship between the entity degree and KGC performance. However, it doesn't account for the interaction of the entities and relation. Therefore, we further study how the presence of both the relation and entities in a triple _together_ impacts the KGC performance. We begin by defining the number of edges that contain the relation \(r\) and an entity \(v\) as the _relation-specific_ degree:

\[d_{v,r}=|\{(h,r,t)\in\mathcal{E}\mid h=v\lor t=v\}|. \tag{1}\]

Based on the results in Section 3.1, we posit that the main indicator of performance is the in-degree of the tail entity. We extend this idea to our definition of relation-specific degree by only counting co-occurrences of an entity and relation when the entity occurs as the tail.
For simplicity we refer to this as the _tail-relation_ degree and define it as:

\[d_{v,r}^{(tail)}=|\{(h,r,v)\in\mathcal{E}\}|. \tag{2}\]

The tail-relation degree can be understood as the number of edges that an entity \(v\) shares with \(r\), where \(v\) occupies the position we are trying to predict (i.e. the tail entity). We further refer to the number of in-edges that \(v\) doesn't share with \(r\) as the "Other-Tail Relation" degree. This is calculated as the difference between the in-degree of entity \(v\) and the tail-relation degree of \(v\) and relation \(r\), i.e. \(d_{v}^{(in)}-d_{v,r}^{(tail)}\). It is easy to verify that the in-degree of an entity \(v\) is the summation of the tail-relation degree and the "Other-Tail Relation" degree. We use Figure 1 as an example of the tail-relation degree. The entity _Sweden_ co-occurs with the relation _Has Country_ on one edge. On that edge, _Sweden_ is the tail entity. Therefore the tail-relation degree of the pair (_Sweden_, _Has Country_) is one. We note that a special case of the tail-relation degree is relation-level semantic evidence defined by Li et al. (2019).

Figure 2(b) displays the MRR when varying the value of the tail-relation and "Other-Tail Relation" degree of the tail entity. From the results, we note that while both degree metrics correlate with performance, the performance when the other-tail-relation degree is in the range \([0,3)\) is quite high. Since both metrics are highly correlated, it is difficult to determine which metric is more important for the downstream performance. Is the "Other-Tail Relation" degree the determining factor for performance or is it the tail-relation degree? We therefore check the performance when controlling for one another. Figure 2(c) displays the results when varying the "Other-Tail Relation" degree for specific sub-ranges of the tail-relation degree. From this figure, we see that the tail-relation degree exerts a much larger influence on the KGC performance, as there is little variation between bars belonging to the same subset. Rather, the tail-relation degree (i.e. the clusters of bars) has a much larger impact. Therefore, we conclude that for a single triple, the main factor of degree bias is the tail-relation degree of the tail entity.

Figure 2. MRR when predicting the tail for TuckER on FB15K-237 when varying the (a) in-degree and out-degree of the head and tail entity, (b) tail-relation and other-relation in-degree, and (c) other-relation in-degree for smaller sub-ranges of the tail-relation degree.

**Remark.** Our analysis differs from traditional research on degree bias. While previous works focus only on the degree of the node, we focus on a specific type of frequency among entity-relation pairs. This is vital as the frequencies of both the entities and relations are important in KGs. Though we only analyze KGs, findings from our analysis could be applicable to other types of heterogeneous graphs.
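The following sketch, using the same hypothetical toy KG as before rather than the paper's code, computes the tail-relation degree of Eq. (2) and the "Other-Tail Relation" degree, making the decomposition of the in-degree explicit.

```python
from collections import Counter

# Hypothetical toy KG mirroring Figure 1.
triples = [
    ("Europe", "Has Country", "Germany"),
    ("Belgium", "Borders", "Germany"),
    ("Europe", "Has Country", "Sweden"),
]

in_degree = Counter(t for _, _, t in triples)
# Eq. (2): d_{v,r}^(tail) = |{(h, r, v) in E}|, i.e. how often v is the tail of relation r.
tail_rel_degree = Counter((t, r) for _, r, t in triples)

def other_tail_relation_degree(v, r):
    """In-edges of v that do not use relation r: d_v^(in) - d_{v,r}^(tail)."""
    return in_degree[v] - tail_rel_degree[(v, r)]

print(tail_rel_degree[("Germany", "Has Country")])           # 1
print(other_tail_relation_degree("Germany", "Has Country"))  # 1 (the Borders edge)
```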
## 4. The Proposed Method

Grounded in the observations in Section 3.2, one natural idea to alleviate the degree bias in KGC is to compensate for the triples with low tail-relation degrees. Based on this intuition, we propose a new method for improving the KGC performance of triples with low tail-relation degrees. Our method, **KG-Mixup**, works by augmenting the low tail-relation degree triples during training with synthetic samples. This strategy has the effect of increasing the degree of an entity-relation pair with a low tail-relation degree by creating more shared edges between them. Therefore, KG-Mixup is very general and can further be used in conjunction with any KG embedding technique.

### General Problem

In Section 3.2 we showed that the tail-relation degree of the tail entity strongly correlates with higher performance in KGC. As such we seek to design a method that can increase the performance of such low-degree entity-relation pairs without sacrificing the performance of high-degree pairs. To solve this problem, we consider data augmentation. Specifically, we seek to create synthetic triples for those entity-relation pairs with a low tail-relation degree. In such a way we are creating more training triples that contain those pairs, thereby "increasing" their degree. For each entity-relation pair with a tail-relation degree less than \(\eta\), we add \(k\) synthetic samples, which can be formulated as follows:

\[\tilde{\mathcal{E}}_{v,r}=\begin{cases}\mathcal{E}_{v,r}\cup\{(\tilde{h},\tilde{r},\tilde{t})\}_{i=1}^{k}&d_{v,r}^{(tail)}<\eta,\\ \mathcal{E}_{v,r}&\text{else},\end{cases} \tag{3}\]

where \((h,r,v)\in\mathcal{E}_{v,r}\) are the original training triples with the relation \(r\) and the tail entity \(v\), \((\tilde{h},\tilde{r},\tilde{t})\) is a synthetic sample, and \(\tilde{\mathcal{E}}_{v,r}\) is the new set of triples to use during training.

**Challenges.** We note that creating the synthetic samples as shown in Eq. (3) is non-trivial and there are a number of challenges:

1. How do we produce the synthetic samples for KG triples that contain multiple types of embeddings?
2. How do we promote diversity in the synthetic samples \((\tilde{h},\tilde{r},\tilde{t})\)? We want them to contain sufficient information from the original entity and relation embeddings we are augmenting, while also being distinct from similar triples in \(\mathcal{E}_{v,r}\).
3. How do we achieve such augmentation in a computationally efficient manner?

These challenges motivate us to design a special data augmentation algorithm for knowledge graph completion, and we detail its core techniques in the next subsection.

### KG-Mixup

We now present our solution for producing synthetic samples as described in Eq. (3). Inspired by the popular Mixup (Shoemaker et al., 2017) strategy, we strive to augment the training set by mixing the representations of triples. We draw inspiration from mixup as (1) it is an intuitive and widely used data augmentation method, (2) it is able to promote diversity in the synthetic samples via the randomly drawn value \(\lambda\), and (3) it is computationally efficient (see Section 4.4). We now briefly describe the general mixup algorithm. We denote the representations of two samples as \(x_{1}\) and \(x_{2}\) and their labels \(y_{1}\) and \(y_{2}\). Mixup creates a new sample \(\tilde{x}\) and label \(\tilde{y}\) by combining both the representations and labels via a random value \(\lambda\in[0,1]\) drawn from \(\lambda\sim\text{Beta}(\alpha,\alpha)\) such that:

\[\tilde{x}=\lambda x_{1}+(1-\lambda)x_{2}, \tag{4}\]
\[\tilde{y}=\lambda y_{1}+(1-\lambda)y_{2}. \tag{5}\]
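As a reference point, a minimal NumPy sketch of this vanilla mixup rule (Eqs. (4)-(5)) is given below; the feature vectors and labels are placeholders, and \(\alpha\) is the usual mixup hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Eqs. (4)-(5): convex combination of two samples and of their labels."""
    lam = rng.beta(alpha, alpha)          # lambda ~ Beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_tilde, y_tilde = mixup(np.array([1.0, 0.0]), 1.0, np.array([0.0, 1.0]), 0.0)
```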
We adapt this strategy to our studied problem for a triple \((h,r,t)\) where the tail-relation degree is below a degree threshold, i.e. \(d_{t,r}^{(tail)}<\eta\). For such a triple we aim to augment the training set by creating \(k\) synthetic samples \(\{(\tilde{h},\tilde{r},\tilde{t})\}_{i=1}^{k}\). This is done by mixing the original triple with \(k\) other triples \(\{(h_{i},r_{i},t_{i})\}_{i=1}^{k}\). However, directly adopting mixup to KGC leads to some problems: (1) Since each sample doesn't contain a label (Eq. (5)), we are unable to perform label mixing. (2) While standard mixup randomly selects samples to mix with, we may want to utilize a different selection criterion to better enhance those samples with a low tail-relation degree. (3) Since each sample is composed of multiple components (entities and relations), it's unclear how to mix two samples. We go over these challenges next.

#### 4.2.1. Label Incorporation in KGC

We first tackle how to incorporate the label information as shown in Eq. (5). Mixup was originally designed for classification problems, making the original label mixing straightforward. However, for KGC, we have no associated label for each triple. We therefore consider the entity we are predicting as the label. For a triple \(e_{1}=(h_{1},r_{1},t_{1})\) where we are predicting \(t_{1}\), the label would be considered the entity \(t_{1}\).

#### 4.2.2. Mixing Criteria

Per the original definition of Mixup, we would then mix \(e_{1}\) with a triple belonging to the set \(\{(h_{2},r_{2},t_{2})\in\mathcal{E}\mid t_{2}\neq t_{1}\}\). However, since our goal is to predict \(t_{1}\), we wish to avoid mixing it. Since we want to better predict \(t_{1}\), we need to preserve as much tail (i.e. label) information as possible. As such, we only consider mixing with other triples that share the same tail and belong to the set \(\{(h_{2},r_{2},t_{1})\in\mathcal{E}\mid h_{1}\neq h_{2},r_{1}\neq r_{2}\}\). Our design is similar to SMOTE (Mikolov et al., 2017), where only samples belonging to the same class are combined. We note that while it would be enticing to only consider mixing with triples containing the same entity-relation pairs, i.e. \((h_{2},r_{1},t_{1})\in\mathcal{E}_{t_{1},r_{1}}\), this would severely limit the number of possible candidate triples, as the tail-relation degree can often be as low as one or two for some pairs.

#### 4.2.3. How to Mix?

We now discuss how to perform the mixing of two samples. Given a triple \(e_{1}=(h_{1},r_{1},t_{1})\) of low tail-relation degree, we mix it with another triple that shares the same tail (i.e. label) such that \(e_{2}=(h_{2},r_{2},t_{1})\). Applying Eq. (4) to \(e_{1}\) and \(e_{2}\), a synthetic triple \(\tilde{e}=(\tilde{h},\tilde{r},\tilde{t})\) is equal to:

\[\tilde{e}=\lambda e_{1}+(1-\lambda)e_{2}, \tag{6}\]
\[\tilde{e}=\lambda(h_{1},r_{1},t_{1})+(1-\lambda)(h_{2},r_{2},t_{1}), \tag{7}\]
\[\tilde{e}=\lambda(\mathbf{x}_{h_{1}},\mathbf{x}_{r_{1}},\mathbf{x}_{t_{1}})+(1-\lambda)(\mathbf{x}_{h_{2}},\mathbf{x}_{r_{2}},\mathbf{x}_{t_{1}}), \tag{8}\]

where \(\mathbf{x}_{h_{i}}\) and \(\mathbf{x}_{r_{i}}\) represent the entity and relation embeddings, respectively. We apply the weighted sum to the head, relation, and tail separately. Each mixed embedding is therefore equal to:

\[x_{\tilde{h}}=\lambda\mathbf{x}_{h_{1}}+(1-\lambda)\mathbf{x}_{h_{2}}, \tag{9}\]
\[x_{\tilde{r}}=\lambda\mathbf{x}_{r_{1}}+(1-\lambda)\mathbf{x}_{r_{2}}, \tag{10}\]
\[x_{\tilde{t}}=\mathbf{x}_{t_{1}}. \tag{11}\]
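A small sketch of Eqs. (9)-(11) follows: only the head and relation embeddings of the two triples are interpolated, while the shared tail embedding, which acts as the label, is left untouched. The array shapes and names are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_triple(x_h1, x_r1, x_h2, x_r2, x_t, alpha=1.0):
    """Eqs. (9)-(11): mix two triples (h1, r1, t) and (h2, r2, t) sharing the tail t."""
    lam = rng.beta(alpha, alpha)
    x_h_tilde = lam * x_h1 + (1 - lam) * x_h2   # Eq. (9)
    x_r_tilde = lam * x_r1 + (1 - lam) * x_r2   # Eq. (10)
    x_t_tilde = x_t                             # Eq. (11): tail embedding is kept as-is
    return x_h_tilde, x_r_tilde, x_t_tilde

# Example with random 4-dimensional embeddings.
h1, r1, h2, r2, t = (rng.standard_normal(4) for _ in range(5))
h_tilde, r_tilde, t_tilde = mix_triple(h1, r1, h2, r2, t)
```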
We use Figure 1 to illustrate an example. Let \(e_{1}=(\textit{Europe, Has Country, Germany})\) be the triple we are augmenting. We mix it with another triple with the tail entity _Germany_. We consider the triple \(e_{2}=(\textit{Belgium, Borders, Germany})\). The mixed triple is represented as \(\tilde{e}=(\textit{Europe + Belgium, Has Country + Borders, Germany})\). As \(e_{1}\) contains the continent that _Germany_ belongs to and \(e_{2}\) has the country it borders, we can understand the synthetic triple \(\tilde{e}\) as conveying the geographic location of _Germany_ inside of _Europe_. This is helpful when predicting _Germany_ in the original triple \(e_{1}\), since the synthetic sample imbues the representation of _Germany_ with more specific geographic information.

### KG-Mixup Algorithm for KGC

We utilize the binary cross-entropy loss when training each model. The loss is optimized using the Adam optimizer (King and Ba, 2015). We also include a hyperparameter \(\beta\) for weighting the loss on the synthetic samples. The loss on a model with parameters \(\theta\) is therefore:

\[\mathcal{L}(\theta)=\mathcal{L}_{KG}(\theta)+\beta\mathcal{L}_{\text{Mix}}(\theta)\,, \tag{12}\]

where \(\mathcal{L}_{KG}\) is the loss on the original KG triples and \(\mathcal{L}_{\text{Mix}}\) is the loss on the synthetic samples. The full algorithm is displayed in Algorithm 1. We note that we first pre-train the model before training with KG-Mixup, to obtain the initial entity and relation representations. This is done as it allows us to begin training with stronger entity and relation representations, thereby improving the generated synthetic samples.

### Algorithmic Complexity

We denote the algorithmic complexity of a model \(f\) (e.g. ConvE (Cavet et al., 2017) or TuckER (Cavet et al., 2017)) for a single sample \(e\) as \(O(f)\). Assuming we generate \(N\) negative samples per training sample, the training complexity of \(f\) over a single epoch is:

\[O\left(N\cdot|\mathcal{E}|\cdot O(f)\right), \tag{13}\]

where \(|\mathcal{E}|\) is the number of training samples. In KG-Mixup, in addition to scoring both the positive and negative samples, we also score the synthetic samples created for all samples with a tail-relation degree below a threshold \(\eta\). We refer to that set of samples below the degree threshold as \(\mathcal{E}_{\text{thresh}}\). We create \(k\) synthetic samples per \(e\in\mathcal{E}_{\text{thresh}}\). As such, our algorithm scores an additional \(k\cdot|\mathcal{E}_{\text{thresh}}|\) samples for a total of \(N\cdot|\mathcal{E}|+k\cdot|\mathcal{E}_{\text{thresh}}|\) samples per epoch. Typically the number of negative samples \(N\gg k\). Both ConvE and TuckER use all possible negative samples per training sample, while we find \(k=5\) works well. Furthermore, by definition, \(\mathcal{E}_{\text{thresh}}\subseteq\mathcal{E}\), rendering \(|\mathcal{E}|\gg|\mathcal{E}_{\text{thresh}}|\). We can thus conclude that \(N\cdot|\mathcal{E}|\gg k\cdot|\mathcal{E}_{\text{thresh}}|\). We can therefore express the complexity of KG-Mixup as:

\[\approx O\left(N\cdot|\mathcal{E}|\cdot O(f)\right). \tag{14}\]

This highlights the efficiency of our algorithm, as its complexity is approximately equivalent to the standard training procedure.
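To make the overall training objective concrete, here is a hedged PyTorch-style sketch of one KG-Mixup update in the spirit of Eq. (12); it is not the authors' Algorithm 1. The interfaces `model.ent`, `model.rel`, and `model.score` are hypothetical, a softmax cross-entropy over all candidate tails stands in for the binary cross-entropy used in the paper, and `mix_batch` is assumed to pair each low tail-relation-degree triple with a partner sharing the same tail.

```python
import torch
import torch.nn.functional as F

def kg_mixup_step(model, optimizer, batch, mix_batch, beta, alpha=1.0):
    """One optimization step sketching Eq. (12): L = L_KG + beta * L_Mix."""
    h, r, t = batch                              # index tensors of original triples
    h1, r1, h2, r2, t_mix = mix_batch            # low-degree triples paired with same-tail partners

    loss_kg = F.cross_entropy(model.score(model.ent(h), model.rel(r)), t)

    lam = torch.distributions.Beta(alpha, alpha).sample()
    h_tilde = lam * model.ent(h1) + (1 - lam) * model.ent(h2)   # Eq. (9)
    r_tilde = lam * model.rel(r1) + (1 - lam) * model.rel(r2)   # Eq. (10)
    loss_mix = F.cross_entropy(model.score(h_tilde, r_tilde), t_mix)

    loss = loss_kg + beta * loss_mix             # Eq. (12)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the pre-training strategy described above, a phase using only \(\mathcal{L}_{KG}\) would precede calls to this step.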
## 5. Regularizing Effect of KG-Mixup

In this section, we examine the properties of KG-Mixup and show it can be formulated as a form of regularization on the entity and relation embeddings of low tail-relation degree samples, following previous works (Cavet et al., 2017; Wang et al., 2018). We denote the mixup loss with model parameters \(\theta\) over samples \(S\) as \(\mathcal{L}_{\text{Mix}}(\theta)\). The set \(S\) contains those samples with a tail-relation degree below a threshold \(\eta\) (see line 6 in Algorithm 1). The embeddings for each sample \(e_{i}=(h_{i},r_{i},t)\in S\) are mixed with those of a random sample \(e_{j}=(h_{j},r_{j},t)\) that shares the same tail. The embeddings are combined via a random value \(\lambda\sim\text{Beta}(\alpha,\alpha)\) as shown in Eq. (9), thereby producing the synthetic sample \(\tilde{e}=(\tilde{h},\tilde{r},t)\). The formulation for \(\mathcal{L}_{\text{Mix}}(\theta)\) is therefore:

\[\mathcal{L}_{\text{Mix}}(\theta)=\frac{1}{k|S|}\sum_{i=1}^{|S|}\sum_{j=1}^{k}l_{\theta}\left(\tilde{e},\tilde{y}\right), \tag{15}\]

where \(k\) synthetic samples are produced for each sample in \(S\), and \(\tilde{y}\) is the mixed binary label. Following Theorem 1 in Carratino et al. (2017), we can rewrite the loss as the expectation over the synthetic samples as

\[\mathcal{L}_{\text{Mix}}(\theta)=\frac{1}{|S|}\sum_{i=1}^{|S|}\mathbb{E}_{\lambda,j}\,l_{\theta}\left(\tilde{e},\tilde{y}\right), \tag{16}\]

where \(\lambda\sim\mathcal{D}_{\lambda}\) and \(j\sim\text{Uniform}(\mathcal{E}_{t})\). The distribution \(\mathcal{D}_{\lambda}=\text{Beta}_{\left[\frac{1}{2},1\right]}(\alpha,\alpha)\) and the set \(\mathcal{E}_{t}\) contains all samples \((h_{j},r_{j},t)\) with tail \(t\). Since the label \(y\) for both samples \(i\) and \(j\) is always \(1\), rendering \(\tilde{y}=1\), we can simplify Eq. (16), arriving at:

\[\mathcal{L}_{\text{Mix}}(\theta)=\frac{1}{|S|}\sum_{i=1}^{|S|}\mathbb{E}_{\lambda,j}\,l_{\theta}\left(\tilde{e}\right). \tag{17}\]

For the above loss function, we have the following theorem.

**Theorem 5.1**.: _The mixup loss \(\mathcal{L}_{\text{Mix}}(\theta)\) defined in Eq. (17) can be rewritten as follows, where the loss function \(l_{\theta}\) is the binary cross-entropy loss, \(\mathcal{L}(\theta)\) is the loss on the original set of augmented samples \(S\), and \(\mathcal{R}_{1}(\theta)\) and \(\mathcal{R}_{2}(\theta)\) are two regularization terms:_

\[\mathcal{L}_{\text{Mix}}(\theta)=\mathcal{L}(\theta)+\mathcal{R}_{1}(\theta)+\mathcal{R}_{2}(\theta). \tag{18}\]

_The regularization terms are given by the following, where each mixed sample \(\tilde{e}\) is composed of a low tail-relation degree sample \(e_{i}\) and another sample with the same tail entity \(e_{j}\):_

\[\mathcal{R}_{1}(\theta)=\frac{\tau}{|S|}\sum_{i=1}^{|S|}\sum_{j=1}^{k}\left(1-\sigma\left(f(e_{i})\right)\right)\frac{\partial f(e_{i})^{T}}{\partial x_{\tilde{h}}}\Delta h, \tag{19}\]

\[\mathcal{R}_{2}(\theta)=\frac{\tau}{|S|}\sum_{i=1}^{|S|}\sum_{j=1}^{k}\left(1-\sigma\left(f(e_{i})\right)\right)\frac{\partial f(e_{i})^{T}}{\partial x_{\tilde{r}}}\Delta r, \tag{20}\]

_with \(\tau=\mathbb{E}_{\lambda\sim\mathcal{D}_{\lambda}}(1-\lambda)\), \(\Delta h=\left(x_{h_{j}}-x_{h_{i}}\right)\), \(\Delta r=\left(x_{r_{j}}-x_{r_{i}}\right)\), \(\sigma\) is the sigmoid function, and \(f\) is the score function._

We provide the detailed proof of Theorem 5.1 in Appendix A.6. Examining the terms in Eq. (18), we can draw the following understandings on KG-Mixup:
1. The inclusion of \(\mathcal{L}(\theta)\) implies that the low tail-relation degree samples are scored an additional time when being mixed. This can be considered as a form of oversampling on the low tail-relation degree samples.
2. If the probability is very high, i.e. \(\sigma(f(e_{i}))\approx 1\), both \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) cancel out. This is intuitive, as if the current parameters perform well for the original low-degree sample, there is no need to make any adjustments.
3. We can observe that \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) enforce some regularization on the derivatives as well as the difference between the embeddings \(\Delta h\) and \(\Delta r\). This motivates us to further examine the difference between the embeddings. In Section 6.3, we find that our method does indeed produce more similar embeddings, indicating that our method exerts a smoothing effect among mixed samples.

## 6. Experiment

In this section we conduct experiments to demonstrate the effectiveness of our approach on multiple benchmark datasets. We further compare the results of our framework to other methods commonly used to address bias. In particular we study if KG-Mixup can (a) improve overall KGC performance and (b) increase performance on low tail-relation degree triples without degrading performance on other triples. We further conduct studies examining the effect of the regularization terms, ascertaining the importance of each component in our framework, and the ability of KG-Mixup to improve model calibration.

### Experimental Settings

#### 6.1.1. Datasets

We conduct experiments on three datasets including FB15K-237 (Zhu et al., 2017), CoDEx-M (Zhu et al., 2017), and NELL-995 (Zhu et al., 2017). We omit the commonly used dataset WN18RR (Zhu et al., 2017), as a majority of its entities have a degree less than or equal to 3, and as such it does not exhibit any degree bias towards triples with a low tail-relation degree. The statistics of each dataset are shown in Table 6.

#### 6.1.2. Baselines

We compare the results of our method, KG-Mixup, with multiple popular methods proposed for addressing imbalance problems. Such methods can be used to mitigate bias caused by the initial imbalance. In our case, an imbalance in tail-relation degree causes algorithms to be biased against triples of low tail-relation degree. Specifically, we implement: (a) **Over-Sampling** triples below a degree threshold \(\eta\); we over-sample \(\eta-d^{(tail)}_{t,r}\) times. (b) **Loss Re-Weighting** (Zhu et al., 2017), which assigns a higher loss to triples with a low tail-relation degree. (c) **Focal Loss** (Zhu et al., 2017), which assigns a higher weight to misclassified samples (e.g. low degree triples).

#### 6.1.3. Evaluation Metrics

To evaluate the model performance on the test set, we report the mean reciprocal rank (MRR) and the Hits@k for \(k=1,10\). Following (Beng et al., 2017), we report the performance using the filtered setting.

#### 6.1.4. Implementation Details

In this section, we detail the training procedure used to train our framework KG-Mixup. We conduct experiments on our framework using two different KG embedding models, ConvE (Zhu et al., 2017) and TuckER (Zhu et al., 2017). Both methods are widely used to learn KG embeddings and serve as a strong indicator of our framework's efficacy. We use stochastic weight averaging (SWA) (Zhu et al., 2017) when training our model. SWA uses a weighted average of the parameters at different checkpoints during training for inference. Previous work (Zhu et al., 2017) has shown that SWA in conjunction with data augmentation can increase performance. Lastly, the synthetic loss weighting parameter \(\beta\) is determined via hyperparameter tuning on the validation set.
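As an aside, the weight-averaging ingredient can be sketched as follows; the tiny model and random data are placeholders rather than the paper's setup, and only illustrate how SWA keeps a running average of the weights visited during training.

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel

model = nn.Linear(8, 1)                          # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
swa_model = AveragedModel(model)                 # holds the running weight average

x, y = torch.randn(32, 8), torch.randn(32, 1)    # placeholder data
for epoch in range(5):
    optimizer.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    optimizer.step()
    if epoch >= 1:                               # start averaging after a warm-up epoch
        swa_model.update_parameters(model)
# swa_model (the averaged weights) is then used at inference time.
```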
### Main Results

In this subsection we evaluate KG-Mixup on multiple benchmarks, comparing its test performance against the baseline methods. We first report the overall performance of each method on the three datasets. We then report the performance for various degree bins. The top results are bolded with the second best underlined. Note that the **Standard** method refers to training without any additional method to alleviate bias.

Table 1 contains the overall results of each method and dataset. The performance is reported for both ConvE and TuckER. KG-Mixup achieves the best MRR and Hits@10 on each dataset for ConvE. For TuckER, KG-Mixup further achieves the best MRR on each dataset and the top Hits@10 for two. Note that the other three baseline methods used for alleviating bias, on average, perform poorly. This may be due to their incompatibility with relational structured data where each sample contains multiple components. It suggests that we need dedicated efforts to handle the degree bias in KGC.

We further report the MRR of each method for triples of different tail-relation degree. We split the triples into four degree bins of zero, low, medium and high degree. The range of each bin is [0, 1), [1, 10), [10, 50), and [50, \(\infty\)), respectively. KG-Mixup achieves a notable increase in performance on low tail-relation degree triples for each dataset and embedding model. KG-Mixup increases the MRR on low degree triples by 9.8% and 5.3% for ConvE and TuckER, respectively, over the standard-trained models on the three datasets. In addition to the strong increase in low degree performance, KG-Mixup is also able to retain its performance for high degree triples. The MRR on high tail-relation degree triples degrades, on average, only 1% on ConvE between our method and standard training, and actually increases 1% for TuckER. Interestingly, the performance of KG-Mixup on the triples with zero tail-relation degree isn't as strong as on the low degree triples. We argue that such triples are more akin to the zero-shot learning setting and therefore different from the problem we are studying.

Lastly, we further analyzed the improvement of KG-Mixup over standard training by comparing the difference in performance between the two groups via the paired t-test. We found that for the results in Table 1, 5/6 are statistically significant (p\(<\)0.05). Furthermore, for the performance on low tail-relation degree triples in Table 2, all results (6/6) are statistically significant. This gives further justification that our method can improve both overall and low tail-relation degree performance.

### Regularization Analysis

In this subsection we empirically investigate the regularization effects of KG-Mixup discussed in Section 5. In Section 5 we demonstrated that KG-Mixup can be formulated as a form of regularization. We further showed that one of the quantities minimized is the difference between the head and relation embeddings of the two samples being mixed, \(e_{i}\) and \(e_{j}\), such that \((x_{h_{j}}-x_{h_{i}})\) and \((x_{r_{j}}-x_{r_{i}})\). Here \(e_{i}\) is the low tail-relation degree sample being augmented and \(e_{j}\) is another sample that shares the same tail.
We deduce from this that for low tail-relation degree samples, KG-Mixup may cause their head and relation embeddings to be more similar to those of other samples that share the same tail. Such a property forms a smoothing effect on the mixed samples, which facilitates a transfer of information to the embeddings of the low tail-relation degree sample. We investigate this by comparing the head and relation embeddings of all samples that are augmented with all the head and relation embeddings that also share the same tail entity.

We denote the set of all samples below some tail-relation degree threshold \(\eta\) as \(\mathcal{E}_{\text{thresh}}\) and all samples with tail entity \(t\) as \(\mathcal{E}_{t}\). Furthermore, we refer to all head entities that are connected to a tail \(t\) as \(\mathcal{H}_{t}=\{h_{j}\mid(h_{j},r_{j},t)\in\mathcal{E}_{t}\}\) and all such relations as \(\mathcal{R}_{t}=\{r_{j}\mid(h_{j},r_{j},t)\in\mathcal{E}_{t}\}\). For each sample \((h_{i},r_{i},t)\in\mathcal{E}_{\text{thresh}}\) we compute the mean Euclidean distance between (1) the head embedding \(\mathbf{x}_{h_{i}}\) and all \(\mathbf{x}_{h_{j}}\in\mathcal{H}_{t}\) and (2) the relation embedding \(\mathbf{x}_{r_{i}}\) and all \(\mathbf{x}_{r_{j}}\in\mathcal{R}_{t}\). For a single sample \(e_{i}\), the mean head and relation embedding distances are given by \(h_{\text{dist}}(e_{i})\) and \(r_{\text{dist}}(e_{i})\), respectively. Lastly, we take the mean of the per-sample head and relation embedding distances across all \(e_{i}\in\mathcal{E}_{\text{thresh}}\):

\[D_{\text{head}}=\text{Mean}\left(h_{\text{dist}}(e_{i})\mid e_{i}\in\mathcal{E}_{\text{thresh}}\right), \tag{21}\]
\[D_{\text{rel}}=\text{Mean}\left(r_{\text{dist}}(e_{i})\mid e_{i}\in\mathcal{E}_{\text{thresh}}\right). \tag{22}\]

\begin{table} \begin{tabular}{l|l|l l l l|l l l l l l} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**FB15K-237**} & \multicolumn{4}{c}{**NELL-995**} & \multicolumn{4}{c}{**CoDEx-M**} \\ \cline{3-14} & & MRR & H@1 & H@10 & MRR & H@1 & H@10 & MRR & H@1 & H@10 \\ \hline \multirow{4}{*}{**ConvE**} & Standard & 33.04 & 23.95 & 51.23 & 50.87 & **44.14** & 61.48 & 31.70 & **24.34** & 45.60 \\ \cline{2-14} & + Over-Sampling & 30.45 & 21.85 & 47.81 & 48.63 & 40.99 & 60.78 & 27.13 & 20.17 & 40.11 \\ & + Loss Re-weighting & 32.32 & 23.32 & 50.19 & 50.89 & 43.83 & 62.17 & 28.38 & 21.12 & 42.68 \\ & + Focal Loss & 32.08 & 23.29 & 50.09 & 50.43 & 44.00 & 60.70 & 27.99 & 20.93 & 41.48 \\ & + KG-Mixup (Ours) & **34.33** & **25.00** & **53.11** & **51.08** & 43.52 & **63.22** & **31.71** & 23.49 & **47.49** \\ \hline \multirow{4}{*}{**TuckER**} & Standard & 35.19 & 26.06 & 53.47 & 52.11 & 45.51 & **62.26** & 31.67 & **24.46** & 45.73 \\ \cline{2-14} & + Over-Sampling & 34.77 & 25.48 & 53.53 & 50.36 & 44.04 & 60.40 & 29.97 & 22.27 & 44.19 \\ \cline{1-1} & + Loss Re-weighting & 35.25 & 26.08 & 53.34 & 51.91 & 45.76 & 61.05 & 31.58 & 24.32 & 45.41 \\ \cline{1-1} & + Focal Loss & 34.02 & 24.79 & 52.48 & 49.57 & 43.28 & 58.91 & 31.47 & 24.05 & 45.60 \\ \cline{1-1} & + KG-Mixup (Ours) & **35.83** & **26.37** & **54.78** & **52.24** & **45.78** & 62.14 & **31.90** & 24.15 & **46.54** \\ \hline \hline \end{tabular} \end{table} Table 1. Knowledge Graph Completion (KGC) Comparison.
\begin{table} \begin{tabular}{l|l|l l l l|l l l l|l l l} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**FB15K-237**} & \multicolumn{4}{c}{**NELL-995**} & \multicolumn{4}{c}{**CoDEx-M**} \\ \cline{3-14} & & Zero & Low & Medium & High & Zero & Low & Medium & High & Zero & Low & Medium & High \\ \hline \multirow{4}{*}{**ConvE**} & Standard & 7.34 & 12.35 & 34.95 & **70.97** & 35.37 & 57.16 & **65.99** & **91.90** & 8.38 & _7.97_ & **34.64** & **65.29** \\ \cline{2-14} & + Over-Sampling & 8.37 & 12.45 & 33.01 & 68.75 & **36.67** & 57.33 & 56.09 & 79.57 & 8.09 & 7.52 & 29.51 & 54.80 \\ & + Loss Re-weighting & 5.03 & 9.89 & 30.56 & 63.34 & 36.16 & 57.96 & 63.69 & 89.52 & 8.79 & 7.09 & 29.09 & 58.10 \\ & + Focal Loss & 7.52 & 11.89 & 33.96 & 68.75 & 34.72 & 58.00 & 65.60 & 90.89 & 6.78 & 6.80 & 33.42 & 56.96 \\ & + KG-Mixup (Ours) & **10.90** & **13.92** & **35.74** & 70.72 & 35.38 & **59.56** & 65.41 & 90.64 & **9.74** & **8.96** & 32.63 & 64.38 \\ \hline \multirow{4}{*}{**TuckER**} & Standard & 10.41 & 14.65 & 38.49 & 71.39 & **37.02** & 58.21 & 69.17 & 90.55 & 9.99 & 8.29 & **35.23** & 63.94 \\ \cline{1-1} & + Over-Sampling & **12.25** & 14.28 & 36.79 & 70.50 & 34.50 & 55.46 & 65.68 & **93.47** & **10.98** & 7.76 & 32.50 & 60.25 \\ \cline{1-1} & + Loss Re-weighting & 10.61 & 14.40 & 37.66 & **72.28** & 36.59 & 59.00 & 67.19 & 91.17 & 10.44 & _8.62_ & 35.00 & 63.39 \\ \cline{1-1} & + Focal Loss & 10.84 & 13.53 & 37.00 & 69.28 & 34.18 & 53.60 & 62.67 & 91.02 & 9.68 & 8.17 & 33.95 & _64.13_ \\ \cline{1-1} & + KG-Mixup (Ours) & **11.83** & **15.61** & **39.45** & 70.86 & 36. \\ \hline \hline \end{tabular} \end{table} Table 2. MRR on triples split by tail-relation degree (Zero/Low/Medium/High).

Both \(D_{\text{head}}\) and \(D_{\text{rel}}\) are shown in Table 3 for models fitted with and without KG-Mixup. We display the results for ConvE on FB15K-237. For both the mean head and relation distances, KG-Mixup produces smaller distances than the standardly-trained model. This aligns with our previous theoretical understanding of the regularization effect of the proposed method: for samples which we augment during training, their head and relation embeddings are more similar to those embeddings belonging to other samples that share the same tail. This to some extent forms a smoothing effect, which is helpful for learning better representations for the low-degree triples.

### Ablation Study

In this subsection we conduct an ablation study of our method on the FB15K-237 dataset using ConvE and TuckER. We ablate both the data augmentation strategy and the use of stochastic weight averaging (SWA) separately to ascertain their effect on performance. We report the overall test MRR and the low tail-relation degree MRR. The results of the study are shown in Table 4. KG-Mixup achieves the best overall performance on both embedding models. Using only our data augmentation strategy leads to an increase in both the low degree and overall performance. On the other hand, while the SWA-only model leads to an increase in overall performance, it degrades the low degree performance. We conclude from these observations that the data augmentation component of KG-Mixup is vital for improving low degree performance, while SWA helps better maintain or even improve performance on the non-low degree triples.

### Parameter Study

In this subsection we study how varying the number of generated synthetic samples \(k\) and the degree threshold \(\eta\) affects the performance of KG-Mixup. We consider the values \(k\in\{1,5,10,25\}\) and \(\eta\in\{2,5,15\}\).
We report the MRR for both TuckER and ConvE on the CoDEx-M dataset. Figure 3(a) displays the performance when varying the degree threshold. Both methods peak at a value of \(\eta=5\) and perform worst at \(\eta=15\). Figure 3(b) reports the MRR when varying the number of synthetic samples generated. Both methods peak early, with ConvE actually performing best at \(k=1\). Furthermore, generating too many samples harms performance, as evidenced by the sharp drop in MRR occurring after \(k=5\).

### Model Calibration

In this subsection we demonstrate that KG-Mixup is effective at improving the calibration of KG embedding models. Model calibration (Krizhevsky et al., 2015) is concerned with how well calibrated a model's prediction probabilities are with its accuracy. Previous work (Krizhevsky et al., 2015) has discussed the desirability of calibration to minimize bias between different groups in the data (e.g. samples of differing degree). Other work (Srivastava et al., 2016) has drawn the connection between out-of-distribution generalization and model calibration, which while not directly applicable to our problem is still desirable. Relevant to our problem, Thulasidasan et al. (2016) have shown that Mixup is effective at calibrating deep models for the tasks of image and text classification. As such, we investigate if KG-Mixup is helpful at calibrating KG embedding models for KGC.

We compared the expected calibration error (see Appendix A.5 for more details) between models trained with KG-Mixup and those without on multiple datasets. We report the calibration in Table 5 for all samples and those with a low tail-relation degree. We find that in every instance KG-Mixup produces a better calibrated model for both ConvE and TuckER. These results suggest another reason for why KG-Mixup works: a well-calibrated model better minimizes the bias between different groups in the data (Krizhevsky et al., 2015). This is integral for our problem where certain groups of data (i.e. triples with low tail-relation degree) feature bias.

\begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**ConvE**} & \multicolumn{2}{c}{**TuckER**} \\ \cline{2-5} & Low & Overall & Low & Overall \\ \hline Standard & 12.35 & 33.04 & 14.65 & 35.19 \\ + SWA & 12.27 & 33.69 & 14.18 & 35.77 \\ + Augmentation & **13.99** & 33.67 & **15.64** & 35.62 \\ KG-Mixup (Ours) & 13.92 & **34.33** & 15.61 & **35.83** \\ \hline \hline \end{tabular} \end{table} Table 4. Ablation Study on FB15K-237.

Figure 3. MRR of TuckER and ConvE on CoDEx-M (a) when varying the degree threshold and (b) when varying the number of samples generated.

\begin{table} \begin{tabular}{l|c|c} \hline \hline **Embedding Type** & **Head Entity** & **Relation** \\ \hline w/o KG-Mixup & 1.18 & 1.21 \\ KG-Mixup & 1.09 & 1.13 \\ \hline \% Decrease & -7.6\% & -6.6\% \\ \hline \hline \end{tabular} \end{table} Table 3. Mean Embedding Distances on FB15K-237.
\begin{table} \begin{tabular}{l|l|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**FB15K-237**} & \multicolumn{2}{c}{**NELL-995**} & \multicolumn{2}{c}{**CoDEx-M**} \\ \cline{3-8} & & Low & Overall & Low & Overall & Low & Overall \\ \hline \multirow{2}{*}{**ConvE**} & Standard & 0.19 & 0.15 & 0.34 & 0.27 & 0.28 & 0.26 \\ & KG-Mixup & 0.08 & 0.05 & 0.08 & 0.08 & 0.02 & 0.09 \\ \hline \multirow{2}{*}{**TuckER**} & Standard & 0.20 & 0.35 & 0.63 & 0.56 & 0.05 & 0.34 \\ & KG-Mixup & 0.07 & 0.1 & 0.26 & 0.20 & 0.01 & 0.06 \\ \hline \hline \end{tabular} \end{table} Table 5. Expected Calibration Error (ECE). Lower is better.

## 7. Conclusion

We explore the problem of degree bias in KG embeddings. Through empirical analysis we find that when predicting the tail \(t\) for a triple \((h,r,t)\), a strong indicator of performance is the number of edges where \(r\) and \(t\) co-occur as the relation and tail, respectively. We refer to this as the tail-relation degree. We therefore propose a new method, KG-Mixup, that can be used in conjunction with any KG embedding technique to improve performance on triples with a low tail-relation degree. It works by augmenting lower degree entity-relation pairs with additional synthetic triples during training. To create synthetic samples we adapt the Mixup [42] strategy to KGs. Experiments validate its usefulness. For future work we plan on expanding our method to path-based techniques such as NBFNet [49].

## Acknowledgments

This research is supported by the National Science Foundation (NSF) under grant numbers CNS1815636, IIS1845081, IIS19528278, IIS1955285, IIS2212032, IIS2212144, IOS2107215, and IOS2035472, the Army Research Office (ARO) under grant number W911NF-21-1-0198, the Home Depot, Cisco Systems Inc, Amazon Faculty Award, Johnson&Johnson and SNAP.
2305.01285
Discontinuous Galerkin Methods with Generalized Numerical Fluxes for the Vlasov-Viscous Burgers' System
In this paper, a semi-discrete numerical scheme for the approximation of the periodic Vlasov-viscous Burgers' system is developed and analyzed. The scheme is based on the coupling of discontinuous Galerkin approximations for the Vlasov equation and local discontinuous Galerkin approximations for the viscous Burgers' equation. Both methods use generalized numerical fluxes. The proposed scheme is both mass and momentum conservative. Based on generalized Gauss-Radau projections, optimal rates of convergence in the case of smooth compactly supported initial data are derived. Finally, computational results confirm our theoretical findings.
Harsha Hutridurga, Krishan Kumar, Amiya K. Pani
2023-05-02T09:37:47Z
http://arxiv.org/abs/2305.01285v1
Discontinuous Galerkin methods with generalized numerical fluxes for the Vlasov-viscous Burgers' system ###### Abstract. In this paper, a semi-discrete numerical scheme for the approximation of the periodic Vlasov-viscous Burgers' system is developed and analyzed. The scheme is based on the coupling of discontinuous Galerkin approximations for the Vlasov equation and local discontinuous Galerkin approximations for the viscous Burgers' equation. Both methods use generalized numerical fluxes. The proposed scheme is both mass and momentum conservative. Based on generalized Gauss-Radau projections, optimal rates of convergence in the case of smooth compactly supported initial data are derived. Finally, computational results confirm our theoretical findings. *Corresponding Author **Key words.** Vlasov-viscous Burgers' system, discontinuous Galerkin method, LDG method, generalized numerical fluxes, discrete mass and momentum conservation, generalized Gauss-Radau projection, optimal error estimates, numerical experiments. **AMS Subject Classification.** 65N30, 65M60, 65M12, 65M15, 82D10. ## 1. Introduction The simplest kinematic model for nonevaporating dilute two-phase flow that takes into account only the exchange of momentum between the two phases is described by the following coupled system of a viscous Burgers' equation and a Vlasov-type equation: \[\left\{\begin{aligned} \partial_{t}f+v\partial_{x}f+ \partial_{v}\left(\left(u-v\right)f\right)&=0&\text{in} \quad(0,T]\times I\times\mathbb{R},\\ f(0,x,v)&=f_{0}(x,v)&\text{in}\quad I \times\mathbb{R},\end{aligned}\right. \tag{1.1}\] \[\left\{\begin{aligned} \partial_{t}u+u\partial_{x}u-\epsilon \partial_{x}^{2}u&=\rho V-\rho u&\text{in}\quad(0,T] \times I,\\ u(0,x)&=u_{0}(x)&\text{in}\quad I,\end{aligned}\right. \tag{1.2}\] with periodic boundary conditions: \[f(t,L,v)=f(t,0,v),\;\;u(t,L)=u(t,0)\quad\text{and}\quad u_{x}(t,L)=u_{x}(t,0).\] Here, \(I=[0,L]\), \(u=u(t,x)\) represents the fluid velocity, \(f=f(t,x,v)\) denotes the distribution function and \(\epsilon>0\) represents the viscosity of the fluid. The coupling between the two equations is due to the drag force, which is proportional to the relative velocity \((u-v)\). In the above model, \(f(t,x,v)\) describes the dispersed phase whereas \(u(t,x)\) describes the background continuous phase. Such dispersed two-phase systems are relevant in many applications, for example in modeling combustion phenomena in diesel engines, where a spray of droplets is injected into the device and mixed with the gas prior to combustion [11, 12]. Here, the dispersed phase is the spray, whereas the continuous phase is the surrounding gas. The well-posedness of (1.1)-(1.2), namely the global existence and uniqueness of a solution \(u\in C^{0}([0,T];C^{2}(\mathbb{R}))\) and \(f\in C^{0}([0,T];C^{0}_{0}(\mathbb{R}\times\mathbb{R}))\) for \(u_{0}\in C^{2}(\mathbb{R})\) and \(f_{0}\in C^{1}_{0}(\mathbb{R}\times\mathbb{R})\), was proved by Domelevo and Roquejoffre in [13]. Further, the global existence of a weak solution \((u,f)\in L^{2}(0,T;H^{1}(\mathbb{R}))\times L^{\infty}(0,T;\mathcal{M}^{1}(\mathbb{R}\times\mathbb{R}))\) for \((u_{0},f_{0})\in L^{2}(\mathbb{R})\times L^{1}(\mathbb{R}\times\mathbb{R})\) was shown by Goudon in [1], where \(\mathcal{M}^{1}(\mathbb{R}\times\mathbb{R})\) stands for the set of bounded measures on the domain \(\mathbb{R}\times\mathbb{R}\).
In this paper, we develop and analyze a semi-discrete numerical scheme for the system (1.1)-(1.2) posed on the domain \(I\times\mathbb{R}\) with periodic boundary conditions in the \(x\)-variable: a discontinuous Galerkin (DG) discretization of the Vlasov equation is coupled with a local discontinuous Galerkin (LDG) discretization of the viscous Burgers' equation, both employing generalized numerical fluxes. The resulting scheme conserves mass and momentum at the discrete level, and optimal error estimates are derived for smooth, compactly supported initial data. ## 2. Preliminaries In (1.2), the local density \(\rho\) and the mean velocity \(V\) of the dispersed phase are defined by \[\rho(t,x)=\int_{\mathbb{R}}f(t,x,v)\,\mathrm{d}v\quad\text{and}\quad V(t,x)=\frac{1}{\rho}\int_{\mathbb{R}}f(t,x,v)\,v\,\mathrm{d}v.\] Now define mass and momentum, respectively, by \[\int_{\mathbb{R}}\int_{I}f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v\quad\text{and}\quad\int_{\mathbb{R}}\int_{I}vf(t,x,v)\,\mathrm{d}x\,\mathrm{d}v.\] We denote the \(k^{th}\) order velocity moments by \[m_{k}f(t,x)=\int_{\mathbb{R}}|v|^{k}f(t,x,v)\,\mathrm{d}v,\quad\text{for}\quad k\in\mathbb{N}\cup\{0\}.\] Throughout this paper, we use standard notation for Sobolev spaces. We denote by \(W^{m,p}\) the \(L^{p}\)-Sobolev space of order \(m\geq 0\) and by \(C^{1}_{0}(\mathbb{R}\times\mathbb{R})\) the class of \(C^{1}\) functions on \(\mathbb{R}\times\mathbb{R}\) which are compactly supported. Throughout this manuscript, any function defined on \(I\) is assumed to be periodic in the \(x\)-variable. The following theorem shows existence and uniqueness of the classical solution to (1.1)-(1.2) posed on \(\mathbb{R}\times\mathbb{R}\) (for detailed proof, see [1, Theorem 2.1, p. 65]). **Theorem 2.1**.: _Let \(u_{0}\in C^{2}(\mathbb{R})\) and \(f_{0}\in C^{1}_{0}(\mathbb{R}\times\mathbb{R}),f_{0}\geq 0\) be given. Then, the Vlasov-viscous Burgers' system has a unique solution \((u,f)\in C^{0}([0,\infty);C^{2}(\mathbb{R}))\times C^{0}([0,\infty);C^{1}_{0}(\mathbb{R}\times\mathbb{R}))\)._ ### Some properties of the solution We begin this subsection by gathering certain conservation properties of (1.1)-(1.2), the proof of which can be found in [1, Proposition 2.1, p. 1374]. **Lemma 2.2**.: _The solution \((f,u)\) to the Vlasov-viscous Burgers' system has the following properties:_ 1. _(Positivity preserving)_ _For any given non-negative initial data_ \(f_{0}\)_, the solution_ \(f\) _is also non-negative._ 2. _(Mass conservation)_ _The total mass is conserved in the sense that_ \[\int_{\mathbb{R}}\int_{I}f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v=\int_{\mathbb{R}}\int_{I}f_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v,\quad t\in[0,T].\] 3. _(Total momentum conservation)_ _The solution pair_ \((f,u)\) _conserves total momentum in the following sense:_ \[\int_{\mathbb{R}}\int_{I}vf(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{I}u(t,x)\,\mathrm{d}x=\int_{\mathbb{R}}\int_{I}vf_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{I}u_{0}(x)\,\mathrm{d}x,\ t\in[0,T].\] 4. _(Total energy dissipation)_ _The total energy of the system dissipates in the sense that_ (2.1) \[\int_{\mathbb{R}}\int_{I}v^{2}\,f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{I}u^{2}(t,x)\,\mathrm{d}x\leq\int_{\mathbb{R}}\int_{I}v^{2}f_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{I}u_{0}^{2}(x)\,\mathrm{d}x,\quad t\in[0,T]\] _provided_ \(f(t,x,v)\) _is non-negative._ While proving the energy dissipation property (2.1), we also obtain the following identity: \[\frac{1}{2}\int_{\mathbb{R}}\int_{I}v^{2}f\,\mathrm{d}x\,\mathrm{d}v+\frac{1}{2}\int_{I}u^{2}\,\mathrm{d}x+\epsilon\int_{0}^{t}\int_{I}u_{x}^{2}\,\mathrm{d}x\,\mathrm{d}t+\int_{0}^{t}\int_{\mathbb{R}}\int_{I}(u-v)^{2}\,f\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\] \[=\frac{1}{2}\int_{\mathbb{R}}\int_{I}v^{2}f_{0}\,\mathrm{d}x\,\mathrm{d}v+\frac{1}{2}\int_{I}u_{0}^{2}\,\mathrm{d}x.\] If \(v^{2}f_{0}\in L^{1}(I\times\mathbb{R})\) and if \(u_{0}\in L^{2}(I)\), then the above equality shows \[u\in L^{\infty}(0,T;L^{2}(I))\quad\text{and}\quad u_{x}\in L^{2}([0,T]\times I). \tag{2.2}\] A use of the Sobolev inequality yields \[u\in L^{2}(0,T;L^{\infty}(I)).
\tag{2.3}\] Note that for any \(z\in L^{\infty}(I)\), \[\|z\|_{L^{4}(I)}\leq C\|z\|_{L^{2}(I)}^{\frac{1}{2}}\|z\|_{L^{\infty}(I)}^{ \frac{1}{2}}.\] Hence, \[\int_{0}^{T}\int_{I}|u|^{4}\,\mathrm{d}x\,\mathrm{d}t\leq C\|u\|_{L^{\infty}(0, T;L^{2}(I)}^{2}\|u\|_{L^{2}(0,T;L^{\infty}(I))}^{2}\leq C. \tag{2.4}\] The following lemma yields integrability estimates on the local density and on the momentum. Since these appear as source terms in the viscous Burgers' equation, these estimates are crucial in deducing the regularity result of solution to (1.2). The proof of the following result is similar to [1, Lemma 2.2, p.56]. Hence, we skip the proof. **Lemma 2.3**.: _Let \(p,r\geq 1\). Let \(u\in L^{r}(0,T;L^{p+1}(I)),f_{0}\in L^{\infty}(I\times\mathbb{R})\cap L^{1}(I \times\mathbb{R})\). Further, let_ \[\int_{\mathbb{R}}\int_{I}\,|v|^{p}f_{0}\,\mathrm{d}x\,\mathrm{d}v<\infty.\] _Then, the local density \(\rho\) and the momentum \(\rho V\) satisfy the following:_ \[\rho\in L^{\infty}(0,T;L^{p+1}(I))\quad\text{and}\quad\rho V\in L^{\infty}(0, T;L^{\frac{p+1}{2}}(I)).\] **Remark 2.4**.: _Taking \(p=3\) in Lemma 2.3 yields_ \[\rho\in L^{\infty}(0,T;L^{4}(I))\quad\text{and}\quad\rho V\in L^{\infty}(0,T; L^{2}(I)). \tag{2.5}\] The following lemma gives \(L^{\infty}\) estimate for local density \(\rho\) in time and space variable. The proof of this can be found in [1, Proposition 4.6, p. 44]. **Lemma 2.5**.: _Let \(u\in L^{1}(0,T;L^{\infty}(I))\). Let \(f_{0}(x,v)\) be such that \(\sup_{C^{r}_{t,v}}f_{0}\in L^{\infty}_{loc}\left(\mathbb{R}_{+};L^{1}(\mathbb{ R})\right)\) for all \(r>0\), where \(C^{r}_{t,v}:=I\times B(e^{t}v,r)\). Here \(B(e^{tv},r)\) denotes the ball of radius \(r\) with center at \(e^{tv}\). Then, the following estimate holds:_ \[\|\rho\|_{L^{\infty}((0,T)\times I)}\leq e^{T}\sup_{t\in[0,T]}\|\sup_{C^{r}_{ t,v}}f_{0}\|_{L^{1}(\mathbb{R})}.\] For completeness, the existence of a unique strong solution to the problem (1.1)-(1.2) is discussed in the Appendix A. ## 3. Semi-discrete scheme This section deals with a semi-discrete scheme to approximate solutions to (1.1)-(1.2) and with some properties of the said discrete system. Note that, for a compactly supported initial datum \(f_{0}\), the solution \(f(t,x,v)\) has compact support. Therefore, without loss of generality, we assume that there is \(M>0\) such that for \(v\in[-M,M]=:J\) and \(t\in(0,T]\), \(\operatorname{supp}f(t,x,v)\subset\Omega=I\times J\). Let \(I_{h}=\{I_{i}\}_{i=1}^{N_{x}}\) and \(J_{h}=\{J_{j}\}_{j=1}^{N_{v}}\) be the partitions of intervals \(I\) and \(J\), respectively. Let \(\mathcal{T}_{h}\) be defined as the Cartesian product of these two partitions, i.e. \[\mathcal{T}_{h}=\left\{T_{ij}=I_{i}\times J_{j}:1\leq i\leq N_{x},1\leq j\leq N _{v}\right\},\] with \[I_{i}=[x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}]\quad\forall\ 1\leq i\leq N_{x}; \quad J_{j}=[v_{j-\frac{1}{2}},v_{j+\frac{1}{2}}]\quad\forall\ 1\leq j\leq N_{v}.\] The mesh sizes \(h_{x}\) and \(h_{v}\) relative to the above partition are defined as follows: \[0<h_{x}=\max_{1\leq i\leq N_{x}}h_{i}^{x},\quad\text{where}\quad h_{i}^{x}=x_ {i+\frac{1}{2}}-x_{i-\frac{1}{2}},\] and \[0<h_{v}=\max_{1\leq j\leq N_{v}}h_{j}^{v},\quad\text{where}\quad h_{j}^{v}=v_{ j+\frac{1}{2}}-v_{j-\frac{1}{2}}.\] The mesh size of the partition \(\mathcal{T}_{h}\) is defined as \(h:=\max(h_{x},h_{v})\). 
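As an implementation aside, the following minimal sketch (in Python, with purely illustrative cell counts, domain bounds and uniform spacing) shows how the tensor-product partition \(\mathcal{T}_h\) and the mesh sizes \(h_x\), \(h_v\) and \(h\) defined above can be assembled; any quasi-uniform grading would work equally well.

```python
import numpy as np

# Illustrative values only: I = [0, L] with L = 1 and J = [-M, M] with M = 4.
L, M = 1.0, 4.0
Nx, Nv = 16, 32

# Cell interfaces x_{i+1/2} and v_{j+1/2}; uniform here for simplicity.
x_faces = np.linspace(0.0, L, Nx + 1)
v_faces = np.linspace(-M, M, Nv + 1)

# Local cell sizes h_i^x, h_j^v and the global mesh sizes h_x, h_v, h.
hx_cells = np.diff(x_faces)
hv_cells = np.diff(v_faces)
hx, hv = hx_cells.max(), hv_cells.max()
h = max(hx, hv)

# The Cartesian cells T_ij = I_i x J_j, stored as index pairs (i, j).
cells = [(i, j) for i in range(Nx) for j in range(Nv)]
print(f"h_x = {hx:.4f}, h_v = {hv:.4f}, h = {h:.4f}, cells = {len(cells)}")
```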
Here the mesh is assumed to be regular and quasi-uniform in the sense that there exist positive constants \(c_{1},c_{2},c^{\prime}_{1},c^{\prime}_{2}\) such that the ratio of maximal and minimal mesh sizes stay bounded during mesh refinement in the following sense: \[c_{1}\leq\frac{h_{x}}{h_{i}^{x}}\leq c_{2}\quad\text{and}\quad c^{\prime}_{1} \leq\frac{h_{v}}{h_{j}^{v}}\leq c^{\prime}_{2},\qquad\forall\quad 1\leq i\leq N _{x},1\leq j\leq N_{v}.\] The set of all vertical edges and all horizontal edges are denoted by \(\Gamma_{x}\) and \(\Gamma_{v}\), respectively, \[\Gamma_{x}:=\bigcup_{i,j}\left\{\{x_{i-\frac{1}{2}}\}\times J_{j}\right\} \quad\text{and}\quad\Gamma_{v}:=\bigcup_{i,j}\left\{I_{i}\times\{v_{j-\frac{1 }{2}}\}\right\}.\] We denote the collection of all edges by \(\Gamma_{h}:=\Gamma_{x}\cup\Gamma_{v}\). We define the discontinuous finite element spaces for approximating \((u,f)\) as follows: \[X_{h} :=\{\psi\in L^{2}(I):\psi\in\mathbb{P}^{k}(I_{i}),\ \ i=1,\ldots,N_{x}\},\] \[V_{h} :=\{\psi\in L^{2}(J):\psi\in\mathbb{P}^{k}(J_{j}),\ \ j=1,\ldots,N_{v}\},\] \[\mathcal{Z}_{h} :=\left\{\phi\in L^{2}(\Omega):\phi\in\mathbb{Q}^{k}(T_{ij}),\ i=1, \ldots,N_{x};\ j=1,\ldots,N_{v}\right\},\] where \(\mathbb{P}^{k}\) is the space of scalar polynomials of degree at most \(k\) and \(\mathbb{Q}^{k}(T_{ij})\) is the space of tensor product of polynomials of degrees at most \(k\) in each variable. Below, we define the jump and average values of a function at nodal points. Let \(\left(\phi_{h}\right)_{i+\frac{1}{2},v}^{+}\) and \(\left(\phi_{h}\right)_{i+\frac{1}{2},v}^{-}\) be the values of \(\phi_{h}\) at \(\left(x_{i+\frac{1}{2}},v\right)\) from the right cell \(I_{i+1}\times J_{j}\) and from the left cell \(I_{i}\times J_{j}\), respectively. More precisely \[\left(\phi\right)_{i+\frac{1}{2},v}^{+}=\lim_{\eth\to 0^{+}}\phi_{h} \left(x_{i+\frac{1}{2}}+\eth,v\right)\quad\text{and}\quad\left(\phi\right)_{i +\frac{1}{2},v}^{-}=\lim_{\eth\to 0^{+}}\phi_{h}\left(x_{i+\frac{1}{2}}-\eth,v \right).\] Similarly, we set \(\left(\phi_{h}\right)_{x,j+\frac{1}{2}}^{+}\) and \(\left(\phi_{h}\right)_{x,j-\frac{1}{2}}^{-}\). The jump \(\llbracket\cdot\rrbracket\) and average \(\{\cdot\}\) of \(\phi_{h}\) at \(\left(x_{i+\frac{1}{2}},v\right),\ \forall\,v\in J_{j}\) are defined by \[\llbracket\phi_{h}\rrbracket_{i+\frac{1}{2},v} :=\left(\phi_{h}\right)_{i+\frac{1}{2},v}^{+}-\left(\phi_{h} \right)_{i+\frac{1}{2},v}^{-}\quad\forall\,\phi_{h}\in\mathcal{Z}_{h},\] \[\{\phi_{h}\}_{i+\frac{1}{2},v} :=\frac{1}{2}\left(\left(\phi_{h}\right)_{i+\frac{1}{2},v}^{+}+ \left(\phi_{h}\right)_{i+\frac{1}{2},v}^{-}\right)\quad\forall\,\phi_{h}\in \mathcal{Z}_{h}.\] Similarly, one can define jump and average at \(\left(x,v_{j+1/2}\right),\ \forall\,x\in I_{i}\). **Discrete norm:** We define the following discrete semi-norms and norms: \[\|w|_{m,\mathcal{T}_{h}}=\left(\sum_{R\in\mathcal{T}_{h}}|w|_{m,R}^{2}\right)^ {\frac{1}{2}},\,\|w\|_{m,\mathcal{T}_{h}}=\left(\sum_{R\in\mathcal{T}_{h}}\|w \|_{m,R}^{2}\right)^{\frac{1}{2}}\quad\forall\,w\in H^{m}(\mathcal{T}_{h}),\ m\geq 0,\] \[\|w\|_{\infty,\mathcal{T}_{h}}=\sup_{R\in\mathcal{T}_{h}}\|w\|_{L^{\infty}(R)},\quad\|w\|_{L^{p}(\mathcal{T}_{h})}=\left(\sum_{R\in\mathcal{T}_{h}}\|w\|_{L ^{p}(R)}^{p}\right)^{\frac{1}{p}},\ \forall\,w\in L^{p}(\mathcal{T}_{h}),\] for all \(1\leq p<\infty\). Next, we recall some standard estimates which are frequently used in our analysis: **Inverse inequality:** (see [1, Lemma 1.44, p. 
26]) If \(w_{h}\in\mathbb{P}^{k}(I_{i})\), then \[\|\partial_{x}w_{h}\|_{0,I_{i}}\leq Ch_{x}^{-1}\|w_{h}\|_{0,I_{i}}. \tag{3.1}\] **Trace inequality:** (see [1, Lemma 1.46, p. 27]) For \(w_{h}\in\mathbb{P}^{k}(I_{i})\), \[\|w_{h}\|_{0,\partial I_{i}}\leq Ch_{x}^{-\frac{1}{2}}\|w_{h}\|_{0,I_{i}}. \tag{3.2}\] **Norm comparison:** (see [1, Lemma 1.50, p. 29]) Let \(1\leq p,q\leq\infty\) and \(w_{h}\in\mathbb{P}^{k}(I_{i})\). Then, \[\|w_{h}\|_{L^{p}(I_{i})}\leq Ch_{x}^{\frac{1}{p}-\frac{1}{q}}\|w_{h}\|_{L^{q}( I_{i})}. \tag{3.3}\] ### LDG formulation We rewrite equation (1.2) by introducing an auxiliary variable \(w=\sqrt{\epsilon}\,\partial_{x}u\) as follows: \[\partial_{t}u+\frac{1}{2}\partial_{x}u^{2}-\sqrt{\epsilon}\,\partial_{x}w+\rho u =\rho V, \tag{3.4}\] \[w-\sqrt{\epsilon}\,\partial_{x}u=0. \tag{3.5}\] This helps to devise LDG scheme for the Burgers' equation. We denote by \((u_{h}(t),w_{h}(t))\in X_{h}\times X_{h}\), a discrete approximation for \((u(t),w(t))\) for all \(t\in[0,T]\) and we denote by \(f_{h}(t)\in\mathcal{Z}_{h}\), a discrete approximation for \(f(t)\) for all \(t\in[0,T]\). As in the continuum setting, we set discrete local density and discrete momentum by \[\rho_{h}=\sum_{j=1}^{N_{v}}\int_{J_{j}}f_{h}\,\mathrm{d}v\qquad\text{and}\qquad \rho_{h}V_{h}=\sum_{j=1}^{N_{v}}\int_{J_{j}}vf_{h}\,\mathrm{d}v, \tag{3.6}\] respectively. Our discrete problem is to seek \((u_{h}(t),w_{h}(t),f_{h}(t))\in X_{h}\times X_{h}\times\mathcal{Z}_{h}\), for \(t\in[0,T]\) such that \[\left(\frac{\partial f_{h}}{\partial t},\psi_{h}\right)+\mathcal{B}_{h}(u_{h} ;f_{h},\psi_{h})=0\;\;\forall\;\psi_{h}\in\mathcal{Z}_{h}, \tag{3.7}\] \[\left(\frac{\partial u_{h}}{\partial t},\phi_{h}\right)+a_{h}(u_ {h},\phi_{h})+\sqrt{\epsilon}\,b_{h}(w_{h},\phi_{h}) +(\rho_{h}u_{h},\phi_{h})\] \[=(\rho_{h}V_{h},\phi_{h})\quad\forall\;\phi_{h}\in X_{h}, \tag{3.8}\] \[(w_{h},q_{h})+\sqrt{\epsilon}\,b_{h}(u_{h},q_{h})=0\quad\forall\;q_{h}\in X_{ h}, \tag{3.9}\] with \(f_{h}(0)=f_{0h}\in\mathcal{Z}_{h}\) and \(u_{h}(0)=u_{0h}\in X_{h}\) to be defined later. 
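Before turning to the definition of \(\mathcal{B}_h\), we note that the discrete moments in (3.6) are straightforward to evaluate in practice. The sketch below is a minimal, non-authoritative illustration: it assumes \(f_h\) is available as a callable that is polynomial of degree at most \(k\) in \(v\) on each velocity cell, so a \((k+1)\)-point Gauss-Legendre rule per cell integrates both \(f_h\) and \(v f_h\) exactly; the smooth stand-in density at the end is purely illustrative.

```python
import numpy as np

def discrete_moments(f_h, x, v_faces, k):
    """Evaluate rho_h(x) and (rho_h V_h)(x) as in (3.6) by cell-wise quadrature.

    f_h(x, v) is assumed to be callable and polynomial of degree <= k in v on
    each velocity cell, so a (k + 1)-point Gauss-Legendre rule per cell
    integrates both f_h and v * f_h exactly.
    """
    nodes, weights = np.polynomial.legendre.leggauss(k + 1)  # rule on [-1, 1]
    rho, momentum = 0.0, 0.0
    for a, b in zip(v_faces[:-1], v_faces[1:]):
        v = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # map nodes to [a, b]
        w = 0.5 * (b - a) * weights
        fv = f_h(x, v)
        rho += np.sum(w * fv)
        momentum += np.sum(w * v * fv)
    return rho, momentum

# Purely illustrative stand-in for f_h on J = [-4, 4].
v_faces = np.linspace(-4.0, 4.0, 33)
f_h = lambda x, v: np.exp(-v**2) * (1.0 + 0.1 * np.cos(2.0 * np.pi * x))
print(discrete_moments(f_h, x=0.25, v_faces=v_faces, k=2))
```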
In (3.7), \[\mathcal{B}_{h}(u_{h};f_{h},\psi_{h}):=\sum_{i=1}^{N_{x}}\sum_{j=1}^{N_{v}} \mathcal{B}_{ij}^{h}(u_{h};f_{h},\psi_{h}), \tag{3.10}\] with \[\begin{split}\mathcal{B}_{ij}^{h}(u_{h};f_{h},\psi_{h})& :=-\int_{T_{ij}}vf_{h}\partial_{x}\psi_{h}\,\mathrm{d}v\,\mathrm{ d}x-\int_{T_{ij}}f_{h}(u_{h}-v)\partial_{v}\psi_{h}\,\mathrm{d}v\,\mathrm{ d}x\\ &\quad+\int_{J_{j}}\left[\left(\widehat{vf_{h}}\psi_{h}^{-} \right)_{i+1/2,v}-\left(\widehat{vf_{h}}\psi_{h}^{+}\right)_{i-1/2,v}\right] \,\mathrm{d}v\\ &\quad+\int_{I_{i}}\left[\left(\widehat{(u_{h}-v)f_{h}}\psi_{h}^ {-}\right)_{x,j+1/2}-\left(\widehat{(u_{h}-v)f_{h}}\psi_{h}^{+}\right)_{x,j-1/ 2}\right]\,\mathrm{d}x\end{split} \tag{3.11}\] wherein the generalized numerical fluxes are \[\begin{split}\left\{\begin{aligned} \widehat{vf_{h}}\big{|}_{\{x_{i-1/2}\} \times J_{j}}&:=\left\{\begin{aligned} &(1-\lambda_{1})\,vf_{h}^{+}+\lambda_{1}vf_{h}^{-} &\text{if}\quad v\geq 0\\ &(1-\lambda_{1})\,vf_{h}^{-}+\lambda_{1}vf_{h}^{+}& \text{if}\quad v<0\end{aligned}\right.\\ &\quad=\left\{vf_{h}\right\}+\left(\frac{1-2\lambda_{1}}{2} \right)\left|v\right|\llbracket f_{h}\rrbracket\;\;\text{on}\;\;\{x_{i-1/2}\} \times J_{j},\\ &\left(\overline{(u_{h}-v)f_{h}}\big{|}_{I_{i}\times\{v_{j-1/2}\} }\right.&:=\left\{\begin{aligned} &(1-\lambda_{2})\,(u_{h}-v)\,f_{h}^{+}+ \lambda_{2}\,(u_{h}-v)\,f_{h}^{-}&\text{if}\quad(u_{h}-v)\geq 0\\ &(1-\lambda_{2})\,(u_{h}-v)\,f_{h}^{-}+\lambda_{2}\,(u_{h}-v)\,f_{h}^ {+}&\text{if}\quad(u_{h}-v)<0\\ &=\left\{(u_{h}-v)\,f_{h}\right\}+\left(\frac{1-2\lambda_{2}}{2} \right)\left|u_{h}-v\right|\llbracket f_{h}\rrbracket\;\;\text{on}\;\;I_{i} \times\{v_{j-1/2}\},\end{aligned}\right.\end{split} \tag{3.12}\] with \(\lambda_{1},\lambda_{2}>1/2\). We define the numerical fluxes on the boundary \(\partial\Omega\) by \[\left(\widehat{vf_{h}}\right)_{1/2,v}=\left(\widehat{vf_{h}}\right)_{N_{x}+1 /2,v},\qquad\left(\overline{(u_{h}-v)f_{h}}\right)_{x,1/2}=\left(\overline{(u _{h}-v)f_{h}}\right)_{x,N_{v}+1/2}=0,\] for all \((x,v)\in I_{i}\times J_{j}\) for \(1\leq i\leq N_{x},1\leq j\leq N_{v}\). **Remark 3.1**.: _Even though classical purely upwind fluxes are employed in DG schemes for linear hyperbolic equations, it is not easy to define such fluxes in the presence of variable coefficients. Lately, generalized numerical fluxes similar to (3.12) have been used in such scenarios, thanks to the simplicity in their definition [10]. 
Furthermore, such generalized fluxes provide more flexibility in dealing with a small viscosity coefficient in the viscous Burgers' equation; see the comments in (ii) of the Observations in Section 5._ In (3.8)-(3.9), \[a_{h}(u_{h},\phi_{h}):=-\sum_{i=1}^{N_{x}}\int_{I_{i}}\frac{u_{h}^{2}}{2}\partial_{x}\phi_{h}\,\mathrm{d}x-\sum_{i=0}^{N_{x}-1}\left(\frac{\widehat{u_{h}^{2}}\llbracket\phi_{h}\rrbracket}{2}\right)_{i+1/2} \tag{3.13}\] and \[b_{h}(w_{h},\phi_{h}):=\sum_{i=1}^{N_{x}}\int_{I_{i}}w_{h}\partial_{x}\phi_{h}\,\mathrm{d}x+\sum_{i=0}^{N_{x}-1}\left(\widehat{w_{h}}\llbracket\phi_{h}\rrbracket\right)_{i+1/2} \tag{3.14}\] with the numerical fluxes \[\widehat{u_{h}^{2}}:=\frac{1}{3}\left(\left(u_{h}^{+}\right)^{2}+u_{h}^{+}u_{h}^{-}+\left(u_{h}^{-}\right)^{2}\right), \tag{3.15}\] \[\widehat{w_{h}}:=\left(1-\lambda\right)w_{h}^{-}+\lambda w_{h}^{+}=\{w_{h}\}+\left(\frac{2\lambda-1}{2}\right)\llbracket w_{h}\rrbracket, \tag{3.16}\] and \[\widehat{u_{h}}:=\lambda u_{h}^{-}+\left(1-\lambda\right)u_{h}^{+}=\{u_{h}\}+\left(\frac{1-2\lambda}{2}\right)\llbracket u_{h}\rrbracket, \tag{3.17}\] where \(\lambda\geq 1/2\). Note that the numerical fluxes given in (3.16)-(3.17) are in a generalized sense. Note further that the numerical fluxes at the end points of \(I\) are defined by \(\left(u_{h}\right)_{\frac{1}{2}}^{+}=\left(u_{h}\right)_{N_{x}+\frac{1}{2}}^{+},\left(u_{h}\right)_{\frac{1}{2}}^{-}=\left(u_{h}\right)_{N_{x}+\frac{1}{2}}^{-},\left(w_{h}\right)_{\frac{1}{2}}^{+}=\left(w_{h}\right)_{N_{x}+\frac{1}{2}}^{+}\) and \(\left(w_{h}\right)_{\frac{1}{2}}^{-}=\left(w_{h}\right)_{N_{x}+\frac{1}{2}}^{-}\). The flux \(\widehat{u_{h}^{2}}\) is not a generalized numerical flux; it is referred to as the central flux and was introduced by Liu et al. in [15]. **Remark 3.2**.: _The author of [15] has run long-time simulations to study the stability of the numerical scheme stemming from the central flux (3.15), comparing it with the stability of a scheme associated with the following Lax-Friedrich flux:_ \[\widehat{u_{h}^{2}}=\frac{1}{2}[(u_{h}^{+})^{2}+(u_{h}^{-})^{2}-2\max|u|(u_{h}^{+}-u_{h}^{-})],\] _see Section \(3\) (Example \(3.3\)) of [15] for more details.
The author observes that the scheme defined by using the central fluxes is more stable than the scheme that uses the Lax-Friedrich flux._ We observe below, certain properties of the bilinear form \(b_{h}(w_{h},\phi_{h})\): * For \(w_{h}\in X_{h}\), (3.18) \[\begin{split} b_{h}(w_{h},w_{h})&=\sum_{i=1}^{N_{x} }\int_{I_{i}}w_{h}\partial_{x}w_{h}\,\mathrm{d}x+\sum_{i=0}^{N_{x}-1}\left( \widehat{w_{h}}\llbracket w_{h}\rrbracket\right)_{i+1/2}\\ &=\sum_{i=0}^{N_{x}-1}\left(\widehat{w_{h}}\llbracket w_{h} \rrbracket-\frac{1}{2}\llbracket w_{h}^{2}\rrbracket\right)_{i+1/2}=\left( \lambda-\frac{1}{2}\right)\sum_{i=0}^{N_{x}-1}\llbracket w_{h}\rrbracket_{i+1/2 }^{2}.\end{split}\] * For \(w_{h},\phi_{h}\in X_{h}\), there holds (3.19) \[\begin{split} b_{h}(w_{h},\phi_{h})&=\sum_{i=1}^{N_{ x}}\int_{I_{i}}w_{h}\partial_{x}\phi_{h}\,\mathrm{d}x+\sum_{i=0}^{N_{x}-1}\left( \widehat{w_{h}}\llbracket\phi_{h}\rrbracket\right)_{i+1/2}\\ &=-\sum_{i=1}^{N_{x}}\int_{I_{i}}\partial_{x}w_{h}\,\phi_{h}\, \mathrm{d}x-\sum_{i=0}^{N_{x}-1}\left(\llbracket w_{h}\phi_{h}\rrbracket- \widehat{w_{h}}\llbracket\phi_{h}\rrbracket\right)_{i+1/2}\\ &=-b_{h}(\phi_{h},w_{h})-\sum_{i=0}^{N_{x}-1}\left(\llbracket w_{h} \phi_{h}\rrbracket-\widehat{w_{h}}\llbracket\phi_{h}\rrbracket-\widehat{ \phi_{h}}\llbracket w_{h}\rrbracket\right)_{i+1/2}.\end{split}\] Since \(X_{h}\times X_{h}\times\mathcal{Z}_{h}\) is finite dimensional, the discrete problem (3.7)-(3.9) leads to a system of non-linear ODEs coupled with linear algebraic equations which is known as a system of differential algebraic equations. From equation (3.9) \(w_{h}\) can be written explicitly as a function of \(u_{h}\). On substitution in (3.8) we obtain a system of non-linear ODEs. An application of Picard's theorem shows the existence of a local-in-time unique solution \((u_{h},w_{h},f_{h})\). This solution can be made global-in-time provided we have bounds on the solution in appropriate norms. We shall be commenting on this towards the end of the paper. ### Some properties of the discrete solution **Lemma 3.3** (Discrete mass conservation).: _Let \(k\geq 0\) and let \(f_{h}\) be the DG approximation to \(f\) satisfying (3.7) with \(f_{h}(0)=\mathcal{P}_{h}f_{0}\), where \(\mathcal{P}_{h}\) is the \(L^{2}\)-projection onto \(\mathcal{Z}_{h}\). Then, the following discrete mass conservation property holds:_ \[\int_{\Omega}f_{h}(t,x,v)\,\mathrm{d}x\,\mathrm{d}v=\int_{\Omega}f_{0}(x,v)\, \mathrm{d}x\,\mathrm{d}v\quad\forall\ t>0.\] Proof.: From the definition of the \(L^{2}\)-projection, it follows that \[\sum_{i,j}\int_{T_{ij}}f_{h}(0)\,\mathrm{d}x\,\mathrm{d}v=\sum_{i,j}\int_{T_{ ij}}\mathcal{P}_{h}f_{0}\,\mathrm{d}x\,\mathrm{d}v=\sum_{i,j}\int_{T_{ij}}f_{0}\, \mathrm{d}x\,\mathrm{d}v.\] Let us fix an arbitrary element \(T_{ij}\) and let us take a test function \(\psi_{h}\) in (3.7) such that \(\psi_{h}=1\) in \(T_{ij}\) and \(\psi_{h}=0\) elsewhere. Then, (3.7) reduces to \[\int_{T_{ij}}\frac{\partial f_{h}}{\partial t}\,\mathrm{d}x\,\mathrm{d}v+ \mathcal{B}_{ij}^{h}(u_{h};f_{h},1)=0.\] From the definition of \(\mathcal{B}_{ij}^{h}\), we arrive at \[\int_{T_{ij}}\frac{\partial f_{h}}{\partial t}\,\mathrm{d}x\, \mathrm{d}v +\int_{J_{j}}\left[\left(\widehat{vf_{h}}\right)_{i+1/2,v}-\left( \widehat{vf_{h}}\right)_{i-1/2,v}\right]\,\mathrm{d}v\] \[+\int_{I_{i}}\left[\left(\widehat{(u_{h}-v)f_{h}}\right)_{x,j+1/2 }-\left(\widehat{(u_{h}-v)f_{h}}\right)_{x,j-1/2}\right]\,\mathrm{d}x=0.\] Note that the choice of \(T_{ij}\) was done arbitrarily. 
Furthermore, the second and the third integral terms on the left hand side holds true for all \(i,j\). Since boundary conditions in \(x\) are periodic and the support in \(v\) is compact, taking summation over all \(i,j\) and integration in time yields the desired result. As a consequence of the above lemma, for any given non-negative initial data, we have \[\int_{\Omega}f_{h}(t,x,v)\mathrm{d}x\,\mathrm{d}v\geq 0\quad\text{and}\quad \int_{I}\rho_{h}(t,x)\mathrm{d}x\geq 0.\] **Lemma 3.4** (Discrete total momentum conservation).: _Let \((f_{h},u_{h})\in\mathcal{C}^{1}([0,T];\mathcal{Z}_{h}\times X_{h})\) be the DG-LDG approximation obtained as a solution of (3.7) and (3.8). Then, for \(k\geq 1\),_ \[\int_{\Omega}vf_{h}(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{I}u_{h}(t,x)\, \mathrm{d}x=\int_{\Omega}vf_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{I}u_{0}( x)\,\mathrm{d}x\quad\forall\ t\geq 0.\] Proof.: Choose \(\phi_{h}=1\in X_{h}\) in (3.8), and use the definitions (3.6) to obtain \[\sum_{i}\int_{I_{i}}\frac{\partial u_{h}}{\partial t}\,\mathrm{d}x=\sum_{i} \int_{I_{i}}\left(\rho_{h}V_{h}-\rho_{h}u_{h}\right)\,\mathrm{d}x=\sum_{i,j} \int_{T_{ij}}\left(v-u_{h}\right)f_{h}\,\mathrm{d}x\,\mathrm{d}v.\] Now, putting \(\psi_{h}=v\in\mathcal{Z}_{h}\) in equation (3.7), we arrive at \[\sum_{i,j}\int_{T_{ij}}\left(\frac{\partial}{\partial t}(vf_{h})-\left(u_{h}-v \right)f_{h}\right)\,\mathrm{d}x\,\mathrm{d}v=0.\] Adding the above two expressions followed by an integration in time yields the result. **Lemma 3.5** (Discrete total energy identity).: _Let \(k\geq 2\) and let \((f_{h},u_{h},w_{h})\in\mathcal{C}^{1}(0,T;\mathcal{Z}_{h})\times\mathcal{C}^{1}( 0,T;X_{h})\times\mathcal{C}^{0}(0,T;X_{h})\) be the DG-LDG approximate solution of (3.7) and (3.8)-(3.9). Then,_ \[\frac{1}{2}\left(\sum_{i,j}\int_{T_{ij}}v^{2}\,f_{h}\,\mathrm{d}x \,\mathrm{d}v+\sum_{i=1}^{N_{x}}\int_{I_{i}}\,u_{h}^{2}\,\mathrm{d}x\right)+ \sum_{i=1}^{N_{x}}\int_{0}^{t}\int_{I_{i}}w_{h}^{2}\,\mathrm{d}x\] \[+\sum_{i,j}\int_{0}^{t}\int_{T_{ij}}\,\left(u_{h}-v\right)^{2}f_{ h}\,\mathrm{d}x\,\mathrm{d}v=\frac{1}{2}\left(\sum_{i,j}\int_{T_{ij}}v^{2}\,f_{h}( 0)\,\mathrm{d}x\,\mathrm{d}v+\sum_{i=1}^{N_{x}}\int_{I_{i}}u_{h}^{2}(0)\, \mathrm{d}x\right)\ \forall\ t\geq 0.\] Proof.: A choice of \(\psi_{h}=\frac{v^{2}}{2}\in\mathcal{Z}_{h}\) in equation (3.7) yields \[\sum_{i,j}\int_{T_{ij}}\left(\frac{v^{2}}{2}\frac{\partial f_{h}}{\partial t} +v^{2}f_{h}-u_{h}vf_{h}\right)\,\mathrm{d}x\,\mathrm{d}v=0. \tag{3.20}\] Choose \(\phi_{h}=u_{h}\) and \(q_{h}=w_{h}\) in equation (3.8)-(3.9) to obtain \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|u_{h}\|_{0,I_{h}}^{2}+ \|w_{h}\|_{0,I_{h}}^{2} +a_{h}(u_{h},u_{h})+\sqrt{\epsilon}\,b_{h}(w_{h},u_{h})\] \[+\sqrt{\epsilon}\,b_{h}(u_{h},w_{h})+\left(\left(\rho_{h}u_{h}- \rho_{h}V_{h}\right),u_{h}\right)=0. \tag{3.21}\] From equation (3.13), we obtain \[a_{h}(u_{h},u_{h})=\sum_{i=0}^{N_{x}-1}\left(\frac{\llbracket u_{h}^{3} \rrbracket}{6}-\frac{\widehat{u_{h}^{2}}\llbracket u_{h}\rrbracket}{2}\right)_{ i+1/2}=0,\] thanks to the flux defined in (3.15) and the periodic boundary condition. From the fluxes defined in (3.16) and (3.17), we deduce that \[b_{h}(u_{h},w_{h})+b_{h}(w_{h},u_{h})=0. 
\tag{3.22}\] Hence, summing the equations (3.20) and (3.21) yields \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\sum_{i,j}\int_{T_ {ij}}v^{2}\,f_{h}\,\mathrm{d}x\,\mathrm{d}v+\sum_{i=1}^{N_{x}}\int_{I_{i}}u_{ h}^{2}\,\mathrm{d}x\right) +\sum_{i=1}^{N_{x}}\int_{I_{i}}w_{h}^{2}\,\mathrm{d}x\] \[+\sum_{i,j}\int_{T_{ij}}\,\left(u_{h}-v\right)^{2}f_{h}\,\mathrm{ d}x\,\mathrm{d}v=0.\] An integration in time leads to the desired identity. In the discrete case, it is difficult to prove that the total energy dissipates as in (2.1), because it is hard to show the non-negativity of \(f_{h}\). In the continuum case, this dissipation property plays a crucial role in the proof of well-posedness of the system (1.1)-(1.2). **Lemma 3.6**.: _Let \(k\geq 0\) and let \(f_{h}\) be the DG approximation to \(f,\) satisfying (3.7). Then,_ \[\max_{t\in[0,T]}\|f_{h}(t,\cdot)\|_{0,\mathcal{T}_{h}}\leq e^{\frac{T}{2}}\|f _{0}\|_{0,\mathcal{T}_{h}}.\] Proof.: Choosing \(\psi_{h}=f_{h}\) in (3.7), we obtain \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|f_{h}\|_{0,\mathcal{T} _{h}}^{2}-\sum_{i,j}\left(\frac{1}{2}\int_{T_{ij}}\,\left(v\partial_{x}f_{h}^{ 2}+(u_{h}-v)\partial_{v}f_{h}^{2}\right)\,\mathrm{d}x\,\mathrm{d}v\right)\] \[-\sum_{i,j}\int_{J_{j}}\left(\widehat{vf_{h}}\llbracket f_{h} \rrbracket\right)_{i-1/2,v}\,\mathrm{d}v-\sum_{i,j}\int_{I_{i}}\left(\widehat {(u_{h}-v)f_{h}}\llbracket f_{h}\rrbracket\right)_{x,j-1/2}\,\mathrm{d}x=0.\] After applying integration by parts in second and third term and using \(\llbracket f_{h}^{2}\rrbracket=2\{f_{h}\}\llbracket f_{h}\rrbracket\), it follows that \[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|f_{h}\|_{0, \mathcal{T}_{h}}^{2}-\frac{1}{2}\|f_{h}\|_{0,\mathcal{T}_{h}}^{2}+\sum_{i,j} \int_{J_{j}}\left(\frac{2\lambda_{1}-1}{2}\right)|v|\,\llbracket f_{h}\rrbracket _{i-1/2,v}^{2}\,\mathrm{d}v\\ &+\sum_{i,j}\int_{I_{i}}\left(\frac{2\lambda_{2}-1}{2}\right) \left(|u_{h}-v|\,\llbracket f_{h}\rrbracket^{2}\right)_{x,j-1/2}\,\mathrm{d}x= 0.\end{split} \tag{3.23}\] Observe that for \(\lambda_{1},\lambda_{2}>1/2\), last two terms on the left hand side are non-negative and hence, can be dropped. An integration in time yields the desired result. For our subsequent use, we shall need the following lemma. **Lemma 3.7**.: _Let \(f\) and \(f_{h}\) be the continuum and the discrete solutions of the Vlasov - viscous Burgers' system, respectively. Further, let \(\rho\) be the local density associated with \(f\) and \(\rho_{h}\) be the discrete local density defined as in (3.6). Then,_ \[\|\rho-\rho_{h}\|_{0,I_{h}}\leq(2M)^{\frac{1}{2}}\|f-f_{h}\|_{0,\mathcal{T}_{h }}\quad\text{and}\quad\|\rho-\rho_{h}\|_{\infty,I_{h}}\leq 2M\|f-f_{h}\|_{ \infty,\mathcal{T}_{h}}.\] _Moreover,_ \[\|\rho V-\rho_{h}V_{h}\|_{0,I_{h}}\leq 2M\|f-f_{h}\|_{0,\mathcal{T}_{h}}.\] Proof.: An application of the Holder inequality yields \[\|\rho-\rho_{h}\|_{0,I_{h}}^{2} =\sum_{i=1}^{N_{x}}\int_{I_{i}}\left(\sum_{j=1}^{N_{v}}\int_{J_{j} }\left(f-f_{h}\right)\mathrm{d}v\right)^{2}\,\mathrm{d}x\] \[\leq 2M\sum_{i,j}\int_{T_{ij}}\left(f-f_{h}\right)^{2}\mathrm{d}v \,\mathrm{d}x\] By definition, it follows that \[\rho-\rho_{h}=\int_{J}(f-f_{h})\,\mathrm{d}v\leq 2M\|f-f_{h}\|_{L^{\infty}(J)}\] and hence, our second result. 
An application of the Holder inequality shows \[\|\rho V-\rho_{h}V_{h}\|_{0,I_{h}}^{2} =\sum_{i=1}^{N_{x}}\int_{I_{i}}\left(\sum_{j=1}^{N_{v}}\int_{J_{j }}\left(f-f_{h}\right)v\,\mathrm{d}v\right)^{2}\mathrm{d}x\] \[\leq 4M^{2}\sum_{i,j}\int_{T_{ij}}\left(f-f_{h}\right)^{2}\mathrm{ d}v\,\mathrm{d}x.\] This concludes the proof. As a consequence of Lemma 3.6, it follows that \[\|\rho_{h}\|_{0,I_{h}}\leq C(T)M\|f_{0}\|_{0,\mathcal{T}_{h}}. \tag{3.24}\] ## 4. A priori Estimates This section discusses some a priori error estimates for the discrete solution. In order to derive an optimal order of convergence, we adopt the following strategy: * Using a projection operator \(Q_{\lambda}^{x}\) introduced by Liu et al. in [10] and its approximation property, we obtain a bound on \(\|Q_{\lambda}^{x}u-u_{h}\|_{0,I_{h}}\), (See, Corollary 4.6). * Inspired by a projection operator introduced by Liu et. al. in [13], we define a new projection \(\Pi_{\lambda_{1},\lambda_{2}}\) which in turn helps us to obtain a bound on \(\|\Pi_{\lambda_{1},\lambda_{2}}f-f_{h}\|_{0,\mathcal{T}_{h}}\) thanks to its approximation property, (See, Lemma 4.12). * The aforementioned bounds are such that the bound on \(\|Q_{\lambda}^{x}u-u_{h}\|_{0,I_{h}}\) depends on \(\|f-f_{h}\|_{0,\mathcal{T}_{h}}\) (See equations (4.13)-(4.14)). Furthermore, the bound on \(\|\Pi_{\lambda_{1},\lambda_{2}}f-f_{h}\|_{0,\mathcal{T}_{h}}\) depends on \(\|u-u_{h}\|_{\infty,I_{h}}\), (See, Lemma 4.10). * In Lemma 4.1, using inverse hypothesis, we obtain \[\|u-u_{h}\|_{\infty,I_{h}}\lesssim h+h^{-\frac{1}{2}}\|u-u_{h}\|_{0,I_{h}}.\] * Then, an application of a non-linear version of Gronwall type inequality yields optimal order of convergence, (See, Theorem 4.13). By optimality, we mean optimality with respect to approximation properties of the projection operators employed in the proof. ### Error estimates for viscous Burgers' system This subsection deals with error estimates for the viscous Burgers' system. Let \(\mathcal{P}_{x}:L^{2}(I_{h})\to X_{h}\) be the standard \(L^{2}\)-projection. **Global projection.** Let \(s\geq k+1\) and \(\lambda\geq\frac{1}{2}\). Consider the projection \(Q_{\lambda}^{x}:H^{s}(I_{h})\to X_{h}\) defined by \[\int_{I_{i}}\left((Q_{\lambda}^{x}\Upsilon)-\Upsilon\right)z\,\mathrm{d}x=0, \quad\forall\ z\in\mathbb{P}^{k-1}(I_{i}),\ 1\leq i\leq N_{x}, \tag{4.1}\] together with the flux relation \[\left(\widehat{Q_{\lambda}^{x}\Upsilon}\right)_{i+1/2}=\lambda\Upsilon_{i+1/2 }^{-}+\left(1-\lambda\right)\Upsilon_{i+1/2}^{+},\ 1\leq i\leq N_{x}-1. \tag{4.2}\] The above projection \(Q_{\lambda}^{x}\) is uniquely defined when \(\lambda>1/2\) (see [1, Lemma 4.1]). Even when \(\lambda=1/2\), the projection \(Q_{\lambda}^{x}\) is uniquely defined provided \(k\) is even and \(N_{x}\) is odd (again see [1, Lemma 4.1] for the proof). Below, we recall some properties of the above defined projection (for proof refer to [1, Lemmas 4.2-4.3, p. 331]. **Approximation properties of the global projection.** For \(\Upsilon\mid_{I_{i}}\in H^{k+1}(I_{i})\) for \(i=1,2,\ldots,N_{x}\), there exists a positive constant \(C=C(k,\lambda)\), independent of \(\Upsilon\) such that for \(k\geq 0\), \[\begin{cases}\|Q_{\lambda}^{x}\Upsilon-\Upsilon\|_{0,I_{h}}\leq Ch_{x}^{k+1}| \Upsilon|_{k+1,I_{h}},\\ \|Q_{\lambda}^{x}\Upsilon-\Upsilon\|_{0,\Gamma_{x}}\leq Ch_{x}^{k+1/2}| \Upsilon|_{k+1,I_{h}}.\end{cases} \tag{4.3}\] **Lemma 4.1**.: _Let \(u\) be the solution of viscous Burgers' problem (1.2). Let \(u_{h}\in X_{h}\) be its finite element approximation. 
Assume that \(u\in W^{1,\infty}(I)\cap H^{k+1}(I)\). Then,_ \[\|u-u_{h}\|_{\infty,I_{h}}\lesssim h\|u\|_{W^{1,\infty}(I)}+h^{k+\frac{1}{2}} \|u\|_{k+1,I_{h}}+h^{-\frac{1}{2}}\|u-u_{h}\|_{0,I_{h}}.\] Proof.: Observe that \[\|u-u_{h}\|_{\infty,I_{h}} \leq\|u-\mathcal{P}_{x}u\|_{\infty,I_{h}}+\|\mathcal{P}_{x}u-u_{h }\|_{\infty,I_{h}}\] \[\lesssim h\|u\|_{W^{1,\infty}(I)}+h^{-\frac{1}{2}}\|\mathcal{P}_{ x}u-u_{h}\|_{0,I_{h}}\] \[\lesssim h\|u\|_{W^{1,\infty}(I)}+h^{-\frac{1}{2}}\|u-\mathcal{P} _{x}u\|_{0,I_{h}}+h^{-\frac{1}{2}}\|u-u_{h}\|_{0,I_{h}}\] \[\lesssim h\|u\|_{W^{1,\infty}(I)}+h^{k+\frac{1}{2}}\|u\|_{k+1,I_{h }}+h^{-\frac{1}{2}}\|u-u_{h}\|_{0,I_{h}}.\] Here, in the first step we have employed the triangle inequality. In the second step, the first term is a consequence of the projection estimate [1] and the second term is a consequence of the norm comparison inequality (3.3). Triangle inequality is applied again in the third step. Finally in the fourth step, we use projection estimate for the second term. **Error equation for the viscous Burgers' system:** Since the scheme with fluxes (3.15)-(3.17) is consistent, (3.8)-(3.9) also hold for solution \((u,w)\). Hence, taking the difference, we obtain the error equation \[\left(\frac{\partial e_{u}}{\partial t},\phi_{h}\right)+a_{h}(u,\phi _{h}) -a_{h}(u_{h},\phi_{h})+\sqrt{\epsilon}\,b_{h}(e_{w},\phi_{h})\] \[+(\rho u-\rho_{h}u_{h},\phi_{h})=(\rho V-\rho_{h}V_{h},\phi_{h}) \quad\forall\ \phi_{h}\in X_{h}, \tag{4.4}\] \[(e_{w},q_{h})+\sqrt{\epsilon}\,b_{h}(e_{u},q_{h})=0\quad\forall\ q_{h}\in X_{h}, \tag{4.5}\] where \[e_{u}=u-u_{h},\quad e_{w}=w-w_{h}.\] Using the projection operator, we rewrite \[\begin{split} e_{u}&:=u-u_{h}=(u-Q_{\lambda}^{x}u)+( Q_{\lambda}^{x}u-u_{h})=\theta_{u}-\eta_{u},\\ e_{w}&:=w-w_{h}=(w-Q_{1-\lambda}^{x}w)+(Q_{1- \lambda}^{x}w-w_{h})=\theta_{w}-\eta_{w}.\end{split} \tag{4.6}\] Here \[Q_{\lambda}^{x}u-u=:\eta_{u},\quad Q_{\lambda}^{x}u-u_{h}=:\theta_{u},\quad Q_ {1-\lambda}^{x}w-w=:\eta_{w},\quad Q_{1-\lambda}^{x}w-w_{h}=:\theta_{w}.\] Using (4.6) and the definition of the projection \(Q_{\lambda}^{x}\) given by (4.1)-(4.2), we can rewrite the error equation (4.4) and (4.5) as \[\begin{split}(\partial_{t}\theta_{u},\phi_{h})+\sqrt{\epsilon}b_ {h}(\theta_{w},\phi_{h})&=(\rho V-\rho_{h}V_{h},\phi_{h})+( \partial_{t}\eta_{u},\phi_{h})-(\rho\theta_{u}.\phi_{h})+(\rho\eta_{u},\phi_{ h})\\ &+((\rho-\rho_{h})\,\theta_{u},\phi_{h})-((\rho-\rho_{h})\,\eta_{u},\phi_{h})-((\rho-\rho_{h})\,u,\phi_{h})\\ &-a_{h}(u,\phi_{h})+a_{h}(u_{h},\phi_{h}),\quad\forall\ \phi_{h}\in X_{h},\end{split} \tag{4.7}\] \[(\theta_{w},q_{h})+\sqrt{\epsilon}b_{h}(\theta_{u},q_{h})=(\eta_{w},q_{h})\,, \quad\forall\ q_{h}\in X_{h}. \tag{4.8}\] **Remark 4.2**.: _Note that (4.2) yields_ \[(\widehat{\eta_{u}})_{i+1/2}=\{\eta_{u}\}_{i+1/2}+\left(\frac{1-2\lambda}{2} \right)[\![\eta_{u}]\!]_{i+1/2}=0,\] _which implies_ \[\{\eta_{u}\}_{i+1/2}=\left(\frac{2\lambda-1}{2}\right)[\![\eta_{u}]\!]_{i+1/2}. \tag{4.9}\] _Therefore, if \(\lambda=1/2\) then \(\{\eta_{u}\}=0\) and if \(\lambda>1/2\) then \(\{\eta_{u}\}\neq 0\)._ If \(\lambda>1/2\), then the following result holds (for detailed proof, see [20, Lemma 2.3, p. 2085]). **Lemma 4.3**.: _Let \(\lambda>1/2\). Suppose \((\theta_{u},\theta_{w})\in X_{h}\times X_{h}\) satisfy (4.8). 
Then, there is a positive constant \(C\), independent \(h\) and \(\epsilon\), such that_ \[\|\partial_{x}\theta_{u}\|_{0,I_{h}}+\sum_{i=0}^{N_{x}-1}h^{-\frac{1}{2}}[\![ \theta_{u}]\!]_{i+1/2}\leq C\epsilon^{-\frac{1}{2}}\|\theta_{w}\|_{0,I_{h}}. \tag{4.10}\] **Lemma 4.4**.: _Let \((u,w)\) be the solution of the viscous Burgers' equation (3.4)-(3.5). Let \((u_{h},w_{h})\in X_{h}\times X_{h}\) solve (3.8)-(3.9). Let \(u\in L^{\infty}(0,T;H^{k+1}(I))\). Then, there exists a positive constant \(C\) independent of \(h\) such that for all \(t\in(0,T]\),_ \[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{u }\|_{0,I_{h}}^{2}&+\frac{1}{2}\|\theta_{w}\|_{0,I_{h}}^{2}\leq C \left(h^{2k+2}+\|\theta_{u}\|_{0,I_{h}}^{2}+\|f-f_{h}\|_{0,\mathcal{T}_{h}}^{2} \right)+Ch^{-\frac{3}{2}}\|\theta_{u}\|_{0,I_{h}}^{3}\\ &+h^{-1}\|\theta_{u}\|_{0,I_{h}}^{4}+\left(\frac{2\lambda-1}{2} \right)C\left(h^{k+1}+h^{k+\frac{1}{2}}\|\theta_{u}\|_{0,I_{h}}\right)\left( \sum_{i=0}^{N_{x}-1}h^{-1}[\![\theta_{u}]\!]_{i+1/2}^{2}\right)^{\frac{1}{2}}. \end{split} \tag{4.11}\] Proof.: After choosing \(\phi_{h}=\theta_{u}\) and \(q_{h}=\theta_{w}\) in equations (4.7) and (4.8), respectively and then add up to obtain \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{u}\|_{0,I_{h}}^{2 }+\|\theta_{w}\|_{0,I_{h}}^{2}+\sqrt{\epsilon}\,b_{h}(\theta_{w},\theta_{u})+ \sqrt{\epsilon}\,b_{h}(\theta_{u},\theta_{w})+\|\rho^{\frac{1}{2}}\theta_{u}\| _{0,I_{h}}^{2}=((\eta_{u})_{t},\theta_{u})\] \[\qquad\qquad+(\rho\eta_{u},\theta_{u})+\frac{1}{2}\sum_{i=1}^{N_{ x}}\int_{I_{i}}\left(u^{2}-u_{h}^{2}\right)\partial_{x}\theta_{u}\,\mathrm{d}x+ \frac{1}{2}\sum_{i=0}^{N_{x}-1}\left(u^{2}-\{u_{h}\}^{2}\right)\llbracket \theta_{u}\rrbracket_{i+1/2}\] \[\qquad\qquad+\frac{1}{2}\sum_{i=0}^{N_{x}-1}\left(\{u_{h}\}^{2} -\widehat{u_{h}^{2}}\right)\llbracket\theta_{u}\rrbracket_{i+1/2}+(\eta_{w}, \theta_{w})-((\rho-\rho_{h})\,\eta_{u},\theta_{u})\] \[\qquad\qquad-((\rho-\rho_{h})\,u,\theta_{u})+(\rho V-\rho_{h}V_{ h},\theta_{u})+((\rho-\rho_{h})\,\theta_{u},\theta_{u})\] Note that the third and the fourth terms on the left hand side of the above equality sum to zero, thanks to (3.22) and the definition of the projection in (4.1)-(4.2). Furthermore, the fifth term on the left hand side is non-negative. Hence can be dropped while estimating. 
An application of the Holder inequality with the Young's inequality, the estimate from Lemma 3.7 and an application of the identity \(\frac{a^{2}}{2}-\frac{b^{2}}{2}=a(a-b)-\frac{(a-b)^{2}}{2}\) results in \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{u}\|_{0,I_{h}}^ {2}+\|\theta_{w}\|_{0,I_{h}}^{2}\lesssim\|(\eta_{u})_{t}\|_{0,I_{h}}^{2}+\| \rho\|_{L^{\infty}(I)}^{2}\|\eta_{u}\|_{0,I_{h}}^{2}+\|\eta_{w}\|_{0,I_{h}}^{2 }+\|f-f_{h}\|_{0,\mathcal{T}_{h}}^{2}\] \[\qquad\qquad+h^{-1}\|f-f_{h}\|_{0,\mathcal{T}_{h}}^{2}\|\eta_{u} \|_{0,I_{h}}^{2}+\|f-f_{h}\|_{0,\mathcal{T}_{h}}^{2}\|u\|_{L^{\infty}(I)}^{2}+ \|\theta_{u}\|_{0,I_{h}}^{2}+\|\theta_{u}\|_{L^{4}(I_{h})}^{4}\] \[\qquad\qquad+\sum_{i=1}^{N_{x}}\int_{I_{i}}u\left(u-u_{h}\right) \partial_{x}\theta_{u}\,\mathrm{d}x-\frac{1}{2}\sum_{i=1}^{N_{x}}\int_{I_{i}} \left(u-u_{h}\right)^{2}\partial_{x}\theta_{u}\,\mathrm{d}x\] \[\qquad\qquad+\frac{1}{2}\sum_{i=0}^{N_{x}-1}\left(\{u_{h}\}^{2} -\widehat{u_{h}^{2}}\right)\llbracket\theta_{u}\rrbracket_{i+1/2}+\frac{1}{2} \|\theta_{w}\|_{0,I_{h}}^{2}.\] A use of the norm comparison inequality (3.3) with the approximation property (4.3), the identity (4.9) and employing the substitution \(u-\{u_{h}\}=\{u-u_{h}\}=\{\theta_{u}\}-\{\eta_{u}\}\), in the above equation shows \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{u}\|_{0,I_{h}}^{2}+\frac{1 }{2}\|\theta_{w}\|_{0,I_{h}}^{2} \leq Ch^{2k+2} +C\|f-f_{h}\|_{0,\mathcal{T}_{h}}^{2}+\|\theta_{u}\|_{0,I_{h}}^{ 2}+h^{-1}\|\theta_{u}\|_{0,I_{h}}^{4}\] \[+\kappa_{1}+\kappa_{2}+\kappa_{3}+\kappa_{4}+\kappa_{5}. \tag{4.12}\] Here \[\kappa_{1} :=\frac{1}{2}\sum_{i=1}^{N_{x}}\int_{I_{i}}u\,\partial_{x}\theta_{ u}^{2}\,\mathrm{d}x+\sum_{i=0}^{N_{x}-1}u\,\{\theta_{u}\}\,\llbracket \theta_{u}\rrbracket_{i+1/2}\] \[\kappa_{2} :=-\sum_{i=1}^{N_{x}}\int_{I_{i}}u\,\eta_{u}\,\partial_{x} \theta_{u}\,\mathrm{d}x-\left(\frac{2\lambda-1}{2}\right)\sum_{i=0}^{N_{x}-1}u \,\llbracket\eta_{u}\rrbracket_{i+1/2}\llbracket\theta_{u}\rrbracket_{i+1/2}\] \[\kappa_{3} :=-\frac{1}{6}\sum_{i=1}^{N_{x}}\int_{I_{i}}\,\partial_{x} \theta_{u}^{3}\,\mathrm{d}x-\frac{1}{2}\sum_{i=0}^{N_{x}-1}\{\theta_{u}\}^{2} \llbracket\theta_{u}\rrbracket_{i+1/2}\] \[\kappa_{4} :=\sum_{i=1}^{N_{x}}\int_{I_{i}}\,\left(\theta_{u}-\frac{1}{2} \eta_{u}\right)\eta_{u}\,\partial_{x}\theta_{u}\,\mathrm{d}x+\left(\frac{2 \lambda-1}{2}\right)\sum_{i=0}^{N_{x}-1}\,\left(\{\theta_{u}\}-\frac{1}{2}\{ \eta_{u}\}\right)\llbracket\eta_{u}\rrbracket_{i+1/2}\llbracket\theta_{u} \rrbracket_{i+1/2}\] \[\kappa_{5} :=\frac{1}{2}\sum_{i=0}^{N_{x}-1}\left(\{u_{h}\}^{2}-\widehat{u_{h }^{2}}\right)\llbracket\theta_{u}\rrbracket_{i+1/2}.\] Using integration by parts and the identity \(\llbracket ab\rrbracket=\{a\}\llbracket b\rrbracket+\llbracket a\rrbracket\{b\}\), we obtain \[\kappa_{1}\leq C\|u_{x}\|_{L^{\infty}(I)}\|\theta_{u}\|_{0,I_{h}}^{2}.\] Taylor's theorem says \(u(x)=u(x_{i})+(x-x_{i})\,\partial_{x}u(x_{i}^{*})\) for all \(x\in I_{i}\) where \(x_{i}^{*}\in(x,x_{i})\). 
Therefore, \[\kappa_{2} =-\sum_{i=1}^{N_{x}}\,u(x_{i})\int_{I_{i}}\eta_{u}\,\partial_{x}\theta_{u}\,\mathrm{d}x-\sum_{i=1}^{N_{x}}\partial_{x}u(x_{i}^{*})\int_{I_{i}}\left(x-x_{i}\right)\,\eta_{u}\,\partial_{x}\theta_{u}\,\mathrm{d}x\] \[\quad-\left(\frac{2\lambda-1}{2}\right)\sum_{i=0}^{N_{x}-1}\,u\,\llbracket\eta_{u}\rrbracket_{i+1/2}\llbracket\theta_{u}\rrbracket_{i+1/2}\] \[\leq h^{k+1}C\|\partial_{x}u\|_{L^{\infty}(I)}\|\theta_{u}\|_{0,I_{h}}+\|u\|_{L^{\infty}(I)}\left(\frac{2\lambda-1}{2}\right)\sum_{i=0}^{N_{x}-1}h^{\frac{1}{2}}\llbracket\eta_{u}\rrbracket_{i+1/2}h^{-\frac{1}{2}}\llbracket\theta_{u}\rrbracket_{i+1/2}\] \[\leq C\left(h^{2k+2}+\|\theta_{u}\|_{0,I_{h}}^{2}\right)+\left(\frac{2\lambda-1}{2}\right)C\|\eta_{u}\|_{0,I_{h}}\left(\sum_{i=0}^{N_{x}-1}h^{-1}\llbracket\theta_{u}\rrbracket_{i+1/2}^{2}\right)^{\frac{1}{2}}.\] Note that the first term in the above expression of \(\kappa_{2}\) vanishes, thanks to the definition of the global projection (4.1). To bound the second term, we have employed the approximation property (4.3) and the inverse inequality (3.1). The last step is a consequence of the Young's inequality, the trace inequality (3.2) and the Cauchy-Schwarz inequality. For \(\kappa_{3}\), an integration by parts yields \[\kappa_{3}=\frac{1}{24}\sum_{i=0}^{N_{x}-1}\llbracket\theta_{u}\rrbracket_{i+1/2}^{3}\lesssim\|\theta_{u}\|_{\infty,I_{h}}\|\theta_{u}\|_{0,\partial I_{h}}^{2}\leq Ch^{-\frac{3}{2}}\|\theta_{u}\|_{0,I_{h}}^{3},\] where in the last step we have used the norm comparison estimate (3.3) and the trace inequality (3.2). To estimate \(\kappa_{4}\), a use of the projection estimate with (3.1)-(3.2), the Holder and the Young's inequalities yields \[\kappa_{4} \leq\|\theta_{u}\|_{0,I_{h}}\|\eta_{u}\|_{0,I_{h}}\|\partial_{x}\theta_{u}\|_{\infty,I_{h}}+C\|\partial_{x}\theta_{u}\|_{\infty,I_{h}}\|\eta_{u}\|_{0,I_{h}}^{2}\] \[\quad+\left(\frac{2\lambda-1}{2}\right)C\left(\|\theta_{u}\|_{0,\partial I_{h}}+\|\eta_{u}\|_{0,\partial I_{h}}\right)\|\eta_{u}\|_{0,\partial I_{h}}\left(\sum_{i=0}^{N_{x}-1}h^{-1}\llbracket\theta_{u}\rrbracket_{i+1/2}^{2}\right)^{\frac{1}{2}}\] \[\leq C\left(h^{2k+2}+\|\theta_{u}\|_{0,I_{h}}^{2}\right)+\left(\frac{2\lambda-1}{2}\right)Ch^{k+\frac{1}{2}}\|\theta_{u}\|_{0,I_{h}}\left(\sum_{i=0}^{N_{x}-1}h^{-1}\llbracket\theta_{u}\rrbracket_{i+1/2}^{2}\right)^{\frac{1}{2}}.\] Finally, using the fact that \(\frac{\{u_{h}\}^{2}}{2}-\frac{\widehat{u_{h}^{2}}}{2}=-\frac{\llbracket u_{h}\rrbracket^{2}}{24}\) and \(\llbracket u_{h}\rrbracket=\llbracket\eta_{u}\rrbracket-\llbracket\theta_{u}\rrbracket\), together with the approximation property (4.3), the trace inequality (3.2) and the inverse inequality (3.1) with the Holder inequality, we arrive at \[\kappa_{5} =-\frac{1}{24}\sum_{i=0}^{N_{x}-1}\left(\llbracket\theta_{u}\rrbracket_{i+1/2}^{2}-2\llbracket\eta_{u}\rrbracket_{i+1/2}\llbracket\theta_{u}\rrbracket_{i+1/2}+\llbracket\eta_{u}\rrbracket_{i+1/2}^{2}\right)\llbracket\theta_{u}\rrbracket_{i+1/2}\] \[\leq C\|\theta_{u}\|_{\infty,I_{h}}\left(\|\theta_{u}\|_{0,\partial I_{h}}^{2}+\|\theta_{u}\|_{0,\partial I_{h}}\|\eta_{u}\|_{0,\partial I_{h}}+\|\eta_{u}\|_{0,\partial I_{h}}^{2}\right)\] \[\leq h^{-\frac{1}{2}}\|\theta_{u}\|_{0,I_{h}}\left(h^{-1}\|\theta_{u}\|_{0,I_{h}}^{2}+Ch^{k}\|\theta_{u}\|_{0,I_{h}}+Ch^{2k+1}\right)\] \[\leq C\left(h^{2k+2}+\|\theta_{u}\|_{0,I_{h}}^{2}+h^{-\frac{3}{2}}\|\theta_{u}\|_{0,I_{h}}^{3}\right).\] Substituting all the above estimates into (4.12), we obtain the desired result.
**Remark 4.5**.: _In arriving at the estimate (4.11) for \(\theta_{u}\), we tackled the non-linear term by employing the algebraic identity \(\frac{a^{2}}{2}-\frac{b^{2}}{2}=a(a-b)-\frac{(a-b)^{2}}{2}\), the Taylor's theorem and the definition (3.15) of the central flux \(\widetilde{u_{h}^{2}}\)._ **Corollary 4.6**.: _If \(\lambda=1/2\) with \(k\) even and \(N_{x}\) odd, then_ \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{u}\|_{0,I_{h}}^{2 }+\|\theta_{w}\|_{0,I_{h}}^{2}\leq C\left(h^{2k+2}+\|f-f_{h}\|_{0,\mathcal{T}_{ h}}^{2}+\|\theta_{u}\|_{0,I_{h}}^{2}\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\left.+\,\,h^{-\frac{3}{2}}\|\theta_{u}\|_{0,I_{ h}}^{3}+h^{-1}\|\theta_{u}\|_{0,I_{h}}^{4}\right). \tag{4.13}\] _If \(\lambda>1/2\), then_ \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{u}\|_{0,I_{h}} ^{2}+\|\theta_{w}\|_{0,I_{h}}^{2}\leq C\left(\left(1+\epsilon^{-1}\right)h^{2k+2}+\|f-f_{h} \|_{0,\mathcal{T}_{h}}^{2}+\left(1+\epsilon^{-1}h^{2k+1}\right)\|\theta_{u}\|_ {0,I_{h}}^{2}\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\left.+\,\,h^{-\frac{3}{2}}\|\theta_{u}\|_{0,I_{h}}^{3}+h^{-1}\| \theta_{u}\|_{0,I_{h}}^{4}\right). \tag{4.14}\] Proof.: For \(\lambda=1/2\) with \(k\) even and \(N_{x}\) odd, the last term in (4.11) is vanish. Now, if \(\lambda>1/2\) then to estimate the last term of (4.11), we use (4.10). \[\left(\frac{2\lambda-1}{2}\right)\left(h^{k+1}+h^{k+\frac{1}{2}} \|\theta_{u}\|_{0,I_{h}}\right)\left(\sum_{i=0}^{N_{x}-1}h^{-1}\llbracket \theta_{u}\rrbracket_{i+1/2}^{2}\right)^{\frac{1}{2}}\] \[\leq C\epsilon^{-\frac{1}{2}}\left(h^{k+1}+h^{k+\frac{1}{2}}\| \theta_{u}\|_{0,I_{h}}\right)\|\theta_{w}\|_{0,I_{h}}\] \[\leq C(\delta)\epsilon^{-1}\left(h^{2k+2}+h^{2k+1}\|\theta_{u}\|_ {0,I_{h}}^{2}\right)+\delta\|\theta_{w}\|_{0,I_{h}}^{2}.\] For a sufficiently small \(\delta>0\), the desired result follows. ### Error estimates for Vlasov equation Since the scheme given by (3.7) with fluxes (3.12) is consistent, (3.7) holds for solution \((u,f)\) as well. Hence, by taking the difference, we obtain the error equation: \[\left(\frac{\partial}{\partial t}\left(f-f_{h}\right),\psi_{h}\right)+\mathcal{ B}(u;f,\psi_{h})-\mathcal{B}_{h}(u_{h};f_{h},\psi_{h})=0\quad\forall\,\psi_{h} \in\mathcal{Z}_{h}.\] Setting \(e_{f}:=f-f_{h}\), we rewrite the error equation as \[\left(\frac{\partial e_{f}}{\partial t},\psi_{h}\right)+a_{h}^{0}(e_{f},\psi_ {h})+\mathcal{N}(u;f,\psi_{h})-\mathcal{N}^{h}(u_{h};f_{h},\psi_{h})=0\quad \forall\,\psi_{h}\in\mathcal{Z}_{h}, \tag{4.15}\] where \[a_{h}^{0}(e_{f},\psi_{h}):=-\sum_{i,j}\int_{T_{ij}}e_{f}v\,\partial_{x}\psi_{h }\,\mathrm{d}v\,\mathrm{d}x-\sum_{j=1}^{N_{x}}\sum_{i=0}^{N_{x}-1}\int_{J_{j} }\left(\widehat{v_{f}}\llbracket\psi_{h}\rrbracket\right)_{i+1/2,v}\,\mathrm{d}v, \tag{4.16}\] \[\mathcal{N}(u;f,\psi_{h}):=-\sum_{i,j}\int_{T_{ij}}f(u-v)\,\partial_{v}\psi_{h }\,\mathrm{d}v\,\mathrm{d}x\ -\sum_{i=1}^{N_{x}}\sum_{j=0}^{N_{v}-1}\int_{I_{i}} \left((u-v)f\llbracket\psi_{h}\rrbracket\right)_{x,j+1/2}\,\mathrm{d}x\,, \tag{4.17}\] and \[\mathcal{N}^{h}(u_{h};f_{h},\psi_{h}):=-\sum_{i,j}\int_{T_{ij}}f_{h} (u_{h}-v)\,\partial_{v}\psi_{h}\,\mathrm{d}v\,\mathrm{d}x\] \[-\sum_{i=1}^{N_{x}}\sum_{j=0}^{N_{v}-1}\int_{I_{i}}\left(\widehat{ (u_{h}-v)f_{h}}\llbracket\psi_{h}\rrbracket\right)_{x,j+1/2}\,\mathrm{d}x\,. 
\tag{4.18}\] ### 2D projection Inspired by [14], we now introduce a 2D global projection \(\Pi_{\lambda_{1},\lambda_{2}}:\mathcal{C}^{0}(\mathcal{T}_{h})\to\mathcal{Z}_{h}\) for \(\lambda_{1},\lambda_{2}>\frac{1}{2}\) as \[\Pi_{\lambda_{1},\lambda_{2}}g=(\Pi_{\lambda_{1}}\otimes\Pi_{\lambda_{2}})\,g,\quad\forall\quad g\in\mathcal{C}^{0}(\overline{T_{ij}}),1\leq i\leq N_{x},1 \leq j\leq N_{v}. \tag{4.19}\] where \[\Pi_{\lambda_{1}}g:=\left\{\begin{aligned} & Q_{\lambda_{1}}^{x}g& \quad\text{if}\quad v>0,\\ & Q_{1-\lambda_{1}}^{x}g&\quad\text{if}\quad v<0, \\ & Q_{1}^{x}g&\quad\text{if}\quad v=0,\end{aligned}\right.\] and \[\Pi_{\lambda_{2}}g:=\left\{\begin{aligned} & Q_{\lambda_{2}}^{v}g& \quad\text{if}\quad(u(x)-v)>0,\\ & Q_{1-\lambda_{2}}^{v}g&\quad\text{if}\quad(u(x)-v)<0, \\ & Q_{1}^{v}g&\quad\text{if}\quad(u(x)-v)=0.\end{aligned}\right.\] More precisely, the above defined projection \(\Pi_{\lambda_{1},\lambda_{2}}\) can be explicitly defined as follows: Let \(\gamma_{1},\Bbbk_{1}^{+}\) and \(\Bbbk_{1}^{-}\) be the set of all indices \(j\) for which \(v\) changes sign, remains positive and remains negative, respectively inside \(J_{j}\). Let \(\gamma_{2},\Bbbk_{2}^{+}\) and \(\Bbbk_{2}^{-}\) be the set of all indices \(i\) for which \((u(x)-v)\) changes sign, remains positive and remains negative, respectively inside \(I_{i}\) for a fixed \(v\). The projection \(\Pi_{\lambda_{1},\lambda_{2}}g\) satisfies the following equality involving volume integrals: \[\int_{T_{ij}}\Pi_{\lambda_{1},\lambda_{2}}gz_{h}\,\mathrm{d}x\,\mathrm{d}v= \int_{T_{ij}}gz_{h}\,\mathrm{d}x\,\mathrm{d}v,\quad\forall\quad z_{h}\in \mathbb{Q}^{k-1}(T_{ij}) \tag{4.20}\] The projection \(\Pi_{\lambda_{1},\lambda_{2}}g\) satisfies the following equalities involving vertical boundary integrals: \[\begin{cases}\int_{J_{j}}\left(\Pi_{\lambda_{1},\lambda_{2}}g\right)_{i+\frac {1}{2},v}^{-}\left(z_{h}\right)_{i+\frac{1}{2},v}^{-}\mathrm{d}v=\int_{J_{j}} g_{i+\frac{1}{2},v}^{-}(z_{h})_{i+\frac{1}{2},v}^{-}\mathrm{d}v,\quad\text{if} \quad j\in\gamma_{1}\\ \int_{J_{j}}\left(\Pi_{\lambda_{1},\lambda_{2}}g\right)_{i+\frac{1}{2},v}^{ \lambda_{1}}(z_{h})_{i+\frac{1}{2},v}^{-}\mathrm{d}v=\int_{J_{j}}g_{i+\frac{1} {2},v}^{\lambda_{1}}(z_{h})_{i+\frac{1}{2},v}^{-}\mathrm{d}v,\quad\text{if} \quad j\in\Bbbk_{1}^{+}\\ \int_{J_{j}}\left(\Pi_{\lambda_{1},\lambda_{2}}g\right)_{i-\frac{1}{2},v}^{1- \lambda_{1}}(z_{h})_{i-\frac{1}{2},v}^{+}\mathrm{d}v=\int_{J_{j}}g_{i-\frac{1} {2},v}^{1-\lambda_{1}}(z_{h})_{i-\frac{1}{2},v}^{+}\mathrm{d}v,\quad\text{if} \quad j\in\Bbbk_{1}^{-}\end{cases} \tag{4.21}\] for all \(z_{h}\in\mathbb{Q}^{k-1}(T_{ij})\). 
The projection \(\Pi_{\lambda_{1},\lambda_{2}}g\) satisfies the following equalities involving horizontal boundary integrals: \[\begin{cases}\int_{I_{i}}\left(\Pi_{\lambda_{1},\lambda_{2}}g\right)_{x,j+\frac {1}{2}}^{-}(z_{h})_{x,j+\frac{1}{2}}^{-}\mathrm{d}x=\int_{I_{i}}g_{x,j+\frac{1 }{2}}^{-}(z_{h})_{x,j+\frac{1}{2}}^{-}\mathrm{d}x,\text{ if }i\in\gamma_{2}\\ \int_{I_{i}}\left(\Pi_{\lambda_{1},\lambda_{2}}g\right)_{x,j+\frac{1}{2}}^{ \lambda_{2}}(z_{h})_{x,j+\frac{1}{2}}^{-}\mathrm{d}x=\int_{I_{i}}g_{x,j+\frac{1 }{2}}^{\lambda_{2}}(z_{h})_{x,j+\frac{1}{2}}^{-}\mathrm{d}x,\text{ if }i\in\Bbbk_{2}^{+}\\ \int_{I_{i}}\left(\Pi_{\lambda_{1},\lambda_{2}}g\right)_{x,j-\frac{1}{2}}^{1- \lambda_{2}}(z_{h})_{x,j-\frac{1}{2}}^{-}\mathrm{d}x=\int_{I_{i}}g_{x,j-\frac{1 }{2}}^{1-\lambda_{2}}(z_{h})_{x,j-\frac{1}{2}}^{-}\mathrm{d}x,\text{ if }i\in\Bbbk_{2}^{-}\end{cases} \tag{4.22}\] for all \(z_{h}\in\mathbb{Q}^{k-1}(T_{ij})\). The projection \(\Pi_{\lambda_{1},\lambda_{2}}g\) satisfies the following equalities involving the boundary points: \[(\Pi_{\lambda_{1},\lambda_{2}}g)_{i+\frac{1}{2},j+\frac{1}{2}}^{-,\gamma_{2}}= g_{i+\frac{1}{2},j+\frac{1}{2}}^{-,\gamma_{2}},\quad\text{if}\quad(i,j)\in(\,\gamma_{2}, \gamma_{1}\,) \tag{4.23}\] \[\begin{cases}(\Pi_{\lambda_{1},\lambda_{2}}g)_{i+\frac{1}{2},j+\frac{1}{2}}^{ \lambda_{1},-}=g_{i+\frac{1}{2},j+\frac{1}{2}}^{\lambda_{1},-},&\text{if}\quad(i,j)\in(\gamma_{2},\Bbbk_{1}^{+})\\ (\Pi_{\lambda_{1},\lambda_{2}}g)_{i-\frac{1}{2},j+\frac{1}{2}}^{1-\lambda_{1},- }=g_{i-\frac{1}{2},j+\frac{1}{2}}^{1-\lambda_{1},-},&\text{if}\quad(i,j)\in( \gamma_{2},\Bbbk_{1}^{-})\\ \end{cases} \tag{4.24}\] \[\begin{cases}(\Pi_{\lambda_{1},\lambda_{2}}g)_{i+\frac{1}{2},j+\frac{1}{2}}^{-, \gamma_{2}}=g_{i+\frac{1}{2},j+\frac{1}{2}}^{-,\gamma_{2}},&\text{if}\quad(i,j) \in(\Bbbk_{2}^{+},\gamma_{1})\\ (\Pi_{\lambda_{1},\lambda_{2}}g)_{i+\frac{1}{2},j-\frac{1}{2}}^{-,1-\lambda_{2}}= g_{i+\frac{1}{2},j-\frac{1}{2}}^{-,1-\lambda_{2}},&\text{if}\quad(i,j)\in(\Bbbk_{2}^{-}, \gamma_{1})\end{cases} \tag{4.25}\] \[(\Pi_{\lambda_{1},\lambda_{2}}g)_{i+\frac{1}{2},j+\frac{1}{2}}^{ \lambda_{1},\lambda_{2}}=g_{i+\frac{1}{2},j+\frac{1}{2}}^{\lambda_{1},\lambda_{ 2}},\quad\text{if}\quad(i,j)\in(\Bbbk_{2}^{+},\Bbbk_{1}^{+}) \tag{4.27}\] \[(\Pi_{\lambda_{1},\lambda_{2}}g)_{i+\frac{1}{2},j-\frac{1}{2}}^{ \lambda_{1},1-\lambda_{2}}=g_{i+\frac{1}{2},j-\frac{1}{2}}^{\lambda_{1},1- \lambda_{2}},\quad\text{if}\quad(i,j)\in(\Bbbk_{2}^{-},\Bbbk_{1}^{+})\] (4.28) \[(\Pi_{\lambda_{1},\lambda_{2}}g)_{i-\frac{1}{2},j+\frac{1}{2}}^{ 1-\lambda_{1},\lambda_{2}}=g_{i-\frac{1}{2},j+\frac{1}{2}}^{1-\lambda_{1}, \lambda_{2}},\quad\text{if}\quad(i,j)\in(\Bbbk_{2}^{+},\Bbbk_{1}^{-})\] (4.29) \[(\Pi_{\lambda_{1},\lambda_{2}}g)_{i-\frac{1}{2},j-\frac{1}{2}}^{ 1-\lambda_{1},1-\lambda_{2}}=g_{i-\frac{1}{2},j-\frac{1}{2}}^{1-\lambda_{1},1- \lambda_{2}},\quad\text{if}\quad(i,j)\in(\Bbbk_{2}^{-},\Bbbk_{1}^{-}). 
\tag{4.26}\] Here, we have used the following notations: \[g_{i+\frac{1}{2},j+\frac{1}{2}}^{\varkappa,\varsigma}:=\varkappa \varsigma g_{i+\frac{1}{2},j+\frac{1}{2}}^{-,-}+\varkappa\left(1-\varsigma \right)g_{i+\frac{1}{2},j+\frac{1}{2}}^{-,+}\] \[\qquad\qquad+\left(1-\varkappa\right)\varsigma g_{i+\frac{1}{2},j +\frac{1}{2}}^{+,-}+\left(1-\varkappa\right)\left(1-\varsigma\right)g_{i+\frac{ 1}{2},j+\frac{1}{2}}^{+,+},\] \[g_{i-1/2,v}^{\varkappa}:=\left(1-\varkappa\right)\left(g_{i-1/2,v}^{+}+ \varkappa\left(g\right)_{i-1/2,v}^{-},\right.\] \[g_{x,j-1/2}^{\varsigma}:=\left(1-\varsigma\right)\left(g\right)_{x,j-1/2}^{+}+ \varsigma\left(g\right)_{x,j-1/2}^{-},\] \[g_{i-1/2,j-1/2}^{\varkappa,-}:=\left(1-\varkappa\right)g\left(x_{i-1/2}^{+},v _{j-1/2}^{-}\right)+\varkappa g\left(x_{i-1/2}^{-},v_{j-1/2}^{-}\right),\] \[g_{i-1/2,j-1/2}^{-,\varkappa}:=\left(1-\varkappa\right)g\left(x_{i-1/2}^{-},v _{j-1/2}^{+}\right)+\varkappa g\left(x_{i-1/2}^{-},v_{j-1/2}^{-}\right),\] \[g_{i-1/2,j-1/2}^{\pm,\pm}:=g\left(x_{i-1/2}^{\pm},v_{j-1/2}^{\pm}\right),\] with \(x_{i-1/2}^{\pm}\) denoting \(\lim_{\varrho\to 0^{+}}\left(x_{i-1/2}\pm\varrho\right)\) and \(\{v_{j-1/2}^{\pm}\}\) denoting \(\lim_{\varrho\to 0^{+}}\left(v_{j-1/2}\pm\varrho\right)\). In the above notations the parenthesis \(\varkappa\) takes the value \(\lambda_{1}\) and \(1-\lambda_{1}\) whereas \(\varsigma\) takes the value \(\lambda_{2}\) and \(1-\lambda_{2}\). In the following lemma, we state the approximation property of the global projection \(\Pi_{\lambda_{1},\lambda_{2}}\) whose proof can be found in [1, Lemma 3.1, p. 8]. **Lemma 4.7**.: _For \(f\in H^{k+1}(\Omega)\), there exists a unique \(\Pi_{\lambda_{1},\lambda_{2}}f\) defined by (4.19) and satisfying the following approximation property:_ \[\|f-\Pi_{\lambda_{1},\lambda_{2}}f\|_{0,\mathcal{T}_{h}}+h^{\frac{1}{2}}\|f- \Pi_{\lambda_{1},\lambda_{2}}f\|_{0,\Gamma_{h}}\leq C\,h^{k+1}\|f\|_{k+1, \Omega}, \tag{4.30}\] _where \(\|\cdot\|_{0,\Gamma_{h}}^{2}=\|\cdot\|_{0,\Gamma_{x}}^{2}+\|\cdot\|_{0,\Gamma_{ v}}^{2}\)._ Using the projection, split \(f-f_{h}\) as \[e_{f}:=f-f_{h}:=(\Pi_{\lambda_{1},\lambda_{2}}f-f_{h})-(\Pi_{\lambda_{1}, \lambda_{2}}f-f)=:\theta_{f}-\eta_{f}, \tag{4.31}\] where \[\theta_{f}=\Pi_{\lambda_{1},\lambda_{2}}f-f_{h}\quad\text{and}\quad\eta_{f}= \Pi_{\lambda_{1},\lambda_{2}}f-f.\] **Lemma 4.8**.: _Let \(u\in C^{0}(I),f\in C^{0}(\Omega)\) and \(f_{h}\in\mathcal{Z}_{h}\) with \(k\geq 0\). Then, the following identity holds true:_ \[\mathcal{N}(u;f,\theta_{f}) -\mathcal{N}^{h}(u_{h};f_{h},\theta_{f})=\sum_{i,j}\int_{I_{i}} \left(\frac{2\lambda_{2}-1}{2}\right)|(u_{h}-v)|\,[\![\theta_{f}]\!]_{x,j-1/2}^{2} \,\mathrm{d}x\] \[+\sum_{i,j}\int_{T_{ij}}\left((u-u_{h})\,\partial_{v}f\,\theta_{f }-\frac{1}{2}\theta_{f}^{2}\right)\,\mathrm{d}v\,\mathrm{d}x+\mathcal{K}^{2}(u_ {h}-v,f,\theta_{f}),\] _where_ \[\mathcal{K}^{2}(u_{h}-v,f,\theta_{f})=\sum_{i,j}\int_{T_{ij}}\eta_{f}(u_{h}-v)\, \partial_{v}\theta_{f}\,\mathrm{d}v\,\mathrm{d}x+\sum_{i,j}\int_{I_{i}}\left( \overline{(u_{h}-v)\,\eta_{f}}[\![\theta_{f}]\!]\right)_{x,j-1/2}\,\mathrm{d}x\,. \tag{4.32}\] Proof.: Proof is similar to the [1, Lemma 4.8] with some changes in boundary terms. So, we are providing a short proof. 
Subtracting the non-linear terms (4.17) and (4.18), we arrive at \[\begin{split}&\mathcal{N}\left(u;f,\theta_{f}\right)-\mathcal{N}^{ h}\left(u_{h};f_{h},\theta_{f}\right)=-\sum_{i,j}\int_{T_{ij}}\left[f\left(u-v \right)-f_{h}\left(u_{h}-v\right)\right]\partial_{v}\theta_{f}\,\mathrm{d}x\, \mathrm{d}v\\ &\quad-\sum_{i,j}\int_{I_{i}}\left(\left(\left(u-v\right)f- \overline{\left(u_{h}-v\right)f_{h}}\right)\llbracket\theta_{f}\rrbracket \right)_{x,j-1/2}\,\,\mathrm{d}x=T_{1}+T_{2}+T_{3}.\end{split} \tag{4.33}\] After adding and subtracting \(\left(u_{h}-v\right)f\) in the volume term, we obtain \[T_{1} =-\sum_{i,j}\int_{T_{ij}}\left(f\left(u-u_{h}\right)\right) \partial_{v}\theta_{f}\,\mathrm{d}x\,\mathrm{d}v,\] \[T_{2} =-\sum_{i,j}\int_{T_{ij}}\left(\left(u_{h}-v\right)\left(f-f_{h} \right)\right)\partial_{v}\theta_{f}\,\mathrm{d}x\,\mathrm{d}v,\] \[T_{3} =-\sum_{i,j}\int_{I_{i}}\left(\left(\left(u-v\right)f-\overline{ \left(u_{h}-v\right)f_{h}}\right)\llbracket\theta_{f}\rrbracket\right)_{x,j-1/ 2}\,\,\mathrm{d}x.\] An integration by parts with respect to the \(v\) variable yields \[T_{1} =\sum_{i,j}\int_{T_{ij}}\left(u-u_{h}\right)\,\partial_{v}f\theta _{f}\,\mathrm{d}x\,\mathrm{d}v+\sum_{i,j}\int_{I_{i}}\left(u-u_{h}\right)f \llbracket\theta_{f}\rrbracket_{x,j-1/2}\,\mathrm{d}x\] \[=:T_{11}+T_{12}.\] We rewrite the term \(T_{2}\) as \[T_{2} =-\sum_{i,j}\int_{T_{ij}}\left(f-f_{h}\right)\left(u_{h}-v\right) \,\partial_{v}\theta_{f}\,\mathrm{d}x\,\mathrm{d}v\] \[=\sum_{i,j}\int_{T_{ij}}\eta_{f}\left(u_{h}-v\right)\,\partial_{ v}\theta_{f}\,\mathrm{d}x\,\mathrm{d}v-\frac{1}{2}\sum_{i,j}\int_{T_{ij}} \left(u_{h}-v\right)\,\partial_{v}\theta_{f}^{2}\,\mathrm{d}x\,\mathrm{d}v\] \[=:T_{21}+T_{22}.\] An integration by parts with respect to the \(v\) variable in the term \(T_{22}\) yields \[T_{22} =-\frac{1}{2}\sum_{i,j}\int_{T_{ij}}\theta_{f}^{2}\,\mathrm{d}x \,\mathrm{d}v+\sum_{i,j}\int_{I_{i}}\left(\frac{\left(u_{h}-v\right)}{2} \llbracket\theta_{f}^{2}\rrbracket\right)_{x,j-1/2}\,\,\mathrm{d}x=:T_{221}+T_{ 22}.\] We finally deal with the boundary term \(T_{3}\) as follows: \[T_{3} =-\sum_{i,j}\int_{I_{i}}\left(\left(\left(u-v\right)f-\overline{ \left(u_{h}-v\right)f_{h}}\right)\llbracket\theta_{f}\rrbracket\right)_{x,j-1/ 2}\,\,\mathrm{d}x\] \[=-\sum_{i,j}\int_{I_{i}}\left(\left(\left(u-v\right)f-\left(u_{h }-v\right)f\right)\llbracket\theta_{f}\rrbracket\right)_{x,j-1/2}\,\,\mathrm{d}x\] \[=T_{31}+T_{32}.\] From \(T_{12},T_{222},T_{31}\) and \(T_{32}\) terms, we obtain \[T_{31}+T_{12}=0.\] \[T_{222}+T_{32} =\sum_{i,j}\int_{I_{i}}\left((u_{h}-v)\left\{\theta_{f}\right\}\llbracket \theta_{f}\rrbracket-(u_{h}-v)\left\{f-f_{h}\right\}\llbracket\theta_{f} \rrbracket\right)_{x,j-1/2}\,\mathrm{d}x\] \[\quad+\sum_{i,j}\int_{I_{i}}\left(\frac{2\lambda_{2}-1}{2} \right)\left(|(u_{h}-v)|\left\llbracket f-f_{h}\right\rrbracket\llbracket \theta_{f}\rrbracket\right)_{x,j-1/2}\,\mathrm{d}x\] \[=\sum_{i,j}\int_{I_{i}}\left(\frac{2\lambda_{2}-1}{2}\right) \left(|(u_{h}-v)|\left\llbracket\theta_{f}\rrbracket^{2}+\overline{(u_{h}-v) \,\eta_{f}}\llbracket\theta_{f}\rrbracket\right)_{x,j-1/2}\,\mathrm{d}x.\] Here, in first step we use \(\llbracket ab\rrbracket=\{a\}\llbracket b\rrbracket+\llbracket a\rrbracket\{b\}\). Now, after putting all identities in equation (4.33), we conclude the proof. 
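As a quick side check (purely illustrative and not part of the argument), the product rule for jumps used in the last step above, \(\llbracket ab\rrbracket=\{a\}\llbracket b\rrbracket+\llbracket a\rrbracket\{b\}\), can be verified symbolically; the sign convention chosen for the jump is immaterial, since both sides change sign together.

```python
import sympy as sp

# One-sided traces at an interior face.  The conventions below are
# [[g]] = g_plus - g_minus and {g} = (g_plus + g_minus)/2; the identity also
# holds with the opposite sign convention for the jump.
ap, am, bp, bm = sp.symbols('a_plus a_minus b_plus b_minus')

jump = lambda gp, gm: gp - gm
avg = lambda gp, gm: (gp + gm) / 2

lhs = jump(ap * bp, am * bm)                                   # [[a b]]
rhs = avg(ap, am) * jump(bp, bm) + jump(ap, am) * avg(bp, bm)  # {a}[[b]] + [[a]]{b}
print(sp.expand(lhs - rhs))                                    # prints 0
```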
After choosing \(\psi_{h}=\theta_{f}\) in the error equation (4.15) and a use of equation (4.31) and Lemma 4.8, we rewrite equation (4.15) in \(\theta_{f}\) as \[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{ f}\|_{0,\mathcal{T}_{h}}^{2}+\sum_{j}\int_{J_{j}}\left(\frac{2\lambda_{1}-1}{2} \right)|v|\left\llbracket\theta_{f}\rrbracket_{i-1/2,v}^{2}\,\mathrm{d}v+\sum_ {i,j}\int_{I_{i}}\left(\frac{2\lambda_{2}-1}{2}\right)|u_{h}-v|\left\llbracket \theta_{f}\rrbracket_{x,j-1/2}^{2}\,\mathrm{d}x\\ &=((\eta_{f})_{t},\theta_{f})-\mathcal{K}^{1}(v,\eta_{f},\theta_ {f})-\sum_{i,j}\int_{T_{ij}}\left((u-u_{h})\,\partial_{v}f\,\theta_{f}-\frac{1 }{2}\theta_{f}^{2}\right)\,\mathrm{d}v\,\mathrm{d}x\\ &\quad-\mathcal{K}^{2}(u_{h}-v,f,\theta_{f}),\end{split} \tag{4.34}\] where, \[\mathcal{K}^{1}(v,\eta_{f},\theta_{f})=\sum_{i,j}\int_{T_{ij}}\eta_{f}v\, \partial_{x}\theta_{f}\,\mathrm{d}x\,\mathrm{d}v+\sum_{i,j}\int_{J_{j}}\left( \widehat{v\,\eta_{f}}\llbracket\theta_{f}\rrbracket\right)_{i-1/2,v}\,\mathrm{ d}v\,. \tag{4.35}\] To show the estimate on \(e_{f}:=\theta_{f}-\eta_{f}\), it is enough to show the estimate on \(\theta_{f}\) since from (4.30) estimate on \(\eta_{f}\) are known. Let \(J^{i+1/2}=\{x_{i+1/2}\}\times J\) and \(I^{j+1/2}=I\times\{v_{j+1/2}\}\). **Lemma 4.9**.: _Let \(k\geq 1\) and let \(f\in C^{0}([0,T];W^{1,\infty}(\Omega)\cap H^{k+1}(\Omega))\) be the solution of (1.1). Let \(f_{h}(t)\in\mathcal{Z}_{h}\) be its approximation satisfying (3.7) and let \(\mathcal{K}^{1}(v,\eta_{f},\theta_{f})\) be defined as in (4.35). Assume that the partition \(\mathcal{T}_{h}\) is constructed so that none of the components of \(v\) vanish inside any element. Then, the following estimate holds:_ \[|\mathcal{K}^{1}(v,\eta_{f},\theta_{f})|\leq Ch^{k+1}\|f\|_{k+1,\Omega}\| \theta_{f}\|_{0,\mathcal{T}_{h}}\quad\forall\quad t\in[0,T]. \tag{4.36}\] Proof.: Define \[\mathcal{K}^{1}(v,\eta_{f},\theta_{f})=\sum_{i,j}\mathcal{K}^{1}_{ij}(v,\eta_{ f},\theta_{f}),\] where \[K^{1}_{ij}(v,\eta_{f},\theta_{f})=\int_{T_{ij}}\eta_{f}v\,\partial_{x}\theta_{ f}\,\mathrm{d}x\,\mathrm{d}v+\int_{J_{j}}\left(\widehat{v\eta_{f}} \llbracket\theta_{f}\rrbracket\right)_{i-1/2,v}\,\mathrm{d}v.\] We will estimate \(\mathcal{K}^{1}(v,\eta_{f},\theta_{f})\) for a single arbitrary element \(T_{ij}\) and then we sum over all elements. Let \(\bar{v}=\mathcal{P}^{0}_{v}(v)\) be the \(L^{2}\)-projection onto the piecewise constants of \(T_{ij}\) of \(v\) then we write \[\mathcal{K}^{1}_{ij}(v,\eta_{f},\theta_{f})=\mathcal{K}^{1}_{ij}(v-\bar{v}, \eta_{f},\theta_{f})+\mathcal{K}^{1}_{ij}(\bar{v},\eta_{f},\theta_{f}). \tag{4.37}\] The Holder inequality with the projection estimates and (3.1)-(3.2) yields \[\mathcal{K}^{1}_{ij}(v-\bar{v},\eta_{f},\theta_{f}) =\|v-\bar{v}\|_{L^{\infty}(J)}\left(\|\eta_{f}\|_{0,T_{ij}}\| \partial_{x}\theta_{f}\|_{0,T_{ij}}+\sum_{m=i+1/2}\|\eta_{f}\|_{0,J^{m}}\| \theta_{f}\|_{0,J^{m}}\right)\] \[\leq h_{v}h^{k}\|f\|_{k+1,T_{ij}}\|\theta_{f}\|_{0,T_{ij}}.\] Using (4.19) and (3.12), the last term of (4.37) is zero. This completes the proof. **Lemma 4.10**.: _Let \(\mathcal{T}_{h}\) be a Cartesian mesh of \(\Omega\), \(k\geq 1\) and let \((u_{h},f_{h})\in X_{h}\times\mathcal{Z}_{h}\) be the solution to (3.7). Let \((u,f)\in L^{\infty}([0,T];\,W^{1,\infty}(I)\cap H^{k+1}(I))\times L^{\infty}([0, T];\,W^{1,\infty}(\Omega)\cap H^{k+1}(\Omega))\) and let \(\mathcal{K}^{2}\) be defined as in (4.32). 
Then, the following estimate holds_ \[|\mathcal{K}^{2}(u_{h}-v,f,\theta_{f})|\leq C\left(h^{k}\|u_{h}-u\|_{\infty,I_{h}}+h^{k+1}\|u\|_{W^{1, \infty}(I)}\right)\|f\|_{k+1,\Omega}\|\theta_{f}\|_{0,\mathcal{T}_{h}}. \tag{4.38}\] Proof.: Let \[\mathcal{K}^{2}(u_{h}-v,f,\theta_{f})=\sum_{i,j}\mathcal{K}^{2}_{ij}(u_{h}-v,f,\theta_{f}),\] where \[\mathcal{K}^{2}_{ij}(u_{h}-v,f,\theta_{f})=\int_{T_{ij}}\left(u_{h}-v\right) \eta_{f}\,\partial_{v}\theta_{f}\,\mathrm{d}x\,\mathrm{d}v+\int_{I_{i}}\left( \widehat{(u_{h}-v)\,\eta_{f}}[\![\theta_{f}]\!]\right)_{x,j+1/2}\,\,\mathrm{d}x.\] First, we prove the estimate for a single element \(T_{ij}\) then we take sum over all element. By adding and subtracting \((u-v)\) in \(\mathcal{K}^{2}_{ij}(u_{h}-v,f,\theta_{f})\), we obtain \[\mathcal{K}^{2}_{ij}(u_{h}-v,f,\theta_{f})\leq\mathcal{K}^{2}_{ij}((u_{h}-u), f,\theta_{f})+\mathcal{K}^{2}_{ij}((u-v),f,\theta_{f}). \tag{4.39}\] Using the Holder inequality, (3.1)-(3.2) with the projection estimates from the first term, we obtain \[\mathcal{K}^{2}_{ij}(u_{h}-u),f,\theta_{f}) \leq\|u_{h}-u\|_{\infty,I_{i}}\left(\|\eta_{f}\|_{0,T_{ij}}\| \partial_{v}\theta_{f}\|_{0,T_{ij}}\right.\] \[+\sum_{m=i+1/2}\|\eta_{f}\|_{0,I^{m}}\|\theta_{f}\|_{0,I^{m}}\right)\] \[\leq Ch^{k}\|u_{h}-u\|_{\infty,I_{i}}\|f\|_{k+1,T_{ij}}\|\theta_{ f}\|_{0,T_{ij}}.\] Now, to complete the proof we need to estimate the last term of (4.39) for this we make different cases: 1. If \((u(x)-v)\neq 0,\ \forall\ x\in I_{i}\). 2. If \((u-v)\) vanish inside \(T_{ij}\). In the case (1), without loss of generality assume that \((u(x)-v)>0\) for all \(x\in I_{i}\). A use of equation (4.19) with (3.12) shows: \[\mathcal{K}^{2}_{ij}\left((u-v),f,\theta_{f}\right)=0.\] In the case (2), the projection operator is \(\Pi_{\lambda_{1},\lambda_{2}}\). Then, the Holder inequality with (3.1)-(3.2) and projection estimate, yields \[\mathcal{K}^{2}_{ij}((u-v),f,\theta_{f}) \leq\|u-v\|_{L^{\infty}(I)}\left(\|\eta_{f}\|_{0,T_{ij}}\|\partial _{v}\theta_{f}\|_{0,T_{ij}}+\sum_{m=j+1/2}\|\eta_{f}\|_{0,I^{m}}\|\theta_{f}\| _{0,I^{m}}\right)\] \[\leq Ch^{k}\|u-v\|_{L^{\infty}(I)}\|f\|_{k+1,T_{ij}}\|\theta_{f}\| _{0,T_{ij}}.\] Now, using the fact that, there exist \(x^{*}\in I_{i}\) such that \((u(x^{*})-v)=0\) together with the mean value theorem, we obtain \[\|u-v\|_{L^{\infty}(I)}=\max_{x\in I_{i}}|u(x)-u(x^{*})|\leq C\max_{x\in I_{i} }|x-x^{*}|\,\|u_{x}\|_{L^{\infty}(I)}\leq Ch\|u\|_{W^{1,\infty}(I)}.\] Hence, \[\mathcal{K}^{2}_{ij}((u-v),f,\theta_{f})\leq Ch^{k+1}\|u\|_{W^{1,\infty}(I)}\| f\|_{k+1,T_{ij}}\|\theta_{f}\|_{0,T_{ij}}.\] This completes the proof. **Lemma 4.11**.: _Let \(f(t)\) be the solution of Vlasov equation (1.1). Let \(f_{h}(t)\in\mathcal{Z}_{h}\) be the finite approximation. Assume that \(f\in L^{\infty}([0,T];H^{k+1}(\Omega)\cap W^{1,\infty}(\Omega))\). Then there exist a constant \(C>0\) such that for \(t\in(0,T]\)_ \[\|f(t)-f_{h}(t)\|_{\infty,\mathcal{T}_{h}}\leq C\left(h\|f(t)\|_{W^{1,\infty}( \Omega)}+h^{k+\frac{1}{2}}\|f(t)\|_{k+1,\Omega}+h^{-\frac{1}{2}}\|f(t)-f_{h}(t) \|_{0,\mathcal{T}_{h}}\right).\] Proof.: The proof similar to Lemma 4.1 just taking \(u=f,u_{h}=f_{h}\) and \(Q_{\lambda}u=\Pi_{\lambda_{1},\lambda_{2}}f\). This completes the proof. **Lemma 4.12**.: _Let \(k\geq 1\) and let \(f\in\mathcal{C}^{1}([0,T];H^{k+1}(\Omega)\cap W^{1,\infty}(\Omega))\) be the solution of the Vlasov - viscous Burgers' problem (1.1)-(1.2) and let \(u\in\mathcal{C}^{0}([0,T];H^{k+1}(I)\cap W^{1,\infty}(I))\) be the associated fluid velocity. 
Let \((u_{h},f_{h})\in X_{h}\times\mathcal{Z}_{h}\) be the approximation of \((u,f)\), respectively, then_ \[\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}\leq Ch^{2k +2}+C\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}+C\|u-u_{h}\|_{0,I_{h}}^{2}, \tag{4.40}\] _where \(C\) depends on the polynomial degree \(k\), the shape regularity of the partition and it is also depends on \((u,f)\)._ Proof.: From equation (4.34) with triangle inequality, we have \[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{ f}\|_{0,\mathcal{T}_{h}}^{2}+\sum_{i,j}\int_{J_{j}}\left(\frac{2\lambda_{1}-1}{2} \right)|v|\llbracket\theta_{f}\rrbracket_{i-1/2,v}^{2}\,\mathrm{d}v\\ &+\sum_{i,j}\int_{I_{i}}\left(\frac{2\lambda_{2}-1}{2}\right)|u_{ h}-v|\llbracket\theta_{f}\rrbracket_{x,j-1/2}^{2}\,\mathrm{d}x\leq((\eta_{f})_{t}, \theta_{f})+\frac{1}{2}\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}+|\mathcal{K}^{1 }|+|\mathcal{K}^{2}|+|T_{1}|.\end{split} \tag{4.41}\] Here, \[T_{1}=\sum_{i,j}\int_{T_{ij}}(u-u_{h})\,\partial_{v}f\,\theta_{f}\,\mathrm{d}v \,\mathrm{d}x.\] First estimate \(T_{1}\), \[\begin{split}|T_{1}|&=\left|\sum_{i,j}\int_{T_{ij}}( u-u_{h})\,\partial_{v}f\,\theta_{f}\,\mathrm{d}v\,\mathrm{d}x\right|\leq\|u-u_{h}\| _{0,I_{h}}\|\partial_{v}f\|_{L^{\infty}(\Omega)}\|\theta_{f}\|_{0,\mathcal{T} _{h}}\\ &\leq\|u-u_{h}\|_{0,I_{h}}^{2}\|\partial_{v}f\|_{L^{\infty}( \Omega)}+\|\partial_{v}f\|_{L^{\infty}(\Omega)}\|\theta_{f}\|_{0,\mathcal{T} _{h}}^{2},\end{split} \tag{4.42}\] here, in second step use the Holder inequality, in third step the Young's inequality. Now, \(\mathcal{K}^{1}\) is estimated from Lemma 4.9 and the arithmetic-geometric inequality, \[|\mathcal{K}^{1}|\leq Ch^{2k+2}\|f\|_{k+1,\Omega}^{2}+C\|\theta_{f}\|_{0, \mathcal{T}_{h}}^{2}. \tag{4.43}\] To deal with \(\mathcal{K}^{2}\), we observe that the bound (4.38) in Lemma 4.10 yields \[\begin{split}|\mathcal{K}^{2}|&\leq C\left(h^{k}\|u- u_{h}\|_{\infty,I_{h}}+Ch^{k+1}\|u\|_{W^{1,\infty}(I)}\right)\|f\|_{k+1, \Omega}\|\theta_{f}\|_{0,\mathcal{T}_{h}}\\ &\leq Ch^{k}\left(h\|u\|_{W^{1,\infty}(I)}+h^{k+\frac{1}{2}}\|u \|_{k+1,I}+h^{-\frac{1}{2}}\|u-u_{h}\|_{0,I_{h}}\right)\|f\|_{k+1,\Omega}\| \theta_{f}\|_{0,\mathcal{T}_{h}}\\ &\quad+Ch^{k+1}\|u\|_{W^{1,\infty}(I)}\|f\|_{k+1,\Omega}\|\theta_ {f}\|_{0,\mathcal{T}_{h}}\\ &\leq C_{u,f}h^{2k+2}+C\|u-u_{h}\|_{0,I_{h}}^{2}+C\|\theta_{f}\|_ {0,\mathcal{T}_{h}}^{2}.\end{split} \tag{4.44}\] here, in second step we use Lemma 4.1, in third step use the Young's inequality. Now, substituting the estimates (4.42)-(4.44) into (4.41) and using the fact that the last two terms on left hand side are non-negative, the Holder inequality, projection estimate (4.30) with the Young's inequality, we obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}\leq Ch^{2k +2}+C\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}+C\|u-u_{h}\|_{0,I_{h}}^{2}.\] where \(C\) is now independent of \(h\) and \(f_{h}\), and depends on \(t\) and on the solution \((u,f)\) through its norm. This completes the proof. **Theorem 4.13**.: _Let \(k\geq 1\) and let \((u,f)\in\mathcal{C}^{1}(0,T;H^{k+1}(I)\cap W^{1,\infty}(I))\times\mathcal{C}^{ 1}(0,T;H^{k+1}(\Omega)\cap W^{1,\infty}(\Omega))\) be the solution of the Vlasov - viscous Burgers' system. 
Let \((f_{h},u_{h},w_{h})\in\mathcal{C}^{1}(0,T;\mathcal{Z}_{h})\times\mathcal{C}^{ 1}(0,T;X_{h})\times\mathcal{C}^{0}(0,T;X_{h})\) be the DG-LDG approximation of (3.7),(3.8)-(3.9), then for \(\lambda=1/2\) with \(k\) even and \(N_{x}\) odd or \(\lambda>\frac{1}{2}\), there exist a positive constant \(C\) independent of \(h\) such that for all \(t\in(0,T]\)_ \[\|u-u_{h}\|_{L^{\infty}(0,T;L^{2}(I))}+\|w-w_{h}\|_{L^{2}([0,T]\times I)}+\|f-f_{ h}\|_{L^{\infty}(0,T;L^{2}(\Omega))}\leq C\tilde{C}h^{k+1},\] _where, \(\tilde{C}=2\) for \(\lambda=1/2\) and \(\tilde{C}=\left(1+\epsilon^{-1}\right)^{\frac{1}{2}}\left(1+\epsilon^{-1}h^{2k+ 1}\right)^{\frac{1}{2}}\) for \(\lambda>1/2\)._ Proof.: A use of equations (4.6), (4.31) and triangle inequality implies \[\|u-u_{h}\|_{0,I_{h}}+\|w-w_{h}\|_{0,I_{h}}+\|f-f_{h}\|_{0,\mathcal{T }_{h}} \leq\|\eta_{u}\|_{0,I_{h}}+\|\theta_{u}\|_{0,I_{h}}+\|\eta_{w}\|_{0,I_ {h}}\] \[\quad+\|\theta_{w}\|_{0,I_{h}}+\|\eta_{f}\|_{0,\mathcal{T}_{h}}+ \|\theta_{f}\|_{0,\mathcal{T}_{h}}.\] From equations (4.3) and (4.30), we know the estimate of \(\|\eta_{u}\|_{0,I_{h}},\|\eta_{w}\|_{0,I_{h}}\) and \(\|\eta_{f}\|_{0,\mathcal{T}_{h}}\), respectively, so it is enough to estimate \(\|\theta_{u}\|_{0,I_{h}}+\|\theta_{w}\|_{0,I_{h}}+\|\theta_{f}\|_{0,\mathcal{ T}_{h}}\). Adding (4.13)-(4.14) and (4.40) with the Young's inequality, we obtain \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\theta_{u}\|_{ 0,I_{h}}^{2}+\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}\right)+\|\theta_{w}\|_{0,I_{h}}^{2}&\leq C\left(Ah^{2k+2}+B\|\theta_{u}\|_{0,I_{h}}^{2}+h ^{-\frac{3}{2}}\|\theta_{u}\|_{0,I_{h}}^{3}\right.\\ &\quad+\left.h^{-1}\|\theta_{u}\|_{0,I_{h}}^{4}+\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}\right)\\ &\leq C\left(Ah^{2k+2}+B\|\theta_{u}\|_{0,I_{h}}^{2}+\left.h^{-3 }\|\theta_{u}\|_{0,I_{h}}^{4}+\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}\right) \end{split} \tag{4.45}\] here \(A=1,B=1\) for \(\lambda=1/2\) and \(A=1+\epsilon^{-1},B=1+\epsilon^{-1}h^{2k+1}\) for \(\lambda>1/2\). Setting \[\left|\hskip-1.422638pt\left|\hskip-1.422638pt\left|\left(\theta_{f},\theta_ {u},\theta_{w}\right)\right|\hskip-1.422638pt\right|\hskip-1.422638pt\right|^{2} :=\|\theta_{f}\|_{0,\mathcal{T}_{h}}^{2}+\|\theta_{u}\|_{0,I_{h}}^{2}+\int_{0} ^{t}\|\theta_{w}\|_{0,I_{h}}^{2}\,\mathrm{d}s \tag{4.46}\] and a function \(\Phi\) as \[\Phi(t)=Ah^{2k+2}+\int_{0}^{t}\left(B\|\hskip-1.422638pt\left|\hskip-1.422638pt \left|\left(\theta_{f},\theta_{u},\theta_{w}\right)\right|\hskip-1.422638pt \right|\hskip-1.422638pt\right|^{2}+h^{-3}\|\hskip-1.422638pt\left|\left(\theta_ {f},\theta_{u},\theta_{w}\right)\hskip-1.422638pt\right|\hskip-1.422638pt \right|^{4}\right)\,\mathrm{d}s. \tag{4.47}\] An integration of equation (4.45) with respect to \(t\) from \(0\) to \(t\) and a use of (4.46)-(4.47) shows \[\left|\hskip-1.422638pt\left|\hskip-1.422638pt\left|\left(\theta_{f},\theta_ {u},\theta_{w}\right)\right|\hskip-1.422638pt\right|\hskip-1.422638pt\right|^{2} \leq C\Phi(t).\] Without loss of generality assume that \(\left|\hskip-1.422638pt\left|\hskip-1.422638pt\left|\left(\theta_{f},\theta_ {u},\theta_{w}\right)\right|\hskip-1.422638pt\right|\hskip-1.422638pt\right|>0\), otherwise, we add an arbitrary small quantity say \(\delta\) and proceed as in a similar way as describe below and then pass the limit as \(\delta\to 0\). Note that \(0<\Phi(0)\leq\Phi\) and \(\Phi\) is differentiable. 
An differentiation of \(\Phi\) with respect to \(t\), implies \[\partial_{t}\Phi(t) =B\|\hskip-1.422638pt\left|\hskip-1.422638pt\left(\theta_{f}, \theta_{u},\theta_{w}\right)\right|\hskip-1.422638pt\right|^{2}+h^{-3}\| \hskip-1.422638pt\left|\hskip-1.422638pt\left|\left(\theta_{f},\theta_{u}, \theta_{w}\right)\hskip-1.422638pt\right|\hskip-1.422638pt\right|^{4}\] \[\leq C\left(B\Phi(t)+h^{-3}\left(\Phi(t)\right)^{2}\right).\] Moreover, \(\partial_{t}\Phi(t)>0\) and hence, \(\Phi(t)\) is strictly monotonically increasing function which is also positive. An integration in time \(t\) yields \[\int_{0}^{t}\frac{\partial_{s}\Phi(s)}{\Phi(s)\left(B+h^{-3}\Phi(s)\right)}\, \mathrm{d}s\leq\int_{0}^{t}C\,\mathrm{d}s\leq CT.\] After evaluating integration on the left hand side exactly by using \(\Phi(0)=Ah^{2k+2}\) and taking exponential both side, we obtain \[\Phi(t)\left(B-Ah^{2k-1}(e^{CBT}-1)\right)\leq ABe^{CBT}h^{2k+2}.\] Now, choose small \(h>0\) such that \(\left(B-h^{2k-1}(e^{CBT}-1)\right)>0\) this gives \[\Phi(t)\leq\tilde{C}\,e^{CBT}h^{2k+2},\] where, \(\tilde{C}=\left(1+\epsilon^{-1}\right)\left(1+\epsilon^{-1}h^{2k+1}\right)\). This completes the proof for all \(t\in(0,T]\). **Remark 4.14**.: _By Lemma 3.6 and Theorem 4.13 along with equation (2.2), we have_ \[f_{h}\in L^{\infty}(0,T;L^{2}(\mathcal{T}_{h})),\quad u_{h}\in L^{\infty}(0,T;L^ {2}(I_{h}))\quad\text{and}\quad w_{h}\in L^{2}([0,T]\times I_{h}).\] _Using these bounds, we can improved the earlier local-in-time existence result for the discrete problem to global-in-time existence result by extending the interval of existence._ ## 5. Numerical simulation In this section, we give some results from our numerical simulations of the Vlasov - viscous Burgers' system: \[\partial_{t}f+v\,\partial_{x}f+\partial_{v}\left(\left(u-v\right)f\right)=F(t,x, v),x\in[0,L]=I,v\in[-M,M]=J,t>0 \tag{5.1}\] \[\partial_{t}u+u\,\partial_{x}u-\epsilon\partial_{x}^{2}u=\rho V-\rho u+G(x,t),x\in[0,L],t>0, \tag{5.2}\] with the same boundary and initial conditions as in (1.1)-(1.2). Our proposed scheme reduces the problem (5.1)-(5.2) into a system of ODEs: \[\frac{\mathrm{d}}{\mathrm{d}t}\vec{\alpha}(t)=\mathcal{L}(\vec{\alpha},t) \tag{5.3}\] and \[\frac{\mathrm{d}}{\mathrm{d}t}\vec{\beta}(t)=\mathcal{M}(\vec{\beta},t), \tag{5.4}\] where, \(\vec{\alpha}(t)\) and \(\vec{\beta}(t)\) are the coefficient vectors of \(f_{h}\) and \(u_{h}\), respectively. To further approximate the solutions of the systems (5.3)-(5.4), we use the third order TVD RK scheme [1]. In the figures, we use the notation \(k_{x},k_{v}\) for the degree of polynomials in \(x\) and \(v\)-variables, respectively. \(N_{x}\) and \(N_{v}\) denote the number of elements taken in \(x\) and \(v\)-domains, respectively. L2f and L2u are errors of \(f(t,x,v)\) and \(u(t,x)\) in \(L^{2}(\Omega)\) and \(L^{2}(I)\)-norms, respectively at the final time \(t=T\). We test the order of convergence of the proposed scheme on the system (5.1)-(5.2) in two different scenarios (see Example 5.1 and Example 5.2 below). We have run the simulations for both these examples with \(I=[0,2\pi],J=[-1,1],T=0.1\). 
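Before listing the test cases, we sketch the time integration. The snippet below is a minimal illustration of the three-stage, third order TVD (strong-stability-preserving) Runge-Kutta stepping applied to the ODE systems (5.3)-(5.4); it assumes that the semi-discrete right-hand sides \(\mathcal{L}\) and \(\mathcal{M}\) have already been assembled (their construction is not repeated here), and all function names are illustrative only.

```python
import numpy as np

def tvd_rk3_step(y, t, dt, rhs):
    """One step of the classical three-stage, third order TVD (SSP)
    Runge-Kutta scheme applied to dy/dt = rhs(y, t)."""
    y1 = y + dt * rhs(y, t)
    y2 = 0.75 * y + 0.25 * (y1 + dt * rhs(y1, t + dt))
    return y / 3.0 + (2.0 / 3.0) * (y2 + dt * rhs(y2, t + 0.5 * dt))

def advance(alpha, beta, T, dt, L, M):
    """Advance the coefficient vectors alpha (for f_h) and beta (for u_h)
    from t = 0 to t = T.  L(alpha, beta, t) and M(alpha, beta, t) denote the
    assembled right-hand sides of (5.3) and (5.4); both systems are stepped
    together so that the coupling is evaluated at the same stage values."""
    n, t = alpha.size, 0.0
    y = np.concatenate([alpha, beta])

    def rhs(s, tau):
        return np.concatenate([L(s[:n], s[n:], tau), M(s[:n], s[n:], tau)])

    while T - t > 1e-12:
        step = min(dt, T - t)
        y = tvd_rk3_step(y, t, step, rhs)
        t += step
    return y[:n], y[n:]
```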
**Example 5.1**.: _Take_ \[F(t,x,v)=\left(v-1\right)\cos(x-t)e^{-v^{2}}-\left(2v\left(\sin(x-t)-v\right) +1\right)\left(1+\sin(x-t)\right)e^{-v^{2}}\] \[G(t,x)=\left(\sin(x-t)-1\right)\cos(x-t)+\epsilon\,\sin(x-t)+\left(1.49364826 5624854\right)\left(1+\sin(x-t)\right)\sin(x-t)\] _and_ \[f(0,x,v)=\left(1+\sin(x)\right)e^{-v^{2}},\quad u(0,x)=\sin(x).\] _The exact solution of the problem is given by_ \[f(t,x,v) =\left(1+\sin(x-t)\right)e^{-v^{2}}\] \[u(t,x) =\sin(x-t).\] Figure 1. Convergence rate for different values of \(\epsilon\) of the distribution function \(f\)(Left) and the velocity \(u\)(Right) for the Example 5.1; \(k_{x}=1,k_{v}=1\) **Example 5.2**.: _Take_ \[F(t,x,v) =\frac{e^{t}}{\sqrt{2\pi}}e^{\frac{-v^{2}}{2}}\left(+\left(e^{-t} \left(cos(x)+\sin(x)\right)-v\right)\left(1+\cos(x)\right)\left(9v-5v^{3}\right)\right.\] \[\qquad\left.-v\,\sin(x)\left(1+5v^{2}\right)\right)\] \[G(t,x) =\left(-1+\epsilon\right)\left(\cos(x)+\sin(x)\right)e^{-t}+e^{- 2t}\cos(2x)\] \[\qquad+\frac{1}{\sqrt{2\pi}}\left(4.202186105579451\right)\left( 1+\cos(x)\right)\left(\cos(x)+\sin(x)\right)\] _and_ \[f(0,x,v)=\frac{e^{\frac{-v^{2}}{2}}}{\sqrt{2\pi}}\left(1+\cos(x)\right)\left(1 +5v^{2}\right),\quad u(0,x)=\cos(x)+\sin(x).\] _The exact solution of the problem is given by_ \[f(t,x,v) =\frac{e^{t}}{\sqrt{2\pi}}e^{\frac{-v^{2}}{2}}\left(1+\cos(x) \right)\left(1+5v^{2}\right)\] \[u(t,x) =e^{-t}\left(cos(x)+\sin(x)\right).\] **Observations:** We now make, below, several observations on numerical results of Examples 5.1 and 5.2. Figure 3. Convergence rate for different values of \(\lambda\) of \(u\) for the Example 5.1; \(k_{x}=1,k_{v}=1\) (Left), \(k_{x}=2,k_{v}=1\) (Right) Figure 2. Convergence rate for different values of \(\epsilon\) of the distribution function \(f\)(Left) and the velocity \(u\)(Right) for the Example 5.1; \(k_{x}=2,k_{v}=2\) * In Figures 1-2 and Figures 4 -Figure 5, \(h\) varies like \(2^{-m},m=1,\cdots,7\), while \(\epsilon\) varies like \(10^{-\ell},\ell=1,\cdots,5\) with \(\epsilon<h\). From Theorem 4.13, it is noted that the convergence for both \(f\) and \(u\) is of order \(O(\epsilon^{-1/2}h^{k+1})\). In Figures 1 and 4, piecewise polynomial of degree \(k=1\) is used and the computational order for \(f\) matches with theoretical order of convergence, that is order \(2\). Figure 4. Convergence rate for different values of \(\epsilon\) of the distribution function \(f\)(Left) and the velocity \(u\)(Right) for the Example 5.2; \(k_{x}=1,k_{v}=1\) Figure 5. Convergence rate for different values of \(\epsilon\) of the distribution function \(f\)(Left) and the velocity \(u\)(Right) for the Example 5.2; \(k_{x}=2,k_{v}=2\) but for \(u\) there seems to be some deviation in order showing the effect of smaller \(\epsilon\). Figures 2 and 5 uses piecewise polynomial of degree \(k=2\) and computational order of convergence confirms the theoretical order of convergence for both \(f\) and \(u\), which seems to be uniform in \(\epsilon\). 2. In Figure 3 and Figure 6, we explore the dependence of the order of convergence on the magnitude of the parameter \(\lambda\) appearing in the generalized fluxes for the Burgers' part. This experiment suggests a trade-off between the magnitude of the parameter \(\lambda\) and that of the viscosity parameter \(\epsilon\). More precisely, for smaller values of \(\epsilon\), taking larger value of \(\lambda\) helps stabilize the convergence rate. In the next couple of experiments, we test the conservation properties of our proposed numerical scheme. 
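As a post-processing illustration of how such tests can be monitored, the sketch below evaluates, by a simple midpoint rule on point values of \(f_{h}\) and \(u_{h}\), the continuous functionals \(\int\!\!\int f\,\mathrm{d}v\,\mathrm{d}x\) (mass), \(\int\!\!\int vf\,\mathrm{d}v\,\mathrm{d}x+\int u\,\mathrm{d}x\) (total momentum) and \(\tfrac{1}{2}\int\!\!\int v^{2}f\,\mathrm{d}v\,\mathrm{d}x+\tfrac{1}{2}\int u^{2}\,\mathrm{d}x\) (energy). These are used here only as stand-ins for the discrete quantities of Lemmas 3.3-3.4, which are the ones actually tracked in the figures below; the code is illustrative and not part of the scheme.

```python
import numpy as np

def invariants(f_vals, u_vals, x, v):
    """Midpoint-rule approximation of mass, total momentum and energy from
    point values f_vals[i, j] ~ f_h(x_i, v_j) and u_vals[i] ~ u_h(x_i) on
    uniform grids x and v.  These continuous functionals serve only as
    proxies for the discrete quantities of Lemmas 3.3-3.4."""
    dx, dv = x[1] - x[0], v[1] - v[0]
    mass = f_vals.sum() * dx * dv
    momentum = (f_vals * v[None, :]).sum() * dx * dv + u_vals.sum() * dx
    energy = (0.5 * (f_vals * v[None, :] ** 2).sum() * dx * dv
              + 0.5 * (u_vals ** 2).sum() * dx)
    return mass, momentum, energy

# Typical use: evaluate invariants(...) at every output time and plot the
# deviation from the initial values, as in Figures 7 and 8.
```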
**Example 5.3**.: _In (5.1)-(5.2), let \(I=[0,2\pi]\) and \(J=[-5,5]\). Further, take \(F=G=0\). Consider the initial data:_ \[f(0,x,v) =\begin{cases}(1+\sin(x))e^{-v^{2}}&\text{if}\quad v\in[-1,1]\\ \quad 0&elsewhere.\end{cases}\] \[u(0,x) =sin(x).\] Note that we don't have access to an explicit representation of the exact solution. Now, we take degree of polynomials \(k_{x}=1,k_{v}=2\) and number of sub-intervals \(N_{x}=128,N_{v}=128\) with \(\epsilon=0.1,\lambda=1.5,\lambda_{1}=1.5,\lambda_{2}=1.5\). In Figure 7, we check the discrete mass and discrete momentum conservation properties of our numerical scheme. It confirms our findings in Lemmas 3.3 and 3.4 on discrete mass and momentum conservation, respectively. Figure 7 also shows that our numerical scheme dissipates the discrete energy. This hints at a result that such a discrete energy dissipation property should hold in general, but unlike in continuous case, we do not have any theoretical justification to substantiate this numerical evidence. The following example repeats the above experiment for a different set of initial data. **Example 5.4**.: _In (5.1)-(5.2), let \(I=[0,2\pi]\) and \(J=[-5,5]\). Further, take \(F=G=0\). Consider the initial data:_ \[f(0,x,v) =\begin{cases}\dfrac{e^{\frac{-v^{2}}{2}}}{\sqrt{2\pi}}\left(1+ \cos(x)\right)\left(1+5v^{2}\right)&\text{if}\quad v\in[-1,1]\\ \quad 0&elsewhere.\end{cases}\] \[u(0,x) =sin(x)+cos(x).\] Take degree of polynomials \(k_{x}=1,k_{v}=2\) and number of sub-intervals \(N_{x}=128,N_{v}=128\) with \(\epsilon=0.1,\lambda=1.5,\lambda_{1}=1.5,\lambda_{2}=1.5\). In Figure 8, we note that deviation are up to the machine error and these results are in agreement with the theoretical findings. Figure 7. Deviation in mass and momentum (Left) and energy dissipation (Right) for the Example 5.3 ## 6. Conclusion In this article, a semi-discrete numerical method for the Vlasov-viscous Burgers' equation is introduced and analyzed. This is a DG method for the Vlasov and LDG method for the viscous Burgers' equations in phase space both with generalized numerical fluxes. The discrete scheme is mass and momentum preserving.The optimal rate of convergence for \(\lambda=1/2\) and even degree of polynomial for \(x\) and odd number of elements for space domain is derived. Further, optimal rates of convergence for \(\lambda>1/2\) and for both even and odd degree of polynomial for \(x\) and \(v\) are established, but now the constant in error estimates depends on \(\epsilon^{-\frac{1}{2}}.\) The main tools used for error estimates are the introduction of generalized Gauss-Radau projection and some application of a variant of non-linear Gronwall's lemma. Finally, computational results confirm our theoretical findings. **Acknowledgements.** Authors are grateful to anonymous referees for their valuable comments and suggestions which help to improve the revised manuscript. K.K. and H.H. thank Laurent Desvillettes for introducing them to the fluid-kinetic equations modelling the thin sprays during the Junior Trimester Program on Kinetic Theory organised at the Hausdorff Research Institute for Mathematics, Bonn. K.K. and H.H. thank the Hausdorff Institute of Mathematics, Bonn, for hosting them during the Junior Trimester program on Kinetic theory (Summer of 2019) where this work was initiated. K.K. further acknowledges the financial support of the University Grants Commission (UGC), Government of India. 
**Statements and Declarations** **Funding:** The second author acknowledges the financial support of the University Grants Commission (UGC), Government of India. The first and second authors thank the Hausdorff Institute of Mathematics, Bonn, for hosting them during the Junior Trimester program on Kinetic theory (Summer of 2019), where this work was initiated. **Conflict of Interest:** The authors declare that they have no conflict of interest. **Author Contributions:** All authors contributed equally to the preparation of this manuscript. All authors read and approved the final manuscript. **Data Availability:** The codes developed during the current study are available from the corresponding author on reasonable request.
2309.08612
Explaining Vision and Language through Graphs of Events in Space and Time
Artificial Intelligence makes great advances today and starts to bridge the gap between vision and language. However, we are still far from understanding, explaining and controlling explicitly the visual content from a linguistic perspective, because we still lack a common explainable representation between the two domains. In this work we come to address this limitation and propose the Graph of Events in Space and Time (GEST), by which we can represent, create and explain, both visual and linguistic stories. We provide a theoretical justification of our model and an experimental validation, which proves that GEST can bring a solid complementary value along powerful deep learning models. In particular, GEST can help improve at the content-level the generation of videos from text, by being easily incorporated into our novel video generation engine. Additionally, by using efficient graph matching techniques, the GEST graphs can also improve the comparisons between texts at the semantic level.
Mihai Masala, Nicolae Cudlenco, Traian Rebedea, Marius Leordeanu
2023-08-29T07:25:06Z
http://arxiv.org/abs/2309.08612v1
# Explaining Vision and Language through Graphs of Events in Space and Time ###### Abstract Artificial Intelligence makes great advances today and starts to bridge the gap between vision and language. However, we are still far from understanding, explaining and controlling explicitly the visual content from a linguistic perspective, because we still lack a common explainable representation between the two domains. In this work we come to address this limitation and propose the Graph of Events in Space and Time (GEST), by which we can represent, create and explain, both visual and linguistic stories. We provide a theoretical justification of our model and an experimental validation, which proves that GEST can bring a solid complementary value along powerful deep learning models. In particular, GEST can help improve at the content-level the generation of videos from text, by being easily incorporated into our novel video generation engine. Additionally, by using efficient graph matching techniques, the GEST graphs can also improve the comparisons between texts at the semantic level. ## 1 Introduction There is a considerable amount of research at the intersection of vision and language, such as image and video generation [22, 40, 15, 3, 33, 24, 29], captioning [11, 39, 30] or visual question answering [2, 18, 38]. However, we still lack an explainable model that can fully relate, constrain and control the connection between vision and language at the level of meaning and content. This limitation, which affects not only text-to-image/video models, but also Large Language Models [36], seriously impedes our way towards trustworthy and safe AI. We mention that, even in this work, we found state of the art text-to-video transformer models generating almost adult-only content for a simple, plain text such as: _A woman goes to the bedroom_. In this context, we introduce GEST, the Graph of Events in Space and Time, which provides an explicit spatio-temporal representation of stories as they appear in both videos and texts and can immediately relate, in an explainable way, the two domains. GEST provides a meaningful representation space, in which similarities between videos and texts can be computed at the level of semantic content. GEST can also be used in the context of our specially designed video generation engine (Sec. 3) to produce videos that are rated higher in terms of content, both by human and automatic evaluations, than their video counterparts generated by state of the art text-to-video models (Sec. 4). Also, GEST graphs can be used for comparing the meaning of texts and improve over classic text similarity metrics or in combination with heavily trained state-of-the-art deep learning metrics (Sec. 2.1). Graphs have been used to represent content in videos [26, 25, 7, 34, 32, 9] or texts [17, 35, 19, 10, 4], but not both as is the case for GEST. **Main novel aspects of GEST** are: **1)** Nodes are events, which could represent (Sec. 2) physical objects, simple actions or even complex activities and stories. **2)** Edges can represent any type of relation (temporal, spatial, semantic, as defined by any verb) between two events defined as nodes. **3)** Any GEST graph can always collapse into a node event, at a higher level of abstraction. Also, any event node can always be expanded into a GEST graph, from a lower Figure 1: Functional overview of the proposed framework. 
GEST represents the central component, allowing for the preservation of the semantic content in an explainable form, as well as a seamless transition between different domains. level of abstraction. This is an essential property that allows GEST to have multiple layers of depth (see Fig. 2). Another practical **contribution** of our work is our novel video generation engine (Sec. 3), based on GEST, which can produce long and complex videos that preserve semantic content well, as validated by human and automatic evaluations. We will make the engine code and the videos generated for our experiments publicly available. ## 2 GEST Model The basic elements of GEST are the nodes, which represent events, and the edges, which represent the way in which events interact. **GEST nodes:** represent events that could go from simple actions (e.g. opening a door) to complex, high-level events (e.g. a political revolution), in terms of spatio-temporal extent, scale and semantics. They are usually confined to a specific time period (e.g. a precise millisecond or whole year) and space region (e.g. a certain room or entire country). Events could exist at different levels of semantics, ranging from simple physical contact (e.g. "I touch the door handle") to profoundly semantic ones (e.g. "the government has fallen" or "John fell in love with physics"). Even physical objects are events (e.g. John's car is represented by the event "John's car exists"). Generally, any space-time entity could be a GEST event. **GEST edges:** relate two events and can define any kind of interaction between them, from simple temporal ordering (e.g. "the door opened" after "I touched the door handle") to highly semantic (e.g. "the revolution" caused "the fall of the government", or "Einstein's discovery" inspired "John to fall in love with physics"). Generally, any verb that relates two events or entities could be a GEST edge. **From a graph to a node and vice-versa:** A GEST graph essentially represents a story in space and time, which could be arbitrarily complex or simple. Even simple events can be explained by a GEST, since all events can be broken, at a sufficient level of detail, into simpler ones and their interactions (e.g. "I open the door" becomes a complex GEST if we describe in detail the movements of the hand and the mechanical components involved). At the same time, any GEST graph could be seen as a single event from a higher semantic and spatio-temporal scale (e.g. "a political revolution" could be both a GEST graph and a single event). Collapsing graphs into nodes (\(Event\Leftarrow GEST\)) or expanding nodes into graphs (\(GEST\Leftarrow Event\)) gives GEST the possibility to have many levels of depth, as needed for complex visual and linguistic stories. Going from a \(GEST\) at a lower level to an event \(E\) at a higher level (\(E\Leftarrow GEST\)) is reminiscent of how the attention mechanism is applied in Graph Neural Networks and Transformers [27]: the GEST graph acts as a function that aggregates information from nodes (events) \(E_{i}\) at level \(k\) and builds a higher level GEST representation, which further becomes an event at the next level \(k+1\): \[E_{i}^{(k+1)}\Leftarrow GEST(E_{1}^{k},E_{2}^{k},...,E_{n}^{k})\] In Fig. 2 we present our GEST representation, as it applies to a specific text. In each event node, \(E_{i}\), we encode an \(action\), a list of \(entities\) that are involved in the action, its \(location\) and \(timeframe\), and any additional \(properties\).
Note that an event can contain references (pointers) to other events, which define relations of type "same X" (e.g. "same breakfast"). We also exemplify how the GEST of two connected events can collapse into a single event. ### 2.1 GEST for Textual Content Comparison Next we verify experimentally that the GEST model can capture the semantics of language by applying it to the task of text-to-text comparison, in the context of video-to-text translation. We use the Videos-to-Paragraphs dataset [6], which provides multiple text descriptions for the same video. Starting from the given texts, we build ground truth GEST representations for the entire dataset as follows: we use a rule-based method to obtain initial GESTs from texts, represented in a specific string format that captures information in the nodes as well as their relationships. Next we check, correct and refine the automatically generated GESTs by human annotation. Note that we also tested with ChatGPT, which was able to produce mostly valid GESTs by learning from a few human examples. Figure 2: GEST graph explaining the following text: _“John was having breakfast when a bee approached the flower in the pot on the table. Then he pulled back trying to avoid contact with the bee but he realized that it was not an easy attempt because she actually came because of the tasty food on his plate”._ We seek to find how useful GEST is in deciding whether two texts stem from the same video or not. Basically, instead of comparing texts, we move the comparison to the GEST space, in which we define a similarity function using graph matching. As graph matching methods, we use the classic Spectral Matching (SM) [14] and the recent Neural Graph Matching (NGM) [31]. For both algorithms, the affinity matrix is built using node- and edge-level similarity functions based on pre-trained GloVe [21] word embeddings. Two nodes are as similar as their components (e.g. action, entities), while edge-level similarity uses the relation type defined by the edge (e.g. causality, temporal ordering, etc.) along with the similarity of the nodes they connect. In Tab. 1 we present comparisons of GEST+graph matching similarity vs. other well-known text similarity metrics, which demonstrate that GEST is capable of capturing semantic content. In Tab. 2 we investigate whether graph matching in GEST space can be combined with state-of-the-art highly trained text similarity metrics such as BLEURT [23]. We combine each pair of similarity metrics (BLEURT + X) in a linear way, to ensure that if a performance gain exists, it is less likely to be due to the combination method and more due to the additional metric. In this setting, GEST graphs are learned by finetuning a GPT-3 model (_text-curie-001_), with raw text as input and ground truth GEST as output, on the Videos-to-Paragraphs train set. Note that the combination of BLEURT with graph matching in the GEST space consistently increases the performance over BLEURT (which is not always the case for other metrics), and by the largest margin. ## 3 GEST Video Generation Engine To complete the connection between GEST and the visual world, we introduce the engine of visual stories. Based on the game GTA San Andreas, with Multi Theft Auto (MTA)1 interfacing the game's mechanics, we use the pre-existing in-game locations, objects and animations, and focus on events taking place in and around a house. The engine has full control within the virtual environment and can, therefore, take full advantage of the structured and explainable nature of GEST.
It is capable of choosing a setting in a virtual environment, with locations, actions and entities that match the events described within the GEST and orchestrate the complex interactions during the simulation, thus emulating an entire world (Figure 3). Footnote 1: [https://multitheftauto.com/](https://multitheftauto.com/), accessed on 25 July 2023 The system takes a GEST as input and, based on it, generates multiple valid videos - note the one-to-many relation. This engine is used to automatically generate videos from GEST. We couple this with the system that generates GEST starting from a text, closing the loop and building a system that transforms a text into a GEST, then a GEST into a video. We generate a set of 25 complex videos of 2-3 minutes each, with up to 15 different activities, much larger than what is used in the current literature. Even if the set is small, it is very challenging so we use to validate the quality of the generated videos. Results of this evaluation are presented in the following section. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Method** & **Corr(\%)** & **Acc(\%)** & **F** & **AUC(\%)** \\ \hline \hline BLEU@4 & 24.45 & 75.52 & 0.28 & 52.65 \\ METEOR & 58.48 & 84.23 & 1.12 & 73.90 \\ ROUGE & 51.11 & 83.40 & 0.72 & 68.92 \\ SPICE & 59.42 & 84.65 & 1.04 & 74.43 \\ BERTScore & 57.39 & 85.89 & 1.07 & **77.93** \\ \hline GEST-SM & **61.70** & 84.65 & **1.20** & 75.47 \\ GEST-NGM & 60.93 & **86.31** & 0.98 & 76.75 \\ \hline \end{tabular} \end{table} Table 1: Comparing GEST representation power (coupled with graph matching similarity functions SM or NGM) and well-known text-to-text similarity methods (applied on texts from Videos-to-Paragraphs test set, on the task of separating texts describing the same video vs. texts from different videos). Corr - correlation, Acc - Accuracy, F - Fisher score and AUC - area under the precision-recall curve. Best values are in **bold**, second best underlined. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Method** & **Corr(\%)** & **Acc(\%)** & **F** & **AUC(\%)** \\ \hline \hline BLEU@4 & 70.93 & 90.04 & 2.03 & 88.02 \\ \hline +BLEU@4 & 70.93 & 90.04 & 2.03 & 88.04 \\ +METEOR & 71.20 & 89.63 & 2.07 & 87.62 \\ +ROUGE & 70.76 & 90.04 & 2.00 & 87.71 \\ +SPICE & 71.94 & 88.80 & 2.09 & 87.71 \\ +BERTSCore & 71.11 & 89.63 & 2.01 & 87.25 \\ \hline +GEST-SM & **72.89** & **90.87** & **2.21** & **89.80** \\ +GEST-NGM & 71.91 & 90.46 & 2.05 & 88.58 \\ \hline \end{tabular} \end{table} Table 2: Results comparing the power of BLEURT coupled with well-known text similarity metrics and GEST, applied on stories from Videos-to-Paragraphs test set. Text metrics are computed on the ground truth stories, while GESTs are generated with a transformer learned on the training set. Same notations as in Tab. 1. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Metric & Ours & CogVideo & Text2VideoZero \\ \hline \hline Bleu@4[20] & 9.84 & 8.16 & **10.02** \\ Meteor[5] & **14.16** & 13.48 & 13.96 \\ ROUGE[16] & **35.40** & 32.72 & 34.87 \\ SPICE[1] & **20.04** & 19.54 & 19.43 \\ CIDEr[28] & **34.12** & 33.16 & 33.65 \\ BERTScore[37] & **19.37** & 13.09 & 15.02 \\ BLEURT[23] & **39.44** & 37.55 & 38.40 \\ \hline \end{tabular} \end{table} Table 3: Results on video-to-text task. We show in **bold** the best value for each metric. ## 4 Vision-Language Experiments with GEST Next we present both human and automatic evaluations of our GEST-generated videos, compared to recent text-to-video models [12, 13]. 
We invite human annotators to rate videos in terms of semantic content w.r.t input text, on a scale from 1 to 10 and pick the best video for each input text. We collected a total of 111 annotations, from 6 independent annotators. In Fig 5 we show the overall scores given by human evaluators for each method. In 87.39% of cases our GEST-generated video was picked as best, with only 11.71% for Text2VideoZero and 0.90% for CogVideo. For the automatic evaluation of the generated videos, we use a state-of-the-art video-to-text generation method, VALOR [8], and measure how well the text generated back from the generated videos match the initial input texts. VALOR is trained and tested separately for each type of video generation method using 5-fold cross validation, from scratch, over 3 runs with results averaged (shown in Tab. 3). These experiments match the human evaluation, keeping the same ranking across methods and proving that GEST-generated videos can better maintain the semantic content of the original input text. This proves that an explicit and fully explainable vision-language model in the form of a graph of events in space and time, could also provide in practice a better way to explain and control semantic content - thus bringing a complementary value in the context of realistic (but not necessarily truthful) AI generation models. The reason why current deep learning models are not strong is that we generate long and complex videos. Their main weakness resides in their inability to integrate long and complex context, both in video and text generation. ## 5 Conclusions We propose an explainable representation that connects language and vision (see Fig 1), which explicitly captures semantic content as a graph of events in space and time (GEST). We prove that GEST is capable of capturing meaning from text and contribute to the design of powerful text-to-text comparison metrics when combined with graph matching. More importantly, GEST can be also used to generate videos from text that better preserve the semantic content (as evaluated by humans and automatic procedures), than deep learning methods for which there is no explicit way of explaining and controlling content. In future work we plan to explore ways to better integrate the power of deep learning into the explainable structure of GEST, for further developing a robust and trustworthy bridge between vision and language. **Acknowledgements**: This work was funded in part by UEFISCDI, under Project EEA-RO-2018-0496 and by a Google Research Gift. Figure 4: Example of input text (A), generated GEST from text (B) and automatically generated video from GEST (C). Figure 5: Overall scores (1-10) given by human evaluators. Figure 3: The system architecture of the engine. Upper part - meta context validation. Lower part - simulation.
2306.08097
Lectures on Field Theory and the Standard Model: A Symmetry-Oriented Approach
The standard model of particle physics represents the cornerstone of our understanding of the microscopic world. In these lectures we review its contents and structure, with a particular emphasis on the central role played by symmetries and their realization. This is not intended to be an exhaustive review but a discussion of selected topics that we find interesting, with the specific aim of clarifying some subtle points and potential misunderstandings. A number of more technical topics are discussed in separated boxes interspersed throughout the text.
Luis Alvarez-Gaume, Miguel A. Vazquez-Mozo
2023-06-13T19:34:06Z
http://arxiv.org/abs/2306.08097v1
# Lectures on Field Theory and the Standard Model: A Symmetry-Oriented Approach+ ###### Abstract The standard model of particle physics represents the cornerstone of our understanding of the microscopic world. In these lectures we review its contents and structure, with a particular emphasis on the central role played by symmetries and their realization. This is not intended to be an exhaustive review but a discussion of selected topics that we find interesting, with the specific aim of clarifying some subtle points and potential misunderstandings. A number of more technical topics are discussed in separated boxes interspersed throughout the text. ###### Contents * 1 Preliminaries * 2 From symmetry to physics * 2.1 Relativity from geometry * 2.2 Relativity and quantum mechanics * 3 The importance of classical field theory * 3.1 The symmetries of Maxwell's theory * 3.2 Quantum electromagnetism * 3.3 Some comments on quantum fields * 4 Some group theory and some more wave equations * 4.1 Special relativity and group theory * 4.2 Chiral (and also nonchiral) fermions * 4.3 Some more group theory * 5 A tale of many symmetries * 5.1 The symmetries of physics * 5.2 Noether's two theorems * 5.3 Quantum symmetries: to break or not to break (spontaneously) * 5.4 The Brout-Englert-Higgs mechanism * 6 Some more gauge invariances * 7 Anomalous symmetries * 7.1 Symmetry vs. the quantum * 7.2 The physical power of the anomaly * 8 The strong CP problem and axions * 8.1 The (infinitely) many vacua of QCD * 8.2 Breaking CP strongly * 8.3 Enters the axion * 9 The electroweak theory * 9.1 Implementing SU(2) \(\times\) U(1)\({}_{Y}\) * 9.2 But, where are the masses? * 9.3 The Higgs boson * 9.4 Neutrino masses * 10 Scale invariance and renormalization * 11 Closing remarks * 12 Preliminaries Quantum field theory (QFT) is the language in which we codify our knowledge about the fundamental laws of nature in a manner compatible with quantum mechanics, relativity, and locality. Its most significant achievement has been formulating the standard model (SM) of strong, weak, and electromagnetic interactions. This theory summarizes what we know about the physics of the fundamental constituents of matter. It also delineates our ignorance, providing a glimpse of the known unknowns that will motivate future research. The story of QFT and the SM has been told many times with various degrees of detail and depth (see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] for a necessarily incomplete sample of books on both topics). In the pages reserved for these lecture notes, it is utterly impossible to provide a detailed account of the towering achievements accumulated since the discovery of the electron by J. J. Thomson in 1897, whose most recent milestone was the announcement in 2012 of the discovery of the Higgs boson at CERN. Generations of physicists and engineers have made possible the formulation of a theory describing the most fundamental laws of nature known so far. High energy physics is not the only arena in which QFT has shown its powers. In the nonrelativistic regime, it leads to quantum many body theory, a mathematical framework used in condensed matter physics to study phenomena such as superconductivity, superfluidity, and metals' thermal and electronic properties [21, 22, 23]. Furthermore, in the last few decades QFT has also played a central role in understanding the formation of the large scale structure in the universe [24, 25, 26]. 
Exciting as all these developments are, these lectures will focus on the applications of QFT to particle physics, and particularly the construction of the SM. We will highlight symmetry arguments to show how virtually all known forms of symmetry realizations play a role in it. But even within this restricted scope, space limitations require choosing not just the material to include but also the viewpoint to adopt. In explaining some of the ideas and techniques in our study of the SM, it is useful to focus on several key concepts, many of which are related to implementing symmetries in a quantum system with infinite degrees of freedom. In doing so, we will encounter many surprises and some misconceptions to be clarified. Explaining physics can be compared to the performance of a well-known piece of music. Often the performer surprises the audience by accentuating some features of the work that only then are sufficiently appreciated. In such a vein, we will highlight some important fundamental aspects of the SM the reader may not have encountered previously, some of which also point to the limitations of the theory. Although we will not shy away from diving into calculations when needed, our aim here is less to give a detailed account of the technicalities involved than to provide the reader with both essential conceptual tools and inspiration to deepen further their study of the topics to be presented. Having set our plan of action, we turn to physics and begin by reviewing the system of units to be used throughout the lectures. Since we are dealing with quantum relativistic systems, it is natural to work with natural units where the speed of light and the Planck constant are both set to one, \(c=\hbar=1\). Doing a bit of dimensional analysis, it is easy to see that setting these two fundamental constants to one means that of the three fundamental dimensions \(L\) (length), \(T\) (time), and \(M\) (mass) only one is independent. Indeed, from \([c]=LT^{-1}\) and \([\hbar]=ML^{2}T^{-1}\) it follows that \(T=L\) and \(M=L^{-1}\), meaning that times have dimensions of length and masses of \((\text{length})^{-1}\). Alternatively, we may prefer to use energy (\(E\)) as the fundamental dimension, as we will actually do in the following. In this case, from \([\text{energy}]=ML^{2}T^{-2}\) we see that both lengths and times have dimensions of \((\text{energy})^{-1}\), while masses are measured in units of energy. Using natural units simplifies expressions by eliminating factors of \(\hbar\) and \(c\) and brings other advantages. The most relevant for us is that it provides a simple classification of the operators, or terms, appearing in the action or Hamiltonian defining a theory. As an example, let us consider the scalar field action \[S=\int d^{4}x\left(\frac{1}{2}\partial_{\mu}\phi\,\partial^{\mu}\phi-\frac{m^{2}}{2}\phi^{2}-\frac{\lambda_{4}}{4!}\phi^{4}-\frac{\lambda_{6}}{6!}\phi^{6}\right). \tag{1}\] Action is measured in the same units as \(\hbar\) (not by chance historically known as the quantum of action) and is therefore dimensionless in natural units. Taking into account that \([d^{4}x]=E^{-4}\) and \([\partial_{\mu}]=E\), we find from the kinetic term that \([\phi]=E\), which in turn confirms that \([m]=E\) as behooving a mass. As for the coupling constants, \(\lambda_{4}\) is dimensionless while \([\lambda_{6}]=E^{-2}\). Terms such as \(\phi^{6}\), whose coupling constants have negative energy dimension, are called higher-dimensional operators. 
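This power counting is simple enough to automate. The following minimal Python sketch (the helper `coupling_dimension` is just a name chosen here for illustration) reproduces the dimensions quoted above and extends them to scalar operators with more fields or derivatives:

```python
# Minimal bookkeeping of energy dimensions in natural units (c = hbar = 1),
# reproducing the counting below eq. (1): [d^4x] = E^-4, [d_mu] = E, [phi] = E.
def coupling_dimension(n_fields, n_derivatives=0, spacetime_dim=4):
    """Energy dimension of the coupling multiplying an operator built from
    n_fields scalar fields and n_derivatives derivatives, fixed by requiring
    that the corresponding term in the action be dimensionless."""
    phi_dim = (spacetime_dim - 2) / 2          # = 1 for a scalar in D = 4
    operator_dim = n_fields * phi_dim + n_derivatives
    return spacetime_dim - operator_dim        # [lambda] = E^(D - [O])

for n in (2, 4, 6, 8):
    print(f"[lambda_{n}] = E^{coupling_dimension(n):g}")
# -> E^2 (a mass squared), E^0, E^-2, E^-4: every extra pair of fields makes
#    the coupling dimension more negative.
```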
In the modern (Wilsonian) view of QFT to be discussed in section 10, they are seen as induced by physical processes above some energy scale \(\Lambda\), much higher than the energy at which we want to describe the physics using the corresponding action. The presence of higher-dimensional operators in the action signals that we are dealing with a theory that is not fundamental, but some effective description valid at energies \(E\ll\Lambda\), that should be eventually replaced (completed) by some more fundamental theory at higher energies. Although the action of an effective field theory (EFT) may contain an infinite number of higher-dimensional operators of arbitrary high dimension, this does not make it any less predictive at low energies [27, 28]. To understand this, let us look at a higher-dimensional operator \(\mathcal{O}_{n}\), with \([\mathcal{O}_{n}]=E^{n-4}\) for \(n>4\), entering in the action as \[S\supset\frac{g_{n}}{\Lambda^{n-4}}\int d^{4}x\,\mathcal{O}_{n}, \tag{2}\] where \(g_{n}\) is a dimensionless coupling. The corrections induced by this term to processes occurring at energy \(E\) scales as \((E/\Lambda)^{n-4}\), so for \(E\ll\Lambda\) there is a clear hierarchy among the infinite set of higher-dimensional operators. The upshot is that using our EFT to ask physical questions at sufficiently low energies, and taking into account the limited sensitivity of our detectors, only a small number of higher-dimensional operators have to be considered in the computation of physical observables. Applying the philosophy of EFT to the action (1) leads to identify the theory as an effective description valid at energies well below the scale set by \(\lambda_{6}\), namely \(\Lambda\sim 1/\sqrt{\lambda_{6}}\). Nature offers more interesting implementations of this scheme, some of which we will encounter later on in the context of the SM. A particularly relevant case is that of general relativity (GR), that we discuss now in some detail. We start with the Einstein-Hilbert action \[S=\frac{1}{16\pi G_{N}}\int d^{4}x\,\sqrt{-g}R, \tag{3}\] and consider fluctuations around the Minkowski metric (nonflat background metrics can also be used) \[g_{\mu\nu}=\eta_{\mu\nu}+2\kappa h_{\mu\nu}, \tag{1.4}\] where \[\kappa\equiv\sqrt{8\pi G_{N}}. \tag{1.5}\] Inserting (1.4) into (1.3) and expanding in powers of \(h_{\mu\nu}\) we get an action defining a theory of interacting gravitons propagating on flat spacetime [29, 30, 31]. Its interaction part contains an infinite number of terms with the structure \[S_{\rm int}=\sum_{n=3}^{\infty}\kappa^{n-2}\int d^{4}x\,\mathcal{O}_{n+2}[h, \partial], \tag{1.6}\] where the operator \(\mathcal{O}_{n+2}[h,\partial]\), which has energy dimension \(n+2\), contains \(n\) graviton fields and two derivatives, while from eq. (1.5) we see that the coupling constant has dimensions \([\kappa]=E^{-1}\). In the spirit of EFT, this indicates that Einstein's gravity is not fundamental, but an effective description valid at energies below its natural energy scale set by the dimensionful gravitational constant, the so-called Planck scale \[\Lambda_{\rm Pl}\equiv\sqrt{\frac{\hbar c^{5}}{8\pi G_{N}}}=2.4\times 10^{18} \text{ GeV}, \tag{1.7}\] where we have restored powers of \(\hbar\) and \(c\). To get an idea of the size of this scale, let us just say it is about \(10^{14}\) times the center-of-mass energy at which LHC currently operates. The statement is occasionally encountered in the literature and the media that GR is impossible to quantize. This needs to be qualified. 
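Before turning to that qualification, it is worth attaching numbers to the scale (1.7) and to the suppression factors \((E/\Lambda)^{n-4}\) introduced above. A minimal numerical sketch, using standard SI values of \(\hbar\), \(c\), and \(G_{N}\) (the 14 TeV collision energy below is only an illustrative LHC-scale figure):

```python
import math

hbar = 1.054571817e-34      # J s
c    = 2.99792458e8         # m s^-1
G_N  = 6.67430e-11          # m^3 kg^-1 s^-2
GeV  = 1.602176634e-10      # J

# Reduced Planck scale, eq. (1.7)
Lambda_Pl = math.sqrt(hbar * c**5 / (8 * math.pi * G_N)) / GeV   # in GeV
print(f"Lambda_Pl ~ {Lambda_Pl:.2e} GeV")                        # ~ 2.4e18 GeV

# EFT suppression (E/Lambda)^(n-4) of a dimension-n operator at collider energies
E = 1.4e4                                                        # ~ LHC scale, GeV
for n in (5, 6):
    print(f"dimension {n}: (E/Lambda)^{n-4} ~ {(E / Lambda_Pl)**(n - 4):.1e}")
```

The last two numbers make it clear why Planck-suppressed operators are invisible in present-day experiments.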
The effective action (1.6) can be consistently quantized provided we restrict our physical questions to the range of energies where it can be used, namely \(E\ll\Lambda_{\rm Pl}\). In this regime, the quantum fluctuations of the background metric shown in (1.4) are of order \(E/\Lambda_{\rm Pl}\) and, therefore, small. Furthermore, powers of this same quantity suppress the induced corrections and again, at the level of accuracy set by our experiments, only a small number of operators in (1.6) need to be retained to compute physical observables. In other words, below the Planck energy scale quantum gravity is just a theory of weakly coupled gravitons propagating on a regular background spacetime. This state of affairs breaks down when the energy gets close to \(\Lambda_{\rm Pl}\). At this point the quantum fluctuations of the geometry become large, and the hierarchy of terms in (1.6) breaks down. Physically, what happens is that our gravitons become strongly coupled and therefore cease to be the appropriate degrees of freedom to describe a quantum theory of gravity. Thus, the correct statement is not that there is no consistent theory of quantum gravity, but that we lack one _which remains valid at arbitrarily high energies_. The difference is crucial, since it is precisely the latter kind of theory needed to analyze, for example, what happens close to spacetime singularities, where quantum effects are so large as to override the semiclassical description provided by GR. Viewed as an EFT, Einstein's (quantum) gravity is expected to be subsumed near \(\Lambda_{\rm Pl}\) into another theory, its ultraviolet (UV) completion, which presumably remains valid to arbitrarily high energies. Among the particle physics community string theory continues to be the favored candidate for such a framework (see for instance [32, 33] for a modern account). The previous digression on EFTs leads us to the related issue of renormalizability, on which we will further elaborate in section 10. All QFTs used in describing elementary particles, particularly the SM, lead to infinities when computing quantum corrections (terms of order \(\hbar\) or higher) to classical results. The origin of these divergences lies in the behavior of the theory at very high energies. Quantum fluctuations of very short wavelength actually dominate the result, driving them to infinity. This problem was tackled already in the 1940s by the procedure of renormalization. To make a long story short, one begins by regularizing the theory by setting a maximum energy \(\Lambda\), a cutoff, so fluctuations with wavelength smaller than \(\Lambda^{-1}\) are ignored. This makes all results finite, albeit dependent on the otherwise arbitrary cutoff. The key observation now is that the parameters in the action (field normalizations, masses, and coupling constants) can depend on \(\Lambda\), so physical observables are cutoff independent. For this to work, a further ingredient is needed: an operational definition of masses and couplings, which serves to fix the dependence of the action parameters on the cutoff (for all the details see, for example, chapter \(8\) of ref. [14] or any other of the QFT textbooks listed in the references). In carrying out this program, two thing may happen. One is that divergences can be removed with a finite number of operators in the action (most frequently, just those already present in the classical theory). This is the case of a renormalizable theory. 
The second situation arises when it is necessary to add an infinite number of new operators in order to absorb all the divergences in their corresponding couplings. The theory is then said to be nonrenormalizable. The SM belongs to the first type, while GR is an example of the second. As a rule of thumb, actions containing operators of dimension equal or smaller than four define renormalizable theories, while the presence of higher-dimensional operators renders the theory nonrenormalizable, at least when working in perturbation theory. For decades, renormalizability was considered necessary for any decent theory of elementary particles. The very formulation of the SM and, most particularly, its implementation of the Brout-Englert-Higgs (BEH) mechanism [34, 35, 36] through the Higgs boson was guided by making the theory renormalizable. As a token of how important this requirement was perceived to be at the time, let us mention that the electroweak sector of the SM developed by Sheldon L. Glashow, Steven Weinberg, and Abdus Salam [37, 38, 39] only started to be taken seriously by the particle physics community after Gerard 't Hooft and Martinus Veltman mathematically demonstrated its renormalizability [40, 41]. From a modern perspective, however, the condition that a theory must be renormalizable is regarded as too restrictive, equivalent to requiring that it remains valid at all energies. As a matter of fact, there is no reason to exclude nonrenormalizable theories from our toolkit. They can be interpreted as EFTs whose natural energy scale is set by the cutoff \(\Lambda\), giving accurate results for processes involving energies \(E\ll\Lambda\). Furthermore, from this viewpoint, the cutoff ceases to be a mere mathematical artefact to be eventually hidden in the action parameters. Instead, it acquires a physical significance as the energy threshold of the unknown physics encoded in the higher dimensional operators of our EFTs. Otherwise expressed, nonrenormalizability has lost its bad reputation and now is taken as a hint that some unknown physics is lurking at higher energies. To make the previous discussion more transparent, let us look at the important case of quantum chromodynamics (QCD), the theory describing the interaction of quarks and gluons. QCD is not just a renormalizable theory that it can be extrapolated at arbitrary energies, but asymptotically free as well. This means that its coupling constant approaches zero as we go to higher energies, thus making perturbation theory more and more reliable. The issue, however, is that when studying its low energy dynamics, the QCD coupling grows as we decrease the energy and the theory becomes strongly coupled. This has to be handled in a way somehow reminiscent of what we explained when discussing quantum GR near the Planck scale: below certain energy scale \(\Lambda_{\rm QCD}\) we need to abandon the perturbative QCD (pQCD) description in terms of quarks and gluons, now strongly coupled, and find the "right", weakly coupled, degrees of freedom to build an operative QFT. But, simultaneously, we have a huge advantage concerning the gravity case. There, the trouble arose in the unexplored region of extremely high energies, where identifying the appropriate degrees of freedom, their interactions, or just the right framework remains anybody's guess (strings? spin foam? causal sets?). By contrast, life is much easier in QCD. 
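Before explaining why, it is instructive to see how fast the coupling actually grows towards low energies. A rough one-loop sketch (the reference values \(\alpha_s(M_Z)\simeq 0.118\) and \(n_f=5\) are purely illustrative; a realistic determination of \(\Lambda_{\rm QCD}\) involves higher loops and flavor thresholds):

```python
import math

# One-loop running of the strong coupling: a rough sketch of how pQCD
# breaks down at low energies (no higher loops, no flavor thresholds).
alpha_MZ, MZ, nf = 0.118, 91.19, 5          # illustrative reference values
b0 = 11 - 2 * nf / 3                        # one-loop beta-function coefficient

def alpha_s(Q):
    """Solve d(alpha)/d(ln Q) = -b0 alpha^2 / (2 pi) with alpha(MZ) as input."""
    return 1.0 / (1.0 / alpha_MZ + b0 / (2 * math.pi) * math.log(Q / MZ))

for Q in (1000.0, 91.19, 10.0, 2.0, 1.0):
    print(f"Q = {Q:7.2f} GeV   alpha_s ~ {alpha_s(Q):.3f}")

# The one-loop formula blows up at Lambda_QCD = MZ * exp(-2 pi / (b0 alpha_MZ)),
# the scale below which quarks and gluons are no longer the right variables.
print(f"Lambda_QCD (one loop) ~ {MZ * math.exp(-2 * math.pi / (b0 * alpha_MZ)):.2f} GeV")
```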
The problematic regime happens at low energies, so to identify the weakly coupled degrees of freedom, we only need to "look", i.e., to do experiments. From them, we learn that the physics has to be described in terms of mesons and baryons, whose interactions are largely fixed by symmetries (an issue to which we will come back later). What is relevant for the present discussion is that the appropriate framework, chiral perturbation theory (\(\chi\)PT), is a nonrenormalizable QFT whose action contains a plethora of higher-dimensional operators. Its cutoff, however, is not some arbitrary energy \(\Lambda\) whose role is just to make the theory finite, but the physical scale \(\Lambda_{\rm QCD}\) at which quarks and gluons get confined into hadrons. The theory of hadron interactions should then be understood as an EFT valid at energies \(E\ll\Lambda_{\rm QCD}\). The existence of the Planck scale at which quantum gravity is expected to become the dominant interaction has led to the realization that all quantum field theories have to be regarded as EFTs with a limited range of validity. This includes even renormalizable theories that, like the SM, are well-defined in a wide range of energies. However, explaining some experimental facts, such as nonzero neutrino masses, might require adding higher-dimensional operators to the theory, setting the energy scale for new physics to be explored in future high-energy facilities. At this energy, the SM will be superseded, maybe by some grand unified theory (GUT), which in turn is expected to break down at \(\Lambda_{\rm Pl}\). It is in this sense that EFTs provide the foundational framework to understand nature at the smallest length scales (see fig. 1). Figure 1: Simplified cartoon showing the network of EFTs behind our understanding of subatomic physics. ## 2 From symmetry to physics Symmetry is a central theme of contemporary physics, although its tracks go back a long way in history. More or less in disguise, symmetry-based arguments can be found in natural philosophy since classical times. In his refutation of vacuum in the fourth book of _Physics_ (215a), Aristotle used the homogeneity of empty space to conclude the principle of inertia, that he however regarded as an inconsistency since it contradicted his first principle of motion: whatever moves has to be moved by something else. Galileo Galilei's assumption that reversing the velocity with which a free-rolling ball arrives at the basis of an inclined plane would make it climb exactly to the height from which it was released can be also regarded as an early _de facto_ application of time reversal symmetry. Although the origins of the mathematical study of symmetry are traced back to the first half of the 19th century with the groundbreaking works on group theory of Evariste Galois and Niels Henrik Abel, its golden age was ushered in by Felix Klein's 1872 Erlangen Program [42, 43]. Its core idea is that different geometries can be fully derived from the knowledge of the group of transformations preserving its objects (points, angles, figures, etc.). This establishes at the same time a hierarchy among geometries, determined by the relative generality of their underlying symmetry groups. In this way, Euclidean, affine, and hyperbolic geometries can be retrieved from projective geometry by restricting its group of transformations. 
As an example, the whole plane Euclidean geometry emerges from the invariance under the combined action of rotations and rigid translations \[x^{\prime i}=R^{i}_{\ j}x^{j}+a^{i}, \tag{2.1}\] where \(R^{i}_{\ j}\in\text{SO}(2)\) and \(a^{i}\) is and arbitrary two-dimensional vector. These two transformations build together the Euclidean group \(E(2)\equiv\text{ISO}(2)\), leaving invariant the Euclidean distance between two points \(A\) and \(B\) with Cartesian coordinates \(A=(x_{A},y_{A})\) and \(B=(x_{B},y_{B})\) \[d(A,B)=\sqrt{(x_{B}-x_{A})^{2}+(y_{B}-y_{A})^{2}}, \tag{2.2}\] which is just an application of the Pythagorean theorem (see fig. 2). In a similar fashion, the geometry on the complex projective line \(\mathbb{CP}^{1}\) (a.k.a. the Riemann sphere) follows from the invariance of geometrical Figure 2: Euclidean distance between two points on the plane. objects under the projective linear group \(\text{PGL}(2,\mathbb{C})\), acting through Mobius transformations on \(\mathbb{C}\cup\{\infty\}\) \[z^{\prime}=\frac{az+b}{cz+d}, \tag{2.3}\] where \(a,b,c,d\in\mathbb{C}\) and \(ad-bc\neq 0\). Among the invariants in this case are the four-point cross ratios associated with four points with complex coordinates \(z_{1}\), \(z_{2}\), \(z_{3}\), and \(z_{4}\) \[\text{CR}(z_{1},z_{2},z_{3},z_{4})\equiv\frac{(z_{1}-z_{3})(z_{2}-z_{4})}{(z_{ 2}-z_{3})(z_{1}-z_{4})}, \tag{2.4}\] as well as the chordal distance between two points \(A\) and \(B\) on the Riemann sphere \[d(A,B)_{\text{chordal}}=\frac{2|z_{A}-z_{B}|}{\sqrt{(1+|z_{A}|^{2})(1+|z_{B}|^{ 2})}}. \tag{2.5}\] Mobius transformations preserve angles and maps circles to circles, so from a Kleinian point of view they are _bona fide_ geometrical objects on \(\mathbb{CP}^{1}\). Klein's association of geometry and symmetry (i.e., group theory) revolutionized mathematics and became a game changer in physics. Beyond all early tacit uses, the systematic implementation of symmetry in physics had to wait until the end of the 19th century. In 1894 Pierre Curie used group theoretical methods to study the role of spatial symmetries in physical phenomena [44], thus introducing mathematical tools so far only applied in crystallography. This inaugurated a trend taken up later by the emerging fields of relativity and atomic physics, that led to key results like Emmy Noether's two celebrated theorems linking symmetries with conserved charges [45] (see section 5.2). ### Relativity from geometry A beautiful example of geometry emerging from symmetry is provided by the geometrization of special relativity carried out in 1908 by Hermann Minkowski1. Einstein's formulation of special relativity in terms of events occurring in some instant \(t\) at some position \(\mathbf{r}\) (as measured by some inertial observer) leads naturally to introducing the four-dimensional space of all potential events, each represented by a point with spacetime coordinates \((t,\mathbf{r})\). Although switching from one inertial observer to another changes the individual coordinates of the events, the invariance of the speed of light implies the existence of an invariant. Given two arbitrary events taking place at points \(\mathbf{r}\) and \(\mathbf{r}+\Delta\mathbf{r}\) and separated by a time lapse \(\Delta t\), its "spacetime separation" Footnote 1: Einstein actually dubbed Minkowski’s idea _uberflussige Gelehrsamkeit_ (superfluous erudition) [46], although geometrization later turned out to be the basis of his general theory of relativity. 
\[\Delta s^{2}\equiv\Delta t^{2}-(\Delta\mathbf{r})^{2}, \tag{2.6}\] remains the same for all inertial observers. The existence of this invariant with respect to the reference frame transformations introduced by Lorentz, Poincare, and Einstein (and named after the first one) makes it natural to endow the space of events, or spacetime for short, with the metric \[ds^{2}=dt^{2}-dx^{2}-dy^{2}-dz^{2}. \tag{2.7}\] This is how spacetime geometry originates from the postulate of invariance of the speed of light. We can take advantage of the language of tensors and write the line element (2.7) in the form \[ds^{2}=\eta_{\mu\nu}dx^{\mu}dx^{\nu}, \tag{2.8}\] where \((x^{0},x^{1},x^{2},x^{3})\equiv(t,x,y,z)\) and \(\eta_{\mu\nu}\equiv\text{diag}(1,-1,-1,-1)\) is the Minkowski metric. The most general linear transformation leaving invariant (2.8) [or (2.7)] is written as \[x^{\prime\mu}=\Lambda^{\mu}_{\ \nu}x^{\nu}+a^{\mu}, \tag{2.9}\] where \(\Lambda^{\mu}_{\ \nu}\) satisfies \[\eta_{\mu\nu}=\eta_{\alpha\beta}\Lambda^{\alpha}_{\ \mu}\Lambda^{\beta}_{\ \nu}, \tag{2.10}\] and \(a^{\mu}\) is an arbitrary constant vector. The linear coordinate change (2.9) generates the Poincare group, \(\text{ISO}(1,3)\), that includes all transformations \(\Lambda^{\mu}_{\ \nu}\) in the Lorentz group \(\text{SO}(1,3)\) in addition to rigid translations. Notice that \(\Lambda^{\mu}_{\ \nu}\) is a \(4\times 4\) matrix with 16 real components, that the ten conditions (2.10) reduce to six independent ones. They correspond to the three parameters of a three-dimensional rotation (e.g., the Euler angles) plus the three velocity components of a generic boost. Adding the four real numbers determining a spacetime translation, we conclude that the Poincare transformation (2.9) depends on ten independent real parameters. Besides the invariance of the speed of light, Einstein's special relativity is also based on a second postulate, that all laws of physics take the same form for any inertial observer. This can also be recast in geometric language by demanding that all equations of physics be expressed as tensor identities with the structure \[T^{\mu_{1}\dots\mu_{k}}_{\nu_{1},\dots,\nu_{n}}(x)=0. \tag{2.11}\] Under the generic Poincare transformation (2.9), the previous equation changes as \[T^{\prime\mu_{1}\dots\mu_{k}}_{\nu_{1}\dots\nu_{n}}(x^{\prime})= \Lambda^{\mu_{1}}_{\ \alpha_{1}}\dots\Lambda^{\mu_{k}}_{\ \alpha_{k}}T^{\alpha_{1}\dots\alpha_{k}}_{\beta_{1}\dots\beta_{n}}(x) \Lambda^{\beta_{1}}_{\ \nu_{1}}\dots\Lambda^{\beta_{n}}_{\ \nu_{n}}=0, \tag{2.12}\] thus preserving the form \(T^{\prime\mu_{1}\dots\mu_{k}}_{\nu_{1},\dots,\nu_{n}}(x^{\prime})=0\) it had for the original observer. **Box I. Retrieving Lorentz transformations** It is a trivial exercise to recover the standard expression of a Lorentz transformations from the invariance of the line element (2.7). For simplicity we consider a two-dimensional spacetime, equivalent to restricting to boosts along the \(x\)-axis so the coordinates \(y^{\prime}=y\) and \(z^{\prime}=z\) remain unchanged. Implementing the coordinate change \[\left(\begin{array}{c}t^{\prime}\\ x^{\prime}\end{array}\right)=\left(\begin{array}{cc}\Lambda^{0}_{\ 0}&\Lambda^{0}_{\ 1}\\ \Lambda^{1}_{\ 0}&\Lambda^{1}_{\ 1}\end{array}\right)\left(\begin{array}{c}t \\ x\end{array}\right). 
\tag{2.13}\] with the condition \(dt^{\prime 2}-dx^{\prime 2}=dt^{2}-dx^{2}\) implies \[(\Lambda^{0}_{\ 0})^{2}-(\Lambda^{1}_{\ 0})^{2} =1,\] \[(\Lambda^{1}_{\ 1})^{2}-(\Lambda^{0}_{\ 1})^{2} =1, \tag{2.14}\] \[\Lambda^{0}_{\ 0}\Lambda^{0}_{\ 1}-\Lambda^{1}_{\ 0}\Lambda^{1}_{\ 1} =0.\] Using the properties of the hyperbolic functions we easily see that the first two identities are solved by \(\Lambda^{0}_{\ 0}=\cosh\alpha\), \(\Lambda^{1}_{\ 0}=\pm\sinh\alpha\) and \(\Lambda^{0}_{\ 1}=\pm\sinh\beta\), \(\Lambda^{1}_{\ 1}=\cosh\beta\), for arbitrary \(\alpha\) and \(\beta\), with the third one requiring \(\beta=\alpha\). The sought transformation is therefore parametrized as \[\left(\begin{array}{c}t^{\prime}\\ x^{\prime}\end{array}\right)=\left(\begin{array}{cc}\cosh\alpha&-\sinh\alpha\\ -\sinh\alpha&\cosh\alpha\end{array}\right)\left(\begin{array}{c}t\\ x\end{array}\right), \tag{2.15}\] where the parameter \(\alpha\) is called the boost rapidity. A comment on the signs is in order. First, we have taken \(\Lambda^{0}_{\ 0}>0\) so the arrow of time points in the same direction for both observers (later in page 41 we will put a Greek name to this and call these transformations orthochronous). On the other hand, as we will see right away, the parameter \(\alpha\) is related to the boost velocity. Choosing a negative sign for the off-diagonal components of the matrix in (2.15) means that \(\alpha>0\) corresponds to a boost in the direction of the positive \(x\)-axis. To find the standard expression of the Lorentz transformation, we notice that the hyperbolic functions can be alternatively parametrized as \[\cosh\alpha=\frac{1}{\sqrt{1-V^{2}}},\hskip 28.452756pt\sinh\alpha=\frac{V}{\sqrt{1-V^{2}}}, \tag{2.16}\] where the relation between the boost velocity and its rapidity is given by \(V=\tanh\alpha\). Plugging these expressions into (2.15), we arrive at the well-known formulae \[t^{\prime}=\frac{t-\frac{Vx}{c^{2}}}{\sqrt{1-\frac{V^{2}}{c^{2}}}},\hskip 28.452756ptx^{\prime}=\frac{x-Vt}{\sqrt{1-\frac{V^{2}}{c^{2}}}}, \tag{2.17}\] where exceptionally we have restored powers of \(c\). Whereas the Euclidean distance (2.2) tells us about how far apart in space two points lie, the spacetime geometry (2.7) contains information about the causal relations between events. Let us consider an arbitrary event that, without loss of generality, we place at the origin of our coordinate system \(x^{\mu}_{0}=(0,\mathbf{0})\). The question arises as to whether some other event \(x^{\mu}=(t,\mathbf{r})\) may either influence what happens at \(x^{\mu}_{0}\) or be influenced by it. Since the speed of light is a universal velocity limit, the question is settled by checking whether it is possible for a signal propagating with velocity \(v\leq 1\) to travel from \((t,\mathbf{r})\) to \((0,\mathbf{0})\), if \(t<0\), or vice-versa for positive \(t\). The condition for this to happen is \[\frac{|\mathbf{r}|}{|t|}\leq 1\hskip 28.452756pt\Longrightarrow\hskip 28.452756ptt^{2}-\mathbf{r}^{2}\geq 0. \tag{2.18}\] The set of events satisfying this defines the interior and the surface of the light-cone associated with the event at \((0,\mathbf{0})\), that we have depicted in fig. 3 for a \((2+1)\)-dimensional spacetime. Points in the causal past of the origin lie inside or on the past light-cone (\(t<0\)), whereas those on or inside the future light-cone (\(t>0\)) are causally reachable from \((0,\mathbf{0})\). 
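This classification is frame independent: the boost (2.17) preserves \(t^{2}-\mathbf{r}^{2}\), so whether a separation is timelike or spacelike does not depend on the inertial observer. The following quick numerical check of Box 1 illustrates the point (the sampled events and velocities are arbitrary):

```python
import math, random

def boost(t, x, V):
    """Lorentz boost (2.17) along the x-axis with velocity V (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - V * V)
    return gamma * (t - V * x), gamma * (x - V * t)

random.seed(0)
for _ in range(3):
    t, x = random.uniform(-5, 5), random.uniform(-5, 5)
    V = random.uniform(-0.99, 0.99)
    tp, xp = boost(t, x, V)
    print(f"s^2 = {t*t - x*x:+.6f}   s'^2 = {tp*tp - xp*xp:+.6f}")   # equal pairs

# Rapidities add, velocities do not: tanh(a1 + a2) reproduces the standard
# relativistic velocity-addition formula.
V1, V2 = 0.6, 0.7
print(math.tanh(math.atanh(V1) + math.atanh(V2)), (V1 + V2) / (1 + V1 * V2))
```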
By contrast, events outside the light-cone cannot influence or be influenced by the event at the origin, since this would require superluminal propagation. What we have said about the origin applies to any other event: every point of the spacetime is endowed with its light-cone defining its area of causal influence. Thus, if two events lie outside each other's light-cones, they cannot influence one another. Mathematically this is characterized by their spacetime separation satisfying \(\Delta s^{2}<0\), so they are said to be _spacelike_ separated. Interestingly, there always exists a reference frame in which both events happen at the same \(t\), i.e. they are simultaneous. This is not possible when one event is inside the other's light-cone, in which case \(\Delta s^{2}>0\) and their separation is called _timelike_. Looking at (2.6) and remembering the invariant character of \(\Delta s^{2}\) we see that there can be no frame for which \(\Delta t=0\). Nonetheless, it is always possible to find an inertial observer for which both events happen at the same point of space, i.e. \(\Delta\mathbf{r}=\mathbf{0}\). In this case \(\Delta s^{2}\) is just the (squared) time elapsed between both events, as measured by the observer who is visiting both. Notice that for two events lying on each other's light-cone there is no such possibility, since they can only be joined by signals propagating at the speed of light and no observer can travel at this velocity. Figure 3: Representation of the light cone at the origin in a \((2+1)\)-dimensional spacetime. **Box 2. There is no twin paradox** One of the most celebrated "paradoxes" associated with special relativity is that involving two identical twins, one of which starts a round trip from Earth at very high speed while the second remains quietly behind. Relativistic time dilation implies that the clock carried by the traveling twin slows down with respect to the time set by a second clock on Earth, so at the end of the trip the returning twin looks younger than the remaining sibling. So far, so good. However, applying the same argument to the frame of reference moving with the spaceship, the conclusion seems to be the opposite: that the clock of the twin staying on Earth, that is the one moving in the reference frame of the rocket, ticks slower and after the reunion it is the Earth twin who looks younger. To clarify this apparent "paradox" we have to keep in mind that special relativity is about inertial observers. Thus, we are going to work with the reference frame of the twin standing on Earth, who follows the spacetime path (the worldline) labelled as \(1\) in the accompanying graph. The travelling twin, on the other hand, follows the worldline labelled as \(2\), that starts and finishes on Earth, moving back and forth along the \(x\) direction. For simplicity, we restrict the movement of the rocket to this coordinate, with the Earth located at \(x=0\). Physical observers move along worldlines \(x^{\mu}(\lambda)\) whose tangent at any point defines a timelike vector \(\eta_{\mu\nu}\dot{x}^{\mu}(\lambda)\dot{x}^{\nu}(\lambda)>0\). The time elapsed between two events \(A\) and \(B\) as measured by the clock carried by the observer (called its proper time) equals the spacetime length along the worldline \(\gamma_{AB}\) \[\Delta s_{AB}=\int_{\gamma_{AB}}ds=\int_{\lambda_{A}}^{\lambda_{B}}d\lambda \sqrt{\eta_{\mu\nu}\dot{x}^{\mu}(\lambda)\dot{x}^{\nu}(\lambda)}. 
\tag{2.19}\] A particularly convenient parametrization of the curve is provided by the coordinate time, \(x^{0}\equiv t\), so writing \(x^{\mu}(t)=\big{(}t,\mathbf{R}(t)\big{)}\) the previous equation becomes \[\Delta s_{AB}=\int_{t_{A}}^{t_{B}}dt^{\prime}\sqrt{1-\mathbf{v}(t^{\prime})^{2}}, \tag{2.20}\] with \(\mathbf{v}(t)=\dot{\mathbf{R}}(t)\) the observer velocity satisfying \(|\mathbf{v}(t)|<1\). Let us return to our twins. Both of them travel from \(A\) to \(B\), as shown in the graph above, but along different worldlines with different speeds. The one on Earth has \(\mathbf{v}=\mathbf{0}\), so the time elapsed between the departure and arrival of the second twin is \[\Delta s_{AB}^{(1)}=t_{B}-t_{A}. \tag{2.21}\] For the twin on the spaceship, by contrast, we do not even need to know anything about the details of the varying speed. It is enough to notice that \(0<\sqrt{1-\mathbf{v}(t)^{2}}<1\), implying \[\Delta s^{(2)}_{AB}<\Delta s^{(1)}_{AB}. \tag{2.22}\] Consequently, after reunion, the traveling twin will be the younger. A basic difference between the twins is that the one at rest is precisely the inertial observer for which the timelike separated events \(A\) and \(B\) happen at the same point of space. In fact, the result (2.22) reflects a property of this particular frame: its worldline represents the path of the longest proper time interpolating between two given events. As announced, the reason why there is no paradox is that only one of the twins is an inertial observer and their descriptions cannot be simply interchanged without further ado. Seeing everything from the point of view of the spaceship leads us to give up the Minkowski metric (2.7). Indeed, by changing the coordinates \[t^{\prime} =t,\] \[\mathbf{r}^{\prime} =\mathbf{r}-\mathbf{R}(t), \tag{2.23}\] the worldlines of both twins are respectively parametrized by \(x^{\mu}_{1}(t^{\prime})=\left(t^{\prime},-\mathbf{R}(t^{\prime})\right)\) and \(x^{\mu}_{2}(t^{\prime})=\left(t^{\prime},\mathbf{0}\right)\), while the spacetime metric now reads \[ds^{2}=\left[1-\mathbf{v}(t^{\prime})^{2}\right]dt^{\prime 2}-2\mathbf{v}(t^{\prime})\cdot d\mathbf{r}^{\prime}\,dt^{\prime}-d\mathbf{r}^{\prime 2}, \tag{2.24}\] which is no longer the Minkowski metric. To compute the proper time of both twins we use eq. (2.19), replacing \(\eta_{\mu\nu}\) by the line element (2.24). We then find \[\Delta s^{(1)}_{AB} =\int_{t^{\prime}_{A}}^{t^{\prime}_{B}}dt^{\prime}\sqrt{1-\mathbf{v}(t^{\prime})^{2}+2\mathbf{v}(t^{\prime})^{2}-\mathbf{v}(t^{\prime})^{2}}=t_{B}-t_{A},\] \[\Delta s^{(2)}_{AB} =\int_{t^{\prime}_{A}}^{t^{\prime}_{B}}dt^{\prime}\sqrt{1-\mathbf{v}(t^{\prime})^{2}}<\Delta s^{(1)}_{AB}, \tag{2.25}\] which reproduce the results obtained above. The conclusion is that if properly analyzed, the descriptions from the points of view of both twins are absolutely consistent and no paradox arises. As time and space coordinates combine to label a point (event) in the four-dimensional Minkowski spacetime, so energy and momentum build up an energy-momentum four-vector \(p^{\mu}=(E,\mathbf{p})\). For a particle of mass \(m\) moving along an affinely parametrized worldline \(x^{\mu}(s)\), four-momentum is defined by \[p^{\mu}(s)\equiv m\dot{x}^{\mu}(s)=\left(\frac{m}{\sqrt{1-\mathbf{v}^{2}}}, \frac{m\mathbf{v}}{\sqrt{1-\mathbf{v}^{2}}}\right), \tag{2.26}\] with \(\mathbf{v}\) the particle's velocity. A first thing to be noticed here is that the particle's energy is nonzero even when its velocity vanishes. 
Restoring powers of \(c\) \[E\longrightarrow\frac{E}{c},\hskip 28.452756ptm\longrightarrow mc,\hskip 28.452756pt\mathbf{v}\longrightarrow\frac{\mathbf{v}}{c}, \tag{2.27}\] we get the famous equation \(E_{\rm rest}=mc^{2}\). On the other hand, the particle's energy diverges as \(|{\bf v}|\to c\). This shows that the speed of light is a physical limiting velocity for any massive particle, since reaching \(|{\bf v}|=c\) would require pumping an infinite amount of energy into the system. The transformation of energy and momentum among inertial observers is fixed by \(p^{\mu}\) being a four-vector, whose change under a Lorentz transformation \(\Lambda^{\mu}{}_{\nu}\) is given by \(p^{\prime\mu}=\Lambda^{\mu}{}_{\nu}p^{\nu}\). Considering a boost along the \(x\) direction with velocity \(V\) and using the expressions obtained in Box 1 in pages 10-11, we have \[E^{\prime}=\frac{E-Vp_{x}}{\sqrt{1-V^{2}}},\hskip 28.452756ptp^{\prime}_{x}=\frac{p_{x}-VE}{\sqrt{1-V^{2}}}, \tag{2.28}\] together with \(p^{\prime}_{y}=p_{y}\) and \(p^{\prime}_{z}=p_{z}\). Equation (2.26) also implies the mass-shell condition2 Footnote 2: In covariant terms, the mass-shell condition reads \(p_{\mu}p^{\mu}=m^{2}\) and follows from (2.26), remembering that the particle’s worldline is affinely parametrized, \(\eta_{\mu\nu}\dot{x}^{\mu}(s)\dot{x}^{\nu}(s)=1\). \[E^{2}-{\bf p}^{2}=m^{2}. \tag{2.29}\] In the four-dimensional energy-momentum space spanned by \(E\) and \({\bf p}\), the particle's four-momentum \(p^{\mu}\) lies on the two-sheeted hyperboloid \(E=\pm\sqrt{{\bf p}^{2}+m^{2}}\), with the two signs corresponding to the upper and lower sheet. Interestingly, the mass-shell condition has a smooth limit as \(m\to 0\), where the hyperboloid degenerates into the cone \(E^{2}={\bf p}^{2}\), to which all massive hyperboloids asymptote for large spatial momentum, \(|{\bf p}|\gg m\) (see fig. 4). Unlike Newtonian mechanics, special relativity admits the existence of zero-mass particles whose four-momenta have the form \[p^{\mu}=(|{\bf p}|,{\bf p}), \tag{2.30}\] where we have chosen the positive energy solution. Figure 4: Energy-momentum hyperboloid for a particle of mass \(m\neq 0\) (orange). The energy-momentum vector of a massless particle lies on the blue cone. In terms of its energy and momentum, the velocity of a massive particle is given by [cf. (2.26) and (2.29)] \[\mathbf{v}=\frac{\mathbf{p}}{\sqrt{\mathbf{p}^{2}+m^{2}}}, \tag{2.31}\] which as \(m\to 0\) gives \(|\mathbf{v}|=1\). Thus, massless particles necessarily propagate at the speed of light. ### Relativity and quantum mechanics So far, our analysis has left out quantum effects. Special relativity can be combined with quantum mechanics to formulate relativistic wave equations, but the resulting single-particle framework is plagued with trouble. An immediate problem arises from the energy hyperboloid depicted in fig. 4. The existence of the lower sheet implies that the system of a relativistic quantum particle coupled to an electromagnetic field has no ground state, since the particle has infinitely many available states with arbitrary negative energy to which it could decay by radiating energy. This fundamental instability of the system is impossible to solve in the context of the Klein-Gordon wave equation, while in the Dirac equation it can be avoided by "filling" all states in the lower sheet of the hyperboloid (the Dirac sea). The Pauli exclusion principle now prevents electrons from occupying negative energy states, and the system is stable. 
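Before examining the remaining difficulties, a small numerical illustration of eqs. (2.28)-(2.31) may be useful: boosted energies and momenta always land on the same mass shell, and the velocity (2.31) approaches the speed of light as the mass is switched off. The mass and momentum values below are arbitrary, electron-like numbers chosen only for illustration:

```python
import math

def boost_energy_momentum(E, px, V):
    """Energy-momentum transformation (2.28) under a boost along x (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - V * V)
    return gamma * (E - V * px), gamma * (px - V * E)

m, px = 0.511e-3, 2.5                    # electron-like mass and momentum in GeV
E = math.sqrt(px**2 + m**2)
Ep, pxp = boost_energy_momentum(E, px, 0.85)
print(E**2 - px**2, Ep**2 - pxp**2)      # both equal m^2: the mass shell is frame independent

# Velocity (2.31) as the mass is switched off: massless particles move at c = 1.
for mass in (1.0, 0.1, 1e-3, 0.0):
    print(f"m = {mass:6.3f} GeV   |v| = {px / math.sqrt(px**2 + mass**2):.6f}")
```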
The Dirac sea notwithstanding, the interpretation of the Dirac equation as a single-particle relativistic wave equation is problematic, leading to puzzling results such as the Klein paradox [47, 14]. In fact, all the difficulties we run into when trying to marry quantum mechanics with special relativity stem from insisting in a single-particle description, as can be seen from a simple heuristic arguments. As we know, Heisenberg's uncertainty principle correlates quantum fluctuations in the position and momentum of a particle \[\Delta x\Delta p_{x}\geq\frac{\hbar}{2}. \tag{2.32}\] Looking at physics at small distances requires taming spatial fluctuations below the scale of interest, which in turn leads to large fluctuations in the particle's momentum. When the latter reaches the scale \(\Delta p_{x}\sim mc\), the corresponding energy fluctuations \(\Delta E\sim mc^{2}\) are large enough to allow the creation of particles out of the vacuum and the single-particle description breaks down. Equivalently, localizing a particle below its Compton wavelength \[\Delta x\leq\frac{\hbar}{2mc}, \tag{2.33}\] leads to a quantum state characterized by an indefinite number of them. Unlike what happens in non-relativistic many body physics, in the quantum-relativistic domain particle number is not conserved and creation-annihilation of particles is a central ingredient of the theory. Thus, the single-particle description inherent to the relativistic wave equation is fundamentally wrong, as indicated by the paradoxes and inconsistencies it leads to. **Box 3. Antiparticles and causality** One of the consequences of the Klein paradox alluded to above is the impossibility of a consistent formulation of relativistic quantum mechanics without the inclusion of antiparticles. We can reach the same conclusion by showing that antiparticles are the unavoidable ingredient to preserve causality in a relativistic quantum theory. To do so, let us consider a relativistic particle of mass \(m\) that at \(t=0\) is detected at the origin. Its quantum-mechanical propagator is given by \[G(\tau,\mathbf{r})\equiv\langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}} }|\mathbf{0}\rangle=e^{-i\tau\sqrt{-\mathbf{\nabla}^{2}+m^{2}}}\delta^{(3)}( \mathbf{r}). \tag{2.34}\] Physically, this quantity gives the probability amplitude of the particle being detected at a later time \(t=\tau\) at some location \(\mathbf{r}\). To explicitly evaluate the propagator, we Fourier transform the Dirac delta function and compute the resulting integral in terms of a modified Bessel function of the second kind \[\begin{split} G(\tau,\mathbf{r})&=\int\frac{d^{3}k }{(2\pi)^{3}}e^{-i\tau\sqrt{\mathbf{k}^{2}+m^{2}}+i\mathbf{k}\cdot\mathbf{r}} \\ &=\frac{1}{2\pi^{2}|\mathbf{r}|}\int_{0}^{\infty}kdk\,\sin(k| \mathbf{r}|)e^{-i\tau\sqrt{k^{2}+m^{2}}}\\ &=-\frac{i}{2\pi^{2}}\frac{m^{2}t}{\tau^{2}-\mathbf{r}^{2}}K_{2} \left(im\sqrt{\tau^{2}-\mathbf{r}^{2}}\right),\end{split} \tag{2.35}\] where to write the last identity we regularized the momentum integral by analytical continuation \(\tau\rightarrow\tau-i\epsilon\). Naively, one would expect this propagator to vanish outside the light cone, \(\tau^{2}-\mathbf{r}^{2}<0\), since otherwise the particle would have a nonvanishing probability of being detected at points spacelike separated from the origin, its location at \(t=0\). Were this to happen, it would imply a violation of causality. 
Despite expectations, the modified Bessel function in (2.35) is nonzero for both real and imaginary values of the argument and the propagator spills out of the light-cone despite being derived from a relativistic Hamiltonian. The key point to understand what is going on is that when \(\mathbf{r}\) lies outside the light-cone at the origin there are frames in which the detection of the particle at the position \(\mathbf{r}\)_precedes_ its detection at the origin. In computing the propagator we should take this into account and consider the superposition of both processes outside and inside the light-cone \[G(\tau,\mathbf{r})=\left\{\begin{array}{ll}\langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{0}\rangle&\mbox{when}\ \ \tau^{2}-\mathbf{r}^{2}>0\\ &\\ \langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{0}\rangle+\langle\mathbf{0}|e^{i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{r}\rangle&\mbox{when}\ \ \ \tau^{2}-\mathbf{r}^{2}<0\end{array}\right. \tag{2.36}\] Now, from the explicit expression (2.35) we can check that \(\langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{0}\rangle\) is purely imaginary when \(\tau^{2}-\mathbf{r}^{2}<0\). Since, on the other hand, \[\langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{0}\rangle+\langle\mathbf{0}|e^{i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{r}\rangle=2\mathrm{Re}\,\langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{0}\rangle, \tag{2.37}\] we conclude that \[G(\tau,\mathbf{r})=-\frac{i}{2\pi^{2}}\frac{m^{2}\tau}{\tau^{2}-\mathbf{r}^{2}}K_{2}\left(im\sqrt{\tau^{2}-\mathbf{r}^{2}}\right)\theta(\tau^{2}-\mathbf{r}^{2}), \tag{2.38}\] and causality is consequently restored. There exists an interesting interpretation of this cancellation mechanism due to Ernst Stueckelberg [48] and Richard Feynman [49, 50]. Our propagator can be seen as the wave function of the particle of interest, \(\psi(\tau,\mathbf{r})\equiv G(\tau,\mathbf{r})\), satisfying the boundary condition \(\psi(0,\mathbf{r})=\delta^{(3)}(\mathbf{r})\). We found that outside the light-cone there is a superposition of two processes: one in which the particle is traveling from the origin to \(\mathbf{r}\) forward in time, described by \(\psi(\tau,\mathbf{r})_{\Uparrow}\equiv\langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{0}\rangle\), and a second described by the wave function \[\psi(\tau,\mathbf{r})_{\Downarrow}\equiv\langle\mathbf{0}|e^{i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{r}\rangle=\langle\mathbf{r}|e^{-i\tau\sqrt{\mathbf{p}^{2}+m^{2}}}|\mathbf{0}\rangle^{\star}\equiv\psi(\tau,\mathbf{r})_{\Uparrow}^{\star}, \tag{2.39}\] where the particle moves backwards in time from \(\mathbf{r}\) to the origin. Furthermore, writing \[\psi(\tau,\mathbf{r})_{\Downarrow}=\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\tau\sqrt{\mathbf{k}^{2}+m^{2}}-i\mathbf{k}\cdot\mathbf{r}}=\int\frac{d^{3}k}{(2\pi)^{3}}e^{-i\tau(-\sqrt{\mathbf{k}^{2}+m^{2}})+i(-\mathbf{k})\cdot\mathbf{r}}, \tag{2.40}\] and comparing with the first line in eq. (2.35), we reinterpret \(\psi(\tau,\mathbf{r})_{\Downarrow}\) as describing a state of mass \(m\) and momentum \(-\mathbf{k}\), lying in the lower sheet of the energy hyperboloid, and propagating forward in time. This represents a hole in the Dirac sea, i.e. an _antiparticle_ of momentum \(\mathbf{k}\). Moreover, from (2.39) we see that if our particle has charge \(q\) with respect to some global U(1), the antiparticle necessarily transforms with the opposite charge \[\psi(\tau,\mathbf{r})_{\Uparrow}\to e^{iq\theta}\psi(\tau,\mathbf{r})_{\Uparrow}\qquad\implies\qquad\psi(\tau,\mathbf{r})_{\Downarrow}\to e^{-iq\theta}\psi(\tau,\mathbf{r})_{\Downarrow}. 
\tag{2.41}\] Antiparticles are therefore a necessary ingredient in a relativist theory of quantum processes if we want to avoid superluminal effects. They automatically imply the possibility of creation/annihilation of particle-antiparticle pairs, turning what was intended as single-particle relativistic quantum mechanics into a multiparticle theory where the number of particles is not even well defined. A fundamental consequence of the causal structure of spacetime is that measurement of observables in regions that are spacelike separated cannot interfere with each other. In the quantum theory these measurement are implemented by local operators \(\mathcal{O}(x)\) smeared over the spacetime region \(R\) where the measurement takes place \[\mathcal{O}(R)\equiv\int d^{4}x\,\mathcal{O}(x)f_{R}(x), \tag{2.42}\] where \[f_{R}(x)=\left\{\begin{array}{ll}1&\text{if }x\in R\\ 0&\text{if }x\notin R\end{array}\right. \tag{2.43}\] is the characteristic function associated with \(R\). In mathematical terms, the noninterference of the measurements carried out in spacelike separated regions \(R_{1}\) and \(R_{2}\) like those shown in fig. 5 is expressed by the vanishing of the commutator of the associated operators \[[\mathcal{O}(R_{1}),\mathcal{O}(R_{2})]=0\qquad\text{ if }R_{1}\text{ and }R_{2}\text{ are spacelike separated}, \tag{2.44}\] or equivalently \[[\mathcal{O}(x),\mathcal{O}(y)]=0\qquad\text{ if }(x-y)^{2}<0. \tag{2.45}\] This states the _principle of microcausality_, a profound form of locality that has to be imposed on constructing any admissible QFT. To date no consistent theory has been formulated violating this principle. This is why all theories to be encountered later in these lecture will be local quantum field theories (LQFTs) in the sense of eq. (2.44). ## 3 The importance of classical field theory Maxwell's electromagnetism is arguably the mother of all classical field theories. Despite its apparent simplicity, the theory contains a number of symmetry and structures that underlie many other developments in QFT. This is the reason why it is worthwhile to spend some time extracting some lessons from classical electromagnetism that we will find useful later in our study of the SM and other theories. ### The symmetries of Maxwell's theory Using Heaviside units, and keeping \(c=1\) all the way, the Maxwell's equations take the form \[\mathbf{\nabla}\cdot\mathbf{E}=\rho_{e},\] Figure 5: The two spacelike-separated regions \(R_{1}\) and \(R_{2}\) cannot causally influence one another. \[\boldsymbol{\nabla}\cdot\mathbf{B} =\rho_{m},\] \[\boldsymbol{\nabla}\times\mathbf{E} =-\mathbf{j}_{m}-\frac{\partial\mathbf{B}}{\partial t} \tag{3.1}\] \[\boldsymbol{\nabla}\times\mathbf{B} =\mathbf{j}_{e}+\frac{\partial\mathbf{E}}{\partial t}.\] Here we have introduced a color code signaling various layers of generality. Setting to zero all terms in blue and red we get the vacuum Maxwell's equations governing the evolution of electromagnetic fields in the absence of any kind of matter. If we keep the terms in blue but remove those in red, the resulting expressions describe the coupling of electric and magnetic fields to electrically charged matter, where \(\rho_{e}\) and \(\mathbf{j}_{e}\) respectively represent the electric charge density and current. These are the Maxwell's equations that can be found in most textbooks on classical electrodynamics (see, for example, [51]). 
Let us postpone a little bit the discussion of the terms in red and concentrate on the second and third equations \[\boldsymbol{\nabla}\cdot\mathbf{B} =0,\] \[\boldsymbol{\nabla}\times\mathbf{E} =-\frac{\partial\mathbf{B}}{\partial t}. \tag{3.2}\] They imply that the electric and magnetic fields can be written in terms of a scalar and a vector potential \((\phi,\mathbf{A})\) as \[\mathbf{B} =\boldsymbol{\nabla}\times\mathbf{A},\] \[\mathbf{E} =-\boldsymbol{\nabla}\phi-\frac{\partial\mathbf{A}}{\partial t}. \tag{3.3}\] These potentials, however, are not uniquely defined. The electric and magnetic fields remain unchanged if we replace \[\phi \longrightarrow\phi+\frac{\partial\epsilon}{\partial t},\] \[\mathbf{A} \longrightarrow\mathbf{A}-\boldsymbol{\nabla}\epsilon, \tag{3.4}\] with \(\epsilon(t,\mathbf{r})\) an arbitrary well-behaved function. This _gauge invariance_ is probably the most important of those structures of the electromagnetic theory that we said were of radical importance for QFT at large. Although at a classical level it might seem a mere technicality, it has profound implications for the quantum theory and is the cornerstone of the whole SM. We explore its significance in some detail in the following. For computational purposes, it is convenient sometimes to (partially) fix the gauge freedom by imposing certain conditions on \(\phi\) and \(\mathbf{A}\). Two popular choices in classical electromagnetism are the Coulomb gauge \(\boldsymbol{\nabla}\cdot\mathbf{A}=0\) and the temporal (also called Weyl) gauge \(\phi=0\). These conditions still leave a residual invariance, generated in the first case by harmonic functions \(\nabla^{2}\epsilon(t,\mathbf{r})=0\) and by time independent functions \(\epsilon({\bf r})\) in the second. A covariant alternative is the Lorenz gauge \[{\bf\nabla}\cdot{\bf A}+\frac{\partial\phi}{\partial t}=0, \tag{3.5}\] preserved by gauge functions satisfying the wave equation, \(\Box\epsilon(t,{\bf r})=0\). Gauge invariance introduces a _redundancy_ in the description in terms of the electromagnetic potentials that however cannot reflect in physically measurable quantities such as the electric and magnetic fields. Although these are not the only gauge invariant quantities that can be constructed in terms of \(\varphi\) and \({\bf A}\). There is also the Wilson loop, defined by \[U(\gamma)\equiv\exp\left(-ie\oint_{\gamma}d{\bf r}\cdot{\bf A}\right), \tag{3.6}\] where \(\gamma\) is a closed path in space and \(e\) the electric charge. Implementing a gauge transformation on the vector potential and using the Stokes theorem, we see that it is indeed gauge invariant \[\exp\left(-ie\oint_{\gamma}d{\bf r}\cdot{\bf A}\right)\longrightarrow\exp \left(-ie\oint_{\gamma}d{\bf r}\cdot{\bf A}+ie\oint_{\gamma}d{\bf r}\cdot{\bf \nabla}\epsilon\right)=\exp\left(-ie\oint_{\gamma}d{\bf r}\cdot{\bf A}\right), \tag{3.7}\] after taking into account that \(\gamma\) is closed. Whereas \({\bf E}\) and \({\bf B}\) are _local_ observables depending on the spacetime point where they are measured, the Wilson loop is _nonlocal_ since it "explores" the whole region enclosed by \(\gamma\). It is enlightening to study the consequences of gauge transformations for the dynamics of a quantum particle coupled to an electromagnetic field. 
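Before doing so, the purely classical statement made above can be verified symbolically: the fields (3.3) are unchanged by the gauge transformation (3.4). A minimal SymPy sketch (the helper names `E_field` and `B_field` are ours and carry no special meaning):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
space = (x, y, z)
phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function(f'A{i}')(t, x, y, z) for i in range(3)]
eps = sp.Function('epsilon')(t, x, y, z)        # arbitrary gauge function

def E_field(phi, A):
    # E = -grad(phi) - dA/dt, eq. (3.3)
    return [-sp.diff(phi, xi) - sp.diff(A[i], t) for i, xi in enumerate(space)]

def B_field(A):
    # B = curl(A), eq. (3.3)
    return [sp.diff(A[2], y) - sp.diff(A[1], z),
            sp.diff(A[0], z) - sp.diff(A[2], x),
            sp.diff(A[1], x) - sp.diff(A[0], y)]

# Gauge-transformed potentials, eq. (3.4)
phi_new = phi + sp.diff(eps, t)
A_new = [A[i] - sp.diff(eps, xi) for i, xi in enumerate(space)]

print([sp.simplify(a - b) for a, b in zip(E_field(phi_new, A_new), E_field(phi, A))])  # [0, 0, 0]
print([sp.simplify(a - b) for a, b in zip(B_field(A_new), B_field(A))])                # [0, 0, 0]
```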
In quantum mechanics the prescription of minimal coupling of a particle with electric charge \(e\) to the electromagnetic field \[{\bf p}\longrightarrow{\bf p}-e{\bf A},\hskip 28.452756ptH\longrightarrow H+e\phi, \tag{3.8}\] introduces an explicit dependence of the Schrodinger equation on the electromagnetic potentials \[i\frac{\partial\psi}{\partial t}=\left[-\frac{1}{2m}\big{(}{\bf\nabla}-ie{\bf A }\big{)}^{2}+e\phi\right]\psi. \tag{3.9}\] To preserve the gauge invariance of this equation, the transformations (3.7) have to be supplemented by a phase shift of the wave function \[\psi(t,{\bf r})\longrightarrow e^{-i\epsilon\epsilon(t,{\bf r})}\psi(t,{\bf r }), \tag{3.10}\] which does not affect to the probability density \(|\psi(t,{\bf r})|^{2}\). This shows that the gauge transformations in electromagnetism belong to the Abelian group U(1) of complex rotations, parametrized by elements \[U=e^{-i\epsilon\epsilon(t,{\bf r})}, \tag{3.11}\] in terms of which eq. (3.4) reads \[\phi \longrightarrow\phi+\frac{i}{e}U^{-1}\frac{\partial}{\partial t}U,\] \[\mathbf{A} \longrightarrow\mathbf{A}-\frac{i}{e}U^{-1}\boldsymbol{\nabla}U. \tag{3.12}\] **Box 4. Wilson loops and quantum interference** At the classical level we can live with just local observables like the electric and magnetic fields, but not anymore when we introduce quantum effects. In this case the phase transformation of the wave function may give rise to observable interference phenomena. As we will see now, these are measured by a Wilson loop \(U(\gamma)\). We work for simplicity in the temporal gauge \(\phi=0\). The action of a classical charged particle propagating in the background of an electromagnetic potential \(\mathbf{A}(t,\mathbf{r})\) is given by \[S=\frac{1}{2}\int dt\,m\dot{\mathbf{r}}^{2}-e\int_{\gamma}d\mathbf{r}\cdot \mathbf{A}, \tag{3.13}\] where \(\gamma\) is the particle trajectory and \(e\) is the electron charge. An interesting property of the second term is that its value does not change if we smoothly deform the path \(\gamma\) across any region where the magnetic field vanishes. Let us consider two paths \(\gamma_{1}\) and \(\gamma_{2}\) joining two points \(A\) and \(B\) as shown here Computing the difference between the contributions of both paths, we find a Wilson loop \[\int_{\gamma_{1}}d\mathbf{r}\cdot\mathbf{A}-\int_{\gamma_{2}}d\mathbf{r}\cdot \mathbf{A}=\oint_{\gamma_{2}^{-1}\gamma_{1}}d\mathbf{r}\cdot\mathbf{A}=0, \tag{3.14}\] where \(\gamma_{2}^{-1}\gamma_{1}\) represents the closed path from \(A\) to \(B\) following \(\gamma_{1}\) and back to \(A\) along \(\gamma_{2}\). To see why this term is zero, let us denote by \(S\) any surface bounded by \(\gamma_{2}^{-1}\gamma_{1}\). Applying the Stokes theorem, we have \[\oint_{\gamma_{2}^{-1}\gamma_{1}}d\mathbf{r}\cdot\mathbf{A}=\int_{S}d\mathbf{ S}\cdot(\boldsymbol{\nabla}\times\mathbf{A})=0, \tag{3.15}\] since we assumed that \(\mathbf{B}=\boldsymbol{\nabla}\times\mathbf{A}=0\) in the integration domain. This topological property of the interaction term in (3.13) has important consequence in quantum mechanics, as pointed out by Yakir Aharonov and David Bohm [52]. Let us look at a double slit experiment performed with electrons in which behind the slitted screen we place a vertical solenoid confining a constant magnetic field \(\mathbf{B}\) (see fig. 6 in page 24). 
The amplitude for an electron emitted from \(A\) at \(t=0\) to be detected at a point \(P\) of the detection screen at \(t=\tau\) can be computed as a coherent quantum superposition of all possible classical trajectories, expressed by the Feynman path integral \[G(\tau;\mathbf{r}_{A},\mathbf{r}_{P})=\mathcal{N}\int\limits_{ \begin{subarray}{c}\mathbf{r}(0)=\mathbf{r}_{A}\\ \mathbf{r}(\tau)=\mathbf{r}_{P}\end{subarray}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! pieved by the solenoid, and the magnetic field \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\) is not zero everywhere. Instead \[\oint_{\gamma_{R}^{-1}\gamma_{L}}d\mathbf{r}\cdot\mathbf{A}=\int_{S}d\mathbf{S} \cdot\mathbf{B}=\Phi, \tag{3.19}\] where \(\Phi\) is the magnetic flux inside the solenoid and we have \[U(\gamma_{R}^{-1}\gamma_{L})=e^{-ie\Phi}\neq 1. \tag{3.20}\] Hence, the presence of the solenoid modifies the interference pattern on the screen, even if the electrons never enter the region where the magnetic field is nonzero. The reason is that even if \(\mathbf{B}=\mathbf{0}\) outside, \(\mathbf{A}\) is not. Although no force is applied to them, the electrons interact with the vector potential whose global structure, codified in the nonlocal gauge-invariant quantity \(U(\gamma_{R}^{-1}\gamma_{L})\), contains information about the confined magnetic field. Going back to the Maxwell's equations (3.1), we notice that the vacuum equations (with all blue and red terms removed) exhibit an interesting symmetry. Combining the electric and magnetic fields into a single complex field \(\mathbf{E}+i\mathbf{B}\), the four equations can be summarized as \[\mathbf{\nabla}\cdot(\mathbf{E}+i\mathbf{B}) =0,\] \[\mathbf{\nabla}\times(\mathbf{E}+i\mathbf{B})-i\frac{\partial}{ \partial t}(\mathbf{E}+i\mathbf{B}) =0. \tag{3.21}\] Both identities remain invariant under the transformation \[\mathbf{E}+i\mathbf{B}\longrightarrow e^{i\theta}\big{(}\mathbf{E}+i\mathbf{B }\big{)}, \tag{3.22}\] with \(\theta\) a real global angle. To be more specific, splitting the previous equation into its real and imaginary Figure 6: Experimental setup to exhibit the Aharonov-Bohm effect explained in Box 4. parts, we find \[\mathbf{E} \longrightarrow\mathbf{E}\cos\theta-\mathbf{B}\sin\theta,\] \[\mathbf{B} \longrightarrow\mathbf{E}\sin\theta+\mathbf{B}\cos\theta, \tag{3.23}\] which for \(\theta=\frac{\pi}{2}\) interchanges electric and magnetic fields \((\mathbf{E},\mathbf{B})\rightarrow(-\mathbf{B},\mathbf{E})\). 
This electric-magnetic duality of the vacuum equations is however broken by the source terms in the "textbook" Maxwell's equations [i.e., eq. (3.1) without the terms in red]. The identities (3.21) are then recast as \[\boldsymbol{\nabla}\cdot(\mathbf{E}+i\mathbf{B}) =\rho_{e},\] \[\boldsymbol{\nabla}\times(\mathbf{E}+i\mathbf{B})-i\frac{\partial }{\partial t}(\mathbf{E}+i\mathbf{B}) =i\mathbf{j}_{e}. \tag{3.24}\] Since \(\rho_{e}\) and \(\mathbf{j}_{e}\) are both real quantities, the only transformations preserving these equations are the trivial ones which either leave invariant the electric and magnetic fields or reverse their signs (corresponding respectively to \(\theta=0,\pi\)), the latter one also requiring the reversal of the sign of \(\rho_{e}\) and \(\mathbf{j}_{e}\). Physically this makes sense, since as far as we know there is a fundamental asymmetry in nature between electric and magnetic fields. While the first are sourced by point charges (electric monopoles) at which field lines either begin or end, magnetic fields are associated with the motion of electric charges and their field lines always close on themselves. Restoring electric-magnetic duality in the Maxwell's equations requires treating the sources of both fields symmetrically, which means introducing magnetic charge density and current. These are the terms in red in eq. (3.1), that we rewrite now as \[\boldsymbol{\nabla}\cdot(\mathbf{E}+i\mathbf{B}) =\rho_{e}+i\rho_{m},\] \[\boldsymbol{\nabla}\times(\mathbf{E}+i\mathbf{B})-i\frac{\partial }{\partial t}(\mathbf{E}+i\mathbf{B}) =i\big{(}\mathbf{j}_{e}+i\mathbf{j}_{m}\big{)}. \tag{3.25}\] These equations remain invariant under electric-magnetic duality (3.22) when supplemented by a corresponding rotation of the sources \[\rho_{e}+i\rho_{m} \longrightarrow e^{i\theta}\big{(}\rho_{e}+i\rho_{m}\big{)},\] \[\mathbf{j}_{e}+i\mathbf{j}_{m} \longrightarrow e^{i\theta}\big{(}\mathbf{j}_{e}+i\mathbf{j}_{m}\big{)}. \tag{3.26}\] For \(\theta=\frac{\pi}{2}\) the interchange of electric and magnetic fields is accompanied by a swap of the electric and magnetic sources, \((\rho_{e},\mathbf{j}_{e})\rightarrow(-\rho_{m},-\mathbf{j}_{m})\) and \((\rho_{m},\mathbf{j}_{m})\rightarrow(\rho_{e},\mathbf{j}_{e})\). The consequences of having particles with magnetic charge were first explored by Dirac in [53]. Let us assume the existence of a point magnetic source that for simplicity we locate at the origin, \(\rho_{m}=g\delta^{(3)}(\mathbf{r})\). The second equation in (3.1) leads to \[\boldsymbol{\nabla}\cdot\mathbf{B}=g\delta^{(3)}(\mathbf{r}) \qquad\implies\qquad\mathbf{B}(\mathbf{r})=\frac{1}{4\pi}\frac{g}{r^{2}} \mathbf{u}_{r}, \tag{3.27}\] which would be a magnetic analog of the Coulomb field. An important point to consider is that, despite the source's presence, the magnetic field's divergence still vanishes everywhere except at the monopole's position. As a consequence, away from this point we can still write \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\), which is solved by \[\mathbf{A}(\mathbf{r})=\frac{1}{4\pi}\frac{g}{r}\tan\left(\frac{ \theta}{2}\right)\mathbf{u}_{\varphi}, \tag{3.28}\] where we are using spherical coordinates \((r,\varphi,\theta)\). This vector potential is singular not only at the monopole location at \(\mathbf{r}=0\), but all along the line \(\theta=\pi\) as well. The existence of this singular Dirac string should not be a surprise. 
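As a quick consistency check, the curl of (3.28) can be computed symbolically and compared with the monopole field (3.27) away from the string. A small sympy sketch, writing out the standard curl formulas in spherical coordinates by hand:

```python
import sympy as sp

r, th, g = sp.symbols('r theta g', positive=True)

# Dirac potential (3.28): only the phi-component is nonzero
A_r, A_th = sp.Integer(0), sp.Integer(0)
A_ph = g / (4 * sp.pi * r) * sp.tan(th / 2)

# Curl in spherical coordinates (r, theta, phi) for phi-independent fields
B_r  = sp.diff(A_ph * sp.sin(th), th) / (r * sp.sin(th))
B_th = -sp.diff(r * A_ph, r) / r
B_ph = (sp.diff(r * A_th, r) - sp.diff(A_r, th)) / r

print(sp.simplify(B_r - g / (4 * sp.pi * r**2)))   # 0: radial part reproduces (3.27)
print(sp.simplify(B_th), sp.simplify(B_ph))        # 0 0
```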
Were \(\mathbf{A}(\mathbf{r})\) regular everywhere outside the origin, we could apply the Stokes theorem to the integral giving the magnetic flux across a closed surface \(\mathcal{S}\) enclosing the monopole, to find \[\int_{\mathcal{S}}d\mathbf{S}\cdot\mathbf{B}=\int_{\mathcal{S}}d\mathbf{S}\cdot(\mathbf{\nabla}\times\mathbf{A})=\oint_{\partial\mathcal{S}}d\mathbf{\ell}\cdot\mathbf{A}=0, \tag{3.29}\] since \(\partial\mathcal{S}=\emptyset\). This would contradict the calculation of the same integral applying Gauss' theorem \[\int_{\mathcal{S}}d\mathbf{S}\cdot\mathbf{B}=\int_{\mathcal{B}_{3}}d^{3}r\,\mathbf{\nabla}\cdot\mathbf{B}=g\neq 0, \tag{3.30}\] where \(\mathcal{B}_{3}\) denotes the three-dimensional region bounded by \(\mathcal{S}\) and containing the monopole. Notice that this second calculation is free of trouble, since the magnetic field (3.27) is regular everywhere on \(\mathcal{S}\). The catch of course is that the vector potential is singular at \(\theta=\pi\) and the surface \(\mathcal{S}\) in (3.29) cannot be closed. As shown on the left of fig. 7, its boundary is a circle surrounding the singularity and the integral gives a nonzero result \[\oint_{\partial\mathcal{S}}d\mathbf{\ell}\cdot\mathbf{A}=\frac{1}{2}g\sin\delta_{0}\cot\left(\frac{\delta_{0}}{2}\right)\xrightarrow{\delta_{0}\to 0}g, \tag{3.31}\] where \(\delta_{0}\) is the opening angle of the excised cap around the south pole and the last limit corresponds to shrinking the boundary to a point, reproducing the result of eq. (3.30).

Even if mathematically unavoidable, the existence of a singularity is always a source of concern in physics. A way to restore our peace of mind in this case might be to make the Dirac string an artefact that somehow is rendered unobservable. One may think that a way to accomplish this is to apply a gauge transformation, since the vector potential is not uniquely defined. This, however, does not eliminate the Dirac string, it just changes its location. Let us look a bit closer at the vector potential (3.28) near the Dirac string. Denoting by \(\varrho\) the linear distance to the string (see the right of fig. 7), in the limit \(\varrho\to 0\) we can write \[\mathbf{A}\approx\frac{1}{2\pi}\frac{g}{\varrho}\mathbf{u}_{\varphi}. \tag{3.32}\] This expression should be familiar from elementary electrodynamics, since it represents the vector potential outside an infinite solenoid. The Dirac string can be pictured then as an infinitely thin solenoid pumping magnetic flux into the monopole which, according to the limiting value of the integral in eq. (3.31), is actually equal to the outgoing flux through a closed surface surrounding the monopole.

In Box 4 we learned a way to "detect solenoids" by their imprints on the wave function of charged quantum particles, detectable in interference experiments. The Wilson loop of a particle with electric charge \(e\) going around the Dirac string is computed from the vector potential (3.32) and gives [see also eq. (3.31)] \[U(\gamma)=\exp\left(-ie\oint_{\gamma}d\mathbf{\ell}\cdot\mathbf{A}\right)=e^{-ieg}. \tag{3.33}\] The absence of detectable interference requires this phase to be equal to one for any electrically charged particle, which amounts to the condition \[eg=2\pi n\qquad\implies\qquad e=\frac{2\pi}{g}n, \tag{3.34}\] with \(n\) an integer. This is a very interesting result, stating that the existence of a single magnetic monopole anywhere in the universe implies by consistency that electric charges have to be _quantized_.
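Both the limit in (3.31) and the role of the quantization condition can be illustrated numerically: integrate the potential (3.28) along a shrinking loop around the string, then evaluate the Wilson-loop phase (3.33) for quantized and generic charges. A short numpy sketch (the value of \(g\) is arbitrary and chosen only for illustration):

```python
import numpy as np

g = 2.0    # illustrative magnetic charge
r = 1.0    # radius of the sphere

def loop_integral(delta):
    """Line integral of the potential (3.28) along the circle theta = pi - delta,
    i.e. a small loop around the Dirac string at the south pole."""
    theta = np.pi - delta
    A_phi = g / (4 * np.pi * r) * np.tan(theta / 2)   # constant along the loop
    return A_phi * 2 * np.pi * r * np.sin(theta)      # A_phi times the circumference

for delta in (0.5, 0.1, 0.01):
    print(delta, loop_integral(delta))    # approaches g = 2.0 as delta -> 0

# Wilson-loop phase (3.33): trivial only for quantized charges e = 2*pi*n/g
for e in (2 * np.pi / g, 4 * np.pi / g, 0.37):
    print(e, np.exp(-1j * e * g))         # 1, 1 (up to rounding), and a generic phase
```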
The quantization condition (3.34) remains invariant under electric-magnetic duality with \(\theta=\frac{\pi}{2}\). Unconfirmed sightings in cosmic rays notwithstanding [54, 55], no evidence exists of magnetically charged particles at the energies explored so far. They are, however, an almost ubiquitous prediction of many theories beyond the SM, where they usually emerge as solitonic objects resulting from spontaneous symmetry breaking in unified field theories that leaves behind unbroken U(1) factors. Although they acquire masses of the order of the symmetry breaking scale, magnetic monopoles should have been created in huge amounts at the early stages of the universe's history. One of the original aims of cosmological inflation models was to dilute their presence in the early universe, thus accounting for their apparent absence.

**Box 5. Magnetic monopoles from topology**

The origin of all our troubles with the Dirac monopole was after all _topological_: although the vector potential of the magnetic monopole is locally well defined anywhere away from the origin, it cannot be extended globally to the sphere surrounding the monopole.

Figure 7: Left: section of a sphere around a Dirac magnetic monopole with charge \(g\), resulting from cutting out a region around the south pole. Its boundary \(\partial\mathcal{S}\) surrounds the singular Dirac string located along \(\theta=\pi\) (in red). Right: closed path surrounding the Dirac string.

There is however a way to avoid the singular Dirac string which was pointed out by Tai Tsun Wu and Chen Ning Yang [56]. When computing the flux integral (3.30), instead of covering the sphere with a single patch cutting out the region around the place where the Dirac string crosses the surface (in our case, the south pole), we can be more sophisticated and use two patches respectively centered at the north and south poles and overlapping at the equator: the upper and lower hemispheres \(D_{\pm}\), glued together along their respective boundaries \(S^{1}_{\pm}\). On both \(D_{+}\) and \(D_{-}\) we can write a vector potential whose curl reproduces the expression of the monopole field (3.27) \[\mathbf{A}(\mathbf{r})_{+} =\frac{1}{4\pi}\frac{g}{r}\tan\left(\frac{\theta}{2}\right)\mathbf{u}_{\varphi}\hskip 28.452756pt0\leq\theta\leq\frac{\pi}{2},\] \[\mathbf{A}(\mathbf{r})_{-} =-\frac{1}{4\pi}\frac{g}{r}\cot\left(\frac{\theta}{2}\right)\mathbf{u}_{\varphi}\hskip 28.452756pt\frac{\pi}{2}\leq\theta\leq\pi. \tag{3.35}\] The important point here is that both expressions are perfectly regular in their respective domains, so our vector potential is regular everywhere on the sphere \(S^{2}=D_{+}\cup D_{-}\). An apparent obstacle arises in their overlap at the equator \(\theta=\frac{\pi}{2}\), where the two expressions do not agree \[\mathbf{A}(\mathbf{r})_{+}\Big{|}_{S^{1}_{+}}-\mathbf{A}(\mathbf{r})_{-}\Big{|}_{S^{1}_{-}}=\frac{1}{2\pi}\frac{g}{r}\mathbf{u}_{\varphi}. \tag{3.36}\] This is however not a problem, since as we know the vector potential is not uniquely defined. It is physically acceptable that the identification of the vector potentials at the equator is made modulo a gauge transformation, which is indeed the case here \[\epsilon=-\frac{g}{2\pi}\varphi\hskip 28.452756pt\Longrightarrow\hskip 28.452756pt\mathbf{A}(\mathbf{r})_{+}\Big{|}_{S^{1}_{+}}=\mathbf{A}(\mathbf{r})_{-}\Big{|}_{S^{1}_{-}}-\boldsymbol{\nabla}\epsilon.
\tag{3.37}\] The magnetic flux due to the magnetic monopole at its center can be evaluated using these expressions as \[\int_{\mathcal{S}^{2}}d\mathbf{S}\cdot\mathbf{B}=\int_{D_{+}}d\mathbf{S}\cdot(\boldsymbol{\nabla}\times\mathbf{A}_{+})+\int_{D_{-}}d\mathbf{S}\cdot(\boldsymbol{\nabla}\times\mathbf{A}_{-})\] \[=\oint_{S^{1}_{+}}d\mathbf{\ell}\cdot{\bf A}_{+}+\oint_{S^{1}_{-}}d\mathbf{\ell}\cdot{\bf A}_{-} \tag{3.38}\] \[=\epsilon(0)-\epsilon(2\pi)=g,\] correctly reproducing (3.30). Notice that the two boundaries \(S^{1}_{\pm}=\partial D_{\pm}\) have opposite orientations, so using eq. (3.37) the second line combines into a single integral of \(-\epsilon^{\prime}(\varphi)\) from \(0\) to \(2\pi\).

The gauge function \(\epsilon(\varphi)\) relating the vector potentials along the equator is not single-valued on \(S^{1}\). This might pose a problem in the presence of quantum charged particles, since their wave functions also change under gauge transformations [see eq. (3.10)]. In order to avoid multivaluedness of the wave function, we must require \[e^{-ie\epsilon(0)}=e^{-ie\epsilon(2\pi)}\qquad\implies\qquad e^{ieg}=1, \tag{3.39}\] and the Dirac quantization condition (3.34) is retrieved. Alternatively, we can also notice that under a gauge transformation the action of a particle moving along the equator changes by \(\Delta S=-e\!g\), as can be easily checked from eq. (3.13). This has no effect on the Feynman path integral provided \(e\!g=2\pi n\), with \(n\in\mathbb{Z}\), and the same result is obtained.

The Wu-Yang construction highlights the topological structure underlying the magnetic monopole. Implementing the quantization condition \(e\!g=2\pi n\), the U(1) transformation (3.37) relating the vector potential of both hemispheres takes the form [cf. (3.11)] \[U=e^{in\varphi}. \tag{3.40}\] Since U(1) is the multiplicative group of complex phases, it can be identified with the unit circle. As we move once along the equator and the azimuthal angle \(\varphi\) changes from \(0\) to \(2\pi\), the gauge transformation (3.40) wraps \(n\) times around U(1), as can be pictured for the particular case \(n=3\) as a phase winding three times around the unit circle. More technically speaking, when mapping the circle \(S^{1}\) onto U(1) we encounter infinitely many sectors that cannot be smoothly deformed into one another and are distinguished by how many times the circle wraps around U(1). The corresponding integer is an element of the first homotopy group \(\pi_{1}[\text{U(1)}]=\mathbb{Z}\) classifying the continuous maps \(U:S^{1}\to\text{U(1)}\) (see, for example, [57, 58, 59, 60] for physicist-oriented overviews of basic concepts in differential geometry).

This should not come as a surprise. After all, at face value, our insistence on expressing the magnetic field as the curl of the vector potential is incompatible with having a nonvanishing value for \(\mathbf{\nabla}\cdot\mathbf{B}\) as in eq. (3.27). To reconcile these two facts we have to assume that although \(\mathbf{B}=\boldsymbol{\nabla}\times\mathbf{A}\) is valid on a contractible coordinate patch, there is no vector field \(\mathbf{A}\) globally defined on the sphere with this property. This is why in our case the topologically trivial configuration \(n=0\) corresponds to zero magnetic charge and a vanishing magnetic field.

Looking at the symmetries of classical electrodynamics, we notice one conspicuously absent from the Maxwell's equations (3.1): Galilean invariance. It is amusing that Maxwell composed a fully relativistically invariant field theory some forty years before Einstein's formulation of special relativity.
It took the latter's genius to realize that the tension between classical mechanics and electrodynamics was to be solved by giving full credit to the Maxwell's equations and their spacetime symmetries. The price to pay was to modify Newtonian mechanics to make it applicable to systems involving velocities close to the speed of light.

### Quantum electromagnetism

The easiest way to show the relativistic invariance of the Maxwell's equations is to rewrite them as tensor equations with respect to Poincaré transformations. To do so, we combine the scalar and vector electromagnetic potentials into a single four-vector \[A^{\mu}\equiv(\phi,\mathbf{A}), \tag{3.41}\] while electric and magnetic fields are codified in the field strength two-tensor \[F_{\mu\nu}\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}. \tag{3.42}\] The latter can be explicitly computed to be \[F_{\mu\nu}=\left(\begin{array}{cccc}0&E_{x}&E_{y}&E_{z}\\ -E_{x}&0&-B_{z}&B_{y}\\ -E_{y}&B_{z}&0&-B_{x}\\ -E_{z}&-B_{y}&B_{x}&0\end{array}\right), \tag{3.43}\] where \(\mathbf{E}=(E_{x},E_{y},E_{z})\) and \(\mathbf{B}=(B_{x},B_{y},B_{z})\). The gauge transformations (3.4) are now expressed in the more compact form \[A_{\mu}\longrightarrow A_{\mu}+\partial_{\mu}\epsilon, \tag{3.44}\] which obviously leave \(F_{\mu\nu}\) invariant. It is also convenient to define the dual field strength \[\widetilde{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}, \tag{3.45}\] whose components are obtained from (3.43) by replacing \(\mathbf{E}\rightarrow\mathbf{B}\) and \(\mathbf{B}\rightarrow-\mathbf{E}\). Charge densities and currents are also merged into four-vectors \[j_{e}^{\mu}\equiv(\rho_{e},\mathbf{j}_{e}),\] \[j^{\mu}_{m}\equiv(\rho_{m},{\bf j}_{m}), \tag{3.46}\] in terms of which the four Maxwell's equations (3.1) are recast as \[\partial_{\mu}F^{\mu\nu} =j^{\nu}_{e},\] \[\partial_{\mu}\widetilde{F}^{\mu\nu} =j^{\nu}_{m}. \tag{3.47}\]

Some comments about the magnetic current are in order here. It should be noticed that the definition (3.42) automatically implies the Bianchi identity \[\partial_{\mu}\widetilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\nu\sigma\alpha\beta}\partial_{\sigma}F_{\alpha\beta}=\epsilon^{\nu\sigma\alpha\beta}\partial_{\sigma}\partial_{\alpha}A_{\beta}=0, \tag{3.48}\] contradicting the second equation in (3.47). In fact, we have already encountered this problem in its noncovariant version when discussing magnetic monopoles: writing \({\bf B}=\mathbf{\nabla}\times{\bf A}\) is incompatible with having \(\mathbf{\nabla}\cdot{\bf B}\neq 0\). The solution given there is also applicable here. What happens is that (3.42) is valid locally but _not globally_. Magnetic monopoles can be described using the vector potential \(A_{\mu}\), but the gauge field configuration needs to be topologically nontrivial.

The tensors \(F_{\mu\nu}\) and \(\widetilde{F}_{\mu\nu}\) can be used to construct quantities that are relativistically invariant. By contracting them, we find the two invariants \[F_{\mu\nu}F^{\mu\nu} =-\widetilde{F}_{\mu\nu}\widetilde{F}^{\mu\nu}=-2\big{(}{\bf E}^{2}-{\bf B}^{2}\big{)},\] \[F_{\mu\nu}\widetilde{F}^{\mu\nu} =-4\,{\bf E}\cdot{\bf B}. \tag{3.49}\] This implies that the complex combinations \[\big{(}{\bf E}\pm i{\bf B}\big{)}^{2}={\bf E}^{2}-{\bf B}^{2}\pm 2i{\bf E}\cdot{\bf B}, \tag{3.50}\] also remain invariant under the Lorentz group\({}^{3}\). The present discussion is very relevant for building an action principle for classical electrodynamics.
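Both contractions in (3.49) can be verified symbolically by building \(F_{\mu\nu}\) from (3.43), raising indices with \(\eta=\mathrm{diag}(1,-1,-1,-1)\), and forming the dual through the component rule stated below (3.45). A minimal sympy sketch:

```python
import sympy as sp

Ex, Ey, Ez, Bx, By, Bz = sp.symbols('E_x E_y E_z B_x B_y B_z', real=True)
eta = sp.diag(1, -1, -1, -1)

def F_lower(E, B):
    # Lower-index field strength F_{mu nu} as in (3.43)
    return sp.Matrix([[    0,  E[0],  E[1],  E[2]],
                      [-E[0],     0, -B[2],  B[1]],
                      [-E[1],  B[2],     0, -B[0]],
                      [-E[2], -B[1],  B[0],     0]])

F  = F_lower((Ex, Ey, Ez), (Bx, By, Bz))
Fd = F_lower((Bx, By, Bz), (-Ex, -Ey, -Ez))     # dual: E -> B, B -> -E

def contract(A, B):
    # A_{mu nu} B^{mu nu}, indices raised with eta
    B_up = eta * B * eta
    return sp.expand(sum(A[i, j] * B_up[i, j] for i in range(4) for j in range(4)))

E2 = Ex**2 + Ey**2 + Ez**2
B2 = Bx**2 + By**2 + Bz**2
EB = Ex*Bx + Ey*By + Ez*Bz

print(sp.simplify(contract(F, F) + 2 * (E2 - B2)))     # 0
print(sp.simplify(contract(Fd, Fd) - 2 * (E2 - B2)))   # 0, i.e. the dual contraction is -F.F
print(sp.simplify(contract(F, Fd) + 4 * EB))           # 0
```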
In particular, noticing that \(F_{\mu\nu}\widetilde{F}^{\mu\nu}=2\partial_{\mu}(A_{\nu}\widetilde{F}^{\mu\nu})\) is a total derivative, the obvious choice is \[S =\int d^{4}x\left(-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+j^{\mu}A_{\mu}\right)\] \[=\int dtd^{3}x\left[\frac{1}{2}\big{(}{\bf E}^{2}-{\bf B}^{2}\big{)}+\rho\phi-{\bf j}\cdot{\bf A}\right], \tag{3.51}\] which is also gauge invariant provided charge is conserved, \(\partial_{\mu}j^{\mu}=0\). Since from now on we will ignore the presence of magnetic charges, we drop the color code used so far, as well as the subscript in the electric density and current.

Footnote 3: They change, however, under electric-magnetic duality, which mixes the two quantities introduced in (3.49).

Although obtaining the Maxwell field equations from the action in (3.51) is straightforward, the canonical formalism is tricky. The reason is that \(\dot{\phi}\) does not appear in the action and as a consequence the momentum conjugate to \(A_{0}\) is identically zero. Thus, we have a constrained system that has to be dealt with using Dirac's formalism (see, for example, [14] for the details). At a practical level, we regard \(\mathbf{A}\) and \(\mathbf{E}\) as a pair of canonically conjugated variables \[\left\{A_{i}(t,\mathbf{r}),E_{j}(t,\mathbf{r}^{\prime})\right\}_{\mathrm{PB}}=\delta_{ij}\delta^{(3)}(\mathbf{r}-\mathbf{r}^{\prime}). \tag{3.52}\] Using \(\dot{\mathbf{A}}=-\mathbf{E}-\boldsymbol{\nabla}\phi\), we construct the Hamiltonian \[H =\int d^{3}x\left[-\dot{\mathbf{A}}\cdot\mathbf{E}-\frac{1}{2}\big{(}\mathbf{E}^{2}-\mathbf{B}^{2}\big{)}-\rho\phi+\mathbf{j}\cdot\mathbf{A}\right]\] \[=\int d^{3}x\left[\frac{1}{2}\big{(}\mathbf{E}^{2}+\mathbf{B}^{2}\big{)}+\phi\big{(}\boldsymbol{\nabla}\cdot\mathbf{E}-\rho\big{)}+\mathbf{j}\cdot\mathbf{A}\right], \tag{3.53}\] where the term \(-\mathbf{E}\cdot\boldsymbol{\nabla}\phi\) has been integrated by parts and the substitution \(\mathbf{B}=\boldsymbol{\nabla}\times\mathbf{A}\) is understood. Gauss' law \(\boldsymbol{\nabla}\cdot\mathbf{E}=\rho\) emerges as a constraint preserved by time evolution \[\left\{\boldsymbol{\nabla}\cdot\mathbf{E}-\rho,H\right\}_{\mathrm{PB}}=-\boldsymbol{\nabla}\cdot\mathbf{j}-\dot{\rho}\approx 0, \tag{3.54}\] where we follow Dirac's notation and denote by \(\approx\) identities that are satisfied once the equations of motion are implemented. It also generates the gauge transformations of the vector potential \[\delta\mathbf{A}(t,\mathbf{r})=\left\{\mathbf{A}(t,\mathbf{r}),\int d^{3}r^{\prime}\epsilon(t,\mathbf{r}^{\prime})\big{[}\boldsymbol{\nabla}\cdot\mathbf{E}(t,\mathbf{r}^{\prime})-\rho(t,\mathbf{r}^{\prime})\big{]}\right\}_{\mathrm{PB}}=-\boldsymbol{\nabla}\epsilon(t,\mathbf{r}). \tag{3.55}\]

Solving the vacuum field equations written in terms of the gauge potential \[\Box A_{\mu}-\partial_{\mu}\partial_{\nu}A^{\nu}=0, \tag{3.56}\] requires fixing the gauge freedom (3.44). To preserve relativistic covariance it is convenient to use the Lorenz gauge \(\partial_{\mu}A^{\mu}=0\) introduced in (3.5), so the gauge potential then satisfies the wave equation \(\Box A_{\mu}=0\). Trying a plane wave ansatz \[A_{\mu}(x)\sim\varepsilon_{\mu}(k,\lambda)e^{-ik_{\mu}x^{\mu}}, \tag{3.57}\] the wave equation implies that the momentum vector \(k^{\mu}\) is null \[k_{\mu}k^{\mu}=0\qquad\implies\qquad k^{0}=\pm|\mathbf{k}|.
\tag{3.58}\] The parameter \(\lambda\) in \(\varepsilon_{\mu}(k,\lambda)\) labels the number of independent polarization vectors, which the Lorenz gauge condition force to be transverse \[k^{\mu}\varepsilon_{\mu}(\mathbf{k},\lambda)=0. \tag{3.59}\] Using this condition we elliminate the temporal polarization in terms of the other three \[\varepsilon_{0}({\bf k},\lambda)=\frac{1}{|{\bf k}|}{\bf k}\cdot\mathbf{\varepsilon} ({\bf k},\lambda). \tag{3.60}\] In addition, there is a residual gauge freedom preserving the Lorenz condition implemented on the plane wave solutions by shifts of the polarization vector proportional to the wave momentum \[\varepsilon_{\mu}({\bf k},\lambda)\longrightarrow\varepsilon_{\mu}({\bf k}, \lambda)+\alpha({\bf k})k_{\mu}. \tag{3.61}\] Using this freedom to set \(\varepsilon_{0}({\bf k},\lambda)\) to zero, we are left with just two independent transverse polarizations satisfying \({\bf k}\cdot\mathbf{\varepsilon}({\bf k},\lambda)=0\). The plane wave solution then reads \[{\bf A}(t,{\bf r})\sim\mathbf{\varepsilon}({\bf k},\lambda)e^{-i|{\bf k}|t+i{\bf k }\cdot{\bf r}}, \tag{3.62}\] with \(A_{0}=0\) and \(\lambda=\pm 1\) labelling the two transverse polarizations, that in the following we will respectively identify with right-left circular polarizations4, \(\mathbf{\varepsilon}({\bf k},\lambda)^{*}=\mathbf{\varepsilon}({\bf k},-\lambda)\). They moreover satisfy Footnote 4: For a massive vector field the Lorenz condition \(\partial_{\mu}A^{\mu}=0\) is still satisfied as an integrability condition of the equations of motion \(\partial_{\mu}F^{\mu\nu}+m^{2}A^{\nu}=0\) and eq. (3.60) therefore holds. The key difference lies in that the residual freedom (3.61) is absent and we have an additional longitudinal polarization (i.e., aligned with \({\bf k}\)) in addition to the two transverse ones. \[\mathbf{\varepsilon}({\bf k},\lambda)\cdot\left[{\bf k}\times\mathbf{\varepsilon}({ \bf k},\lambda^{\prime})\right]=i\lambda|{\bf k}|\delta_{\lambda,-\lambda^{ \prime}}. \tag{3.63}\] This identity will be useful later on. Since the field equations are linear a general solution can be written as a superposition of the plane wave solutions (3.62) and their complex conjugates. Upon quantization the coefficients in this expansion become operators and we can write a general expression for the gauge field operator \[\widehat{\bf A}(t,{\bf r})=\sum_{\lambda=\pm 1}\int\frac{d^{3}k}{(2\pi)^{3}} \frac{1}{2|{\bf k}|}\left[\mathbf{\varepsilon}({\bf k},\lambda)\widehat{a}({\bf k },\lambda)e^{-i|{\bf k}|t+i{\bf k}\cdot{\bf r}}+\mathbf{\varepsilon}({\bf k}, \lambda)^{*}\widehat{a}({\bf k},\lambda)^{\dagger}e^{i|{\bf k}|t-i{\bf k} \cdot{\bf r}}\right], \tag{3.64}\] where, with our gauge fixing, \(\widehat{A}_{0}(t,{\bf r})=0\). The integration measure appearing in this expression results from integrating over all four-dimensional momenta lying on the upper light-cone in fig. 4 \[\int\frac{d^{4}k}{(2\pi)^{4}}\delta(k_{\mu}k^{\mu})\theta(k^{0})[\ldots]=\int \frac{d^{3}k}{(2\pi)^{3}}\frac{1}{2|{\bf k}|}[\cdots], \tag{3.65}\] and is by construction Lorentz invariant. The quantum states of the theory are vectors in the space of states the operator (3.64) acts on. To determine it and therefore the excitations of the quantum field, we establish first the algebra of operators and then find a representation. This is done by applying the canonical quantization prescription replacing classical Poisson brackets with quantum commutators \[i\{\cdot,\cdot\}_{\rm PB}\longrightarrow[\cdot,\cdot]. 
\tag{3.66}\] Using the definition \(\widehat{\mathbf{E}}=\partial_{0}\widehat{\mathbf{A}}\), the electric field operator is computed to be \[\widehat{\mathbf{E}}(t,\mathbf{r})=-\frac{i}{2}\sum_{\lambda=\pm 1}\int \frac{d^{3}k}{(2\pi)^{3}}\left[\varepsilon(\mathbf{k},\lambda)\widehat{a}( \mathbf{k},\lambda)e^{-i|\mathbf{k}|t+i\mathbf{k}\cdot\mathbf{r}}-\varepsilon (\mathbf{k},\lambda)^{*}\widehat{a}(\mathbf{k},\lambda)^{\dagger}e^{i| \mathbf{k}|t-i\mathbf{k}\cdot\mathbf{r}}\right], \tag{3.67}\] Classically, the electric field is canonically conjugate to the vector potential [see eq. (3.52)], so the prescription (3.66) gives its equal-time commutator with the gauge field \[[A_{i}(t,\mathbf{r}),E_{i}(t,\mathbf{r}^{\prime})]=i\delta_{ij} \delta^{(3)}(\mathbf{r}-\mathbf{r}^{\prime}). \tag{3.68}\] that translates into the following commutation relations for the operators \(\widehat{a}(\mathbf{k},\lambda)\) and their Hermitian conjugates \[[\widehat{a}(\mathbf{k},\lambda),\widehat{a}(\mathbf{k}^{\prime },\lambda^{\prime})^{\dagger}]=(2\pi)^{3}2|\mathbf{k}|\delta_{\lambda\lambda^{ \prime}}\delta^{(3)}(\mathbf{k}-\mathbf{k}^{\prime}),\] \[[\widehat{a}(\mathbf{k},\lambda),\widehat{a}(\mathbf{k}^{\prime },\lambda^{\prime})]=[\widehat{a}(\mathbf{k},\lambda)^{\dagger},\widehat{a}( \mathbf{k}^{\prime},\lambda^{\prime})^{\dagger}]=0. \tag{3.69}\] This algebra is reminiscent of the one of creation-annihilation operators in the quantum harmonic oscillator. Introducing a properly normalized vacuum state \(|0\rangle\) to be annihilated by all \(\widehat{a}(\mathbf{k};\lambda)\), we define the vector \[|\mathbf{k},\lambda\rangle=\widehat{a}(\mathbf{k},\lambda)^{ \dagger}|0\rangle, \tag{3.70}\] representing a one-photon state with momentum \(\mathbf{k}\) and helicity \(\lambda\). These states are covariantly normalized according to \[\langle\mathbf{k},\lambda|\mathbf{k}^{\prime},\lambda^{\prime} \rangle=(2\pi)^{3}2|\mathbf{k}|\delta_{\lambda\lambda^{\prime}}\delta^{(3)}( \mathbf{k}-\mathbf{k}^{\prime}), \tag{3.71}\] as can be seen from eq. (3.69). Multiple photon states are obtained by successive application of creation operators \[|\mathbf{k}_{1},\lambda_{1};\mathbf{k}_{2},\lambda_{2};\ldots; \mathbf{k}_{n},\lambda_{n}\rangle=\widehat{a}(\mathbf{k}_{1},\lambda_{1})^{ \dagger}\widehat{a}(\mathbf{k}_{2},\lambda_{2})^{\dagger}\ldots\widehat{a}( \mathbf{k}_{n},\lambda_{n})^{\dagger}|0\rangle. \tag{3.72}\] From the commutation relation of creation operators given in (3.69) we see that the multi-photon state is even under the interchange of whatever two photons, as it should be for bosons. Although we have been talking about photons, we must check that the states (3.70) have the quantum numbers corresponding to these particles. So, first we compute their energy by writing the quantum Hamiltonian. Going back to eq. (3.53), we set the sources to zero (\(\rho=0\) and \(\mathbf{j}=\mathbf{0}\)) and replace the electric and magnetic field for their corresponding operators. A first thing to notice is that the electric field (3.67) satisfies the Gauss law \(\boldsymbol{\nabla}\cdot\widehat{\mathbf{E}}=0\) as a consequence of the transversality condition of the polarizations vectors. 
Computing in addition \(\mathbf{B}=\boldsymbol{\nabla}\times\mathbf{A}\) and after some algebra, we find \[\widehat{H}=\sum_{\lambda=\pm 1}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{2| \mathbf{k}|}\,|\mathbf{k}|\widehat{a}(\mathbf{k},\lambda)^{\dagger}\widehat{a }(\mathbf{k},\lambda)+\frac{1}{2}\sum_{\lambda=\pm 1}\int d^{3}k\,|\mathbf{k}| \delta^{(3)}(\mathbf{0}). \tag{3.73}\] The second term on the right-hand side represents the energy of the vacuum state \[\widehat{H}|0\rangle=\left(\frac{1}{2}\sum_{\lambda=\pm 1}d^{3}k\,|{\bf k}|\delta^{(3 )}({\bf 0})\right)|0\rangle, \tag{3.74}\] and is doubly divergent. One infinity originates in the delta function and comes about because we are working at infinite volume, a type of divergence that is QFT are designated as _infrared_ (IR). It can be regularized by setting our system in a box of volume \(V\), which replaces \((2\pi)^{3}\delta^{(3)}({\bf 0})\). Proceeding in this way, we write the energy density of the vacuum \[\rho_{\rm vac}\equiv\frac{E_{\rm vac}}{V}=\frac{1}{2}\sum_{\lambda=\pm 1} \int\frac{d^{3}k}{(2\pi)^{3}}\,|{\bf k}|. \tag{3.75}\] This expression has the obvious interpretation of being the result of adding the zero-point energies of infinitely many harmonic oscillators, each with frequency \(\omega=|{\bf k}|\). It is still divergent, and since the infinity originates in the integration over arbitrary high momenta, it is called _ultraviolet_ (UV). A way to get rid of it is assuming that \(|{\bf k}|<\Lambda_{\rm UV}\), so after carrying out the integral the vacuum energy density is given by \[\rho_{\rm vac}=\frac{1}{16\pi^{2}}\Lambda_{\rm UV}^{4}. \tag{3.76}\] In the spirit of effective field theory this UV cutoff is physically interpreted as the energy scale at which our description of the electromagnetic field breaks down and has to be replaced by some more general theory. The vacuum energy density (3.76) is at the origin of the cosmological constant problem. Due to its strong dependence with the UV cutoff, when we add the contributions of all known quantum field to \(\rho_{\rm vac}\) the result is many orders of magnitude larger than the one measured through cosmological observations. The way to handle this mismatch is by assuming the existence of a nonzero cosmological constant \(\Lambda_{c}\) contribution to the total vacuum energy of the universe as \[\rho_{\rm vac}=\frac{\Lambda_{c}}{8\pi G_{N}}+\sum_{i}\rho_{{\rm vac},i}, \tag{3.77}\] where the sum is over all quantum fields in nature. Identifying the UV cutoff with the Planck energy, \(\Lambda_{\rm UV}\simeq\Lambda_{\rm Pl}\), the cosmological constant has to be fine tuned over 120 orders of magnitude in order to cancel the excess contribution of the quantum fields to the vacuum energy density of the universe (see, for example, [61, 62, 63] for comprehensive reviews). Let us get rid of the vacuum energy for the time being by subtracting it from the Hamiltonian (3.73). Acting with this subtracted Hamiltonian on the multiparticle states (3.72), we find they are energy eigenstates \[\widehat{H}|{\bf k}_{1},\lambda_{1};{\bf k}_{2},\lambda_{2};\ldots;{\bf k}_{n},\lambda_{n}\rangle=\big{(}|{\bf k}_{1}|+|{\bf k}_{2}|+\ldots+|{\bf k}_{n}| \big{)}|{\bf k}_{1},\lambda_{1};{\bf k}_{2},\lambda_{2};\ldots;{\bf k}_{n}, \lambda_{n}\rangle, \tag{3.78}\] with the eigenvalue giving the energy of \(n\) free photons with momenta \({\bf k}_{1},{\bf k}_{2},\ldots,{\bf k}_{n}\). 
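As an aside to the cosmological-constant discussion above, the size of the mismatch can be made explicit with rough numbers: take \(\Lambda_{\rm UV}\) at the Planck scale in eq. (3.76) and compare with a dark-energy density of order \((2\times 10^{-3}\,\text{eV})^{4}\). The precise inputs are not the point, only the roughly 120 orders of magnitude separating the two values:

```python
import math

planck = 1.22e19                             # Planck energy in GeV (rough)
rho_qft = planck**4 / (16 * math.pi**2)      # eq. (3.76) with Lambda_UV at the Planck scale

rho_obs = (2.0e-12)**4                       # ~ (2 meV)^4 in GeV^4, rough observed value

print(f"naive QFT estimate: {rho_qft:.1e} GeV^4")
print(f"observed value    : {rho_obs:.1e} GeV^4")
print(f"mismatch          : about 10^{math.log10(rho_qft / rho_obs):.0f}")
```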
The field momentum, on the other hand, is given by the Poynting operator \[\widehat{\mathbf{P}} =\int d^{3}r\,\mathbf{E}(t,\mathbf{r})\times\mathbf{B}(t,\mathbf{r})\] \[=\sum_{\lambda=\pm 1}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{2|\mathbf{k}|}\mathbf{k}\,\widehat{a}(\mathbf{k},\lambda)^{\dagger}\widehat{a}(\mathbf{k},\lambda), \tag{3.79}\] where, unlike for the Hamiltonian, there is no vacuum contribution, due to the rotational invariance of \(|0\rangle\). Its action on the states (3.72) gives \[\widehat{\mathbf{P}}|\mathbf{k}_{1},\lambda_{1};\mathbf{k}_{2},\lambda_{2};\ldots;\mathbf{k}_{n},\lambda_{n}\rangle=\big{(}\mathbf{k}_{1}+\mathbf{k}_{2}+\ldots+\mathbf{k}_{n}\big{)}|\mathbf{k}_{1},\lambda_{1};\mathbf{k}_{2},\lambda_{2};\ldots;\mathbf{k}_{n},\lambda_{n}\rangle, \tag{3.80}\] showing that the vector \(\mathbf{k}\) labelling the one-particle states (3.70) is rightly interpreted as the photon momentum. Finally, we compute the spin angular momentum operator \[\widehat{\mathbf{S}} =\int d^{3}x\,\widehat{\mathbf{A}}\times\widehat{\mathbf{E}}\] \[=i\sum_{\lambda,\lambda^{\prime}=\pm 1}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{2|\mathbf{k}|}\boldsymbol{\varepsilon}(\mathbf{k},\lambda)\times\boldsymbol{\varepsilon}(\mathbf{k},\lambda^{\prime})^{*}\,\widehat{a}(\mathbf{k},\lambda^{\prime})^{\dagger}\widehat{a}(\mathbf{k},\lambda). \tag{3.81}\] Acting on a one-particle state (3.70), we find \[\widehat{\mathbf{S}}|\mathbf{k},\lambda\rangle=i\sum_{\lambda^{\prime}=\pm 1}\boldsymbol{\varepsilon}(\mathbf{k},\lambda)\times\boldsymbol{\varepsilon}(\mathbf{k},\lambda^{\prime})^{*}|\mathbf{k},\lambda^{\prime}\rangle. \tag{3.82}\] Projecting this expression on the direction of the photon's momentum, we find the helicity operator acting on the single photon state \[\widehat{h}|\mathbf{k},\lambda\rangle\equiv\frac{\mathbf{k}}{|\mathbf{k}|}\cdot\widehat{\mathbf{S}}|\mathbf{k},\lambda\rangle=\frac{i}{|\mathbf{k}|}\sum_{\lambda^{\prime}=\pm 1}\mathbf{k}\cdot\big{[}\boldsymbol{\varepsilon}(\mathbf{k},\lambda)\times\boldsymbol{\varepsilon}(\mathbf{k},\lambda^{\prime})^{*}\big{]}|\mathbf{k},\lambda^{\prime}\rangle. \tag{3.83}\] Using the relation (3.63) to evaluate the mixed product inside the sum, we arrive at \[\widehat{h}|\mathbf{k},\lambda\rangle=\lambda|\mathbf{k},\lambda\rangle, \tag{3.84}\] which shows that \(\lambda\) is indeed the helicity of the photon. We have convinced ourselves that our interpretation of the quantum numbers describing the Hamiltonian eigenstates was correct, and they describe states with an arbitrary number of free photons of definite momenta and helicities. Photons therefore emerge as the elementary excitations of the quantum electromagnetic field.

### Some comments on quantum fields

The previous calculation also teaches an important lesson: the space of states of a free quantum field (in this case the electromagnetic field) is in fact a Fock space, i.e., the direct sum of Hilbert spaces spanned by the \(n\)-particle states (3.72), \[\mathscr{F}=\bigoplus_{n=0}^{\infty}\mathscr{H}_{n}, \tag{3.85}\] where we take \(\mathscr{H}_{0}=L\{|0\rangle\}\), the one-dimensional linear space generated by the vacuum state \(|0\rangle\). We have shown that the canonical commutation relations (3.68) admit a representation in the Fock space. Although we have done this for the free sourceless Maxwell's theory, it is also the case for any other free field theory, as we will see in other examples below.
Including interactions does not change this, provided they are sufficiently weak and to be treated in perturbation theory. Thus, the first step in describing a physical system is to identify the weakly coupled degrees of freedom, whose multiparticle states span the Fock space representing the asymptotic states in scattering experiments of the type carried out everyday in high energy facilities around the world. This is well illustrated by the case of QCD discussed in the Introduction (see page 3.2), where while the asymptotic states are described by hadrons, the fundamental interactions taking place are described in terms of weakly coupled quarks and gluons5. Footnote 5: A technical caveat: Haag’s theorem [64], however, states that for a general interacting QFT there exists no Fock space representation of the canonical commutation relation. This is usually interpreted as implying that full interacting QFT is not a theory of particles [65, 66, 67]. ## Box 6. Complex fields and antiparticles The analysis presented for electrodynamics carries over to the quantization of other free fields. A simple but particularly interesting example is provided by a complex scalar field, with action \[S=\int d^{4}x\,\Big{(}\partial_{\mu}\varphi^{*}\partial^{\mu}\varphi-m^{2} \varphi^{*}\varphi\Big{)}. \tag{3.86}\] Life is now simpler since there is no gauge freedom and the Hamiltonian formalism is straightforward. We compute the conjugate momentum and the canonical Poisson brackets \[\pi(t,\mathbf{r})=\frac{\delta S}{\delta\partial_{0}\varphi(t,\mathbf{r})}= \partial_{0}\varphi(t,\mathbf{r})^{*}\quad\implies\quad\big{\{}\varphi(t, \mathbf{r}),\pi(t,\mathbf{r}^{\prime})\big{\}}_{\mathrm{PB}}=\delta^{(3)}( \mathbf{r}-\mathbf{r}^{\prime}), \tag{3.87}\] with the corresponding expression for the complex conjugate fields, \(\varphi(t,\mathbf{r})^{*}\) and \(\pi(t,\mathbf{r})^{*}\). The Hamiltonian is then given by \[H=\int d^{3}r\,\Big{[}\pi^{*}\pi+(\mathbf{\nabla}\varphi^{*})\cdot(\mathbf{\nabla} \varphi)+m^{2}\varphi^{*}\varphi\Big{]}. \tag{3.88}\] The equation of motion derived from the action (3.86) is the Klein-Gordon equation \[\big{(}\Box+m^{2}\big{)}\varphi=0, \tag{3.89}\] which admits plane wave solutions of the form \[\varphi(x)\sim e^{ip_{\mu}x^{\mu}}, \tag{3.90}\] with \(p_{\mu}\) satisfying the mass-shell condition \[p_{\mu}p^{\mu}=m^{2}\qquad\implies\qquad p^{0}\equiv\pm E_{\mathbf{p}}=\pm\sqrt{ \mathbf{p}^{2}+m^{2}}. 
\tag{3.91}\] As with the electromagnetic field, the corresponding quantum fields are operator-valued superposition of plane waves \[\widehat{\varphi}(t,\mathbf{r}) =\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{2E_{\mathbf{p}}}\left[ \widehat{\alpha}(\mathbf{p})e^{-iE_{\mathbf{p}}t+i\mathbf{p}\cdot\mathbf{r}}+ \widehat{\beta}(\mathbf{p})^{\dagger}e^{iE_{\mathbf{p}}t-i\mathbf{p}\cdot \mathbf{r}}\right],\] \[\widehat{\varphi}(t,\mathbf{r})^{\dagger} =\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{2E_{\mathbf{p}}}\left[ \widehat{\beta}(\mathbf{p})e^{-iE_{\mathbf{p}}t+i\mathbf{p}\cdot\mathbf{r}}+ \widehat{\alpha}(\mathbf{p})^{\dagger}e^{iE_{\mathbf{p}}t-i\mathbf{p}\cdot \mathbf{r}}\right] \tag{3.92}\] while the operator associated to the canonically conjugate momentum is given by \[\widehat{\pi}(t,\mathbf{r}) =-\frac{i}{2}\int\frac{d^{3}p}{(2\pi)^{3}}\left[\widehat{\beta}( \mathbf{p})e^{-iE_{\mathbf{p}}t+i\mathbf{p}\cdot\mathbf{r}}-\widehat{\alpha} (\mathbf{p})^{\dagger}e^{iE_{\mathbf{p}}t-i\mathbf{p}\cdot\mathbf{r}}\right],\] \[\widehat{\alpha}(t,\mathbf{r})^{\dagger} =\frac{i}{2}\int\frac{d^{3}p}{(2\pi)^{3}}\left[\widehat{\alpha}( \mathbf{p})e^{-iE_{\mathbf{p}}t+i\mathbf{p}\cdot\mathbf{r}}-\widehat{\beta}( \mathbf{p})^{\dagger}e^{iE_{\mathbf{p}}t-i\mathbf{p}\cdot\mathbf{r}}\right]. \tag{3.93}\] The key observation here is that since \(\widehat{\varphi}\) is not Hermitian, the two operators \(\widehat{\alpha}(\mathbf{p})\) and \(\widehat{\beta}(\mathbf{p})\) cannot be identified, as it was the case with the electromagnetic field. Imposing the equal-time canonical commutation relations induced by the canonical Poisson brackets [see eq. (3.87)] leads to the following algebra of operators \[[\widehat{\alpha}(\mathbf{p}),\widehat{\alpha}(\mathbf{p}^{\prime })^{\dagger}] =(2\pi)^{3}2E_{\mathbf{p}}\delta^{(3)}(\mathbf{p}-\mathbf{p}^{ \prime}),\] \[[\widehat{\alpha}(\mathbf{p}),\widehat{\alpha}(\mathbf{p}^{\prime })] =[\widehat{\alpha}(\mathbf{p})^{\dagger},\widehat{\alpha}(\mathbf{p}^{ \prime})^{\dagger}]=0, \tag{3.94}\] and corresponding expressions for \(\widehat{\beta}(\mathbf{p})\) and \(\widehat{\beta}(\mathbf{p})^{\dagger}\), with both types of operators commuting with each other. As with the photons, the Fock space of states is built by acting with \(\widehat{\alpha}(\mathbf{p})^{\dagger}\)'s and \(\widehat{\beta}(\mathbf{p})^{\dagger}\)'s on the vacuum state \(|0\rangle\), which is itself annihilated by \(\widehat{\alpha}(\mathbf{p})\)'s and \(\widehat{\beta}(\mathbf{p})\)'s \[|\mathbf{p}_{1},\ldots,\mathbf{p}_{n};\mathbf{q}_{1},\ldots,\mathbf{q}_{m} \rangle=\widehat{\alpha}(\mathbf{p}_{1})^{\dagger}\ldots\widehat{\alpha}( \mathbf{p}_{n})^{\dagger}\widehat{\beta}(\mathbf{q}_{1})^{\dagger}\ldots \widehat{\beta}(\mathbf{q}_{1})^{\dagger}|0\rangle, \tag{3.95}\] where we have distinguished the momenta associated with the two kinds of creation operators. Notice that since the operators on the right-hand side of this expression commute with each other, the order in which we list the momenta \(\mathbf{p}_{1},\ldots,\mathbf{p}_{n}\) and \(\mathbf{q}_{1},\ldots,\mathbf{q}_{m}\) is irrelevant, signalling that both types of excitations are bosons. 
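The algebra (3.94) and the bosonic symmetry of the states (3.95) can be illustrated on a toy truncation of the Fock space, keeping a single momentum mode of each species and a finite occupation number. A small numpy sketch (the relativistic normalization factors are dropped and the cutoff is arbitrary):

```python
import numpy as np

N = 6                                          # occupation-number cutoff per mode
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # truncated annihilation operator
I = np.eye(N)

alpha = np.kron(a, I)      # one alpha-type mode
beta  = np.kron(I, a)      # one beta-type mode (independent species)

def comm(X, Y):
    return X @ Y - Y @ X

cut = (N - 1) * N          # stay below the truncation edge of the alpha mode
print(np.allclose(comm(alpha, alpha.conj().T)[:cut, :cut], np.eye(cut)))              # True
print(np.allclose(comm(alpha, beta), 0), np.allclose(comm(alpha, beta.conj().T), 0))  # True True

# Bosonic symmetry: creating one alpha and one beta quantum in either order
vac = np.zeros(N * N); vac[0] = 1.0
print(np.allclose(alpha.conj().T @ beta.conj().T @ vac,
                  beta.conj().T @ alpha.conj().T @ vac))                               # True
```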
The states constructed in (3.95) in fact diagonalize the Hamiltonian \[\widehat{H}=\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{2E_{\mathbf{p}}}E_{\mathbf{ p}}\Big{[}\widehat{\alpha}(\mathbf{p})^{\dagger}\widehat{\alpha}(\mathbf{p})+ \widehat{\beta}(\mathbf{p})^{\dagger}\widehat{\beta}(\mathbf{p})\Big{]}, \tag{3.96}\] where here we have subtracted a UV and IR divergent vacuum contribution similar to the one en countered in eq. (3.73). Indeed, it is not difficult to show that \[\widehat{H}|\mathbf{p}_{1},\ldots,\mathbf{p}_{n}; \mathbf{q}_{1},\ldots,\mathbf{q}_{m}\rangle\] \[=\big{(}E_{\mathbf{p}_{1}}+\ldots+E_{\mathbf{p}_{n}}+E_{\mathbf{ q}_{1}}+\ldots+E_{\mathbf{q}_{m}}\big{)}|\mathbf{p}_{1},\ldots,\mathbf{p}_{n}; \mathbf{q}_{1},\ldots,\mathbf{q}_{m}\rangle, \tag{3.97}\] from where we conclude that the elementary excitations of the quantum real scalar field are free scalars particles with well-defined energy and momentum. These particles, however, come in two different types depending on whether they are created by \(\widehat{\alpha}(\mathbf{p})^{\dagger}\) or \(\widehat{\beta}(\mathbf{p})^{\dagger}\), although sharing the same dispersion relation have equal masses. The obvious question is what distinguish physically one from the other. To answer we have to study the symmetries of the classical theory. A look at the action (3.86) shows that it is invariant under global phase rotations of the complex field \[\varphi(x)\longrightarrow e^{i\vartheta}\varphi(x),\hskip 28.452756pt\varphi(x)^ {*}\longrightarrow e^{-i\vartheta}\varphi(x), \tag{3.98}\] with \(\vartheta\) a constant real parameter. Noether's theorem (see page 58 below) states that associated to this symmetry there must be a conserved current, whose expression turns out to be \[j^{\mu}=i\varphi^{*}\overleftrightarrow{\partial}^{\mu}\varphi\equiv i \varphi^{*}\partial^{\mu}\varphi-i(\partial^{\mu}\varphi^{*})\varphi\quad \Longrightarrow\quad\partial_{\mu}j^{\mu}=0. \tag{3.99}\] In particular, the conserved charge is given by \[Q=\int d^{3}r\,\big{(}\varphi^{*}\pi^{*}-\pi\varphi\big{)}, \tag{3.100}\] and once classical fields are replaced by their operator counterparts (and complex by Hermitian conjugation), we have the following form for the charge operator \[\widehat{Q}=\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{2E_{\mathbf{p}}}\Big{[} \widehat{\alpha}(\mathbf{p})^{\dagger}\widehat{\alpha}(\mathbf{p})-\widehat{ \beta}(\mathbf{p})^{\dagger}\widehat{\beta}(\mathbf{p})\Big{]}. \tag{3.101}\] By acting with it on one-particle states, we get \[\widehat{Q}|\mathbf{p};0\rangle =|\mathbf{p};0\rangle,\] \[\widehat{Q}|0;\mathbf{q}\rangle =-|0;\mathbf{q}\rangle, \tag{3.102}\] showing that the conserved charge distinguishes the excitations generated by \(\widehat{\alpha}(\mathbf{p})^{\dagger}\) from those generated by \(\widehat{\beta}(\mathbf{p})^{\dagger}\). Moreover, the complex scalar field can be coupled to the electromagnetic field by identifying the current (3.99) with the one appearing in the Maxwell action (3.51), its conservation guaranteeing gauge invariance of the combined action. Thus, the two kinds of particles with the same mass and spin have opposite electric charges and are identified as particles and antiparticles. The complex (i.e., non-Hermitian) character of the scalar field is crucial to have both particles and antiparticles. 
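The conservation of the current (3.99), which underlies everything said above about the charge, can be checked explicitly on a superposition of on-shell plane waves. A minimal sympy sketch in \(1+1\) dimensions (the two-mode superposition is only an illustrative choice):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
m, p1, p2 = sp.symbols('m p_1 p_2', real=True)
a, b = sp.symbols('a b')                       # complex amplitudes

E1 = sp.sqrt(p1**2 + m**2)                     # on-shell energies
E2 = sp.sqrt(p2**2 + m**2)

phi  = a * sp.exp(-sp.I * (E1 * t - p1 * x)) + b * sp.exp(-sp.I * (E2 * t - p2 * x))
phic = sp.conjugate(phi)

# Noether current (3.99): j^mu = i (phi* d^mu phi - (d^mu phi*) phi); note d^1 = -d_x
j0 =  sp.I * (phic * sp.diff(phi, t) - sp.diff(phic, t) * phi)
j1 = -sp.I * (phic * sp.diff(phi, x) - sp.diff(phic, x) * phi)

print(sp.simplify(sp.expand(sp.diff(j0, t) + sp.diff(j1, x))))   # 0: the current is conserved
```

The cancellation only works because each mode satisfies the Klein-Gordon equation (3.89), i.e., because the energies are taken on shell.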
In the case of the gauge field \(\widehat{\mathbf{A}}\), Hermiticity identifies the operators associated with positive and negative energy plane wave solutions as conjugate to each other, making the photon its own antiparticle. It is time we address another symmetry present in Maxwell's electrodynamics that is of pivotal importance for QFT as a whole: scale invariance. Looking at the free electromagnetic action \[S_{\rm EM}=-\frac{1}{4}\int d^{4}x\,F_{\mu\nu}F^{\mu\nu}. \tag{3.103}\] we notice the absence of any dimensionful parameters, unlike in the case of the complex scalar field action (3.86) where we have a parameter \(m\) that turns out to be the mass of its elementary quantum excitations. It seems that the free Maxwell's theory should be invariant under changes of scale. To formulate the idea of scale invaraince in more general and precise mathematical terms, let as assume a scale transformation of the coordinates \[x^{\mu}\longrightarrow\lambda x^{\mu}, \tag{3.104}\] with \(\lambda\) a nonzero real parameter, combined with the following scaling of the fields in the theory \[\Phi(x)\longrightarrow\lambda^{-\Delta\Phi}\Phi(\lambda^{-1}x), \tag{3.105}\] where \(\Delta_{\Phi}\) is called the field's scaling dimension. Applying these transformations to particular case of the action (3.103), we find \[S_{\rm EM}\longrightarrow\lambda^{2-2\Delta_{A}}S_{\rm EM}, \tag{3.106}\] so by setting \(\Delta_{A}=1\) the action remains invariant under scale transformations. We will explore now whether the scale invariance of the free Maxwell's theory is preserved by the coupling of the electromagnetic field to charged matter. As an example, let us consider the complex scalar field we studied in Box 6 but now coupled to an electromagnetic field \[S =\int d^{4}x\left\{\partial_{\mu}\varphi^{*}\partial^{\mu}\varphi -m\varphi^{*}\varphi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+ie\big{[}\varphi^{*} \partial_{\mu}\varphi-(\partial_{\mu}\varphi^{*})\varphi\big{]}A^{\mu}+e^{2} \varphi^{*}\varphi A_{\mu}A^{\mu}\right\}\] \[=\int d^{4}x\left[\big{(}\partial_{\mu}+ieA_{\mu}\big{)}\varphi^{* }\big{(}\partial^{\mu}-ieA^{\mu}\big{)}\varphi-m^{2}\varphi^{*}\varphi-\frac{ 1}{4}F_{\mu\nu}F^{\mu\nu}\right]. \tag{3.107}\] Here, besides the coupling \(j_{\mu}A^{\mu}\) suggested by the Maxwell's equations, we also have the term \(e^{2}\varphi^{*}\varphi A_{\mu}A^{\mu}\), that has to be added to preserve the invariance of the whole action under the gauge transformations6 Footnote 6: Notice that the combination \((\partial_{\mu}-ieA_{\mu})\varphi\) appearing in the second line of eq. (3.107) transforms as the complex scalar field itself. It defines the gauge covariant derivative of \(\varphi\), its name reflecting its covariant transformation under gauge transformations, \(D_{\mu}\varphi\to e^{ie\epsilon(x)}D_{\mu}\varphi\). \[\varphi\to e^{ie\epsilon(x)}\varphi^{*},\hskip 28.452756pt\varphi^{*} \to e^{-ie\epsilon(x)}\varphi^{*},\hskip 28.452756ptA_{\mu}\to A_{\mu}+ \partial_{\mu}\epsilon(x). \tag{3.108}\] Setting the scaling dimension of the scalar field to one, \(\Delta_{\varphi}=1\), we easily check that the scale invariance of the action (3.107) is only broken by the mass term of the scalar field \[m\int d^{4}x\,\varphi^{*}\varphi\longrightarrow\lambda^{2}m\int d^{4}x\,\varphi^ {*}\varphi. \tag{3.109}\] This confirms our intuition that classical scale invariance is incompatible with the presence of dimensionful parameters in the action. 
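The counting behind (3.106) and (3.109) can be phrased as simple bookkeeping: assign to every factor in a Lagrangian term its scaling weight (\(d^{4}x\to+4\), each derivative \(\to-1\), \(\Delta_{\varphi}=\Delta_{A}=1\)) and add them up. A short Python sketch of this bookkeeping for the terms of (3.107):

```python
# Scaling weight picked up by each factor under x -> lambda x
weights = {
    "d4x": +4,      # the measure d^4x scales as lambda^4
    "del": -1,      # each derivative
    "phi": -1,      # Delta_phi = 1
    "phi*": -1,
    "A": -1,        # Delta_A = 1
    "e": 0, "m": 0, # parameters do not transform
}

terms = {
    "kinetic  d phi* d phi":     ["d4x", "del", "phi*", "del", "phi"],
    "mass     m^2 phi* phi":     ["d4x", "m", "m", "phi*", "phi"],
    "Maxwell  F F":              ["d4x", "del", "A", "del", "A"],
    "cubic    e j.A":            ["d4x", "e", "phi*", "del", "phi", "A"],
    "seagull  e^2 phi*phi A A":  ["d4x", "e", "e", "phi*", "phi", "A", "A"],
}

for name, factors in terms.items():
    power = sum(weights[f] for f in factors)
    print(f"{name:28s} -> lambda^{power}")
# Only the mass term scales nontrivially (lambda^2), in agreement with (3.109).
```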
It also shows that taking \(m=0\) the photon can be coupled to scalar charged matter preserving the classical scale invariance of the free Maxwell theory. Several essential field theories sharing this property besides the example just analyzed, most notably QCD once all quark masses are set to zero. The discussion above has emphasized the term _classical_ whenever referring to scale invariance. The reason is that this is a very fragile symmetry once quantum effects are included. For example, let us go back to the action (3.107) but now take \(m=0\). The classical scale invariance is broken by quantum effects in the sense that, once the quantum corrections induced by interactions are taken into account, physics depends on the energy scale at which experiments are carried out. One way in which this happens is by the electric charge of the elementary excitations of the field depending on the energy at which it is measured7. We will further elaborate on this phenomenon in section 10. Footnote 7: Incidentally, most scale invariant QFTs are also invariant under the full conformal group, i.e., the group of coordinate transformations preserving the light cone. ## 4 Some group theory and some more wave equations Scalars and vectors are relatively intuitive objects, which why we did not need to get into sophisticated mathematics to handle them. In nature, however, elementary scalar fields are rare (as of today, we know just one, the Higgs field) and vector fields only describe interactions, not matter. To describe fundamental physics we need fields whose excitations are particles with spin-\(\frac{1}{2}\), such as the electron, the muon, and the quarks. We have to plunge into group theory before we can formulate these objects rigurously. ### Special relativity and group theory Let us begin by giving a more technical picture of the Lorentz group. We have defined it as the set of linear transformations of the spacetime coordinates \(x^{\prime\mu}=\Lambda^{\mu}_{\ \nu}x^{\nu}\) satisfying (2.10) and therefore preserving the Minkowski metric. The first thing to be noticed is that this condition implies the inequality \[(\Lambda^{0}_{\ 0})^{2}-\sum_{i=1}^{3}(\Lambda^{i}_{\ 0})^{2}=1\qquad\implies \qquad|\Lambda^{0}_{\ 0}|\geq 1. \tag{4.1}\] The sign of \(\Lambda^{0}_{\ 0}\) indicates whether or not the transformed time coordinate "flows" in the same direction as the original one, this being why transformations with \(\Lambda^{0}_{\ 0}\geq 1\) are called _orthochronous_. At the same time, eq. (2.10) also implies \[(\det\Lambda)^{2}=1\qquad\implies\qquad\det\Lambda=\pm 1. 
\tag{4.2}\] Since it is not possible to change the signs of \(\Lambda^{0}_{\ 0}\) or \(\det\Lambda\) by continuously deforming a Lorentz transformations, the full Lorentz group is seen to be composed of four different connected components: \[\mathfrak{L}^{\uparrow}_{+}:\ \text{proper, orthochronous transformations with}\ \Lambda^{0}_{\ 0}\geq 1\ \text{and}\ \det\Lambda=1,\] \[\mathfrak{L}^{\downarrow}_{+}:\ \text{proper, non-orthochronous transformations with}\ \Lambda^{0}_{\ 0}\leq-1\ \text{and}\ \det\Lambda=1,\] \[\mathfrak{L}^{\uparrow}_{-}:\ \text{improper, orthochronous transformations with}\ \Lambda^{0}_{\ 0}\geq 1\ \text{and}\ \det\Lambda=-1, \tag{4.3}\] \[\mathfrak{L}^{\downarrow}_{-}:\ \text{improper, non-orthochronous transformations with}\ \Lambda^{0}_{\ 0}\leq-1\ \text{and}\ \det\Lambda=-1,\] The set of proper orthochronous transformations \(\mathfrak{L}^{\uparrow}_{+}\) contains the identity, while the remaining ones respectively include the time reversal operation (\(\text{T}:x^{0}\rightarrow-x^{0}\)), parity (\(\text{P}:x^{i}\rightarrow-x^{i}\)), and the composition of both. As indicated in fig. 8, these discrete transformations also map the identity's connected component to the other three \[\text{T}:\mathfrak{L}^{\uparrow}_{+}\longrightarrow\mathfrak{L}^{\downarrow }_{-},\hskip 28.452756pt\text{P}:\mathfrak{L}^{\uparrow}_{+} \longrightarrow\mathfrak{L}^{\uparrow}_{-},\hskip 28.452756pt\text{PT}: \mathfrak{L}^{\uparrow}_{+}\longrightarrow\mathfrak{L}^{\downarrow}_{+}. \tag{4.4}\] Thus, to study the irreps of the Lorentz group it is enough to restrict our attention to \(\mathfrak{L}^{\uparrow}_{+}\equiv\text{SO}(1,3)\). As discussed in page 10, the proper group Lorentz SO(1,3) is composed by two kinds of transformations: rotations with angle \(0\leq\phi<2\pi\) around an axis defined by the unit vector \(\mathbf{u}\) and boosts with rapidity \(\lambda\) along the direction set by the unit vector \(\mathbf{e}\). Since we are on the connected component of the identity, the transformations can be written by exponentiation of the Lie algebra generators \[R(\phi,\mathbf{u})=e^{-i\phi\mathbf{u}\cdot\mathbf{J}},\] Figure 8: The four connected components of the Lorentz group. The matrices indicate the transformations \(P\), \(T\), and \(PT\) mapping the connected component of the identity \(\mathfrak{L}^{\uparrow}_{+}\) to the other three. \[B(\lambda,{\bf e})=e^{-i\lambda{\bf e}\cdot{\bf M}}, \tag{4.5}\] where \({\bf J}=(J_{1},J_{2},J_{3})\) and \({\bf M}=(M_{1},M_{2},M_{3})\) are the generators of rotations and boost respectively. They satisfy the algebra8 Footnote 8: The six generators \((J_{i},M_{i})\) of the proper Lorentz group can be fit into a rank-2 antisymmetric tensor with components \(\,\mathscr{J}_{0i}=M_{i}\) and \(\mathscr{J}_{ij}=\epsilon_{ijk}J_{k}\), satisfying the algebra \([\mathscr{J}_{\mu\nu},\mathscr{J}_{\alpha\beta}]=i\eta_{\mu\alpha}\mathscr{J} _{\nu\beta}-i\eta_{\mu\beta}\mathscr{J}_{\nu\alpha}+i\eta_{\nu\beta}\mathscr{J }_{\mu\alpha}-i\eta_{\nu\alpha}\mathscr{J}_{\mu\beta}\). \[\begin{split}[J_{i},J_{j}]&=i\epsilon_{ijk}J_{k},\\ [J_{i},M_{j}]&=i\epsilon_{ijk}M_{k},\\ [M_{i},M_{j}]&=-i\epsilon_{ijk}J_{k}.\end{split} \tag{4.6}\] Although the calculation leading to them is relatively easy, the previous commutation relations can also be heuristically understood. The first commutator reproduces the usual algebra of infinitesimal rotations familiar from elementary quantum mechanics. 
The second one is the simple statement that the generators of the boost along the three spatial directions transform as vectors under three-dimensional rotations. The third identity is the less obvious. It amounts to saying that if we carry out two boosts along the directions set by unit vectors \({\bf e}_{1}\) and \({\bf e}_{2}\), the ambiguity in the order of the boost is equivalent to a three-dimensional rotation with respect to the axis defined by \({\bf e}_{1}\times{\bf e}_{2}\). We could now try to find irreducible representations (irreps) of the algebra (4.6). Life get simpler if we relate this algebra to the one of a group we are more familiar with. This can be done in this case by introducing the new set of generators \[J_{i}^{\pm}=\frac{1}{2}\big{(}J_{i}\pm iM_{i}\big{)}, \tag{4.7}\] in terms of which, the algebra (4.6) reads \[\begin{split}[J_{i}^{+},J_{j}^{+}]&=i\epsilon_{ ijk}J_{k}^{+},\\ [J_{i}^{-},J_{j}^{-}]&=i\epsilon_{ijk}J_{k}^{-},\\ [J_{i}^{+},J_{j}^{-}]&=0,\end{split} \tag{4.8}\] One thing we gain with this is that we have decoupled an algebra of six generators into two algebras of three generators each commuting with one another. But the real bonus here is that the individual algebras are those of SU(2), whose representation theory can be found in any quantum mechanics group. Thus, SO(1,3) \(=\) SU(2)\({}_{+}\times\) SU(2)\({}_{-}\) and its irreps are obtained by providing a pair of irreps of SU(2), labelled by their total spins \(({\bf s}_{+},{\bf s}_{-})\) with \({\bf s}_{\pm}={\bf 0},\frac{1}{2},{\bf 1},\frac{3}{2},\ldots\) Since \(J_{i}\) is a pseudovector, it does not change under parity transformations, whereas the boost generators \(M_{i}\) do reverse sign \[{\rm P}:J_{i}\longrightarrow J_{i},\hskip 28.452756pt{\rm P}:M_{i}\longrightarrow-M_ {i}. \tag{4.9}\] As a consequence, parity interchanges the two SU(2) factors \[{\rm P}:({\bf s}_{+},{\bf s}_{-})\longrightarrow({\bf s}_{-},{\bf s}_{+}). \tag{4.10}\] Finally, the generators of the group SO(3) \(\approx\) SU(2) of spatial rotations are given by \[J_{i}=J_{i}^{+}+J_{i}^{-}, \tag{4.11}\] so the irrep \(({\bf s}_{+},{\bf s}_{-})\) decomposes into those of SU(2) with \(j={\bf s}_{+}+{\bf s}_{-},{\bf s}_{+}+{\bf s}_{-}-1,\ldots,|{\bf s}_{+}-{\bf s}_ {-}|\). Let us illustrate this general analysis with some relevant examples. We begin with the trivial irrep \(({\bf s}_{+},{\bf s}_{-})=({\bf 0},{\bf 0})\), whose generators are \(J_{i}^{\pm}=0\). Fields transforming in this representation are scalar which under a Lorentz transformation \(x^{\prime\mu}=\Lambda^{\mu}_{\ \nu}x^{\nu}\) change according to \[\varphi^{\prime}(x^{\prime})=\varphi(x). \tag{4.12}\] Another parity invariant representation is \(({\bf s}_{+},{\bf s}_{-})=(\frac{1}{2},\frac{1}{2})\), with generators \(J_{i}^{+}=J_{i}^{-}=\frac{1}{2}\sigma^{i}.\) Decomposing this irrep with respect to those of spatial rotations, we see that they include a scalar (\(j=0\)) and a three-vector (\(j=1\)). These correspond respectively to the zero and spatial components of a spin-one vector field \(V^{\mu}(x)\) transforming as \[V^{\mu}(x^{\prime})=\Lambda^{\mu}_{\ \nu}V^{\nu}(x). \tag{4.13}\] Finally, we look at \(({\bf s}_{+},{\bf s}_{-})=({\bf 1},{\bf 1})\). This is decomposed in terms of three irreps of SU(2) \(\approx\) SO(3) with \(j=2,1,0\). 
Together, they build a rank-two symmetric-traceless tensor field \(h^{\mu\nu}(x)=h^{\nu\mu}(x)\), \(\eta_{\mu\nu}h^{\mu\nu}(x)=0\) transforming as \[h^{\prime\mu\nu}(x^{\prime})=\Lambda^{\mu}_{\ \alpha}\Lambda^{\nu}_{\ \beta}h^{\alpha\beta}(x), \tag{4.14}\] the three irreps of SU(2) corresponding respectively to \(h^{ij}-\frac{1}{3}\delta^{ij}h^{00}\), \(h^{0i}=h^{i0}\), and \(h^{00}\). This is a spin-two field like the one used to describe a graviton. We look next with parity-violating representations, starting with \(({\bf s}_{+},{\bf s}_{-})=(\frac{1}{2},{\bf 0})\). Its generators are \[J_{k}^{+}=\frac{1}{2}\sigma^{k},\ \ \ \ \ \ \ \ J_{k}^{-}=0, \tag{4.15}\] Hence, objects transforming in this representation have two complex components changing under rotations and boost according to \[\chi_{+}\longrightarrow e^{-\frac{i}{2}(\phi{\bf u}-i{\boldsymbol{\lambda}}) \cdot{\boldsymbol{\sigma}}}\chi_{+}. \tag{4.16}\] where \({\boldsymbol{\lambda}}=(\lambda_{1},\lambda_{2},\lambda_{3})\) is the boost's rapidity. In particular, we see that \(\chi_{+}\) transforms as a SO(3) spinor. A field transforming in this representation is a _positive helicity_ Weyl spinor. Very soon we will learn the reason for its name. ### Chiral (and also nonchiral) fermions After all these group-theoretical considerations, it is time to start thinking about physics. To construct an action principle for Weyl spinors, we need to build Lorentz invariant quantities from these fields. To begin with, we notice that the Hermitian conjugate spinor \(u_{+}^{\dagger}\) also transforms in the \((\frac{1}{2},{\bf 0})\) representation of the Lorentz group, since the representations of SU(2) are real. A general bilinear \(\chi_{+}^{\dagger}A\chi_{+}\), on the other hand, transforms under the group SO(3) \(\approx\) SU(2) of three-dimensional rotations in the product representation \(\frac{1}{2}\otimes\frac{1}{2}={\bf 1}\otimes{\bf 0}\). Computing the appropriate Clebsh-Gordan coefficients, we find \[\chi_{+}^{\dagger}\chi_{+} \implies j=0,\] \[\chi_{+}^{\dagger}\sigma^{i}\chi_{x} \implies j=1. \tag{4.17}\] They represent the time and spatial components of a four-vector \[\chi_{+}^{\dagger}\sigma_{+}^{\mu}\chi_{+}, \tag{4.18}\] where \(\sigma_{+}^{\mu}\equiv(1,\sigma^{i})\). With this, we construct an action for the Weyl field as \[S_{+}=\int d^{4}x\,i\chi_{+}^{\dagger}\sigma_{+}^{\mu}\partial_{ \mu}\chi_{+}. \tag{4.19}\] Notice that although \(\chi_{+}^{\dagger}\chi_{+}\) is invariant under rotations it does transform under boosts. Therefore it is not a Lorentz scalar and cannot be added to the action as a mass term. As for the \(({\bf s}_{+},{\bf s}_{-})=({\bf 0},\frac{1}{2})\) irrep of SO(1,3), a _negative helicity_ Weyl spinor, the analysis is similar to the one just presented and the corresponding expressions are obtained from the ones derived above by applying a parity transformation. In particular, we find its transformations under rotations and boosts to be \[\chi_{-}\longrightarrow e^{-\frac{i}{2}(\phi{\bf u}+i{\bf\lambda })\cdot\mathbf{\sigma}}\chi_{-}, \tag{4.20}\] showing that they also transform as SO(3) spinors. Their free dynamics is derived from the action \[S_{-}=\int d^{4}x\,i\chi_{-}^{\dagger}\sigma_{-}^{\mu}\partial_{ \mu}\chi_{-}, \tag{4.21}\] where \(\sigma_{-}^{\mu}\equiv(1,-\sigma^{i})\). Let us analyze in some more detail the physics of Weyl spinor fields. 
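Before turning to the equations of motion, it is instructive to check numerically that the bilinear (4.18) really transforms as a four-vector. A short numpy sketch: boost a random spinor with (4.16) (pure boost along \(z\)) and compare the transformed bilinear with the corresponding vector boost; the sign convention of the boost matrix below is the one fixed by the spinor transformation law assumed here:

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def bilinear(chi):
    """j^mu = chi^dagger sigma_+^mu chi with sigma_+^mu = (1, sigma^i), eq. (4.18)."""
    return np.real([chi.conj() @ s @ chi for s in (I2, sx, sy, sz)])

chi = rng.normal(size=2) + 1j * rng.normal(size=2)   # random positive-helicity spinor
lam = 0.7                                            # rapidity of a boost along z

# Spinor boost from (4.16) with phi = 0: chi -> exp(-lam*sigma_z/2) chi
S = np.cosh(lam / 2) * I2 - np.sinh(lam / 2) * sz
j, jp = bilinear(chi), bilinear(S @ chi)

# The corresponding boost acting on a four-vector
L = np.array([[ np.cosh(lam), 0, 0, -np.sinh(lam)],
              [            0, 1, 0,             0],
              [            0, 0, 1,             0],
              [-np.sinh(lam), 0, 0,  np.cosh(lam)]])

print(np.allclose(jp, L @ j))                        # True: (4.18) is a four-vector
print(np.isclose(j[0]**2 - np.sum(j[1:]**2), 0.0))   # True: and a null one
```

The same check also shows that this bilinear is a null vector, a property specific to a single two-component spinor.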
The equations of motion derived from the actions (4.19) and (4.21) are \[i\sigma_{\pm}^{\mu}\partial_{\mu}\chi_{\pm}=0\qquad\implies\qquad\big{(}\partial_{0}\pm\mathbf{\sigma}\cdot\mathbf{\nabla}\big{)}\chi_{\pm}=0. \tag{4.22}\] As in other cases, we search for positive energy (\(k^{0}>0\)) plane wave solutions of the form \[\chi_{\pm}(x)\sim u_{\pm}({\bf k})e^{-ik\cdot x}, \tag{4.23}\] where \(u_{\pm}(\mathbf{k})\) are \((\frac{1}{2},\mathbf{0})\) and \((\mathbf{0},\frac{1}{2})\) spinors normalized according to \[u_{\pm}(\mathbf{k})^{\dagger}\sigma_{\pm}^{\mu}u_{\pm}(\mathbf{k})=2k^{\mu}. \tag{4.24}\] Using this ansatz, the wave equations (4.22) then take the form \[\big{(}k_{0}\mp\mathbf{k}\cdot\mathbf{\sigma}\big{)}u_{\pm}(\mathbf{k})=0. \tag{4.25}\] Multiplying by \(k_{0}\pm\mathbf{k}\cdot\mathbf{\sigma}\) on the left and using \(k_{i}k_{j}\sigma^{i}\sigma^{j}=\mathbf{k}^{2}\mathbb{1}\), we obtain the dispersion relation of a massless particle, \(k_{0}=|\mathbf{k}|\). Equation (4.25) implies the condition \[\left(\mathbb{1}\mp\frac{\mathbf{k}}{|\mathbf{k}|}\cdot\mathbf{\sigma}\right)u_{\pm}(\mathbf{k})=0\qquad\implies\qquad\left(\frac{\mathbf{k}}{|\mathbf{k}|}\cdot\mathbf{s}\right)u_{\pm}(\mathbf{k})=\pm\frac{1}{2}u_{\pm}(\mathbf{k}), \tag{4.26}\] where \(\mathbf{s}\equiv\frac{1}{2}\mathbf{\sigma}\) is the spin operator. Helicity is defined as the projection of the particle's spin on its direction of motion, and the previous identity shows that \(u_{\pm}(\mathbf{k})\) are spinors with positive and negative helicity respectively. Since the generic Weyl spinors \(\chi_{\pm}\) can be written as a superposition of the plane wave solutions (4.23), this explains the terminology introduced above. To write a general positive (resp. negative) helicity Weyl spinor, we also need to consider negative energy plane waves \(v_{\pm}(\mathbf{k})e^{-ik\cdot x}\), where \(k^{0}<0\). Imposing that these solve eq. (4.22), we find that \(v_{\pm}(\mathbf{k})\) satisfies \[\big{(}k^{0}\pm\mathbf{k}\cdot\mathbf{\sigma}\big{)}v_{\pm}(\mathbf{k})=0, \tag{4.27}\] where we set the normalization \[v_{\pm}(\mathbf{k})^{\dagger}\sigma_{\pm}^{\mu}v_{\pm}(\mathbf{k})=2k^{\mu}. \tag{4.28}\] In addition, it can also be shown that the positive and negative energy solutions satisfy the orthogonality relations \[u(-\mathbf{k})^{\dagger}v(\mathbf{k})=v(-\mathbf{k})^{\dagger}u(\mathbf{k})=0. \tag{4.29}\] These identities will be important later in determining the spectrum of excitations of the free quantum Weyl spinor field. Classical Weyl spinors are complex fields and their actions (4.19) and (4.21) are invariant under global phase rotations \(\chi_{\pm}\longrightarrow e^{i\theta}\chi_{\pm}\). The associated Noether currents (see page 58) are the bilinear Lorentz vector constructed in eq. (4.18), and the corresponding expression for negative helicity, \[j_{\pm}^{\mu}=\chi_{\pm}^{\dagger}\sigma_{\pm}^{\mu}\chi_{\pm}. \tag{4.30}\] Plugging this current into eq. (3.51) we couple the Weyl spinors to the electromagnetic field \[S_{\pm}=\int d^{4}x\left(i\chi_{\pm}^{\dagger}\sigma_{\pm}^{\mu}\partial_{\mu}\chi_{\pm}+e\chi_{\pm}^{\dagger}\sigma_{\pm}^{\mu}\chi_{\pm}A_{\mu}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\right)\] \[=\int d^{4}x\,\left[i\chi^{\dagger}_{\pm}\sigma^{\mu}_{\pm}\big{(}\partial_{\mu}-ieA_{\mu}\big{)}\chi_{\pm}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\right], \tag{4.31}\] where in the second line we find again the gauge covariant derivative first introduced in eq. (3.107).
This action is invariant under gauge transformations, acting on the Weyl spinor by _local_ phase rotations \(\chi_{\pm}\longrightarrow e^{ie\epsilon(x)}\chi_{\pm}\). Moreover, given the absence of any dimensionful parameter in the action, we can expect the classical theory to be scale invariant. This is indeed the case, with the Weyl spinors having scaling dimension \(\Delta_{\chi}=\frac{3}{2}\). To quantize the Weyl field, we begin with the computation of the canonical Poisson algebra. The momentum canonically conjugate to the spinor is given by \[\pi_{\pm}\equiv\frac{\delta S_{\pm}}{\delta\partial_{0}\chi_{\pm}}=i\chi^{\dagger}_{\pm} \tag{4.32}\] leading to \[\big{\{}\chi_{\pm,a}(t,{\bf r}),\chi_{\pm,b}(t,{\bf r}^{\prime})^{\dagger}\big{\}}_{\rm PB}=-i\delta_{ab}\delta^{(3)}({\bf r}-{\bf r}^{\prime}), \tag{4.33}\] where \(a,b\) denote the spinor indices and all other Poisson brackets are equal to zero. The Hamiltonian then reads \[H_{\pm}=\mp i\int d^{3}x\,\chi^{\dagger}_{\pm}(\mathbf{\sigma}\cdot\mathbf{\nabla})\chi_{\pm}. \tag{4.34}\] So much for the classical theory. Quantum Weyl spinor fields are written as operator-valued superpositions of positive- and negative-energy plane wave solutions \[\widehat{\chi}_{\pm}(t,{\bf r})=\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{2|{\bf k}|}\Big{[}\widehat{b}({\bf k},\pm)u_{\pm}({\bf k})e^{-i|{\bf k}|t+i{\bf k}\cdot{\bf r}}+\widehat{d}({\bf k},\pm)^{\dagger}v_{\pm}({\bf k})^{*}e^{i|{\bf k}|t-i{\bf k}\cdot{\bf r}}\Big{]}. \tag{4.35}\] It is important to remember that the previous operator is not Hermitian. Similarly to what we learned from the analysis of the complex scalar field, this implies that the operators \(\widehat{b}({\bf k},\pm)\) and \(\widehat{d}({\bf k},\pm)\) are independent and unrelated to each other by Hermitian conjugation. However, we need to be careful when constructing the algebra of field operators. For example, the spin-statistics theorem states that particles with half-integer spin are fermions, and their quantum states should be antisymmetric under the interchange of two of them. To achieve this, the prescription (3.66) has to be modified and Poisson brackets are replaced by _anticommutators_ rather than commutators \[i\{\cdot,\cdot\}_{\rm PB}\longrightarrow\{\cdot,\cdot\}. \tag{4.36}\] Accordingly, we impose \[\big{\{}\widehat{\chi}_{\pm,a}(t,{\bf r}),\widehat{\chi}_{\pm,b}(t,{\bf r}^{\prime})^{\dagger}\big{\}}=\delta_{ab}\delta^{(3)}({\bf r}-{\bf r}^{\prime}), \tag{4.37}\] which, using the normalization \(u_{\pm}({\bf k})^{\dagger}u_{\pm}({\bf k})=2|{\bf k}|\) [cf. (4.24)], leads to the operator algebra \[\big{\{}\widehat{b}({\bf k},\pm),\widehat{b}({\bf k}^{\prime},\pm)^{\dagger}\big{\}} =(2\pi)^{3}2|{\bf k}|\delta^{(3)}({\bf k}-{\bf k}^{\prime}),\] \[\big{\{}\widehat{d}({\bf k},\pm),\widehat{d}({\bf k}^{\prime},\pm)^{\dagger}\big{\}} =(2\pi)^{3}2|{\bf k}|\delta^{(3)}({\bf k}-{\bf k}^{\prime}), \tag{4.38}\] with all remaining anticommutators equal to zero. As in the case of the complex scalar field analyzed in Box 6, here we also get two types of particles generated by the two kinds of creation operators acting on the vacuum \[|{\bf k}_{1},\ldots,{\bf k}_{n};{\bf p}_{1},\ldots,{\bf p}_{m}\rangle_{\pm}=\widehat{b}({\bf k}_{1},\pm)^{\dagger}\ldots\widehat{b}({\bf k}_{n},\pm)^{\dagger}\widehat{d}({\bf p}_{1},\pm)^{\dagger}\ldots\widehat{d}({\bf p}_{m},\pm)^{\dagger}|0\rangle.
\tag{4.39}\] As expected the state is antisymmetric under the interchange of two particles of the same type, due to the anticommutation of the creation operators. Similarly to the complex scalar field, the two types of particles are distinguished by the charge operator defined by the conserved current (4.30), \[\widehat{Q}=\int d^{3}{\bf r}\,\widehat{\chi}_{\pm}(t,{\bf r})^{\dagger}\widehat{\chi}_{\pm}(t,{\bf r})\qquad\implies\qquad\left\{\begin{array}{l}\widehat{Q}|{\bf k};0\rangle_{\pm}=|{\bf k};0\rangle_{\pm}\\ \widehat{Q}|0;{\bf k}\rangle_{\pm}=-|0;{\bf k}\rangle_{\pm}\end{array}\right., \tag{4.40}\] so the states \(|0;{\bf k}\rangle_{\pm}\) are naturally identified as the antiparticles of \(|{\bf k};0\rangle_{\pm}\). The calculation of the Hamiltonian operator follows the lines outlined in previous cases. Replacing classical fields by operators in the Hamiltonian (4.34), and using the properties of the positive and negative energy solutions \(u({\bf k})\) and \(v({\bf k})\), we find after some algebra \[\widehat{H}_{\pm}=\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{2|{\bf k}|}\Big{[}|{\bf k}|\widehat{b}({\bf k},\pm)^{\dagger}\widehat{b}({\bf k},\pm)+|{\bf k}|\widehat{d}({\bf k},\pm)^{\dagger}\widehat{d}({\bf k},\pm)\Big{]}-\int d^{3}k\,|{\bf k}|\delta^{(3)}({\bf 0}). \tag{4.41}\] We see from the first term on the right-hand side that the multiparticle states (4.39) diagonalize the Hamiltonian, with particles and antiparticles having zero mass, \(E_{\bf k}=|{\bf k}|\). In this Hamiltonian we find once more the UV and IR divergent zero-point contribution, that once regularized gives a vacuum energy density \[\rho_{\rm vac}=-\frac{1}{8\pi^{2}}\Lambda_{\rm UV}^{4}. \tag{4.42}\] Although it will eventually be subtracted, it is worthwhile to stop a moment and compare this with the expression (3.76). A first thing meeting the eye is the relative factor of two in the Weyl spinor case. This reflects that while a real scalar field has a single propagating degree of freedom, here we have two, associated with the complex field's real and imaginary parts. The second and physically very relevant feature is the different sign, boiling down to having anticommutators rather than commutators. It implies that bosons and fermions contribute to the vacuum energy with opposite signs. This is the reason why supersymmetric theories, which have as many bosonic as fermionic degrees of freedom and therefore zero vacuum energy, have been invoked to solve the problem of the cosmological constant mentioned in page 35, or at least to ameliorate it9.

Footnote 9: Since supersymmetry must be broken at low energies (after all, we do not “see” the same number of bosons as fermions), the cancellation between the bosonic and fermionic contributions to the vacuum energy cannot be complete.

### Box 7. Dirac spinors

Although the theory of a single Weyl spinor violates parity, it is possible to construct a parity-invariant theory by taking together two Weyl spinors with opposite chiralities. They can be combined into a single object, a Dirac spinor \[\psi\equiv\left(\begin{array}{c}\chi_{+}\\ \chi_{-}\end{array}\right), \tag{4.43}\] which obviously transforms in the parity-invariant reducible representation \((\mathbf{\frac{1}{2}},\mathbf{0})\oplus(\mathbf{0},\mathbf{\frac{1}{2}})\). The corresponding free action is obtained by adding the ones already written in eqs.
(4.19) and (4.21) for Weyl spinors of different chiralities, namely \[S=\int d^{4}x\left(i\chi_{+}^{\dagger}\sigma_{+}^{\mu}\partial_{\mu}\chi_{+}+i\chi_{-}^{\dagger}\sigma_{-}^{\mu}\partial_{\mu}\chi_{-}\right)=i\int d^{4}x\,\psi^{\dagger}\left(\begin{array}{cc}\sigma_{+}^{\mu}&0\\ 0&\sigma_{-}^{\mu}\end{array}\right)\partial_{\mu}\psi. \tag{4.44}\] An important point to be taken into account now is that \(\chi_{\pm}\) and \(\chi_{\pm}^{\star}\) do have opposite helicities. This is the reason why \(\chi_{\pm}^{\dagger}\sigma_{\pm}^{\mu}\chi_{\pm}\equiv\chi_{\pm,a}^{*}(\sigma_{\pm}^{\mu})_{ab}\chi_{\pm,b}\) defines a Lorentz vector, since \((\mathbf{\frac{1}{2}},\mathbf{0})\otimes(\mathbf{0},\mathbf{\frac{1}{2}})=(\mathbf{\frac{1}{2}},\mathbf{\frac{1}{2}})\) and \((\sigma_{\pm}^{\mu})_{ab}\) are the Clebsch-Gordan coefficients decomposing the product representation into its irreps. As a consequence, whereas \(\psi^{*}\) does not transform in the same representation as \(\psi\), the spinor \[\overline{\psi}^{T}\equiv\left(\begin{array}{c}\chi_{-}^{*}\\ \chi_{+}^{*}\end{array}\right)=\left(\begin{array}{cc}0&\mathbf{1}\\ \mathbf{1}&0\end{array}\right)\psi^{*}, \tag{4.45}\] does. This suggests recasting the action (4.44) as \[S=i\int d^{4}x\,\overline{\psi}\left(\begin{array}{cc}0&\mathbf{1}\\ \mathbf{1}&0\end{array}\right)\left(\begin{array}{cc}\sigma_{+}^{\mu}&0\\ 0&\sigma_{-}^{\mu}\end{array}\right)\partial_{\mu}\psi=i\int d^{4}x\,\overline{\psi}\left(\begin{array}{cc}0&\sigma_{-}^{\mu}\\ \sigma_{+}^{\mu}&0\end{array}\right)\partial_{\mu}\psi. \tag{4.46}\] It seems natural to introduce a new set of \(4\times 4\) matrices, the _Dirac matrices_, defined by \[\gamma^{\mu}\equiv\left(\begin{array}{cc}0&\sigma_{-}^{\mu}\\ \sigma_{+}^{\mu}&0\end{array}\right), \tag{4.47}\] and satisfying the Clifford algebra \[\{\gamma^{\mu},\gamma^{\nu}\}=2\eta^{\mu\nu}\mathbf{1}, \tag{4.48}\] as can be easily checked using the anticommutation relations of the Pauli matrices. The generators of the representation \((\mathbf{\frac{1}{2}},\mathbf{0})\oplus(\mathbf{0},\mathbf{\frac{1}{2}})\) are then given in terms of the Dirac matrices by (see the footnote in page 43) \[\mathscr{J}^{\mu\nu}=-\frac{i}{4}[\gamma^{\mu},\gamma^{\nu}]\equiv\sigma^{\mu\nu}. \tag{4.49}\] Denoting by \(\mathcal{U}(\Lambda)\) the matrix implementing the Lorentz transformation \(\Lambda^{\mu}_{\ \nu}\) on Dirac spinors and using the property \(\gamma^{\mu\dagger}=\gamma^{0}\gamma^{\mu}\gamma^{0}\), it is easy to show that \(\mathcal{U}(\Lambda)^{\dagger}=\gamma^{0}\mathcal{U}(\Lambda)^{-1}\gamma^{0}\). This implies that while \(\psi\to\mathcal{U}(\Lambda)\psi\), the conjugate spinor transforms contravariantly, \(\overline{\psi}\to\overline{\psi}\,\mathcal{U}(\Lambda)^{-1}\), and the Dirac matrices themselves satisfy \(\mathcal{U}(\Lambda)^{-1}\gamma^{\mu}\mathcal{U}(\Lambda)=\Lambda^{\mu}_{\ \nu}\gamma^{\nu}\). Let this serve as _a posteriori_ justification of the introduction of the conjugate field \(\overline{\psi}\). The previous discussion shows that \(\overline{\psi}\psi\) is a Lorentz scalar that can be added to the Dirac action (4.46), which we now write in a much more compact form \[S=\int d^{4}x\big{(}i\overline{\psi}\gamma^{\mu}\partial_{\mu}\psi-m\overline{\psi}\psi\big{)}.
\tag{4.50}\] The associated field equations admit positive energy plane wave solutions of the form \(\psi(x)\sim u(\mathbf{k},s)e^{-ik\cdot x}\), with \(s=\pm\frac{1}{2}\) labelling the two possible values of the spin third component \[\big{(}i\gamma^{\mu}\partial_{\mu}-m\big{)}\psi(x)=0\qquad\implies\qquad\big{(}/\!\!\!k-m\big{)}u(\mathbf{k},s)=0. \tag{4.51}\] Here we have introduced the Feynman slash notation \(/\!\!\!a\equiv\gamma^{\mu}a_{\mu}\) that we will use throughout these lectures. Acting on the equation to the right of (4.51) with \(/\!\!\!k+m\) and using the identity \(/\!\!\!k/\!\!\!k=k^{2}\mathbb{1}\), we find the massive dispersion relation \(k^{0}\equiv E_{\mathbf{k}}=\sqrt{\mathbf{k}^{2}+m^{2}}\). To get a better idea about the role played by the mass term in the Dirac equation, it is instructive to write the equation \(\big{(}/\!\!\!k-m\big{)}u(\mathbf{k},s)=0\) in terms of the two helicity components of the Dirac spinor \[\big{(}E_{\mathbf{k}}1-\mathbf{k}\cdot\boldsymbol{\sigma}\big{)}u_{+}(\mathbf{k},s)=mu_{-}(\mathbf{k},s),\] \[\big{(}E_{\mathbf{k}}1+\mathbf{k}\cdot\boldsymbol{\sigma}\big{)}u_{-}(\mathbf{k},s)=mu_{+}(\mathbf{k},s). \tag{4.52}\] These expressions show that the mass term mixes the two helicities. Introducing the chirality matrix \[\gamma_{5}\equiv-i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right), \tag{4.53}\] the previous identity is recast as \[\left(\begin{array}{cc}\frac{\mathbf{k}}{|\mathbf{k}|}\cdot\mathbf{s}&0\\ 0&\frac{\mathbf{k}}{|\mathbf{k}|}\cdot\mathbf{s}\end{array}\right)u(\mathbf{k},s)=\frac{1}{2}\left(\frac{E_{\mathbf{k}}}{|\mathbf{k}|}1-\frac{m}{|\mathbf{k}|}\gamma^{0}\right)\gamma_{5}u(\mathbf{k},s), \tag{4.54}\] with \(\mathbf{s}=\frac{1}{2}\boldsymbol{\sigma}\) the spin, so the matrix on the left-hand side of this expression is the helicity operator \(h\) acting on a four-component Dirac spinor. The chirality matrix satisfies \(\gamma_{5}^{2}=\mathbbm{1}\) and anticommutes with all Dirac matrices, \(\{\gamma_{5},\gamma^{\mu}\}=0\). As a consequence, its commutator with the Lorentz generators vanishes, \([\gamma_{5},\sigma^{\mu\nu}]=0\), and by Schur's lemma this means that the spinors \(P_{+}\psi\) and \(P_{-}\psi\) transform in different irreps of the Lorentz group, with \(P_{\pm}=\frac{1}{2}(\mathbbm{1}\pm\gamma_{5})\) the projectors onto the two chiralities. The spinor's chirality is therefore a Lorentz invariant. A look at eq. (4.54) shows that for a _massive_ Dirac spinor helicity (the projection of the spin onto the direction of motion) and chirality (the eigenvalue of the chirality matrix) are very different things. The former is not even a Lorentz invariant, since for a massive fermion with positive/negative helicity we can switch to a moving frame overtaking the particle and make the helicity negative/positive. Taking, however, the massless limit \(m\to 0\) we have \(E_{\mathbf{k}}\rightarrow|\mathbf{k}|\) and chirality and helicity turn out to be equivalent \[h=\frac{1}{2}\gamma_{5}\hskip 28.452756pt(m=0). \tag{4.55}\] This is why, when dealing with massless spin-\(\frac{1}{2}\) fermions, both terms can be used interchangeably, although in the case of massive particles one should be careful to use the one appropriate to the physical situation under analysis.
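As a small sanity check (added here, not part of the original text), the chiral-basis Dirac matrices (4.47) can be assembled from \(\sigma_{\pm}^{\mu}\) and the Clifford algebra (4.48) verified numerically, together with the two properties of \(\gamma_{5}\) used above, \(\gamma_{5}^{2}=\mathbbm{1}\) and \(\{\gamma_{5},\gamma^{\mu}\}=0\):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

sigma_p = [s0,  s1,  s2,  s3]    # sigma_+^mu = (1,  sigma^i)
sigma_m = [s0, -s1, -s2, -s3]    # sigma_-^mu = (1, -sigma^i)

zero = np.zeros((2, 2), dtype=complex)
gamma = [np.block([[zero, sigma_m[mu]], [sigma_p[mu], zero]]) for mu in range(4)]  # eq. (4.47)

eta = np.diag([1.0, -1.0, -1.0, -1.0])
id4 = np.eye(4, dtype=complex)

# Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} 1, eq. (4.48)
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * id4)

# chirality matrix as defined in eq. (4.53): squares to one and anticommutes with every gamma^mu
gamma5 = -1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
assert np.allclose(gamma5 @ gamma5, id4)
for mu in range(4):
    assert np.allclose(gamma5 @ gamma[mu] + gamma[mu] @ gamma5, 0 * id4)
print("Clifford algebra and gamma_5 properties verified")
```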
To quantize the theory, we write an expansion of the Dirac field operator into its positive and negative energy solutions \[\widehat{\psi}(t,\mathbf{r})=\sum_{s=\pm\frac{1}{2}}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{1}{2E_{\mathbf{k}}}\Big{[}\widehat{b}(\mathbf{k},s)u(\mathbf{k},s)e^{-iE_{\mathbf{k}}t+i\mathbf{k}\cdot\mathbf{r}}+\widehat{d}(\mathbf{k},s)^{\dagger}v(\mathbf{k},s)^{*}e^{iE_{\mathbf{k}}t-i\mathbf{k}\cdot\mathbf{r}}\Big{]}, \tag{4.56}\] where the negative energy solutions \(v(\mathbf{k},s)\) are defined by the equation \((\not{k}+m)v(\mathbf{k},s)=0\). The canonical anticommutation relations of the Dirac field with its Hermitian conjugate imply that \(\widehat{b}(\mathbf{k},s)\) and \(\widehat{b}(\mathbf{k},s)^{\dagger}\) are a system of fermionic creation-annihilation operators for particles, while \(\widehat{d}(\mathbf{k},s)\) and \(\widehat{d}(\mathbf{k},s)^{\dagger}\) respectively annihilate and create antiparticles out of the vacuum. The multiparticle states obtained by acting with creation operators on the Fock vacuum are eigenstates of the Dirac Hamiltonian, with the elementary excitations \(\widehat{b}(\mathbf{k},s)^{\dagger}|0\rangle\) and \(\widehat{d}(\mathbf{k},s)^{\dagger}|0\rangle\) representing spin \(\frac{1}{2}\) particles (resp. antiparticles) of momentum \(\mathbf{k}\), energy \(E_{\mathbf{k}}=\sqrt{\mathbf{k}^{2}+m^{2}}\), and spin third component \(s\). The details of this analysis are similar to the ones presented above for Weyl fermions and can be found in any of the QFT textbooks listed in the references. Finally, let us mention that Dirac spinors can be coupled to the electromagnetic field as we did in eq. (4.31) for the Weyl spinors. The Dirac action (4.50) is invariant under global phase rotations of the spinor, \(\psi\to e^{i\alpha}\psi\), leading to the existence of a conserved current due to the first Noether theorem (see page 58) \[j^{\mu}=\overline{\psi}\gamma^{\mu}\psi. \tag{4.57}\] We can use this conserved current to couple fermions to the electromagnetic field and write the QED action \[S =\int d^{4}x\,\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\overline{\psi}\big{(}i\partial\!\!\!/-m\big{)}\psi+eA_{\mu}\overline{\psi}\gamma^{\mu}\psi\right]\] \[=\int d^{4}x\,\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\overline{\psi}\big{(}iD\!\!\!\!/-m\big{)}\psi\right], \tag{4.58}\] where once again we encounter the covariant derivative \(D_{\mu}=\partial_{\mu}-ieA_{\mu}\) and the slash notation introduced in eq. (4.51) is used. This action describes the interaction of spinors with the electromagnetic field, which upon quantization is called quantum electrodynamics (QED). It is an interacting theory of charged particles (e.g., electrons) and photons that, unlike the free theories we have been dealing with so far, cannot be exactly solved. One particularly effective way to extract physical information is perturbation theory. This assumes that the coupling is sufficiently weak, so that physics can be reliably described in terms of the interaction among the excitations of the free theory. Before closing our discussion of the irreps of the Lorentz group, let us mention some more relevant examples. The representations \(({\bf s}_{+},{\bf s}_{-})=({\bf 1},{\bf 0})\) and \(({\bf s}_{+},{\bf s}_{-})=({\bf 0},{\bf 1})\) correspond to rank-2 antisymmetric tensor fields \(B_{\mu\nu}=B_{[\mu\nu]}\) respectively satisfying self-dual \((+)\) and anti-self-dual \((-)\) conditions \[B_{\mu\nu}=\pm\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}B^{\alpha\beta}.
\tag{4.59}\] An example of the \(({\bf 1},{\bf 0})\) and \(({\bf 0},{\bf 1})\) irreps are the complex combinations \({\bf E}\pm i{\bf B}\) that we encountered in our discussion of electric-magnetic duality in page 25. The two irreps can be added to form the parity-invariant reducible representation \(({\bf 1},{\bf 0})\oplus({\bf 0},{\bf 1})\), corresponding to a generic rank-2 antisymmetric tensor field such as the electromagnetic field strength10. \begin{table} \begin{tabular}{c|l|c} Representation & Field & Parity \\ \hline \(({\bf 0},{\bf 0})\) & Scalar & \(\checkmark\) \\ \((\frac{1}{2},{\bf 0})\) & Positive helicity Weyl spinor & \(\times\) \\ \(({\bf 0},\frac{1}{2})\) & Negative helicity Weyl spinor & \(\times\) \\ \((\frac{1}{2},\frac{1}{2})\) & Vector & \(\checkmark\) \\ \((\frac{1}{2},{\bf 0})\oplus({\bf 0},\frac{1}{2})\) & Dirac spinor & \(\checkmark\) \\ \(({\bf 1},{\bf 0})\) & Self-dual rank-2 antisymmetric tensor & \(\times\) \\ \(({\bf 0},{\bf 1})\) & Anti-self-dual rank-2 antisymmetric tensor & \(\times\) \\ \(({\bf 1},{\bf 0})\oplus({\bf 0},{\bf 1})\) & Antisymmetric rank-2 tensor & \(\checkmark\) \\ \(({\bf 1},{\bf 1})\) & Symmetric-traceless rank-2 tensor & \(\checkmark\) \\ \hline \end{tabular} \end{table} Table 1: Summary of some relevant representations of the Lorentz group and their parity properties. Finally, multiplying together two vector representations we have \[\left(\frac{\mathbf{1}}{\mathbf{2}},\frac{\mathbf{1}}{\mathbf{2}}\right)\otimes \left(\frac{\mathbf{1}}{\mathbf{2}},\frac{\mathbf{1}}{\mathbf{2}}\right)=( \mathbf{1},\mathbf{1})\oplus\left[(\mathbf{1},\mathbf{0})\oplus(\mathbf{0}, \mathbf{1})\right]\oplus(\mathbf{0},\mathbf{0}). \tag{4.60}\] This is just group theory lingo to express the decomposition of the product \(V_{\mu}W_{\nu}\) of two four-vectors into its symmetric-traceless, antisymmetric, and trace pieces \[V_{\mu}W_{\nu}=\left(V_{(\mu}W_{\nu)}-\frac{1}{4}\eta_{\mu\nu}V_{\alpha}W^{ \alpha}\right)+V_{[\mu}W_{\nu]}+\frac{1}{4}\eta_{\mu\nu}V_{\alpha}W^{\alpha}. \tag{4.61}\] This leads to identify the \((\mathbf{1},\mathbf{1})\) irrep as corresponding to a symmetric-traceless rank-2 tensor field. For the reader's benefit, we have summarized in table 1 the different representations of the Lorentz group discussed in this section, indicating as well whether or not they preserve parity. ### Some more group theory Having got some practice with the language of group theory, we close this section by enlarging our vocabulary with many important group-theoretic concepts that will become handy later on (see [68, 69] for some physics oriented textbooks on group theory, or Appendix B of [14] for a quick survey of basic facts). Next, we focus on the relevant groups for the SM, namely SU(3), SU(2), and U(1) associated with the strong and electroweak interactions. The Abelian group U(1) we have encountered when discussing electromagnetism and learned there that it has a single generator, let us call it \(Q\), so its elements are written as \(U(\vartheta)=e^{i\vartheta Q}\). This is the only irrep of this group, all others being reducible to a diagonal form. Concerning SU(2), its properties are well know from the theory of angular momentum in quantum mechanics and we have already used many of them in our analysis of the representations of the Lorentz group. Its three generators satisfy the algebra \[[T^{a}_{\mathbf{R}},T^{b}_{\mathbf{R}}]=i\epsilon^{abc}T^{c}_{\mathbf{R}}, \tag{4.62}\] where the subscript \(\mathbf{R}\) denotes the representation. 
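A minimal numerical check of the algebra (4.62) in its two smallest representations, the fundamental and the adjoint, is sketched below; the explicit adjoint matrices \((T^{a})_{bc}=-i\epsilon_{abc}\) are the standard ones and are an addition to the text, not taken from it.

```python
import numpy as np

def eps(a, b, c):
    """Levi-Civita symbol, indices 0..2."""
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

# fundamental (spin 1/2): T^a_2 = sigma_a / 2
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T2 = [0.5 * s for s in (s1, s2, s3)]

# adjoint (spin 1): (T^a_3)_{bc} = -i eps_{abc}
T3 = [np.array([[-1j * eps(a, b, c) for c in range(3)] for b in range(3)]) for a in range(3)]

for T in (T2, T3):
    for a in range(3):
        for b in range(3):
            rhs = 1j * sum(eps(a, b, c) * T[c] for c in range(3))
            assert np.allclose(T[a] @ T[b] - T[b] @ T[a], rhs)
print("[T^a, T^b] = i eps^{abc} T^c holds in both the 2 and the 3")
```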
Up to this point, we have labelled the irreps of SU(2) by their spin \(\mathbf{s}=\mathbf{0},\frac{\mathbf{1}}{\mathbf{2}},\mathbf{1},\ldots\) although they are also frequently referred to by their dimension \(2\mathbf{s}+1\), as is customary for all unitary groups SU(\(N\)). As an example, the fundamental representation \(\mathbf{s}=\frac{\mathbf{1}}{\mathbf{2}}\) is denoted by \(\mathbf{2}\) and the adjoint \(\mathbf{s}=\mathbf{1}\) by \(\mathbf{3}\). In the former case the generators are written in terms of the three Pauli matrices as \(T^{a}_{\mathbf{2}}=\frac{1}{2}\sigma_{a}\), a fact we used when studying Weyl spinors. As for the group SU(3), less familiar from elementary physics, it has eight generators satisfying the Lie algebra \[[T^{a}_{\mathbf{R}},T^{b}_{\mathbf{R}}]=if^{abc}T^{c}_{\mathbf{R}}\qquad\qquad(a,b,c=1,\ldots,8), \tag{4.63}\] where the structure constants are given by \[f^{123}=1,\hskip 14.226378ptf^{147}=-f^{156}=f^{246}=f^{257}=f^{345}=-f^{367}=\frac{1}{2},\hskip 14.226378ptf^{458}=f^{678}=\frac{\sqrt{3}}{2}, \tag{4.64}\] the remaining ones being either zero or fixed from the ones just given by antisymmetry. The group elements are written as exponentials of linear combinations of the algebra generators \[U(\alpha)_{\bf R}=e^{i\alpha_{a}T^{a}_{\bf R}}, \tag{4.65}\] where the condition \(\det U(\alpha)_{\bf R}=1\) implies \(\mathop{\rm tr}\nolimits T_{\bf R}^{a}=0\) and the generators can be chosen to satisfy the orthogonality relations \[\mathop{\rm tr}\nolimits\left(T_{\bf R}^{a}T_{\bf R}^{b}\right)=T_{2}({\bf R})\delta^{ab}. \tag{4.66}\] Although similar in many respects, there are important differences between SU(2) and SU(3) concerning the character of their irreps. For any Lie algebra representation with generators \(T_{\bf R}^{a}\) it is very easy to check that \(-T_{\bf R}^{a*}\) satisfy the same Lie algebra, defining the complex conjugate representation denoted by \(\overline{\bf R}\). A representation is said to be _real_ or _pseudoreal_ whenever it is related to its complex conjugate irrep by a similarity transformation \[T_{\overline{\bf R}}^{a}\equiv-T_{\bf R}^{a*}=S^{-1}T_{\bf R}^{a}S, \tag{4.67}\] with \(S\) either symmetric (real representation) or antisymmetric (pseudoreal representation). For SU(2) all irreps are real or pseudoreal. This is the reason why we only have one independent irrep of a given dimension, labelled by its spin. The group SU(3), on the other hand, has complex irreps.
This is the case of the fundamental and antifundamental representations, \({\bf 3}\) and \(\overline{\bf 3}\), whose generators are given by \[T_{\bf 3}^{a}=\frac{1}{2}\lambda_{a}\hskip 28.452756pt\mbox{and}\hskip 28.452756ptT_{\overline{\bf 3}}^{a}=-\frac{1}{2}\lambda_{a}^{T}, \tag{4.68}\] where \(\lambda_{a}\) are the eight Gell-Mann matrices, given by \[\lambda_{1}=\left(\begin{array}{ccc}0&1&0\\ 1&0&0\\ 0&0&0\end{array}\right),\hskip 14.226378pt\lambda_{2}=\left(\begin{array}{ccc}0&-i&0\\ i&0&0\\ 0&0&0\end{array}\right),\hskip 14.226378pt\lambda_{3}=\left(\begin{array}{ccc}1&0&0\\ 0&-1&0\\ 0&0&0\end{array}\right),\] \[\lambda_{4}=\left(\begin{array}{ccc}0&0&1\\ 0&0&0\\ 1&0&0\end{array}\right),\hskip 14.226378pt\lambda_{5}=\left(\begin{array}{ccc}0&0&-i\\ 0&0&0\\ i&0&0\end{array}\right),\hskip 14.226378pt\lambda_{6}=\left(\begin{array}{ccc}0&0&0\\ 0&0&1\\ 0&1&0\end{array}\right), \tag{4.69}\] \[\lambda_{7}=\left(\begin{array}{ccc}0&0&0\\ 0&0&-i\\ 0&i&0\end{array}\right),\hskip 14.226378pt\lambda_{8}=\left(\begin{array}{ccc}\frac{1}{\sqrt{3}}&0&0\\ 0&\frac{1}{\sqrt{3}}&0\\ 0&0&-\frac{2}{\sqrt{3}}\end{array}\right).\] Two instances of the group SU(3) exist in the SM. One is the color gauge symmetry of QCD, which we will study in some detail in later sections. The second is the global SU(3)\({}_{f}\) flavor symmetry of the eightfold way, originally formulated by Murray Gell-Mann [70] and Yuval Ne'eman [71]. With the hindsight provided by the quark model, this classification scheme is based on the assumption that the strong nuclear force does not distinguish among different quark flavors11. Let us consider the action for three quark flavors \(q_{i}\) (\(i=1,2,3\))

Footnote 11: Quarks were proposed as hadron constituents in [72, 73], some three years after the formulation of the eightfold way. The name, as with quarks, was invented by Gell-Mann, drawing this time not from James Joyce but from the Noble Eightfold Path of Buddhism: Right View, Right Intention, Right Speech, Right Conduct, Right Livelihood, Right Effort, Right Mindfulness, and Right Meditation.

\[S =\sum_{i=u,d,s}\int d^{4}x\,\overline{q}_{i}(i\partial\!\!\!/-m_{i})q_{i}+S_{\rm int}\] \[=\int d^{4}x\,\overline{\mathbf{q}}\big{(}i\partial\!\!\!/\,\mathbb{1}-\mathbf{m}\big{)}\mathbf{q}+S_{\rm int}, \tag{4.70}\] where \(S_{\rm int}\) represents interaction terms that we will not care about for the time being and in the second line we have grouped the quarks into a triplet \(\mathbf{q}\) and rewritten the action in matrix notation, with \(\mathbf{m}={\rm diag}(m_{u},m_{d},m_{s})\). Under SU(3)\({}_{f}\) the quark triplet transforms in the fundamental irrep **3** as \(\mathbf{q}\to U\mathbf{q}\). This results in the following transformation of the free action \[\int d^{4}x\,\overline{\mathbf{q}}\big{(}i\partial\!\!\!/\,\mathbb{1}-\mathbf{m}\big{)}\mathbf{q}\longrightarrow\int d^{4}x\,\overline{\mathbf{q}}\big{(}i\partial\!\!\!/\,\mathbb{1}-U^{\dagger}\mathbf{m}U\big{)}\mathbf{q}. \tag{4.71}\] Since all three quark masses are different, \(\mathbf{m}\) is not proportional to the identity, \(U^{\dagger}\mathbf{m}U\neq\mathbf{m}\), and the mass term breaks the global SU(3)\({}_{f}\) invariance. Moreover, the strong interaction does not distinguish quark flavors and \(S_{\rm int}\) remains invariant. Thus, we conclude that SU(3)\({}_{f}\) is an approximate symmetry of QCD that becomes exact in the limit of equal (in particular, zero) quark masses (also called, for obvious reasons, the chiral limit).
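The SU(3) conventions above lend themselves to a quick numerical check (a sketch added here, not part of the original text): the Gell-Mann matrices (4.69) reproduce the normalization (4.66) with \(T_{2}(\mathbf{3})=\frac{1}{2}\) and the structure constants (4.64), and a generic SU(3)\({}_{f}\) rotation indeed fails to leave a non-degenerate mass matrix invariant, as in (4.71). The numerical quark masses used below are rough illustrative values, not taken from the text.

```python
import numpy as np

# Gell-Mann matrices, eq. (4.69)
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2                                      # fundamental generators T^a_3 = lambda_a / 2

# normalization tr(T^a T^b) = T_2(3) delta^{ab} with T_2(3) = 1/2, eq. (4.66)
for a in range(8):
    for b in range(8):
        assert np.isclose(np.trace(T[a] @ T[b]), 0.5 * (a == b))

# structure constants f^{abc} = -2i tr([T^a, T^b] T^c); spot-check against eq. (4.64)
f = lambda a, b, c: (-2j * np.trace((T[a] @ T[b] - T[b] @ T[a]) @ T[c])).real
assert np.isclose(f(0, 1, 2), 1.0)               # f^{123} = 1
assert np.isclose(f(0, 3, 6), 0.5)               # f^{147} = 1/2
assert np.isclose(f(0, 4, 5), -0.5)              # f^{156} = -1/2
assert np.isclose(f(3, 4, 7), np.sqrt(3) / 2)    # f^{458} = sqrt(3)/2

# a generic SU(3)_f rotation does not leave a non-degenerate mass matrix invariant, cf. eq. (4.71)
rng = np.random.default_rng(0)
H = sum(rng.normal() * T[a] for a in range(8))   # random Hermitian, traceless combination of generators
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T     # U = exp(iH), an SU(3) element
m = np.diag([2.2, 4.7, 95.0])                    # rough u, d, s masses in MeV, for illustration only
print("||U^+ m U - m|| =", np.linalg.norm(U.conj().T @ m @ U - m))           # nonzero: SU(3)_f broken
m_deg = 10.0 * np.eye(3)
print("degenerate masses:", np.linalg.norm(U.conj().T @ m_deg @ U - m_deg))  # ~ 0: symmetry restored
```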
Mesons are bound states of a quark and an antiquark, the latter transforming in the antifundamental \(\overline{\mathbf{3}}\) irrep. Their classification into SU(3)\({}_{f}\) multiplets follows from decomposing into irreps the product of the fundamental and the antifundamental \[\mathbf{3}\otimes\overline{\mathbf{3}}=\mathbf{8}\oplus\mathbf{1}. \tag{4.72}\] The octet contains the \(\pi^{0}\), \(\pi^{\pm}\), \(K^{0}\), \(\overline{K}^{0}\), \(K^{\pm}\), and \(\eta_{8}\) mesons, while the singlet is the \(\eta_{1}\) meson. In fact, the \(\eta_{1}\) and \(\eta_{8}\) mesons mix together into the \(\eta\) and the \(\eta^{\prime}\) mesons, which are the physical mass eigenstates observed in experiments. A similar classification scheme works for the baryons. Being composed of three quarks, the baryon multiplets emerge from decomposing the product of three fundamental representations \[\mathbf{3}\otimes\mathbf{3}\otimes\mathbf{3}=\mathbf{10}\oplus\mathbf{8}\oplus\mathbf{8}\oplus\mathbf{1}. \tag{4.73}\] The proton and the neutron are in one of the octets, together with the \(\Lambda\), \(\Sigma^{0}\), \(\Sigma^{\pm}\), \(\Xi^{0}\), and \(\Xi^{-}\) particles of nonzero strangeness. Were SU(3)\({}_{f}\) an exact symmetry, the masses of all hadrons within a single multiplet would be equal. However, the differences in the quark masses induce a mass split which, in the case of the octet containing the proton and the neutron, is about \(30\%\) of the average mass. By contrast, the mass split between the proton and the neutron is only \(0.1\%\) of their average mass. The wider mass gap with the other octet members results from the larger mass of the strange quark, \(m_{s}>m_{u}\sim m_{d}\).

## 5 A tale of many symmetries

Symmetry is probably the most important heuristic principle at our disposal in fundamental physics. The formulation of particle physics models starts with selecting the symmetries/invariances to be implemented in the theory, which usually restricts drastically the types of interactions allowed. In the SM, for example, gauge invariance plus the condition that the action only contains operators of dimension four or less fixes the action, up to a relatively small number of numerical parameters to be measured experimentally in high-energy facilities.

### The symmetries of physics

Our approach to symmetry up to here has been rather casual. It is time to be more precise, beginning with a discussion of the types of symmetries we encounter in QFT and how they are implemented. 1. **Kinematic (or spacetime) symmetries**. They act on the spacetime coordinates and field indices. This class of symmetries includes Lorentz, Poincare, scale, and conformal transformations that we already encountered in previous sections. 2. **Discrete symmetries.** They include parity P, charge conjugation C, time reversal T, and the compositions CP and CPT. If gravity and electromagnetism were the only interactions in nature, the universe would be invariant under C, P, and T separately. However, nuclear (both weak and strong) interactions break P, C, T and CP to different degrees. CPT, however, turns out to be a symmetry of QFT forced upon us by the basic requirements of Poincare invariance and locality. Moreover, it is a completely general result that can be demonstrated without relying on the specific form of any Hamiltonian (for a detailed proof of this result, called the CPT theorem, see chapter 11 of [14]). 3. **Global continuous symmetries**. These are transformations depending on a continuous constant parameter.
One example is the invariance of the complex scalar field action (3.86) under the spacetime-constant phase rotations (3.98). The current view in QFT is that global symmetries are accidental properties of the low energy theories, whereas, in the UV, all fundamental symmetries should be local (see next). 4. **Local (gauge) invariance**. Unlike the previous case, the theory is invariant under a set of continuous transformations that vary from point to point in spacetime. The archetypical example is the gauge invariance of Maxwell's equations found in (3.4). Unlike standard quantum mechanical symmetries, gauge invariance does not map one physical state into another, but represents a redundancy in the labelling of the physical states. This is the price we pay to describe fields with spin one and two in a way that manifestly preserves locality and Lorentz invariance. To highlight this fundamental feature, we will refrain from talking about gauge symmetry and stick to gauge invariance (we will qualify this statement below). 5. **Spontaneously/softly broken symmetries.** In all instances discussed above, we have assumed that the symmetries/invariances are realized at the action level and in the spectrum of the quantum theory. Classically, it is possible that the symmetries of the action are not reflected in their solutions, which implies that in the quantum theory the spectrum does not remain invariant under the symmetry. When this happens, we say that the symmetry (or invariance) is _spontaneously broken_. Since the breaking takes place by the choice of vacuum, it does not affect the UV behavior of the theory. Another situation where this also happens is when we add terms to the action that explicitly break the symmetry but do not modify the UV behavior of the theory (e.g., mass terms). In this case, the symmetry is _softly broken_. 6. **Anomalous symmetries.** Usually, symmetries are identified in the classical action and then implemented in the quantum theory. This tacitly assumes that all classical symmetries remain after quantization, and this is not always the case. Sometimes, the classical symmetry is impossible to implement quantum mechanically, and it is said to be _anomalous_. Anomalies originate in very profound mathematical properties of QFT and they have important physical consequences. Let us see now how symmetries are implemented in QFT. We know from quantum mechanics that symmetries are maps among rays in the theory's Hilbert space that preserve probability amplitudes. More precisely, for two arbitrary states \(|\alpha\rangle\) and \(|\beta\rangle\), a symmetry is implemented by some operator \(U\) acting as \[|\alpha\rangle\longrightarrow|U\alpha\rangle,\hskip 28.452756pt|\beta\rangle\longrightarrow|U\beta\rangle, \tag{5.1}\] and satisfying the condition that probability amplitudes are preserved \[|\langle\alpha|\beta\rangle|=|\langle U\alpha|U\beta\rangle|. \tag{5.2}\] There are two ways in which this last condition can be achieved. One is that \[\langle\alpha|\beta\rangle=\langle U\alpha|U\beta\rangle, \tag{5.3}\] implying that the operator \(U\) is _unitary_. But there also exists a second alternative to fulfil eq. (5.2) \[\langle U\alpha|U\beta\rangle=\langle\alpha|\beta\rangle^{*}. \tag{5.4}\] In this case the operator \(U\) is said to be _antiunitary_.
Notice that consistency requires that in this case the operator \(U\) implementing the symmetry should be antilinear: \[U\big{(}a|\alpha\rangle+b|\beta\rangle\big{)}=a^{*}|U\alpha \rangle+b^{*}|U\beta\rangle, \tag{5.5}\] for any two states \(|\alpha\rangle\) and \(|\beta\rangle\) and \(a,b\in\mathbb{C}\). Our discussion has led us to Wigner's theorem [74]: symmetries are implemented quantum-mechanically either by unitary or antiunitary operators. In fact, continuous symmetries are always implemented by the first kind. This can be understood by thinking that a family of operators \(U(\lambda)\), depending on a continuous parameter, can always be smoothly deformed to the identity, a linear and not an antilinear operator. On the other hand, there are two critical discrete symmetries implemented by antiunitary operators: time reversal T and CPT. ### Noether's two theorems In the case of continuous symmetries, we have the celebrated theorem due to Noether linking them to the existence of conserved quantities [45]. What is often called "the" Noether theorem is actually the first of two theorems, dealing with the consequences of _global_ and _local_ symmetries respectively. Let us begin with the first one considering a classical field theory of \(n\) fields whose field equations remain invariant under infinitesimal variations \(\phi_{i}\to\phi_{i}+\delta_{\epsilon}\phi_{i}\) linearly depending on \(N\) continuous parameters \(\epsilon_{A}\). There are two essential things about the transformations we are talking about. First, they form a group, as can be seen by noticing that the composition of two symmetries is itself a symmetry and, that for each transformation, there exists its inverse obtained by reversing the signs of \(\epsilon_{A}\). The second fact is that the infinitesimal transformations can be exponentiated to cover all transformations that can be continuously connected to the identity. The latter statement is rather subtle in the case of diffeomorphisms (i.e., coordinate transformations), but we will not worry about them here. Since the transformations leave invariant the field equations, the theory's Lagrangian density must change at most by a total derivative, namely \[S=\int d^{4}x\,\mathcal{L}(\phi_{i},\partial_{\mu}\phi_{i})\qquad\implies \qquad\delta_{\epsilon}S=\int d^{4}x\,\partial_{\mu}K^{\mu}, \tag{5.6}\] where \(K^{\mu}\) is linear in the \(\epsilon_{A}\)'s. At the same time, a general variation of the action can be written as \[\delta_{\epsilon}S=\int d^{4}x\,\left\{\left[\frac{\partial\mathcal{L}}{ \partial\phi_{i}}-\partial_{\mu}\left(\frac{\partial\mathcal{L}}{\partial \,\partial_{\mu}\phi_{i}}\right)\right]\delta_{\epsilon}\phi_{i}+\partial_{ \mu}\left(\frac{\partial\mathcal{L}}{\partial\,\partial_{\mu}\phi_{i}}\delta_ {\epsilon}\phi_{i}\right)\right\}, \tag{5.7}\] so equating expressions (5.6) and (5.7), we find \[\int d^{4}x\,\left\{\left[\frac{\partial\mathcal{L}}{\partial\phi_{i}}- \partial_{\mu}\left(\frac{\partial\mathcal{L}}{\partial\,\partial_{\mu}\phi_{ i}}\right)\right]\delta_{\epsilon}\phi_{i}+\partial_{\mu}\left(\frac{\partial \mathcal{L}}{\partial\,\partial_{\mu}\phi_{i}}\delta_{\epsilon}\phi_{i}-K^{\mu }\right)\right\}=0, \tag{5.8}\] which is valid for arbitrary \(\epsilon\). 
From this equation we identify the conserved current \[j^{\mu}(\epsilon)=\frac{\partial\mathcal{L}}{\partial\,\partial_{\mu}\phi_{i}}\delta_{\epsilon}\phi_{i}-K^{\mu}\qquad\implies\qquad\partial_{\mu}j^{\mu}(\epsilon)=\left[\partial_{\mu}\left(\frac{\partial\mathcal{L}}{\partial\,\partial_{\mu}\phi_{i}}\right)-\frac{\partial\mathcal{L}}{\partial\phi_{i}}\right]\delta_{\epsilon}\phi_{i}\approx 0, \tag{5.9}\] where again we used the Dirac notation first introduced in page 32. Notice that, since the expression of the current is linear in the parameters \(\epsilon_{A}\), it can be written as \(j^{\mu}(\epsilon)=\epsilon_{A}j^{\mu}_{A}\); since (5.9) is satisfied for arbitrary values of \(\epsilon_{A}\), we conclude that there are a total of \(N\) conserved currents, \(\partial_{\mu}j^{\mu}_{A}\approx 0\). An important point, apparent in the previous analysis, is that current conservation happens _on-shell_, i.e., once the equations of motion are implemented12. Footnote 12: A note of warning: the term on-shell is employed in physics with at least two different meanings. In the one used here we say that an identity is valid on-shell whenever it holds after the equations of motion are implemented. The second use applies to the four-momentum of a particle with mass \(m\). The momentum \(p^{\mu}\) (or the particle carrying it) is said to be on-shell if it satisfies \(p^{2}=m^{2}\). As an example, particles running in loops in Feynman diagrams are off-shell in this sense. The second Noether theorem deals with local symmetries depending on a number of point-dependent parameters \(\epsilon_{A}(x)\). It is important to keep in mind that the first theorem remains valid in this case, in the sense that there exists a current \(j_{\mu}\) whose divergence is proportional to the equations of motion. To simplify expressions, let us denote the latter as \[E_{i}(\phi)\equiv\partial_{\mu}\left(\frac{\partial\mathcal{L}}{\partial\,\partial_{\mu}\phi_{i}}\right)-\frac{\partial\mathcal{L}}{\partial\phi_{i}}, \tag{5.10}\] and consider that our theory is invariant under field transformations involving only \(\epsilon_{A}(x)\) and their first derivatives \[\delta_{\epsilon}\phi_{i}=R_{i,A}(\phi_{k})\epsilon_{A}+R^{\mu}_{i,A}(\phi_{k})\partial_{\mu}\epsilon_{A}. \tag{5.11}\] This includes, for example, the gauge transformations of electromagnetism, \(\delta_{\epsilon}A_{\mu}=\partial_{\mu}\epsilon\) (the argument here can be easily generalized to include transformations depending on up to the \(k\)-th derivative of the gauge functions). The general variation of the action \(\delta_{\epsilon}S\) has the same structure as shown in eq. (5.8) \[\int d^{4}x\,\Big{[}-E_{i}(\phi)\delta_{\epsilon}\phi_{i}+\partial_{\mu}j^{\mu}(\epsilon)\Big{]}=0, \tag{5.12}\] with \(\delta_{\epsilon}\phi_{i}\) given in (5.11) and \(j^{\mu}\) the Noether current implied by the first theorem and defined in eq. (5.9). A crucial difference now is that, since \(j^{\mu}(\epsilon)\) is linear in \(\epsilon_{A}\), when these parameters vanish at infinity the boundary term appearing when integrating by parts is zero \[\delta_{\epsilon}S=-\int d^{4}x\,\epsilon_{A}(x)\Big{\{}R_{i,A}(\phi_{k})E_{i}(\phi_{k})-\partial_{\mu}\Big{[}R^{\mu}_{i,A}(\phi_{k})E_{i}(\phi_{k})\Big{]}\Big{\}}. \tag{5.13}\] Thus, if this is a symmetry,
\(\delta_{\epsilon}S=0\) for any \(\epsilon_{A}(x)\), and we obtain the identities \[R_{i,A}(\phi_{k})E_{i}(\phi_{k})-\partial_{\mu}\Big{[}R^{\mu}_{i,A}(\phi_{k})E_{i}(\phi_{k})\Big{]}=0, \tag{5.14}\] where we should remember that \(A=1,\ldots,N\), with \(N\) the number of gauge functions (i.e., the dimension of the symmetry's Lie algebra). This result is Noether's second theorem: invariance of a field theory under local transformations implies the existence of a number of differential identities among the field equations, meaning that some of them are redundant. As to the existence of conserved currents associated with local invariance, using eq. (5.14) it can be shown that \[\partial_{\mu}\Big{[}\epsilon_{A}(x)R^{\mu}_{i,A}(\phi_{k})E_{i}(\phi_{k})\Big{]}=E_{i}(\phi_{k})\delta_{\epsilon}\phi_{i}, \tag{5.15}\] from which we read the conserved current \[S^{\mu}(\epsilon)\equiv\epsilon_{A}(x)R^{\mu}_{i,A}(\phi_{k})E_{i}(\phi_{k})\qquad\implies\qquad\partial_{\mu}S^{\mu}(\epsilon)=E_{i}(\phi_{k})\delta_{\epsilon}\phi_{i}\approx 0. \tag{5.16}\] This quantity is however trivial, in the sense that it vanishes on-shell, \(S^{\mu}(\epsilon)\approx 0\). Notice, however, that the conserved current obtained as the result of the first Noether theorem also applies to the gauge case. Indeed, considering transformations such that \(\epsilon_{A}(x)\) does not vanish at infinity, we find from (5.12) \[\partial_{\mu}j^{\mu}(\epsilon)=E_{i}(\phi_{k})\delta_{\epsilon}\phi_{i}\approx 0, \tag{5.17}\] where \(j^{\mu}\) is explicitly given by the expression on the left of eq. (5.9). This shows that for theories with local invariances the only nontrivial conserved currents are the ones provided by Noether's first theorem, associated with transformations that do not vanish at infinity (see also the discussion in Box 9 below). Together with the conserved current from the first Noether theorem, there exists a conserved charge defined by its time component \[Q(\epsilon)=\int_{\Sigma}d^{3}r\,j^{0}(\epsilon), \tag{5.18}\] where \(\Sigma\) is a three-dimensional spatial section of spacetime. Using current conservation it is easy to see that the time derivative of the charge vanishes on-shell \[\dot{Q}(\epsilon)\approx-\int_{\Sigma}d^{3}r\,\mathbf{\nabla}\cdot\mathbf{j}(\epsilon)=\int_{\partial\Sigma}d\mathbf{S}\cdot\mathbf{j}(\epsilon)=0, \tag{5.19}\] provided the spatial components of the current \(\mathbf{j}(\epsilon)\) vanish at \(\partial\Sigma\) or, equivalently, there is no flux of charge entering or leaving the spatial sections at infinity. Applying the first Noether theorem to different symmetries, we get a number of conserved quantities: * The energy-momentum tensor \(T^{\mu}{}_{\nu}\) is the conserved current associated with the invariance of field theories under spacetime translations, \(x^{\mu}\to x^{\mu}+a^{\mu}\). Its general expression is \[T^{\mu}{}_{\nu}=\frac{\partial\mathcal{L}}{\partial\,\partial_{\mu}\phi_{i}}\partial_{\nu}\phi_{i}-\delta^{\mu}_{\nu}\mathcal{L},\] (5.20) with \(\partial_{\mu}T^{\mu}{}_{\nu}=0\). Notice that this canonical tensor is not necessarily symmetric, as happens, for example, in Maxwell's electrodynamics \[T^{\mu}{}_{\nu}=-F^{\mu\alpha}\partial_{\nu}A_{\alpha}+\frac{1}{4}\delta^{\mu}_{\nu}F_{\alpha\beta}F^{\alpha\beta}.\] (5.21) It can nevertheless be symmetrized by adding a term of the form \(\partial_{\sigma}K^{\sigma\mu}{}_{\nu}\), with \(K^{\sigma\mu}{}_{\nu}=-K^{\mu\sigma}{}_{\nu}\), that does not spoil its conservation [75, 76].
In the case of the electromagnetism, the resulting Belinfante-Rosenfeld energy-momentum tensor reads \[K^{\mu\nu}{}_{\sigma}=F^{\mu\nu}A_{\sigma}\qquad\implies\qquad\widetilde{T}^{ \mu}{}_{\nu}=-F^{\mu\alpha}F_{\nu\alpha}+\frac{1}{4}\delta^{\mu}_{\nu}F_{ \alpha\beta}F^{\alpha\beta}.\] (5.22) This modified energy-momentum tensor not only is symmetric but, unlike (5.21), also gauge invariant. Notice that since conserved currents are quantities evaluated on-shell, we can apply the vacuum field equations \(\partial_{\mu}F^{\mu\nu}=0\). * Invariance under infinitesimal Lorentz transformations \(\delta x^{\mu}=\omega^{\mu}{}_{\nu}x^{\nu}\), with \(\omega_{\mu\nu}=-\omega_{\nu\mu}\), implies the conservation of the total angular momentum \[J^{\mu}{}_{\nu\sigma}=T^{\mu}{}_{\nu}x_{\sigma}-T^{\mu}{}_{\sigma}x_{\nu}+S^{ \mu}{}_{\nu\sigma},\] (5.23) where \(J^{\mu}{}_{\nu\sigma}=-J^{\mu}{}_{\sigma\nu}\) and \(\partial_{\mu}J^{\mu}{}_{\nu\sigma}=0\). The first two terms on the right-hand side represent the "orbital" contribution induced by the Lorentz variation of the spacetime coordinates, while \(S^{\mu}{}_{\nu\sigma}\) is the "intrinsic" angular momentum (or spin) coming from the spacetime transformation properties of the field itself. For a scalar field this last part vanishes13. Footnote 13: To connect with the notation employed in our discussion of the first Noether theorem, let us indicate that the conserved current (5.9) associated to the invariance under spacetime translations is written by \(j^{\mu}(a^{\sigma})=T^{\mu}{}_{\nu}a^{\nu}\), whereas \(j^{\mu}(\omega^{\alpha\beta})=J^{\mu}{}_{\nu\sigma}\omega^{\nu\sigma}\) is the current whose conservation follows from Lorentz invariance. * As a further application, let us mention the invariance of complex fields under phase rotation, already anticipated in various examples in previous pages. For instance, in the case of the complex scalar field studied in Box 6, applying (5.9) to infinitesimal variations \(\delta_{\vartheta}\phi=i\vartheta\phi\), \(\delta_{\vartheta}\phi^{*}=-i\vartheta\phi^{*}\) leads to the conserved current (3.99). The corresponding analysis for Weyl spinors gives (4.30). ### Quantum symmetries: to break or not to break (spontaneously) In the quantum theory symmetries are realized on the Hilbert space of physical states. In particular, the charge (5.18) is promoted to a Hermitian operator \(\widehat{Q}(\epsilon)\) implementing infinitesimal transformations on the fields \[\delta_{\epsilon}\widehat{\phi}_{k}=-i[\widehat{Q}(\epsilon), \widehat{\phi}_{k}], \tag{5.24}\] whereas, due to the conservation equation (5.19), it commutes with the Hamiltonian, \([\widehat{Q}(\epsilon),\widehat{H}]=0\). In the case of rigid transformations, the parameters \(\epsilon_{A}\) can be taken outside the integral in (5.18) to write \(\widehat{Q}(\epsilon)=\epsilon_{A}\widehat{Q}^{A}\). Finite transformations in the connected component of the identity are obtained then by exponentiating the charge operator \[\widehat{\mathscr{U}}(\epsilon)=e^{i\epsilon_{A}\widehat{Q}^{A}} \qquad\implies\qquad\widehat{\mathscr{U}}(\epsilon)^{\dagger}\widehat{\phi}_{ k}(x)\widehat{\mathscr{U}}(\epsilon)=\mathscr{U}_{k\ell}(\epsilon)\widehat{ \phi}_{\ell}(x), \tag{5.25}\] where \(\mathscr{U}_{k\ell}(\epsilon)\) is the representation of the symmetry group acting on the field indices and the Hermiticity of \(\widehat{Q}\) guarantees the unitarity of \(\widehat{\mathscr{U}}(\epsilon)\). 
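A finite-dimensional toy version of eqs. (5.24)–(5.25) may help fix ideas (an illustration added here, not the field-theoretic construction itself): for a single bosonic mode with charge \(\widehat{Q}=a^{\dagger}a\), exponentiating the charge implements phase rotations of the mode operator.

```python
import numpy as np

# single-mode toy model of eqs. (5.24)-(5.25): Q = a^dagger a generates phase rotations of a
N = 12                                            # truncated Fock space; the identities below are exact
a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator, <n-1|a|n> = sqrt(n)
Q = a.conj().T @ a                                # number (charge) operator

assert np.allclose(Q @ a - a @ Q, -a)             # [Q, a] = -a, so delta a = -i[theta Q, a] = i theta a, cf. (5.24)

theta = 0.73
w, V = np.linalg.eigh(Q)
U = V @ np.diag(np.exp(1j * theta * w)) @ V.conj().T             # U = exp(i theta Q)
assert np.allclose(U.conj().T @ a @ U, np.exp(1j * theta) * a)   # the mode transforms covariantly, cf. (5.25)
print("exp(i theta Q)^+ a exp(i theta Q) = e^{i theta} a")
```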
The implication for the free theory is that the creation-annihilation operators transform covariantly under the symmetry. Consequently, to determine the action of \(\widehat{\mathscr{U}}(\epsilon)\) on the Fock space of the theory, we need to know how the charge acts on the vacuum. Here, we may have two possibilities corresponding to different realizations of the symmetry. **Wigner-Weyl realization:** the vacuum state is left invariant by the symmetry \[\widehat{\mathscr{U}}(\epsilon)|0\rangle=|0\rangle\qquad\implies\qquad\widehat{Q}_{a}|0\rangle=0. \tag{5.26}\] If this is the case, the symmetry is manifest in the spectrum, which falls into representations of the symmetry group. Since the whole Fock space is generated by successive application of the fields \(\widehat{\phi}_{k}(x)\) on the vacuum, it is enough to know how the symmetry acts on the states \(|\phi_{k}\rangle\equiv\widehat{\phi}_{k}(x)|0\rangle\) \[\widehat{\mathscr{U}}(\epsilon)|\phi_{k}\rangle=\mathscr{U}_{k\ell}(\epsilon)|\phi_{\ell}\rangle, \tag{5.27}\] where \(\mathscr{U}_{k\ell}(\epsilon)\) is the representation of the symmetry group introduced in (5.25). This is what happens, for example, in the hydrogen atom. Its ground state has \(j=0\) and therefore remains invariant under a generic rotation labelled by the Euler angles \(\phi\), \(\theta\), and \(\psi\) \[\widehat{\mathcal{R}}(\phi,\theta,\psi)|0,0,0\rangle=|0,0,0\rangle, \tag{5.28}\] while the other states transform in irreps of the rotation group SO(3) \(\simeq\) SU(2) \[\widehat{\mathcal{R}}(\phi,\theta,\psi)|n,j,m\rangle=\sum_{m^{\prime}=-j}^{j}\mathscr{D}^{(j)}_{mm^{\prime}}(\phi,\theta,\psi)|n,j,m^{\prime}\rangle, \tag{5.29}\] where \(\mathscr{D}^{(j)}_{mm^{\prime}}(\phi,\theta,\psi)\) is the spin \(j\) rotation matrix [77]. From this point of view, the angular momentum and magnetic quantum numbers introduced to account for certain properties of atomic spectra are just group theory labels indicating how the atomic state transforms under spatial rotations. Symmetries in quantum mechanical systems with a finite number of degrees of freedom are usually realized a la Wigner-Weyl, since tunneling among different vacua results in an invariant ground state. We will return to this issue on page 64. **Nambu-Goldstone realization:** the vacuum state is not invariant under the symmetry. This means that the conserved charge does not annihilate the vacuum \[\widehat{Q}(\epsilon)|0\rangle\neq 0. \tag{5.30}\] Whenever this happens, the symmetry is said to be _spontaneously broken_. Notice that the previous equation does not imply that \(\widehat{Q}_{a}|0\rangle\neq 0\) for all \(a\). There might be a subset of charges satisfying \(\widehat{Q}_{A}|0\rangle=0\), with \(\{A\}\subset\{a\}\), that we refer to as _unbroken_ generators. It is easy to see that, since \([\widehat{Q}_{A},\widehat{Q}_{B}]|0\rangle=0\), they must form a closed subalgebra under commutation. Let us illustrate this mode of realization of the symmetry with the example of \(N\) real scalar fields \(\varphi^{i}\) with action \[S=\int d^{4}x\,\left[\frac{1}{2}\partial_{\mu}\varphi^{i}\partial^{\mu}\varphi^{i}-V(\varphi^{i}\varphi^{i})\right]. \tag{5.31}\] This theory is invariant under global infinitesimal transformations \[\delta_{\epsilon}\varphi^{i}=\epsilon_{a}(T^{a}_{\mathbf{f}})^{i}{}_{j}\varphi^{j}, \tag{5.32}\] with \(T^{a}_{\mathbf{f}}\) the generators in the fundamental representation of SO(\(N\)).
Using the standard procedure, we compute the associated Hamiltonian \[H=\int d^{3}x\left[\frac{1}{2}\pi^{i}\pi^{i}+\frac{1}{2}(\mathbf{\nabla}\varphi^{i})\cdot(\mathbf{\nabla}\varphi^{i})+V(\varphi^{i}\varphi^{i})\right], \tag{5.33}\] with \(\pi^{i}=\partial_{0}\varphi^{i}\) the conjugate momenta. From this expression we read the SO(\(N\))-invariant potential energy \[\mathscr{V}(\varphi^{i})=\int d^{3}x\,\left[\frac{1}{2}(\mathbf{\nabla}\varphi^{i})\cdot(\mathbf{\nabla}\varphi^{i})+V(\varphi^{i}\varphi^{i})\right]. \tag{5.34}\] Its minimum is attained for spatially constant configurations \(\mathbf{\nabla}\varphi^{i}=0\) lying at the bottom of the potential \(V(\varphi^{i}\varphi^{i})\). This is known as the _vacuum expectation value_ (vev) of the field and it is represented as \(\langle\varphi^{i}\rangle\). Its value is determined by \[\left.\frac{\partial V}{\partial\varphi^{i}}\right|_{\varphi^{k}=\langle\varphi^{k}\rangle}=0. \tag{5.35}\] Once the vev \(\langle\varphi^{i}\rangle\) is known, we can expand the fields around it by writing \(\varphi^{i}=\langle\varphi^{i}\rangle+\xi^{i}\). Substituting in (5.31) we obtain the action for the fluctuations \(\xi^{i}\), whose quantization gives the elementary excitations (particles) of the field in this vacuum. Here we may encounter two possible situations. One is that the vev of the field is SO(\(N\)) invariant, \((T^{a}_{\mathbf{f}})^{i}_{\ j}\langle\varphi^{j}\rangle=0\). In this case the action for the fluctuations \(\xi^{i}\) inherits the global symmetry of the parent theory, which is then realized a la Wigner-Weyl. Here we want to explore the second alternative, in which the vev breaks at least part of the symmetry. Let us split the SO(\(N\)) generators into \(T^{a}_{\mathbf{f}}=\{K^{\alpha}_{\mathbf{f}},H^{A}_{\mathbf{f}}\}\), such that \[(K^{\alpha}_{\mathbf{f}})^{i}_{\ j}\langle\varphi^{j}\rangle\neq 0,\qquad(H^{A}_{\mathbf{f}})^{i}_{\ j}\langle\varphi^{j}\rangle=0, \tag{5.36}\] and the global symmetry SO(\(N\)) is spontaneously broken. As argued after eq. (5.30), the generators preserving the vacuum must form a Lie subalgebra generating the unbroken subgroup \(H\subset\) SO(\(N\)) and we have the spontaneous symmetry breaking (SSB) pattern SO(\(N\)) \(\to H\). Generically, the action for the field fluctuations around the vev can be written as \[S=\int d^{4}x\,\left(\frac{1}{2}\partial_{\mu}\xi^{i}\partial^{\mu}\xi^{i}-\frac{1}{2}M_{ij}^{2}\xi^{i}\xi^{j}+\ldots\right), \tag{5.37}\] where the ellipsis stands for interaction terms and the mass-squared matrix \(M_{ij}^{2}\) is given by \[M_{ij}^{2}\equiv\left.\frac{\partial^{2}V}{\partial\varphi^{i}\partial\varphi^{j}}\right|_{\varphi^{k}=\langle\varphi^{k}\rangle}. \tag{5.38}\] The SO(\(N\)) invariance of the potential, \(\delta_{\epsilon}V=0\), implies \[\epsilon_{a}\frac{\partial V}{\partial\varphi^{i}}(T^{a}_{\mathbf{f}})^{i}_{\ j}\varphi^{j}=0\qquad\implies\qquad\epsilon_{a}\frac{\partial^{2}V}{\partial\varphi^{k}\partial\varphi^{i}}(T^{a}_{\mathbf{f}})^{i}_{\ j}\varphi^{j}+\epsilon_{a}\frac{\partial V}{\partial\varphi^{i}}(T^{a}_{\mathbf{f}})^{i}_{\ k}=0, \tag{5.39}\] where in the equation on the right we have taken a further derivative with respect to \(\varphi^{k}\). Evaluating this expression at the vev, and taking into account eqs. (5.35) and (5.38), we find \[M^{2}_{ik}(T^{a}_{\mathbf{f}})^{k}_{\ j}\langle\varphi^{j}\rangle=0.
\tag{5.40}\] This equation is trivially satisfied for the unbroken generators \(H^{A}_{\mathbf{f}}\), but has very nontrivial physical implications for \(K^{\alpha}_{\mathbf{f}}\). It states that there are as many zero eigenvalues of the mass matrix as broken generators, i.e., the theory contains one massless particle for each generator not preserving the vacuum. This result is the Goldstone theorem [78, 79], and the corresponding massless particles emerging as the result of spontaneous symmetry breaking are known as Nambu-Goldstone (NG) modes [80, 81]. Although obtained here using a particular example and in a classical setup, the result is also valid quantum mechanically and applicable to any field theory with a global symmetry group \(G\) spontaneously broken down to a subgroup \(H\subset G\), where the broken part of the symmetry is the coset space \(G/H\). One way to prove the Goldstone theorem in the quantum theory is by considering instead of the classical action the quantum effective action and replacing \(V(\varphi^{i}\varphi^{i})\) with the effective potential, including all interactions among the scalar fields resulting from resumming quantum effects. It can also be shown that the NG modes always have zero spin, also known as NG bosons. Although we are mostly concerned with applications to particle physics, the idea of SSB, in general, and the Goldstone theorem, in particular, have critical applications to nonrelativistic systems, particularly in condensed matter physics14. Indeed, the notion of SSB is intimately related to the theory of phase transitions [82, 83, 84]. It is frequently the case that the phase change is associated with the system changing its ground state. For example, the translational symmetry present in a liquid is spontaneously broken at its freezing point, when the full group of three-dimensional translations is broken down to the crystallographic group preserving the lattice in the solid phase. The corresponding NG bosons are the three species of acoustic phonons. These are massless quasiparticles in the sense that their dispersion relation at low momentum takes the form \(E_{\mathbf{k}}\simeq c_{s}|\mathbf{k}|\), with \(c_{s}\) the speed of sound, so there is no mass gap. Another well-known example is a ferromagnet below the Curie point. The rotationally symmetric ground state at high temperature is replaced by a lowest energy configuration where atomic magnetic moments align, generating a macroscopic magnetization that spontaneously breaks rotational symmetry. Spin waves, called magnons, are the associated NG gapless modes. Footnote 14: It should be stressed that historically the very notion of SSB and of NG bosons was inspired by solid state physics, as is clear from the seminal works by Yoichiro Nambu [80] and Jeffrey Goldstone [78]. Another example of this cross-fertilization between the fields of condensed matter and high energy physics can be found in the formulation of the Brout-Englert-Higgs mechanism to be discussed in section 5.4. Besides their intrinsic physical interest, these condensed matter examples are useful in bringing home a very important aspect of NG bosons: they do not need to be elementary states. Indeed, phonons and magnons are quasiparticles and, therefore, collective excitations of the system. But also in high energy physics we encounter situations where the NG bosons are bound states of elementary constituents.
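The counting implied by (5.40) is easy to verify in the simplest nontrivial case. The following sympy sketch (an illustrative check for three real scalars with a Mexican-hat potential, so that SO(3) breaks to SO(2)) confirms that the mass-squared matrix (5.38) evaluated at the vev has exactly two vanishing eigenvalues, one per broken generator, while the radial mode acquires the mass squared \(2\lambda v^{2}\):

```python
import sympy as sp

lam, v = sp.symbols('lambda v', positive=True)
phi = sp.symbols('phi1 phi2 phi3', real=True)

# SO(3)-invariant potential with a valley of minima on the sphere phi.phi = v^2
V = sp.Rational(1, 4) * lam * (sum(p**2 for p in phi) - v**2)**2

vev = {phi[0]: 0, phi[1]: 0, phi[2]: v}   # pick the vev along phi3; unbroken subgroup SO(2)

grad = [sp.diff(V, p).subs(vev) for p in phi]                            # extremum condition (5.35)
M2 = sp.Matrix(3, 3, lambda i, j: sp.diff(V, phi[i], phi[j]).subs(vev))  # mass matrix (5.38)

print("gradient at the vev:", grad)            # [0, 0, 0]
print("mass-squared matrix:", M2)              # diag(0, 0, 2*lambda*v**2)
print("zero modes:", M2.eigenvals().get(0, 0)) # 2 = dim SO(3) - dim SO(2)
```

The two flat directions are the would-be NG modes, while the radial fluctuation is massive.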
The most relevant example is the pions, appearing as NG bosons associated with the spontaneous breaking of chiral symmetry in QCD (see Box 8 below). It is frequently stated that systems with SSB present vacuum degeneracy. Although technically the theory might possess various vacua, there are important subtleties involved in the infinite volume limit preventing quantum transitions among them, which would otherwise restore the broken symmetry through tunneling. Let us consider a theory at finite volume \(V\) and with a family of degenerate vacua labelled by a properly normalized real parameter \(\xi\). It can be shown that the overlap between any two of these vacua is exponentially suppressed but nonzero (see chapter 7 of [14] for a more detailed analysis) \[|\langle\xi^{\prime}|\xi\rangle|=e^{-\frac{1}{4}(\xi^{\prime}-\xi)^{2}V^{\frac{2}{3}}}|\langle\xi|\xi\rangle|. \tag{5.41}\] This means that transitions among Fock states built on different vacua are allowed, resulting in a unique ground state invariant under the original symmetry. As a consequence, no SSB can happen at finite volume and symmetries are usually realized a la Wigner-Weyl. The situation is radically different in the \(V\to\infty\) limit when the overlap between any two vacua vanishes \(\langle\xi^{\prime}|\xi\rangle\to 0\). This means that the Fock spaces built on different vacua are mutually orthogonal, and no transition among them can occur. At a more heuristic level, what happens is that at infinite volume switching from one vacuum to another requires a nonlocal operation acting at each space-time point. Notice, however, that at a practical level if the volume is "large enough" compared with the system's microscopic characteristic scale we can consider the vacua as orthogonal for all purposes. This is why we see SSB in finite samples, as illustrated by the examples of ferromagnets and superconductors.

**Box 8. Of quarks, chiral symmetry breaking, and pions**

The SM offers a very important implementation of SSB as a consequence of quark low-energy dynamics. Let us consider a generalization of the action in eq. (4.70), now with \(N_{f}\) different quark flavors. Writing \(\mathbf{q}^{T}=(q_{1},\ldots,q_{N_{f}})\), the action reads \[S =\int d^{4}x\,\overline{\mathbf{q}}\,(i\not{\partial}\mathbf{1}-\mathbf{m})\mathbf{q}+S_{\rm int}\] \[=\int d^{4}x\left(i\overline{\mathbf{q}}_{R}\not{\partial}\mathbf{q}_{R}+i\overline{\mathbf{q}}_{L}\not{\partial}\mathbf{q}_{L}-\overline{\mathbf{q}}_{R}\mathbf{m}\mathbf{q}_{L}-\overline{\mathbf{q}}_{L}\mathbf{m}\mathbf{q}_{R}\right)+S_{\rm int}, \tag{5.42}\] where in the second line we split the quark fields into their right- and left-handed chiralities and in \(S_{\rm int}\) we include all interaction terms. This theory is invariant under global U(\(N_{f}\)) transformations acting on the fermion fields as \[\mathbf{q}_{R,L}\to\mathcal{U}(\alpha)\mathbf{q}_{R,L}\qquad\quad\text{where}\qquad\mathcal{U}(\alpha)=e^{i\alpha^{A}T_{\mathbf{R}}^{A}}, \tag{5.43}\] and \((T_{\mathbf{R}}^{A})^{i}{}_{j}\), with \(A=1,\ldots,N_{f}^{2}\), are the U(\(N_{f}\)) generators in the representation \(\mathbf{R}\) with dimension \(N\). We observe that it is the presence of the mass term, mixing right- and left-handed quarks, that forces the two chiralities to transform with the same U(\(N_{f}\)) transformation.
This is why in the chiral limit (i.e., zero quark masses \(\mathbf{m}\to 0\)) the global symmetry is enhanced from U(\(N_{f}\)) to U(\(N_{f}\))\({}_{R}\times\) U(\(N_{f}\))\({}_{L}\), acting independently on the two chiralities \[\mathbf{q}_{R}\to\mathcal{U}(\alpha_{R})\mathbf{q}_{R},\qquad\quad\mathbf{q}_{L}\to\mathcal{U}(\alpha_{L})\mathbf{q}_{L}, \tag{5.44}\] where \(\alpha_{R}^{A}\) and \(\alpha_{L}^{A}\) are independent. Thus, there are two independent Noether currents \[j_{R}^{\mu}(\alpha)=\alpha_{R}^{A}\overline{\mathbf{q}}_{R}\gamma^{\mu}T_{\mathbf{R}}^{A}\mathbf{q}_{R},\qquad\quad j_{L}^{\mu}(\alpha)=\alpha_{L}^{A}\overline{\mathbf{q}}_{L}\gamma^{\mu}T_{\mathbf{R}}^{A}\mathbf{q}_{L} \tag{5.45}\] as well as \(2\times N_{f}^{2}\) conserved charges \[Q_{R}^{A}=\int d^{3}x\,\mathbf{q}_{R}^{\dagger}T_{\mathbf{R}}^{A}\mathbf{q}_{R},\qquad\quad Q_{L}^{A}=\int d^{3}x\,\mathbf{q}_{L}^{\dagger}T_{\mathbf{R}}^{A}\mathbf{q}_{L}. \tag{5.46}\] Upon quantization, these charges are replaced by the corresponding operators \(\widehat{Q}_{R,L}^{A}\) whose commutator realizes the algebra of generators of \(\text{U}(N_{f})_{R}\times\text{U}(N_{f})_{L}\). Taking into account that \(\text{U}(N_{f})=\text{U}(1)\times\text{SU}(N_{f})\), the theory's global symmetry group can be written as \[\text{U}(N_{f})_{R}\times\text{U}(N_{f})_{L}=\text{U}(1)_{B}\times\text{U}(1)_{A}\times\text{SU}(N_{f})_{R}\times\text{SU}(N_{f})_{L}. \tag{5.47}\] The first two factors on the right-hand side act on the quark fields respectively as \[\mathbf{q}\to e^{i\alpha}\mathbf{q},\hskip 28.452756pt\mathbf{q}\to e^{i\beta\gamma_{5}}\mathbf{q}, \tag{5.48}\] the former symmetry leading to baryon number conservation (hence the subscript). The \(\text{U}(1)_{A}\) factor is an axial vector transformation acting on the two chiralities with opposite phases and is broken by anomalies (more on this in section 7). The action of the two \(\text{SU}(N_{f})_{R,L}\) factors, on the other hand, is defined by \[\text{SU}(N_{f})_{R}:\left\{\begin{array}{ccc}\mathbf{q}_{R}&\to&U_{R}\mathbf{q}_{R}\\ \mathbf{q}_{L}&\to&\mathbf{q}_{L}\end{array}\right.\hskip 28.452756pt\text{SU}(N_{f})_{L}:\left\{\begin{array}{ccc}\mathbf{q}_{R}&\to&\mathbf{q}_{R}\\ \mathbf{q}_{L}&\to&U_{L}\mathbf{q}_{L}\end{array}\right. \tag{5.49}\] with \[U_{R,L}\equiv e^{i\alpha_{R,L}^{I}t_{\mathbf{f}}^{I}} \tag{5.50}\] and \(t_{\mathbf{f}}^{I}\) (\(I=1,\ldots,N_{f}^{2}-1\)) the generators of the fundamental irrep of \(\text{SU}(N_{f})\). At low energies the strong quark dynamics triggers quark condensation, giving a non-zero vev to the scalar quark bilinear \(\overline{q}_{i}q_{j}\) \[\langle 0|\overline{q}_{i}q_{j}|0\rangle\equiv\langle 0|\big{(}\overline{q}_{i,R}q_{j,L}+\overline{q}_{i,L}q_{j,R}\big{)}|0\rangle=\Lambda_{\chi\text{SB}}^{3}\delta_{ij}, \tag{5.51}\] where \(\Lambda_{\chi\text{SB}}\) is the energy scale associated with the condensation. This vev, however, is only invariant under the "diagonal" subgroup of the \(\text{SU}(N_{f})_{R}\times\text{SU}(N_{f})_{L}\) transformations (5.49) consisting of transformations with \(U_{R}=U_{L}\). What happens is that the global \(\text{SU}(N_{f})_{R}\times\text{SU}(N_{f})_{L}\) chiral symmetry is spontaneously broken down to its vector subgroup \[\text{U}(1)_{B}\times\text{SU}(N_{f})_{R}\times\text{SU}(N_{f})_{L}\longrightarrow\text{U}(1)_{B}\times\text{SU}(N_{f})_{V}. \tag{5.52}\] Goldstone's theorem implies that associated with each spontaneously broken generator there should be a massless NG boson.
In our case there are \(N_{f}^{2}-1\) broken generators corresponding to the \(\text{SU}(N_{f})_{A}\) factor. Excitations around the vev (5.51) are parametrized by the field \(\Sigma_{ij}(x)\) defined by \[\overline{q}_{i}(x)q_{j}(x)=\Lambda_{\chi\text{SB}}^{3}\Sigma_{ij}(x). \tag{5.53}\] This in turn can be written in terms of the NG matrix field \(\mathbf{\pi}(x)\equiv\pi^{A}(x)t_{\mathbf{f}}^{A}\) as \[\mathbf{\Sigma}(x)\equiv e^{\frac{i\sqrt{2}}{f_{\pi}}\mathbf{\pi}(x)}, \tag{5.54}\] with \(f_{\pi}\) a constant with dimensions of energy called the pion decay constant for reasons that will eventually be clear. Mathematically speaking, the field \(\mathbf{\Sigma}\) parametrizes the coset \[\frac{\text{SU}(N_{f})_{R}\times\text{SU}(N_{f})_{L}}{\text{SU}(N_{f})_{V}}, \tag{5.55}\] leading to the following transformation under \(\text{SU}(N_{f})_{R}\times\text{SU}(N_{f})_{L}\) \[\mathbf{\Sigma}\longrightarrow U_{R}\mathbf{\Sigma}U_{L}^{\dagger}. \tag{5.56}\] We now specialize the analysis to the case \(N_{f}=2\), where we only have the \(u\) and \(d\) quarks. The unbroken \(\text{SU}(2)_{V}\) symmetry is just the good old isospin interchanging both quarks, while the NG bosons are the three pions \(\pi^{\pm}\) and \(\pi^{0}\) \[\mathbf{\pi}=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}\pi^{0}&\sqrt{2}\pi^{+}\\ \sqrt{2}\pi^{-}&-\pi^{0}\end{array}\right). \tag{5.57}\] The objection might be raised that pions are not massless particles as the Goldstone theorem requires. Our analysis has ignored the nonvanishing quark masses, explicitly breaking the \(\text{SU}(2)_{R}\times\text{SU}(2)_{L}\) global chiral symmetry. Since the \(u\) and \(d\) quarks are relatively light, we have instead three _pseudo_-NG bosons whose masses are not zero but still lighter than other states in the theory. It is precisely the strong mass hierarchy between the pions and the remaining hadrons that identifies them as the pseudo-NG bosons associated with chiral symmetry breaking. In the \(N_{f}=3\) case, where we add the strange quark to the two lightest ones, \(\text{SU}(3)_{V}\) is Gell-Mann's eightfold way discussed on page 55 and the set of pseudo-NG bosons is enriched by the four kaons and the \(\eta\)-meson in the octet appearing on the right-hand side of eq. (4.72). As mentioned in the introduction, quarks and gluons do not exist as asymptotic states and QCD at low energies is a theory of hadrons. The lowest lying particles are the pion triplet, whose interactions can be obtained from symmetry considerations alone, playing the EFT game. The question is how to write the simplest action for NG bosons containing operators with the lowest energy dimension and compatible at the same time with all the symmetries of the theory. For terms with just two derivatives, the solution is \[S_{\text{NG}} =\frac{f_{\pi}^{2}}{4}\int d^{4}x\,\text{tr}\left(\partial_{\mu}\mathbf{\Sigma}^{\dagger}\partial^{\mu}\mathbf{\Sigma}\right)\] \[=\int d^{4}x\,\left[\frac{1}{2}\text{tr}\left(\partial_{\mu}\mathbf{\pi}\partial^{\mu}\mathbf{\pi}\right)-\frac{1}{3f_{\pi}^{2}}\text{tr}\left(\partial_{\mu}\mathbf{\pi}[\mathbf{\pi},[\mathbf{\pi},\partial^{\mu}\mathbf{\pi}]]\right)+\dots\right]. \tag{5.58}\] This _chiral effective action_ contains an infinite sequence of higher-dimensional operators suppressed by increasing powers of the dimensionful constant \(f_{\pi}\). It determines how pions couple among themselves at low energies.
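The parametrization (5.57) is nothing but the expansion \(\mathbf{\pi}=\pi^{A}t_{\mathbf{f}}^{A}\). The short sympy sketch below (illustrative only; it assumes the normalization \(t^{A}=\sigma^{A}/\sqrt{2}\), i.e. \(\text{tr}\,t^{A}t^{B}=\delta^{AB}\), and the standard combinations \(\pi^{\pm}=(\pi^{1}\mp i\pi^{2})/\sqrt{2}\), \(\pi^{0}=\pi^{3}\)) checks that the two expressions coincide, which is also the normalization that makes the kinetic term in (5.58) canonical:

```python
import sympy as sp

pi1, pi2, pi3 = sp.symbols('pi1 pi2 pi3', real=True)

# Pauli matrices and generators normalized as tr(t^A t^B) = delta^{AB},
# i.e. t^A = sigma^A / sqrt(2) (an assumed normalization consistent with (5.57))
sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]
t = [s / sp.sqrt(2) for s in sigma]

pi_matrix = sum((p * g for p, g in zip((pi1, pi2, pi3), t)), sp.zeros(2, 2))

# Charged and neutral pion combinations
pi_plus = (pi1 - sp.I * pi2) / sp.sqrt(2)
pi_minus = (pi1 + sp.I * pi2) / sp.sqrt(2)
pi_zero = pi3

# The matrix written in eq. (5.57)
target = sp.Matrix([[pi_zero, sp.sqrt(2) * pi_plus],
                    [sp.sqrt(2) * pi_minus, -pi_zero]]) / sp.sqrt(2)

print(sp.simplify(pi_matrix - target))               # zero matrix
print([ (a * b).trace() for a in t for b in t ])     # tr(t^A t^B) = delta^{AB}
```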
Its coupling to the electromagnetic field is obtained by replacing \(\partial_{\mu}\mathbf{\Sigma}\) by the adjoint covariant derivative \(D_{\mu}\mathbf{\Sigma}=\partial_{\mu}\mathbf{\Sigma}-iA_{\mu}[Q,\mathbf{\Sigma}]\) where the charge matrix is given by \(Q=e\sigma^{3}\). This, however, does not exhaust all their electromagnetic interactions. Neutral pions couple to photons as a consequence of the anomalous realization of the U(1)\({}_{A}\) symmetry, resulting in the \(\pi^{0}\to 2\gamma\) decay (see section 7). In our analysis of chiral symmetry breaking we encountered two energy scales: \(\Lambda_{\chi\text{SB}}\) appearing in (5.51) as a consequence of the quark condensate having dimensions of (energy)\({}^{3}\), and \(f_{\pi}\) needed to give the pion fields their proper dimensions in eq. (5.54). Both of them have to be experimentally measured. In the pion EFT it is \(f_{\pi}\) that determines the relative size of the infinitely many terms in the effective action (5.58). Operators weighted by \(f_{\pi}^{-n}\) typically give contributions of order \((E/f_{\pi})^{n}\) with \(E\) the characteristic energy of the process under study. In the spirit of EFT, working at a given experimental precision only a finite number of terms in the chiral Lagrangian have to be retained, making the theory fully predictive (see [85, 86] for comprehensive reviews of chiral perturbation theory).

### The Brout-Englert-Higgs mechanism

Besides the ones already discussed, there is a further instance of SSB in condensed matter that connects with one of the key concepts in the formulation of the SM, the Brout-Englert-Higgs (BEH) mechanism. In the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity the transition from the normal to the superconducting phase is triggered by the condensation of Cooper pairs, collective excitations of two electrons bound together by phonon exchange. Having net electric charge, the Cooper pair wave function transforms under electromagnetic U(1) phase rotations, and their condensation spontaneously breaks this invariance. The physical consequence of this is a screening of magnetic fields inside the superconductor, the Meissner effect, physically equivalent to the electromagnetic vector potential \(\mathbf{A}(t,\mathbf{r})\) acquiring an effective nonzero mass [87]. The main difference between the BCS example and the ones discussed above is that this is not about spontaneously breaking some global symmetry, but gauge invariance itself. This might look like risky business, since we know that preserving gauge invariance is crucial to get rid of unwanted states that otherwise would pop up in the theory's physical spectrum destroying its consistency. As we will see, due to the magic of SSB gauge invariance is in fact not lost, only hidden. That is why, even if not manifest, it still protects the theory. Let us analyze spontaneous symmetry breaking triggered by a complex scalar coupled to the electromagnetic field. We start with the action \[S=\int d^{4}x\,\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+(D_{\mu}\phi)^{*}(D^{\mu}\phi)-\frac{\lambda}{4}\left(\phi^{*}\phi-\frac{v^{2}}{2}\right)^{2}\right], \tag{5.59}\] where \(D_{\mu}=\partial_{\mu}-ieA_{\mu}\) is the covariant derivative already introduced in the footnote on page 40.
This action is invariant under U(1) gauge transformations acting as \[\phi(x)\longrightarrow e^{ie\epsilon(x)}\phi(x),\hskip 28.452756pt\phi(x)^{*}\longrightarrow e^{-ie\epsilon(x)}\phi(x)^{*},\hskip 28.452756ptA_{\mu}(x)\longrightarrow A_{\mu}(x)+\partial_{\mu}\epsilon(x). \tag{5.60}\] As shown in fig. 9, the scalar field potential \[V(\phi^{*}\phi)=\frac{\lambda}{4}\left(\phi^{*}\phi-\frac{v^{2}}{2}\right)^{2}, \tag{5.61}\] has the celebrated Mexican hat shape with a valley of minima located at \(\phi^{*}\phi=\frac{v^{2}}{2}\).

Figure 9: Illustration from ref. [88] depicting the celebrated Mexican hat potential shown in eq. (5.61).

When the scalar field takes a nonzero vev \[\langle\phi\rangle=\frac{v}{\sqrt{2}}e^{i\vartheta_{0}}, \tag{5.62}\] U(1) invariance is spontaneously broken, since \(\langle\phi\rangle\) does not remain invariant, \(\langle\phi\rangle\to e^{ie\epsilon}\langle\phi\rangle\). The dynamics of the fluctuations around the vev (5.62) is obtained by plugging \[\phi(x)=\frac{1}{\sqrt{2}}\big{[}v+h(x)\big{]}e^{i\vartheta(x)}, \tag{5.63}\] into the action (5.59). The resulting action is \[S =\int d^{4}x\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{e^{2}v^{2}}{2}\left(A_{\mu}+\frac{1}{e}\partial_{\mu}\vartheta\right)\left(A^{\mu}+\frac{1}{e}\partial^{\mu}\vartheta\right)+\frac{1}{2}\partial_{\mu}h\partial^{\mu}h-\frac{\lambda v^{2}}{4}h^{2}\right.\] \[-\left.\frac{\lambda v}{4}h^{3}-\frac{\lambda}{16}h^{4}+\frac{e^{2}}{2}\left(A_{\mu}+\frac{1}{e}\partial_{\mu}\vartheta\right)\left(A^{\mu}+\frac{1}{e}\partial^{\mu}\vartheta\right)\left(2vh+h^{2}\right)\right], \tag{5.64}\] which remains invariant under U(1) gauge transformations, now acting as \[A_{\mu}\longrightarrow A_{\mu}+\partial_{\mu}\epsilon,\hskip 28.452756pt\vartheta\longrightarrow\vartheta-e\epsilon,\hskip 28.452756pth\longrightarrow h. \tag{5.65}\] In fact, the phase field \(\vartheta(x)\) is the NG boson resulting from the spontaneous breaking of the U(1) symmetry by the vev in eq. (5.62). At this stage, we still keep a photon with two polarizations while the two real degrees of freedom of the complex field \(\phi\) have been recast in terms of the field \(h\) and the NG boson \(\vartheta\). We can fix the gauge freedom (5.65) by setting \(\vartheta=0\). In doing so, the disappearing NG boson transmutes into the longitudinal component of \(A_{\mu}\), as befits a massive gauge field (see the footnote on page 33). We then arrive at the gauge-fixed action \[S =\int d^{4}x\left(-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{e^{2}v^{2}}{2}A_{\mu}A^{\mu}+\frac{1}{2}\partial_{\mu}h\partial^{\mu}h-\frac{\lambda v^{2}}{4}h^{2}\right.\] \[-\left.\frac{\lambda v}{4}h^{3}-\frac{\lambda}{16}h^{4}+e^{2}vA_{\mu}A^{\mu}h+\frac{e^{2}}{2}A_{\mu}A^{\mu}h^{2}\right), \tag{5.66}\] where the photon has acquired a nonzero mass15 Footnote 15: The same result can be obtained noticing that the action (5.64) contains a term \(ev^{2}A^{\mu}\partial_{\mu}\vartheta\) mixing the NG boson and the gauge field. Physically, this means that as the photon propagates it transmutes into the NG boson and vice versa. Resumming these transmutations results in the mass term for \(A^{\mu}\). \[m_{\gamma}=ev. \tag{5.67}\] The real scalar field \(h\) gets massive as well \[m_{h}=v\sqrt{\frac{\lambda}{2}}, \tag{5.68}\] and has cubic and quartic self-interaction terms, besides coupling to the photon through terms involving two gauge fields and one scalar, and two gauge fields and two scalars. As we see, no degree of freedom has gone amiss.
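A quick way to see where the scalar mass (5.68) comes from is to evaluate the potential (5.61) on the parametrization (5.63). The sympy sketch below (an illustrative check) confirms that the phase \(\vartheta\) drops out of the potential, that \(h=0\) is an extremum, and that the curvature reproduces \(m_{h}^{2}=\lambda v^{2}/2\); the photon mass (5.67), by contrast, originates in the scalar kinetic term.

```python
import sympy as sp

lam, v = sp.symbols('lambda v', positive=True)
h, theta = sp.symbols('h vartheta', real=True)

# Parametrization (5.63): phi = (v + h) e^{i vartheta} / sqrt(2), inserted into the potential (5.61)
phi = (v + h) * sp.exp(sp.I * theta) / sp.sqrt(2)
V = sp.simplify(sp.Rational(1, 4) * lam * (sp.conjugate(phi) * phi - v**2 / 2)**2)

print("V depends on vartheta:", V.has(theta))       # False: vartheta is the flat NG direction
print("dV/dh at h=0:", sp.diff(V, h).subs(h, 0))     # 0: the vev is an extremum
print("d2V/dh2 at h=0:", sp.simplify(sp.diff(V, h, 2).subs(h, 0)))  # lambda*v**2/2 = m_h**2
```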
We ended up with a massive photon with three physical polarizations and a real scalar, making up for the four real degrees of freedom we started with. SSB has just rearranged the theory's degrees of freedom. Here we have been only concerned with giving mass to the photon. Imagine now that we have two chiral fermions \(\psi_{R}\), \(\psi_{L}\) transforming differently under U(1) \[\psi_{L}(x)\longrightarrow e^{ie\epsilon(x)}\psi_{L}(x),\hskip 28.452756pt\psi_{R}(x)\longrightarrow\psi_{R}(x). \tag{5.69}\] Due to the theory's chiral nature, a mass term of the form \(\overline{\psi}_{L}\psi_{R}+\overline{\psi}_{R}\psi_{L}\) would not be gauge invariant, so it seems that we need to keep our fermions massless for the sake of consistency. Using the Higgs field, however, there is a way to construct an action where the fermions couple to the complex scalar field in a gauge invariant way \[S_{\rm fermion}=\int d^{4}x\,\Big{(}i\overline{\psi}_{R}\not{D}\psi_{R}+i\overline{\psi}_{L}\not{D}\psi_{L}-c\phi\overline{\psi}_{L}\psi_{R}-c\phi^{*}\overline{\psi}_{R}\psi_{L}\Big{)}, \tag{5.70}\] where \(c\) is some dimensionless constant. This particular form of the coupling between \(\phi\) and the fermions is called a Yukawa coupling, since it is similar to the one introduced by Hideki Yukawa in his 1935 theory of nuclear interactions between nucleons and mesons [89]. The interest of this construction is that once the field \(\phi\) acquires the vev (5.62), and after gauging away the field \(\vartheta\), the fermion action takes the form \[S_{\rm fermion}=\int d^{4}x\,\left[i\overline{\psi}_{R}\not{D}\psi_{R}+i\overline{\psi}_{L}\not{D}\psi_{L}-\frac{cv}{\sqrt{2}}\big{(}\overline{\psi}_{L}\psi_{R}+\overline{\psi}_{R}\psi_{L}\big{)}\right.\] \[-\left.\frac{c}{\sqrt{2}}h\overline{\psi}_{L}\psi_{R}-\frac{c}{\sqrt{2}}h\overline{\psi}_{R}\psi_{L}\right]\,. \tag{5.71}\] Thus, the same mechanism giving mass to the photon also results in a mass for the fermion field \[m_{f}=\frac{cv}{\sqrt{2}}, \tag{5.72}\] also generated without an explicit breaking of gauge invariance, which is only hidden due to the choice of vacuum of the complex scalar field. Notice that, owing to symmetry breaking, the now massive Dirac fermion couples to the remaining scalar degree of freedom \(h\) with strength controlled by the dimensionless constant \(\frac{c}{\sqrt{2}}=\frac{m_{f}}{v}\). This indicates that the higher the mass of the fermion, the stronger it couples to the Higgs field. This feature, as we will see, has important experimental consequences for the SM. This Abelian Higgs model illustrates the basic features of the BEH mechanism responsible for giving masses to the SM particles, with the scalar field \(h\) corresponding to the Higgs boson discovered at CERN in 2012 [19, 20]. In its nonrelativistic version it also provides the basis for the Ginzburg-Landau analysis of the BCS theory of superconductivity, where the free energy in the broken phase has the same structure as the potential terms in the action (5.59) \[\mathscr{F}_{\rm BCS}=\int d^{3}r\,\left\{\frac{1}{2\mu}(\mathbf{\nabla}\times\mathbf{A})^{2}+\frac{1}{2m_{*}}\big{|}\mathbf{\nabla}\phi-ie_{*}\mathbf{A}\phi\big{|}^{2}+\frac{\lambda(T)}{4}\left[\phi^{*}\phi-\frac{v(T)^{2}}{2}\right]^{2}\right\}. \tag{5.73}\] Here \(\phi(\mathbf{r})\) is the Cooper pair condensate, \(\mu\) the magnetic permeability of the medium, and \(m_{*}\) and \(e_{*}\) the effective mass and charge of the quasiparticles.
For \(T>T_{c}\) we have \(v(T)=0\), so at temperatures above the critical one the only minimum of the free energy is at \(\langle\phi\rangle=0\). When \(T<T_{c}\), on the other hand, \(v(T)\neq 0\) and the U(1) invariance of the theory is spontaneously broken at the minima with \(|\langle\phi\rangle|=\frac{v(T)}{\sqrt{2}}\), while the former minimum at \(\langle\phi\rangle=0\) becomes a local maximum. As in the case studied earlier, this results in a nonzero mass for the vector potential \(\mathbf{A}(\mathbf{r})\) given by \(m(T)=e_{*}v(T)\). This provides the order parameter of the transition and physically accounts for the Meissner effect inside the superconductor [83]. The system also contains a massive scalar excitation, the condensed matter equivalent of the Higgs boson [90, 91].

## Box 9. "Large" vs. "small" gauge transformations

We return briefly to the discussion of Noether's second theorem on page 58. There we paid attention to gauge transformations in the connected component of the identity and made an important distinction between those approaching the identity at the spacetime boundary (\(\epsilon_{A}\to 0\)) and those that do not. Let us call them "small" and "large" gauge transformations respectively. To understand the physical difference between them, we compare (5.17) with (5.16) to see that \(j^{\mu}-S^{\mu}\) is conserved even off-shell, namely that \(\partial_{\mu}(j^{\mu}-S^{\mu})\) is _identically zero_. This means that we can write \[j^{\mu}=S^{\mu}+\partial_{\nu}k^{\mu\nu}\approx\partial_{\nu}k^{\mu\nu}, \tag{5.74}\] where \(k^{\mu\nu}\) is an antisymmetric tensor and we have applied that \(S^{\mu}\) vanishes on-shell. This peculiar structure of the gauge theory current implies that the gauge charge is determined by an integral over the _boundary_ of the spatial sections \[Q\approx\int_{\Sigma}dV\,\partial_{i}k^{0i}=\int_{\partial\Sigma}dS_{i}\,k^{0i}. \tag{5.75}\] Since the current, and therefore also \(k^{\mu\nu}\), is linear in the gauge functions \(\epsilon_{A}(x)\), we conclude that the charge vanishes for "small" gauge transformations \[Q_{\rm small}\approx 0. \tag{5.76}\] This is not the case for "large" transformations, the ones determining the value of \(Q\). A very important fact to remember about "small" gauge transformations is that they are the ones leading to the Noether identities (5.14) that, as we indicated, express the redundancy intrinsic to gauge theories. Quantum mechanically, invariance under these transformations is mandatory in order to get rid of the spurious states that we introduced as the price of maintaining locality and Lorentz covariance. They cannot be spontaneously broken or affected by anomalies without rendering the theory inconsistent. However, no such restriction exists for "large" transformations, which can be broken without disastrous consequences. To connect with the discussion of the Abelian Higgs model, let us look at the case of Maxwell's electrodynamics in the temporal gauge \(A_{0}=0\). In the quantum theory, the vacuum Gauss law constraint \(\mathbf{\nabla}\cdot{\bf E}=0\) is implemented by the corresponding operator annihilating physical states, namely (to keep notation simple, we drop hats to denote operators) \[\mathbf{\nabla}\cdot{\bf E}|{\rm phys}\rangle=0.
\tag{5.77}\] Finite gauge transformations preserving the temporal gauge condition \(A_{0}=0\) are generated by time-independent gauge functions and implemented in the space of states by the operator \[{\cal U}_{\epsilon}=\exp\left[i\int d^{3}r\,{\bf E}(t,{\bf r})\cdot\mathbf{\nabla}\epsilon({\bf r})\right]. \tag{5.78}\] Using the canonical commutation relations (3.68), we readily compute \[{\cal U}_{\epsilon}A_{0}(t,{\bf r}){\cal U}_{\epsilon}^{-1} = 0,\] \[{\cal U}_{\epsilon}{\bf A}(t,{\bf r}){\cal U}_{\epsilon}^{-1} = {\bf A}(t,{\bf r})+\mathbf{\nabla}\epsilon({\bf r}). \tag{5.79}\] At the same time, the operator \({\cal U}_{\epsilon}\) leaves the physical states invariant \[{\cal U}_{\epsilon}|{\rm phys}\rangle = \exp\left[i\int d^{3}x\,{\bf E}(t,{\bf r})\cdot\mathbf{\nabla}\epsilon({\bf r})\right]|{\rm phys}\rangle \tag{5.80}\] \[= \exp\left[-i\int d^{3}x\,\epsilon({\bf r})\mathbf{\nabla}\cdot{\bf E}(t,{\bf r})\right]|{\rm phys}\rangle=|{\rm phys}\rangle,\] where in the second line it is crucial that the gauge function \(\epsilon({\bf r})\) _vanishes at infinity_ so that after integrating by parts we do not pick up a boundary term. In other words, the transformation implemented by \(\mathscr{U}_{\epsilon}\) approaches the identity as \(|{\bf r}|\to\infty\). We have shown that invariance of the physical states under "small" gauge transformations follows from Gauss' law (5.77) annihilating them, precisely the condition that factors out the spurious degrees of freedom. The conclusion is that "large" gauge transformations are not necessary to eliminate the gauge redundancy and can be broken without jeopardizing the consistency of the theory. This is precisely how the BEH mechanism works. The nonvanishing vacuum expectation value of the complex scalar field breaks "large" gauge transformations without spoiling Gauss' law. This is the reason why we need to qualify our statement on pages 21 and 56 that gauge invariance is just a redundancy in state labelling: "small" gauge transformations are indeed redundancies, but "large" gauge transformations are _bona fide_ symmetries.

## 6 Some more gauge invariances

So far the only gauge theory we dealt with was Maxwell's electrodynamics, although here and there we hinted at its non-Abelian generalizations. It is about time to introduce these in a more systematic fashion. We start with a set of fermions \(\boldsymbol{\psi}^{T}=(\psi_{1},\ldots,\psi_{N})\) transforming in some representation \({\bf R}\) of the gauge group \(G\) \[\boldsymbol{\psi}\longrightarrow e^{i\alpha^{a}T^{a}_{\bf R}}\boldsymbol{\psi}\equiv g(\alpha)\boldsymbol{\psi}. \tag{6.1}\] By now, we know very well how to construct an action that has this symmetry \[S=\int d^{4}x\,\overline{\boldsymbol{\psi}}\big{(}i\not{\partial}-m\big{)}\boldsymbol{\psi}. \tag{6.2}\] The problem arises when we want to make \(G\) a local invariance. In this case, the action we just wrote fails to be invariant due to the nonvanishing derivatives of \(\alpha^{a}(x)\) \[\partial_{\mu}\boldsymbol{\psi}\longrightarrow g\partial_{\mu}\boldsymbol{\psi}+i(\partial_{\mu}g)\boldsymbol{\psi}=g\big{(}\partial_{\mu}+ig^{-1}\partial_{\mu}g\big{)}\boldsymbol{\psi}, \tag{6.3}\] where, to avoid cluttering expressions, we have omitted the dependence of the group element \(g\) on the parameters \(\alpha^{a}\). To overcome this problem we have to find a covariant derivative \(D_{\mu}\), similar to the one we introduced for Maxwell's theory, with the transformation \[D_{\mu}\boldsymbol{\psi}\longrightarrow gD_{\mu}\boldsymbol{\psi}.
\tag{6.4}\] A reasonable ansatz turns out to be \[D_{\mu}\boldsymbol{\psi}=\big{(}\partial_{\mu}-iA_{\mu}\big{)}\boldsymbol{\psi}, \tag{6.5}\] where we omitted the identity multiplying \(\partial_{\mu}\) and \(A_{\mu}\equiv A_{\mu}^{a}T^{a}_{\bf R}\) is a field taking values in the algebra of generators of \(G\). In order to get the transformation (6.4), \(A_{\mu}\) has to transform according to \[A_{\mu}\longrightarrow A_{\mu}^{\prime}=ig^{-1}\partial_{\mu}g+g^{-1}A_{\mu}g. \tag{6.6}\] With this we can turn (6.2) into a locally invariant action by replacing \(\partial_{\mu}\) with \(D_{\mu}\) defined in eq. (6.5). In addition, we must include the dynamics of the new field \(A_{\mu}\) by adding a suitable kinetic term that preserves the gauge invariance of the fermionic action. The Abelian-informed choice \(\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) for the gauge field strength will not do, since it does not transform covariantly \[\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\longrightarrow g^{-1}\big{(}\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\big{)}g+i\big{[}g^{-1}\partial_{\mu}g,g^{-1}\partial_{\nu}g\big{]}\] \[+\big{[}g^{-1}A_{\mu}g,g^{-1}\partial_{\nu}g\big{]}+\big{[}g^{-1}\partial_{\mu}g,g^{-1}A_{\nu}g\big{]}. \tag{6.7}\] This, however, suggests a wiser choice \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+i\big{[}A_{\mu},A_{\nu}\big{]}, \tag{6.8}\] with the much nicer (i.e., covariant) transformation \[F_{\mu\nu}\longrightarrow F_{\mu\nu}^{\prime}=g^{-1}F_{\mu\nu}g. \tag{6.9}\] Notice that, similar to \(A_{\mu}\), the field strength \(F_{\mu\nu}\) takes values in the algebra of generators, so we can write \(F_{\mu\nu}=F_{\mu\nu}^{a}T_{\mathbf{R}}^{a}\), with the components given by \[F_{\mu\nu}^{a}=\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a}+f^{abc}A_{\mu}^{b}A_{\nu}^{c}, \tag{6.10}\] where \(f^{abc}\) are the structure constants of the Lie algebra of generators, \([T_{\mathbf{R}}^{a},T_{\mathbf{R}}^{b}]=if^{abc}T_{\mathbf{R}}^{c}\). We denote by \(\mathscr{G}\) the set of gauge transformations acting on the fields. Although, to fix ideas, here we have considered transformations (6.1) in the connected component of the identity \(\mathscr{G}_{0}\), the derived expressions remain valid for all transformations in \(\mathscr{G}\), even if they lie in disconnected components (we saw an example of this in the case of the Lorentz group studied on page 42). For transformations in \(\mathscr{G}_{0}\), we can use their infinitesimal form \[g(\alpha)\simeq\mathbb{1}+i\alpha^{a}T_{\mathbf{R}}^{a}, \tag{6.11}\] to write the first-order transformation of both the gauge field and its field strength \[\delta_{\alpha}A_{\mu}^{a} =\partial_{\mu}\alpha^{a}+if^{abc}\alpha^{b}A_{\mu}^{c}\equiv(D_{\mu}\alpha)^{a},\] \[\delta_{\alpha}F_{\mu\nu}^{a} =if^{abc}\alpha^{b}F_{\mu\nu}^{c}, \tag{6.12}\] where in the first line we expressed the variation of the gauge field in terms of the (adjoint) covariant derivative of the gauge function. The field strength, in turn, can be also recast as the commutator of two covariant derivatives, \(F_{\mu\nu}=[D_{\mu},D_{\nu}]\).
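The structure constants entering (6.10) and (6.12) can be extracted numerically from any explicit set of generators via \(\text{tr}\big([T^{a},T^{b}]T^{c}\big)\). A minimal sketch (illustrative only, for SU(2) in the fundamental representation with the normalization \(\text{tr}(T^{a}T^{b})=\frac{1}{2}\delta^{ab}\)):

```python
import numpy as np

# SU(2) generators in the fundamental representation, T^a = sigma^a / 2
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
T = sigma / 2

# With tr(T^a T^b) = delta^{ab}/2 one has tr([T^a, T^b] T^c) = (i/2) f^{abc}
f = np.zeros((3, 3, 3))
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        for c in range(3):
            f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))

print(f[0, 1, 2], f[1, 2, 0], f[2, 0, 1])            # 1.0 1.0 1.0
print(np.allclose(f, -np.transpose(f, (1, 0, 2))))   # antisymmetry in the first two indices
```

For SU(2) one recovers \(f^{abc}=\epsilon^{abc}\); the same few lines work for SU(3) with the Gell-Mann matrices as generators.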
After all these preliminaries, we can write a gauge invariant action for fermions coupled to non-Abelian gauge fields \[S_{\text{YM}} =\int d^{4}x\left[-\frac{1}{2g_{\text{YM}}^{2}}\text{tr}\left(F_{\mu\nu}F^{\mu\nu}\right)+\overline{\psi}\big{(}i\not{D}-m\big{)}\mathbf{\psi}\right]\] \[=\int d^{4}x\left[-\frac{1}{4g_{\text{YM}}^{2}}F_{\mu\nu}^{a}F^{a\mu\nu}+\overline{\psi}\big{(}i\not{\partial}-m\big{)}\mathbf{\psi}+A_{\mu}^{a}\overline{\psi}\gamma^{\mu}T_{\mathbf{R}}^{a}\mathbf{\psi}\right], \tag{6.13}\] where \(g_{\text{YM}}\) is the only coupling constant of the theory16. This non-Abelian generalization of QED was first formulated by C. N. Yang and Robert L. Mills [92]. Yang-Mills (YM) theories are the backbone of our understanding of elementary particle physics. Although the action \(S_{\text{YM}}\) reduces to that of QED in eq. (4.58) for \(G=U(1)\), it displays a much richer structure for non-Abelian gauge groups. For starters, the commutator in the field strength (6.8) is nonzero and the \(F_{\mu\nu}^{a}F^{a\mu\nu}\) term in (6.13) contains cubic and quartic gauge field self-interaction terms. This indicates that, unlike the photon, non-Abelian gauge bosons are never free particles even if uncoupled to matter. Footnote 16: The factors of \(g_{\text{YM}}\) in front of the first term in the action can be removed by a rescaling \(A_{\mu}\to g_{\text{YM}}A_{\mu}\). In doing so, an inverse power of the coupling constant appears in the derivative terms in (6.6) and the first identity in (6.12), while the commutator in (6.8) acquires a power of \(g_{\text{YM}}\), as well as the structure constant term in (6.10). The general analysis of gauge invariance follows in many aspects the Abelian case. The corresponding electric and magnetic fields are defined in terms of the gauge potential \(A_{\mu}^{a}\equiv(A_{0}^{a},-\mathbf{A}^{a})\) by \[\mathbf{E}^{a} =-\mathbf{\nabla}A_{0}^{a}-\frac{\partial\mathbf{A}^{a}}{\partial t}+f^{abc}A_{0}^{b}\mathbf{A}^{c},\] \[\mathbf{B}^{a} =\mathbf{\nabla}\times\mathbf{A}^{a}+f^{abc}\mathbf{A}^{b}\times\mathbf{A}^{c}, \tag{6.14}\] and, unlike their Abelian counterparts, they are not gauge invariant. The electric field \(\mathbf{E}^{a}\) is in fact the momentum canonically conjugate to \(\mathbf{A}^{a}\) \[\big{\{}A_{i}^{a}(t,\mathbf{r}),E_{j}^{b}(t,\mathbf{r}^{\prime})\big{\}}_{\text{PB}}=\delta_{ij}\delta^{ab}\delta^{(3)}(\mathbf{r}-\mathbf{r}^{\prime}), \tag{6.15}\] and the Hamiltonian reads \[H=\int d^{3}x\,\left[\frac{1}{2}\mathbf{E}^{a}\cdot\mathbf{E}^{a}+\frac{1}{2}\mathbf{B}^{a}\cdot\mathbf{B}^{a}+A_{0}^{a}\big{(}\mathbf{D}\cdot\mathbf{E}\big{)}^{a}\right]. \tag{6.16}\] Similarly to Maxwell's electrodynamics, \(A_{0}^{a}\) plays the role of a Lagrange multiplier enforcing the Gauss law constraint, now reading \[(\mathbf{D}\cdot\mathbf{E})^{a}\equiv\mathbf{\nabla}\cdot\mathbf{E}^{a}+f^{abc}\mathbf{A}^{b}\cdot\mathbf{E}^{c}=0. \tag{6.17}\] In the quantum theory, classical fields are replaced by operators.
Using the non-Abelian version of the temporal gauge, \(A_{0}^{a}=0\), residual gauge transformations correspond to time-independent gauge functions \(\alpha^{a}({\bf r})\) and are generated by \({\bf D}\cdot{\bf E}\) \[\delta_{\alpha}{\bf A}^{a}(t,{\bf r}) =i\left[\int d^{3}r\,\alpha^{b}({\bf r})({\bf D}\cdot{\bf E})^{b},{\bf A}^{a}(t,{\bf r})\right]\] \[=\mathbf{\nabla}\alpha^{a}+if^{abc}\alpha^{b}{\bf A}^{c}\equiv({\bf D}\alpha)^{a}, \tag{6.18}\] where we have used the canonical commutation relations derived from (6.15), and to avoid boundary terms after integration by parts we need to restrict to "small" gauge transformations where \(\alpha^{a}({\bf r})\) vanishes when \(|{\bf r}|\to\infty\). Those in the connected component of the identity \(\mathscr{G}_{0}\) are therefore implemented on the space of physical states by the operator \[\mathscr{U}(\alpha)=\exp\left[i\int d^{3}r\,\alpha^{a}({\bf r})({\bf D}\cdot{\bf E})^{a}\right]. \tag{6.19}\] As in the Abelian case discussed in Box 9 (see page 71), the invariance under these "small" gauge transformations has to be preserved at all costs to avoid unphysical states entering the theory's spectrum. To achieve this, we require that the Gauss law annihilates physical states \[({\bf D}\cdot{\bf E})^{a}|\text{phys}\rangle=0. \tag{6.20}\] In the presence of non-Abelian sources, \(({\bf D}\cdot{\bf E})^{a}\) gets replaced by \(({\bf D}\cdot{\bf E})^{a}-\rho^{a}\), with \(\rho^{a}\) the matter charge density operator. We should not forget about "large" gauge transformations whose gauge parameter \(\alpha^{a}({\bf r})\) does not vanish when \(|{\bf r}|\to\infty\). Notice that any transformation of this kind can be written as \[g({\bf r})_{\rm large}=hg({\bf r})_{\rm small}, \tag{6.21}\] where \(h\neq 1\) is a rigid transformation such that \(g({\bf r})_{\rm large}\to h\) as \(|{\bf r}|\to\infty\). They build up what can be called a copy of the group at infinity, \(G_{\infty}\), the global invariance leading to charge conservation by the first Noether theorem. This is a real symmetry that quantum mechanically can be realized either a la Wigner-Weyl or a la Nambu-Goldstone. For the SM gauge group SU(3) \(\times\) SU(2) \(\times\) U(1), the color SU(3)\({}_{\infty}\) symmetry remains unbroken by the vacuum, whereas due to the BEH mechanism the electroweak factor \([{\rm SU(2)}\times{\rm U(1)}]_{\infty}\) is partially realized a la Nambu-Goldstone, with a preserved U(1)\({}_{\infty}\) corresponding to the global invariance of electromagnetism17. Footnote 17: As we will see shortly, the unbroken U(1) generator is a mixture of the two generators of the Cartan subalgebra of the electroweak SU(2) \(\times\) U(1) gauge group factor.

## 7 Anomalous symmetries

In section 5, we mentioned the possibility that classical symmetries or invariances could somehow turn out to be incompatible with the process of quantization, but so far did not elaborate any further. Since anomalous symmetries are crucial in our understanding of a number of physical phenomena, it is about time to look into anomalies in some detail (see [93, 94, 95, 96] for some reviews on the topic).

### Symmetry vs. the quantum

Let us go back to the QED action (4.58). We have already discussed the global phase invariance leading, by the first Noether theorem, to the conserved current (4.57).
In addition, we can also consider the transformations \[\psi\longrightarrow e^{i\alpha\gamma_{5}}\psi,\hskip 28.452756pt\overline{\psi}\longrightarrow\overline{\psi}e^{i\alpha\gamma_{5}}, \tag{7.1}\] where \(\gamma_{5}\) is the chirality matrix defined in eq. (4.53). Unlike the transformation \(\psi\to e^{i\theta}\psi\) rotating the positive and negative chirality components of the Dirac spinor by the same phase, in (7.1) they change by opposite phases. In what follows, we refer to the first type as _vector_ transformations, while the second we dub _axial-vector_. The latter, however, are not a symmetry of the QED action for \(m\neq 0\), since \(\overline{\psi}\psi\rightarrow\overline{\psi}e^{2i\alpha\gamma_{5}}\psi\neq\overline{\psi}\psi\), whereas \(\overline{\psi}\gamma^{\mu}\partial_{\mu}\psi\) remains invariant. In fact, using the Dirac field equations it can be shown that the axial-vector current \[j_{5}^{\mu}=\overline{\psi}\gamma_{5}\gamma^{\mu}\psi, \tag{7.2}\] satisfies the relation \[\partial_{\mu}j_{5}^{\mu}=2im\overline{\psi}\gamma_{5}\psi, \tag{7.3}\] and for \(m=0\) gives the conservation equation associated with the invariance of massless QED under axial-vector transformations. Similar to what we found in Box 8 for the flavor symmetry of QCD, in this limit the global U(1)\({}_{V}\) symmetry of QED gets enhanced to U(1)\({}_{V}\times\) U(1)\({}_{A}\). In the quantum theory Noether currents are constructed as products of field operators evaluated at the same spacetime point. These quantities are typically divergent and it is necessary to introduce some regularization in order to make sense of them. In the case of QED one way to handle the vector current \(j^{\mu}(x)=\overline{\psi}(x)\gamma^{\mu}\psi(x)\) is by using point splitting \[j^{\mu}(x,\epsilon)_{\rm reg}\equiv\overline{\psi}\left(x-\frac{1}{2}\epsilon\right)\gamma^{\mu}\psi\left(x+\frac{1}{2}\epsilon\right)\exp\left(ie\int_{x-\frac{1}{2}\epsilon}^{x+\frac{1}{2}\epsilon}dx^{\mu}A_{\mu}\right), \tag{7.4}\] where the divergences appear as poles at \(\epsilon=0\). Notice that since the phases introduced by the gauge transformations of the two fields are evaluated at different points, an extra Wilson line term is needed to restore gauge invariance of the regularized current. Alternatively, we can use Pauli-Villars (PV) regularization, where a number of spurious fermion fields of masses \(M_{k}\) are added to the action \[S_{\rm reg}=\int d^{4}x\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\overline{\psi}(i\not{D}-m)\psi+\sum_{k=1}^{n}c_{k}\overline{\Psi}_{k}(i\not{D}-M_{k})\Psi_{k}\right], \tag{7.5}\] with \(n\) and \(c_{k}\) chosen so the limit \[j^{\mu}(x)_{\rm reg}\equiv\lim_{x^{\prime}\to x}\left[\overline{\psi}(x^{\prime})\gamma^{\mu}\psi(x)+\sum_{k=1}^{n}c_{k}\overline{\Psi}_{k}(x^{\prime})\gamma^{\mu}\Psi_{k}(x)\right], \tag{7.6}\] remains finite (i.e., all poles cancel). An important feature of the PV regularization is that it explicitly preserves gauge invariance. The masses act as regulators, since in the limit \(M_{k}\to\infty\) the PV fermions decouple and the original divergences reappear. The need to make sense of composite operators is at the bottom of the potential problems with current conservation in the quantum domain. The regularization procedure might collide with some of the classical symmetries of the theory, resulting in their breaking after divergences are properly handled.
This is why our discussion of the regularization of the current operator in QED has been conspicuously concerned with the issue of gauge invariance of the vector current. The existence of gauge invariant regularization schemes guarantees that the current coupling to the gauge field can be defined in the quantum theory without spoiling its conservation \(\partial_{\mu}j^{\mu}=0\) at operator level. Otherwise, we would be in serious trouble, as we can see by applying the quantization prescription (3.66) to the stability condition of the Gauss law (3.54) \[[G,H]=-i\partial_{\mu}j^{\mu}, \tag{7.7}\] where we have defined \(G\equiv\mathbf{\nabla}\cdot\mathbf{E}-j^{0}\). If \(\partial_{\mu}j^{\mu}\neq 0\), the Gauss law condition ensuring the factorization of redundant states would not be preserved by time evolution. Indeed, imposing the constraint at \(t=0\) on some state, \(G|\Psi(0)\rangle=0\), we would have at first order in \(\delta t\) \[G|\Psi(\delta t)\rangle=-i\delta tGH|\Psi(0)\rangle=-\delta t\partial_{\mu}j^{\mu}|\Psi(0)\rangle\neq 0, \tag{7.8}\] so the constraint is no longer satisfied and unphysical states enter the spectrum. Another sign that something goes wrong when implementing the Gauss law constraint in theories with gauge anomalies appears when computing the commutator of two \(G\)'s evaluated at different points. In the presence of a gauge anomaly, it is no longer zero [97, 98, 99] \[[G(\mathbf{r}),G(\mathbf{r}^{\prime})]=c\mathbf{B}(\mathbf{r})\cdot\mathbf{\nabla}\delta^{(3)}(\mathbf{r}-\mathbf{r}^{\prime}), \tag{7.9}\] where \(c\neq 0\) is a constant determined by the value of \(\partial_{\mu}j^{\mu}\). This result implies that \(G(\mathbf{r})|\text{phys}\rangle=0\) cannot be consistently imposed, since this condition would imply \([G(\mathbf{r}),G(\mathbf{r}^{\prime})]|\text{phys}\rangle=0\) whereas the right-hand side of (7.9) gives a nonzero result when acting on the state18. This being the case, spurious states cannot be factored out from the spectrum, with the upshot that the theory becomes inconsistent. Footnote 18: Something similar happens in the case of non-Abelian gauge theories that we will discuss in the next section. There, the commutator of two Gauss law operators acquires a central extension, \([G^{a}(\mathbf{r}),G^{b}(\mathbf{r}^{\prime})]=if^{abc}G^{c}(\mathbf{r})\delta^{(3)}(\mathbf{r}-\mathbf{r}^{\prime})+\mathcal{A}^{ab}(\mathbf{r},\mathbf{r}^{\prime})\), with \(G^{a}\equiv(\mathbf{D}\cdot\mathbf{E})^{a}-j^{a0}\) in this case. This shows that, when constructing consistent QFTs, gauge anomalies must not arise. This condition is a very powerful constraint in model building, since it limits both the type of fields that can be allowed in the actions and also their couplings. As we will see in Box 13 on page 104, in the SM this requirement completely fixes the hypercharges of quarks and leptons, up to a global normalization (see [96] for examples of anomaly cancellation in the SM and beyond). After this digression, we go back to the quantum mechanical definition of the axial-vector current (7.2) and the fate of its (pseudo)conservation (7.3). To simplify things, we consider the massless case where axial-vector transformations (7.1) are a symmetry of the classical action. A very convenient way to study this problem is to treat the gauge field as a classical external source coupling to the quantum Dirac field. This is made clear by denoting gauge fields and field strengths using calligraphic fonts as \(\mathscr{A}_{\mu}\) and \(\mathscr{F}_{\mu\nu}\) respectively.
Instead of working with operators, we deal with their vacuum expectation values in the presence of the background field and compute \(\langle J^{\mu}_{5}\rangle_{\mathscr{A}}\equiv\langle 0|J^{\mu}_{5}|0\rangle\) together with its divergence. This can be done using either the regularized operators introduced above (see, for example, [100] for a calculation using point-splitting regularization) or diagrammatic techniques. In the latter case, we need to compute the celebrated triangle diagrams \[\text{[triangle diagrams not reproduced here]} \tag{7.10}\] where at the left vertex of both diagrams (indicated by a dot) an axial-vector current is inserted, whereas the other two vertices are coupled to the external gauge field through the vector gauge currents. Since in these lectures we are not entering into the computation of Feynman graphs, we will not elaborate on how to calculate them. Details can be found in chapter 9 of ref. [14] or in [94]. Here we just give the final result for the anomaly of the axial-vector current \[\partial_{\mu}\langle J^{\mu}_{5}\rangle_{\mathscr{A}}=-\frac{e^{2}\hbar}{16\pi^{2}}\epsilon^{\mu\nu\alpha\beta}\mathscr{F}_{\mu\nu}\mathscr{F}_{\alpha\beta}. \tag{7.11}\] Despite having used natural units with \(\hbar=1\) all along, in this expression we have restored the powers of the Planck constant to make explicit the fact that the anomaly is a pure quantum effect. This crucial result has a long history. The diagrams in eq. (7.10) were computed in 1949 by Jack Steinberger [101] and later by Julian Schwinger in 1951 [102], in both cases in the context of the electromagnetic decay of neutral mesons19. Almost two decades later, the consequences of the triangle diagram for the quantum realization of the axial-vector symmetry of QED were pointed out by Stephen Adler [105] and John S. Bell and Roman Jackiw [106] in what are considered today the foundational papers of the subject of quantum anomalies. Footnote 19: Other early calculations of the triangle diagrams were carried out in 1949 by Hiroshi Fukuda and Yoneji Miyamoto [103] and by S. Ozaki, S. Oneda, and S. Sasaki [104]. There are some very important issues that should be mentioned concerning the calculation of the axial anomaly (7.11). We have stressed how the anomaly could be seen as originating from the need to regularize UV (i.e., short distance) divergences in the definition of the current or, alternatively, in the computation of the triangle diagrams. Nevertheless, using either method, we find a finite result in the limit in which the regulator is removed. In the language of QFT, we do not need to subtract and renormalize divergences to find the anomaly of the axial current. At the level of diagrams, what happens is that although the integrals are linearly divergent, this only results in an ambiguity in their value that is fixed by requiring the gauge (vector) current to be conserved. In the case of the point splitting calculation, introducing in the regularized vector current a Wilson line similar to the one inserted in eq. (7.4) to preserve gauge invariance, we are led to the axial anomaly after taking the \(\epsilon\to 0\) limit. Another important point to be stressed is a _tension_ between the conservation of the gauge and the axial-vector currents: we can impose the conservation of either of the two, _but not of both simultaneously_. After the above discussion of the dire consequences of violating gauge current conservation, the choice is clear enough.
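To make the right-hand side of (7.11) more tangible, note that the pseudoscalar density \(\epsilon^{\mu\nu\alpha\beta}\mathscr{F}_{\mu\nu}\mathscr{F}_{\alpha\beta}\) is, up to a convention-dependent numerical factor, just \(\mathbf{E}\cdot\mathbf{B}\). A small sympy sketch (illustrative only; it assumes \(\epsilon^{0123}=+1\), \(F_{0i}=E_{i}\) and \(F_{ij}=-\epsilon_{ijk}B_{k}\)) makes the proportionality explicit:

```python
import sympy as sp
from itertools import permutations

E = sp.symbols('E1 E2 E3')
B = sp.symbols('B1 B2 B3')

# Field strength with F_{0i} = E_i and F_{ij} = -epsilon_{ijk} B_k (one common convention)
F = sp.zeros(4, 4)
for i in range(3):
    F[0, i + 1], F[i + 1, 0] = E[i], -E[i]
for i, j, k in permutations(range(3)):
    F[i + 1, j + 1] = -sp.LeviCivita(i, j, k) * B[k]

# Contract epsilon^{mu nu alpha beta} F_{mu nu} F_{alpha beta}
contraction = sum(sp.LeviCivita(m, n, a, b) * F[m, n] * F[a, b]
                  for m in range(4) for n in range(4)
                  for a in range(4) for b in range(4))

print(sp.simplify(contraction))   # proportional to E.B (here -8*(E1*B1 + E2*B2 + E3*B3))
```

The sign and the factor of 8 depend on the conventions assumed above; the physical content is that the anomaly density is proportional to \(\mathbf{E}\cdot\mathbf{B}\).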
### The physical power of the anomaly

When studying the global symmetries of QCD, we have also encountered axial transformations [see Box 8, and in particular eq. (5.48)] and mentioned that this symmetry is anomalous. Now we can be more explicit. The axial-vector current of interest in this case is given by \[J_{5}^{\mu}=\overline{\mathbf{q}}\gamma_{5}\gamma^{\mu}\mathbf{q}, \tag{7.12}\] where a sum over color indices should be understood. Its anomaly comes from triangle diagrams similar to the ones shown in eq. (7.10), this time with quarks running in the loop. But, together with the triangles coupling to the electromagnetic external potential \(\mathscr{A}_{\mu}\), we also have a pair of triangles where the vertices on the right couple to the external gluon field \(\mathcal{A}_{\mu}^{a}\) (for this field, we also use calligraphic fonts to indicate that we are dealing with classical sources). This results in the anomaly \[\partial_{\mu}\langle J_{5}^{\mu}\rangle_{\mathscr{A},\mathcal{A}}=-\frac{N_{c}}{16\pi^{2}}\bigg{(}\sum_{f=1}^{N_{f}}q_{f}^{2}\bigg{)}\epsilon^{\mu\nu\alpha\beta}\mathscr{F}_{\mu\nu}\mathscr{F}_{\alpha\beta}-\frac{N_{f}}{16\pi^{2}}\epsilon^{\mu\nu\alpha\beta}\mathcal{F}_{\mu\nu}^{a}\mathcal{F}_{\alpha\beta}^{a}, \tag{7.13}\] where \(\mathcal{F}_{\mu\nu}^{a}\) is the non-Abelian field strength associated with the external gluon field and \(N_{c}\) is the number of colors. The coefficient of the first term is obtained by summing the expression of the axial anomaly given in (7.11) over all quarks running in the loop. As for the second, the quarks couple to the gluon fields through the gauge current \[J^{\mu a}=\overline{\mathbf{q}}\gamma^{\mu}\tau^{a}\mathbf{q}, \tag{7.14}\] where \(\tau^{a}\) are the generators of the fundamental representation of SU(3) acting on the color indices of each component of \(\mathbf{q}\). Since the axial current does not act on color indices, the prefactor is proportional to \((\operatorname{tr}\mathbbm{1})(\operatorname{tr}\left\{\tau^{a},\tau^{b}\right\})=N_{f}\delta^{ab}\), with \(\mathbbm{1}\) the identity in flavor space. Anomalies can also affect the global non-Abelian \(\text{SU}(N_{f})_{L}\times\text{SU}(N_{f})_{R}\) symmetry defined in (5.49). This global symmetry group can be rearranged in terms of vector and axial transformations \(\text{SU}(N_{f})_{L}\times\text{SU}(N_{f})_{R}=\text{SU}(N_{f})_{V}\times\text{SU}(N_{f})_{A}\) acting on the quark fields as \[\text{SU}(N_{f})_{V}:\mathbf{q}\to e^{i\alpha_{V}^{I}t_{\mathbf{f}}^{I}}\mathbf{q},\hskip 28.452756pt\text{SU}(N_{f})_{A}:\mathbf{q}\to e^{i\alpha_{A}^{I}t_{\mathbf{f}}^{I}\gamma_{5}}\mathbf{q}, \tag{7.15}\] with \(\mathbf{q}_{R}\) and \(\mathbf{q}_{L}\) transforming respectively with the same or opposite SU(\(N_{f}\)) parameters20. Vector currents, however, are always anomaly-free. A simple way to come to this conclusion is to notice that the PV regularization method introduced above preserves all vector symmetries, since these remain unbroken by fermion mass terms21. We thus focus on the chiral SU(\(N_{f}\))\({}_{A}\) factor, whose associated axial-vector current is Footnote 21: This argument also applies to the SU(3) gauge invariance of QCD, which cannot be anomalous since it acts in the same way on quarks of both chiralities. As a consequence, the theory can be regularized in a gauge invariant way. \[J_{5}^{I\mu}=\overline{\mathbf{q}}\gamma_{5}\gamma^{\mu}t_{\mathbf{f}}^{I}\mathbf{q}, \tag{7.16}\] where, again, there is a tacit sum over the quark color index.
As in the case of the singlet current (7.12), there are contributions coming from the photon and gluon couplings of the quarks. Taking into account that, unlike photons, gluons are flavor-blind, we find \[\partial_{\mu}\langle J_{5}^{I\mu}\rangle_{\mathscr{A},\mathcal{A}}=-\frac{N_{c}}{16\pi^{2}}\bigg{[}\sum_{f=1}^{N_{f}}q_{f}^{2}(t_{\mathbf{f}}^{I})_{ff}\bigg{]}\epsilon^{\mu\nu\alpha\beta}\mathscr{F}_{\mu\nu}\mathscr{F}_{\alpha\beta}-\frac{N_{f}}{16\pi^{2}}\big{(}\mathrm{tr}\,t_{\mathbf{f}}^{I}\big{)}\epsilon^{\mu\nu\alpha\beta}\mathcal{F}_{\mu\nu}^{a}\mathcal{F}_{\alpha\beta}^{a}. \tag{7.17}\] Since all generators of SU(\(N_{f}\)) are traceless, the second term is zero, but the first one does not necessarily vanish. Let us focus on the dynamics of the two lightest quarks \(u\) and \(d\), where \(q_{u}=\frac{2}{3}e\) and \(q_{d}=-\frac{1}{3}e\). In this case \(N_{f}=2\) and the flavor group is generated by \(t_{\mathbf{f}}^{I}=\frac{1}{2}\sigma^{I}\), with \(\sigma^{I}\) the Pauli matrices. We have then \[\sum_{f=1}^{2}q_{f}^{2}(t_{\mathbf{f}}^{1})_{ff}=\sum_{f=1}^{2}q_{f}^{2}(t_{\mathbf{f}}^{2})_{ff}=0,\hskip 28.452756pt\sum_{f=1}^{2}q_{f}^{2}(t_{\mathbf{f}}^{3})_{ff}=\frac{e^{2}}{6}, \tag{7.18}\] where \(N_{c}\) is the number of quark colors. This means that \(J_{5}^{3\mu}\) is anomalous \[\partial_{\mu}\langle J_{5}^{3\mu}\rangle_{\mathscr{A},\mathcal{A}}=-\frac{e^{2}N_{c}}{48\pi^{2}}\epsilon^{\mu\nu\alpha\beta}\mathscr{F}_{\mu\nu}\mathscr{F}_{\alpha\beta}. \tag{7.19}\] The physical importance of this result lies in that after chiral symmetry breaking (see Box 8 on page 65), the operator \(\partial_{\mu}J_{5}^{a\mu}\) becomes the interpolating field for pions, creating them out of the vacuum22 Footnote 22: The first identity follows from \(\langle\pi^{a}(p)|J_{5}^{b\mu}(x)|0\rangle\sim p^{\mu}\delta^{ab}e^{-ip\cdot x}\), a direct consequence of the Goldstone theorem [79]. \[\langle\pi^{a}(p)|\partial_{\mu}J_{5}^{b\mu}(x)|0\rangle=f_{\pi}m_{\pi}^{2}\delta^{ab}e^{-ip\cdot x}\hskip 28.452756pt\Longrightarrow\hskip 28.452756pt\pi^{a}(x)=\frac{1}{f_{\pi}m_{\pi}^{2}}\partial_{\mu}J_{5}^{a\mu}(x), \tag{7.20}\] where \(m_{\pi}\) is the pion mass and \(f_{\pi}\) the pion decay constant introduced in eq. (5.54) to parametrize the matrix of NG bosons resulting from chiral symmetry breaking. Although to compute the anomaly (7.19) we took the electromagnetic field to be a classical source, the corresponding operator identity implies the existence of a nontrivial overlap between the neutral pion state and the state with two photons \[\langle\mathbf{k}_{1},\lambda_{1};\mathbf{k}_{2},\lambda_{2}|\pi^{0}(p)\rangle=\frac{e^{2}N_{c}}{12\pi^{2}f_{\pi}}(2\pi)^{4}\delta^{(4)}(p-k_{1}-k_{2})\epsilon_{\mu\nu\alpha\beta}k_{1}^{\mu}k_{2}^{\nu}\epsilon^{\alpha}(\mathbf{k}_{1})\epsilon^{\beta}(\mathbf{k}_{2}). \tag{7.21}\] The width of the process can be computed from this result to be \[\Gamma(\pi^{0}\to 2\gamma)=\frac{\alpha^{2}N_{c}^{2}m_{\pi}^{3}}{576\pi^{3}f_{\pi}^{2}}=7.73\,\text{eV}, \tag{7.22}\] which is perfectly consistent with experimental measurements [107] \[\Gamma(\pi^{0}\to 2\gamma)_{\text{exp}}=7.798\pm 0.056\text{ (stat.)}\pm 0.109\text{ (syst.)}\,\text{eV}. \tag{7.23}\] Incidentally, the presence of \(f_{\pi}=93\,\text{MeV}\) in eq. (7.22) gives a rationale for it being called the pion decay constant. The electromagnetic decay of the neutral pion is a direct consequence of the existence of the axial anomaly.
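Both the group-theory factor in (7.18) and the width (7.22) can be reproduced with a few lines of arithmetic. In the sketch below (illustrative only; the inputs \(\alpha\simeq 1/137\), \(m_{\pi^{0}}\simeq 135\,\text{MeV}\) and the value \(f_{\pi}=93\,\text{MeV}\) quoted above are rounded, which is why the result comes out slightly below 7.73 eV):

```python
import numpy as np

# Anomaly coefficient of eq. (7.18): sum over u, d of q_f^2 (t^3)_ff with t^3 = sigma^3/2
charges = {"u": 2/3, "d": -1/3}   # quark charges in units of e
t3 = {"u": +1/2, "d": -1/2}       # diagonal entries of t^3
coeff = sum(charges[f]**2 * t3[f] for f in charges)
print("sum_f q_f^2 (t^3)_ff =", coeff, "(expected 1/6 ~ 0.1667, in units of e^2)")

# Width of pi0 -> 2 gamma from eq. (7.22), with rounded illustrative inputs
alpha = 1 / 137.036        # fine-structure constant
N_c = 3                    # number of colors
m_pi = 134.977e-3          # neutral pion mass in GeV
f_pi = 93e-3               # pion decay constant in GeV, as quoted in the text
Gamma = alpha**2 * N_c**2 * m_pi**3 / (576 * np.pi**3 * f_pi**2)   # in GeV
print("Gamma(pi0 -> 2 gamma) ~", Gamma * 1e9, "eV")   # ~ 7.6 eV
```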
On general grounds, it can be argued that the amplitude for the decay process of the \(\pi^{0}\) into two photons has the structure \[\langle{\bf k}_{1},\lambda_{1};{\bf k}_{2},\lambda_{2}|\pi^{0}(p)\rangle=i\frac{p^{2}-m_{\pi}^{2}}{f_{\pi}m_{\pi}^{2}}p^{2}f(p^{2})(2\pi)^{4}\delta^{(4)}(p-k_{1}-k_{2})\epsilon_{\mu\nu\alpha\beta}k_{1}^{\mu}k_{2}^{\nu}\epsilon^{\alpha}({\bf k}_{1})\epsilon^{\beta}({\bf k}_{2}), \tag{7.24}\] with \(f(p^{2})\) a function of the pion squared momentum. We could naively assume \(f(p^{2})\) to be regular at \(p^{2}=0\), its only singularities being a pole at \(p^{2}=m_{\pi}^{2}\) and a branch cut starting at \(9m_{\pi}^{2}\) signalling multi-pion production (see fig. 10). Were this the case, the amplitude would be suppressed in the \(p^{2}\to 0\) limit. Historically, this result was known as the Sutherland-Veltman theorem [108, 109] and essentially ruled out the existence of the process \(\pi^{0}\to 2\gamma\), which was nevertheless observed. The catch lies in that the regularity hypothesis concerning \(f(p^{2})\), called partial conservation of the axial current (PCAC), is wrong due to the axial anomaly. The calculation of the triangle diagrams (7.10) shows that this function is not regular at zero momentum, but actually has a pole \[f(p^{2})\sim\frac{ie^{2}N_{c}}{12\pi}\frac{1}{p^{2}}\qquad\text{as}\qquad p\to 0. \tag{7.25}\] This singularity is precisely responsible for compensating the low-momentum suppression of the amplitude (7.24), giving the nonzero result accounting for the \(\pi^{0}\to 2\gamma\) decay. It is somewhat fascinating that the anomaly, which we identified from the start as resulting from UV ambiguities in the definition of the current, is also associated with an IR pole and determined by its residue. This reflects the profound topological connexions of QFT anomalies [93, 94, 95, 96].

Figure 10: Complex \(p^{2}\)-plane showing the structure of singularities of the function \(f(p^{2})\) in eq. (7.24): a pole at \(p^{2}=m_{\pi}^{2}\) and a branch cut beginning at \(p^{2}=9m_{\pi}^{2}\).

**Box 10. The path integral way to the anomaly**

There are many different roads leading to the chiral anomaly. For our presentation above we have chosen the perturbative approach, involving the computation of the two one-loop triangle diagrams shown in eq. (7.10). But the anomaly can also be computed using path integrals, where it appears as a result of the noninvariance of the functional measure under chiral rotations of the Dirac fermions. To see how this comes about, let us consider again a Dirac fermion coupled to an external electromagnetic field \(\mathscr{A}_{\mu}\) that we treat as a classical source. Its action is given by \[S[\psi,\overline{\psi},\mathscr{A}_{\mu}] =\int d^{4}x\,\overline{\psi}\gamma^{\mu}\big{(}i\partial_{\mu}+e\mathscr{A}_{\mu}\big{)}\psi\] \[=\int d^{4}x\,\Big{[}\overline{\psi}_{R}\gamma^{\mu}\big{(}i\partial_{\mu}+e\mathscr{A}_{\mu}\big{)}\psi_{R}+\overline{\psi}_{L}\gamma^{\mu}\big{(}i\partial_{\mu}+e\mathscr{A}_{\mu}\big{)}\psi_{L}\Big{]}, \tag{7.26}\] where in the second line we split the Dirac fermion into its two chiralities. A quantum effective action \(\Gamma[\mathscr{A}_{\mu}]\) for the external field can be defined by integrating out the fermions \[e^{i\Gamma[\mathscr{A}_{\mu}]}=\int\mathscr{D}\overline{\psi}\mathscr{D}\psi\,e^{iS[\psi,\overline{\psi},\mathscr{A}_{\mu}]}. \tag{7.27}\] The important point in this expression is that the Dirac fields are dummy variables that can be modified without changing the value of the functional integral. 
In particular, we can implement the following "change of variables" \[\psi=e^{i\alpha\gamma_{5}}\psi^{\prime}\qquad\implies\qquad\psi_{R,L}=e^{\pm i\alpha}\psi^{\prime}_{R,L}, \tag{7.28}\] writing the original Dirac field in terms of its chiral-transformed counterpart [see eq. (7.1)]. As we know, in the absence of a Dirac mass term the fermion action does not change \[S[\psi,\overline{\psi},\mathscr{A}_{\mu}]=S[\psi^{\prime},\overline{\psi}^{\prime},\mathscr{A}_{\mu}], \tag{7.29}\] reflecting the classical chiral invariance of the massless theory. However, we have to be careful when implementing this change in the integral (7.27). The reason is that we have to properly transform the fermion integration measure, which in principle might pick up a nontrivial Jacobian. Since the transformation is linear in the fermions, this Jacobian can only depend on the external sources, as well as on the transformation parameter \(\alpha\) \[\mathscr{D}\overline{\psi}\mathscr{D}\psi=J[\mathscr{A}_{\mu}]\mathscr{D}\overline{\psi}^{\prime}\mathscr{D}\psi^{\prime}. \tag{7.30}\] Taking this into account, we go back to (7.27) that now reads \[e^{i\Gamma[\mathscr{A}_{\mu}]}=\int\mathscr{D}\overline{\psi}^{\prime}\mathscr{D}\psi^{\prime}\,e^{iS[\psi^{\prime},\overline{\psi}^{\prime},\mathscr{A}_{\mu}]+\log J[\mathscr{A}_{\mu}]}\equiv\int\mathscr{D}\overline{\psi}^{\prime}\mathscr{D}\psi^{\prime}\,e^{iS^{\prime}[\psi^{\prime},\overline{\psi}^{\prime},\mathscr{A}_{\mu}]}. \tag{7.31}\] Thus, the effective action can be computed in the new variables provided we use the new fermion action \(S^{\prime}[\psi^{\prime},\overline{\psi}^{\prime},\mathscr{A}_{\mu}]\) including an additional term \[S^{\prime}[\psi^{\prime},\overline{\psi}^{\prime},\mathscr{A}_{\mu}]=\int d^{4}x\,\overline{\psi}^{\prime}\gamma^{\mu}\big{(}i\partial_{\mu}+e\mathscr{A}_{\mu}\big{)}\psi^{\prime}-i\log J[\mathscr{A}_{\mu}], \tag{7.32}\] that, coming from the functional measure, is obviously a pure quantum effect. A convenient way to compute the Jacobian is by expanding the Dirac fermions in a basis of eigenstates of the Dirac operator \(\not{D}(\mathscr{A})\equiv\gamma^{\mu}(\partial_{\mu}-ie\mathscr{A}_{\mu})\). Using a regularization method preserving gauge invariance, a finite result is obtained [95, 110, 111] \[-i\log J[\mathscr{A}_{\mu}]=\frac{e^{2}\alpha}{16\pi^{2}}\int d^{4}x\,\epsilon^{\mu\nu\alpha\beta}\mathscr{F}_{\mu\nu}\mathscr{F}_{\alpha\beta}. \tag{7.33}\] Notice that in the case of massive fermions the change (7.28) also introduces, besides the quantum anomalous term, a complex phase in the mass which has a classical origin \[S^{\prime}[\psi^{\prime},\overline{\psi}^{\prime},\mathscr{A}_{\mu}] =\int d^{4}x\,\Big{[}\overline{\psi}^{\prime}_{R}\gamma^{\mu}\big{(}i\partial_{\mu}+e\mathscr{A}_{\mu}\big{)}\psi^{\prime}_{R}+\overline{\psi}^{\prime}_{L}\gamma^{\mu}\big{(}i\partial_{\mu}+e\mathscr{A}_{\mu}\big{)}\psi^{\prime}_{L}\] \[+m\big{(}e^{-2i\alpha}\overline{\psi}^{\prime}_{R}\psi^{\prime}_{L}+e^{2i\alpha}\overline{\psi}^{\prime}_{L}\psi^{\prime}_{R}\big{)}\Big{]}+\frac{e^{2}\alpha}{16\pi^{2}}\int d^{4}x\,\epsilon^{\mu\nu\alpha\beta}\mathscr{F}_{\mu\nu}\mathscr{F}_{\alpha\beta}. \tag{7.34}\] The last term associated to the nonzero Jacobian is just the integrated form of the chiral anomaly found in (7.11). The analysis just presented will be useful in analyzing the strong CP problem in the next section. 
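Whichever route one takes, triangle diagrams or the functional measure, the \(\epsilon^{\mu\nu\alpha\beta}\) structure of the anomaly ultimately traces back to the Dirac-algebra identity \(\operatorname{tr}\left[\gamma_{5}\gamma^{\mu}\gamma^{\nu}\gamma^{\alpha}\gamma^{\beta}\right]\propto i\,\epsilon^{\mu\nu\alpha\beta}\). The sketch below checks this numerically in the Dirac representation; the representation, metric signature and the sign of \(\epsilon^{0123}\) are assumptions of the example and are not fixed by the text.

```python
import numpy as np

# Dirac matrices in the Dirac representation (assumed convention), metric (+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]              # gamma^0
gamma += [np.block([[Z2, s], [-s, Z2]]).astype(complex) for s in sig]  # gamma^1,2,3
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]                    # gamma_5

def eps(*idx):
    """Levi-Civita symbol with eps(0,1,2,3) = +1."""
    if len(set(idx)) < 4:
        return 0
    sign, p = 1, list(idx)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    return sign

# Compare tr(gamma_5 gamma^mu gamma^nu gamma^alpha gamma^beta) with the Levi-Civita symbol.
for idx in [(0, 1, 2, 3), (1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 2, 2)]:
    m, n, a, b = idx
    tr = np.trace(g5 @ gamma[m] @ gamma[n] @ gamma[a] @ gamma[b])
    print(idx, tr, eps(*idx))   # trace = (-4i) * eps with these conventions
```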
## 8 The strong CP problem and axions

When studying magnetic monopoles in Box 5 (see page 7.2), we discussed the possibility of having non-trivial gauge field topologies. In this section, we are going to look deeper into the role played by topology in non-Abelian gauge field theories and study how nonequivalent topological gauge field configurations define different vacua of the theory.

### The (infinitely) many vacua of QCD

To fix ideas, let us consider pure YM theory in the temporal gauge \(A_{0}^{a}=0\), preserved by the set \(\mathscr{G}\) of time-independent gauge transformations \(g(\mathbf{r})\). Adding to the Euclidean space \(\mathbb{R}^{\,3}\) the point at infinity, it gets compactified to a three-sphere, \(\mathbb{R}^{\,3}\cup\{\infty\}\simeq S^{3}\). Thus, the residual gauge transformations in \(\mathscr{G}\) define maps from \(S^{3}\) onto the gauge group23 Footnote 23: At a more physical level, the compactification of \(\mathbb{R}^{\,3}\) to \(S^{3}\) amounts to requiring that all fields, as well as gauge transformations, have well-defined limits as \(|\mathbf{r}|\to\infty\), independent of the direction along which the limit is taken. \[\mathscr{G}:S^{3}\longrightarrow G. \tag{8.1}\] The space \(\mathscr{G}\) consists of infinitely many topologically nonequivalent sectors classified by the third homotopy group \(\pi_{3}(G)\)[57, 58, 59, 60]. As an example, let us consider a gauge theory with group \(G=\) SU(2). This Lie group is topologically equivalent to a three-dimensional sphere \(S^{3}\), as can be seen by writing \[g=n^{0}\mathbb{1}+i\mathbf{n}\cdot\boldsymbol{\sigma}, \tag{8.2}\] with \(n^{0}\) and \(\mathbf{n}=(n^{1},n^{2},n^{3})\) real. Both unitarity \[g^{\dagger}g=gg^{\dagger}=\big{[}(n^{0})^{2}+\mathbf{n}^{2}\big{]}\mathbb{1}=\mathbb{1}, \tag{8.3}\] and the requirement of unit determinant \[\det g=(n^{0})^{2}+\mathbf{n}^{2}=1, \tag{8.4}\] lead to the condition \[(n^{0})^{2}+\mathbf{n}^{2}=1, \tag{8.5}\] so \((n^{0},\mathbf{n})\) parametrizes the unit three-sphere \(S^{3}\). Since \(\pi_{3}(S^{3})=\mathbb{Z}\), the set of time-independent SU(2) gauge transformations decomposes into topologically nonequivalent sectors \[\mathscr{G}=\bigcup_{n\in\mathbb{Z}}\mathscr{G}_{n}, \tag{8.6}\] where \(n\) is the winding number of the map \(S^{3}\to S^{3}\). For a gauge transformation \(g(\mathbf{r})\), its winding number can be shown to be \[n=\frac{1}{24\pi^{2}}\int_{S^{3}}d^{3}r\,\epsilon_{ijk}\mathrm{tr}\,\Big{[}(g^{-1}\partial_{i}g)(g^{-1}\partial_{j}g)(g^{-1}\partial_{k}g)\Big{]}. \tag{8.7}\] Moreover, two gauge transformations can be continuously deformed into one another only when they share the same winding number, with \(\mathscr{G}_{0}\) the identity's connected component. Additivity is an important property of the winding number. Given \(g\in\mathscr{G}_{n}\) and \(g^{\prime}\in\mathscr{G}_{n^{\prime}}\), their product \(gg^{\prime}\) has winding number \[n_{gg^{\prime}}=n_{g}+n_{g^{\prime}}, \tag{8.8}\] and in particular \(n_{g^{-1}}=-n_{g}\). This, together with the fact that \(\mathbb{1}\in\mathscr{G}_{0}\), shows that \(\mathscr{G}_{0}\) is the only sector forming a subgroup. From the discussion in section 6, we learn that physical states are preserved by "small" gauge transformations in \(\mathscr{G}_{0}\) provided they satisfy the Gauss law (6.20). 
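Equation (8.7) lends itself to a direct numerical test. The sketch below is purely illustrative: the parametrization of \(S^{3}\) by hyperspherical coordinates and the family of maps \(g_{k}=\exp(ik\chi\,\hat{\mathbf{m}}\cdot\boldsymbol{\sigma})\), which covers SU(2) \(k\) times, are choices made for the example, not constructions taken from the text. The integral should come out close to \(k\).

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def g(chi, th, ph, k):
    """SU(2) element exp(i k chi m.sigma): a winding-number-k map S^3 -> SU(2)."""
    m_sigma = np.sin(th)*np.cos(ph)*s[0] + np.sin(th)*np.sin(ph)*s[1] + np.cos(th)*s[2]
    return np.cos(k*chi)*I2 + 1j*np.sin(k*chi)*m_sigma

def winding(k, N=30, h=1e-5):
    """Midpoint-rule evaluation of eq. (8.7) in hyperspherical coordinates (chi, theta, phi)."""
    chis = (np.arange(N) + 0.5) * np.pi / N
    ths  = (np.arange(N) + 0.5) * np.pi / N
    phs  = (np.arange(N) + 0.5) * 2 * np.pi / N
    dV = (np.pi / N) * (np.pi / N) * (2 * np.pi / N)
    total = 0.0
    for chi in chis:
        for th in ths:
            for ph in phs:
                ginv = g(chi, th, ph, k).conj().T      # unitary, so inverse = dagger
                # Maurer-Cartan components g^{-1} d_a g via central differences
                A = [ginv @ (g(chi + d[0], th + d[1], ph + d[2], k)
                             - g(chi - d[0], th - d[1], ph - d[2], k)) / (2 * h)
                     for d in ((h, 0, 0), (0, h, 0), (0, 0, h))]
                # epsilon^{abc} tr(A_a A_b A_c) = 3 tr(A_chi [A_theta, A_phi])
                total += (3 * np.trace(A[0] @ (A[1] @ A[2] - A[2] @ A[1]))).real * dV
    return total / (24 * np.pi**2)

for k in (1, 2):
    print(k, round(winding(k), 3))   # expect approximately 1.0 and 2.0
```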
As for transformations in \(\mathscr{G}_{n}\) with \(n\neq 0\), keeping in mind that quantum states are rays in a Hilbert space defined up to a global complex phase we conclude that physical invariance under a transformation \(g_{1}\in\mathscr{G}_{1}\) requires \[g_{1}|\mathrm{phys}\rangle=e^{i\theta}|\mathrm{phys}\rangle, \tag{8.9}\] for some \(\theta\in\mathbb{R}\). This number should be independent of the state, since otherwise gauge transformations would give rise to observable interference. Another relevant fact to notice is that the value of \(\theta\) is also independent of the transformation in \(\mathscr{G}_{1}\). To see this, let us consider \(g_{1},g_{1}^{\prime}\in\mathscr{G}_{1}\) and assume that \[g_{1}|\text{phys}\rangle=e^{i\theta}|\text{phys}\rangle,\hskip 28.452756ptg_{1}^{ \prime}|\text{phys}\rangle=e^{i\theta^{\prime}}|\text{phys}\rangle. \tag{8.10}\] Since by additivity of the winding number \(g_{1}^{\prime}g_{1}^{-1}\in\mathscr{G}_{0}\), and transformations in the connected component of the identity leave the physical states invariant without any complex phase, we immediately conclude that \(\theta^{\prime}=\theta\). Using a similar argument it is straightforward to show that for \(g_{n}\in\mathscr{G}_{n}\) \[g_{n}|\text{phys}\rangle=e^{in\theta}|\text{phys}\rangle. \tag{8.11}\] The conclusion is that a single actual number \(\theta\) determines the action of all gauge transformations on physical states. We can reach the same conclusion about the vacuum structure of YM theories in a different way. Besides the gauge kinetic term in the action (6.13), there is also a second admissible gauge invariant term \[S_{\theta} =-\frac{\theta}{32\pi^{2}}\int d^{4}x\,F^{a}_{\mu\nu}\widetilde{ F}^{a\mu\nu}\] \[=-\frac{\theta}{8\pi^{2}}\int d^{4}x\,\mathbf{E}^{a}\cdot \mathbf{B}^{a}, \tag{8.12}\] where \(\widetilde{F}^{a}_{\mu\nu}\) is the non-Abelian analog of the dual tensor field introduced in eq. (3.45), defined as \[\widetilde{F}^{a}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{a\alpha \beta}. \tag{8.13}\] What makes the \(\theta\)-term (8.12) interesting is that it is the integral of a total derivative \[\epsilon^{\mu\nu\alpha\beta}F^{a}_{\mu\nu}F^{a}_{\alpha\beta}=\partial_{\mu }\mathscr{J}^{\mu}, \tag{8.14}\] and therefore does not contribute to the field equations. The current on the right-hand side of the previous equation takes the form (see Box 11 below for a rather simple derivation of this result) \[\mathscr{J}^{\mu}=4\epsilon^{\mu\nu\alpha\beta}\left(A^{a}_{\nu}\partial_{ \alpha}A^{a}_{\beta}+\frac{1}{3}f^{abc}A^{a}_{\nu}A^{b}_{\alpha}A^{c}_{\beta} \right). \tag{8.15}\] In the \(A^{a}_{0}=0\) gauge, we have \[\epsilon^{\mu\nu\alpha\beta}F^{a}_{\mu\nu}F^{a}_{\alpha\beta}=4\frac{\partial }{\partial t}\left[\mathbf{A}^{a}\cdot\left(\boldsymbol{\nabla}\times \mathbf{A}^{a}\right)+\frac{1}{3}f^{abc}\mathbf{A}^{a}\cdot\left(\mathbf{A}^{ b}\times\mathbf{A}^{c}\right)\right], \tag{8.16}\] which once integrated and with the proper normalization gives the following expression of the \(\theta\)-term \[S_{\theta}=-\frac{\theta}{8\pi^{2}}\left\{\int d^{3}r\,\left[\mathbf{A}^{a} \cdot\left(\boldsymbol{\nabla}\times\mathbf{A}^{a}\right)+\frac{1}{3}f^{abc} \mathbf{A}^{a}\cdot\left(\mathbf{A}^{b}\times\mathbf{A}^{c}\right)\right] \right|_{t=\infty}\] \[-\int d^{3}r\,\left.\left[{\bf A}^{a}\cdot\left({\mathbf{\nabla}}\times{\bf A}^{a} \right)+\frac{1}{3}f^{abc}{\bf A}^{a}\cdot\left({\bf A}^{b}\times{\bf A}^{c} \right)\right]\right|_{t=-\infty}\right\}. 
\tag{8.17}\] To ensure finiteness, we take the gauge field \({\bf A}={\bf A}^{a}T^{a}_{\bf R}\) to approach pure-gauge configurations \({\bf A}_{\pm}=g_{\pm}^{-1}{\mathbf{\nabla}}g_{\pm}\) at \(t=\pm\infty\) (see fig. 11). It is easy to see that the integrands in eq. (8.17) are not gauge invariant and therefore the \(\theta\)-term is nonzero (again, a derivation is outlined in Box 11) \[S_{\theta} =\frac{\theta}{24\pi^{2}}\int d^{3}r\,{\rm tr}\left\{(g_{+}^{-1} {\mathbf{\nabla}}g_{+})\cdot\left[(g_{+}^{-1}{\mathbf{\nabla}}g_{+})\times(g_{+}^{-1} {\mathbf{\nabla}}g_{+})\right]\right\}\] \[-\frac{\theta}{24\pi^{2}}\int d^{3}r\,{\rm tr}\left\{(g_{-}^{-1} {\mathbf{\nabla}}g_{-})\cdot\left[(g_{-}^{-1}{\mathbf{\nabla}}g_{-})\times(g_{-}^{-1} {\mathbf{\nabla}}g_{-})\right]\right\}. \tag{8.18}\] Comparing with eq. (8.7), we identify the winding numbers \(n_{\pm}\) of the asymptotic gauge transformations \(g_{\pm}\), to write \[S_{\theta}=(n_{+}-n_{-})\theta. \tag{8.19}\] Thus, non-Abelian gauge field configurations are classified into topological sectors interpolating between early and late time configurations of definite winding number \(n_{\pm}\). These sectors are labelled by the integer \(n=n_{+}-n_{-}\), and when summing in the Feynman path integral over all gauge configurations we also have to include all possible sectors. Each one is weighted by the same phase \[e^{iS_{\theta}}=e^{in\theta}, \tag{8.20}\] that we encountered in eq. (8.11). **Box 11. Gauge fields and differential forms** The analysis of YM theories gets very much simplified in the language of differential forms [57, 58, 59, 60]. The gauge field \(A_{\mu}=A_{\mu}^{a}T_{\bf R}^{a}\) can be recast as the Lie algebra valued one-form \[A=-iA_{\mu}dx^{\mu}, \tag{8.21}\] while the two-form field strength is given by \[F\equiv-\frac{i}{2}F_{\mu\nu}dx^{\mu}\wedge dx^{\nu}=dA+A\wedge A, \tag{8.22}\] where in the second term on the right-hand side a matrix multiplication of the one-forms is also understood (in the Abelian case the matrices commute and the term vanishes due to the anticommutativity of the wedge product). The factor of \(-i\) in both eqs. (8.21) and (8.22) is introduced to avoid cluttering expressions with powers of \(i\). Gauge transformations are determined by a zero-form \(g\in\mathscr{G}\) acting on the gauge field one-form as [cf. (6.6)] \[A\longrightarrow A^{\prime}=g^{-1}dg+g^{-1}Ag. \tag{8.23}\] This leads to the corresponding transformation of the field strength \[F\longrightarrow F^{\prime} =dA^{\prime}+A^{\prime}\wedge A^{\prime}\] \[=g^{-1}Fg, \tag{8.24}\] that once written in components agrees with the one given in eq. (6.9). In fact, given an adjoint \(p\)-form field \[\Phi_{p}=-\frac{i}{p!}\Phi_{\mu_{1}\ldots\mu_{p}}dx^{\mu_{1}} \wedge\ldots\wedge dx^{\mu_{p}}\quad\implies\quad\Phi_{p}\to\Phi_{p}^{\prime} =g^{-1}\Phi_{p}g, \tag{8.25}\] a covariant exterior derivative is defined acting as \[D\Phi_{p}\equiv d\Phi_{p}+A\wedge\Phi_{p}-(-1)^{p}\Phi_{p}\wedge A \quad\implies\quad(D\Phi_{p})^{\prime}=g^{-1}(D\Phi_{p})g, \tag{8.26}\] satisfying the Leibniz rule \[D(\Phi_{p}\wedge\Psi_{q})=(D\Phi_{p})\wedge\Psi_{q}+(-1)^{p} \Phi_{p}\wedge(D\Psi_{q}). \tag{8.27}\] Using these definitions and properties, it is easy to check that the field strength two-form (8.22) verifies the Bianchi identity \(DF=0\). In four dimensions there are two gauge invariant four-forms that can be constructed from the field-strength two-form. 
The first one is \[\operatorname{tr}\left(F\wedge\star F\right), \tag{8.28}\] where \(\star\) denotes the Hodge dual, acting on a \(p\)-form field as [58] \[\star\Phi_{p}=-\frac{i}{p!(4-p)!}\epsilon^{\mu_{1}\ldots\mu_{p}}{}_{ \nu_{1}\ldots\nu_{4-p}}\Phi_{\mu_{1}\ldots\mu_{p}}dx^{\nu_{1}}\wedge\ldots \wedge dx^{\nu_{4-p}}. \tag{8.29}\] Since this operation commutes with the multiplication by a zero-form, the gauge invariance of (8.28) follows directly from applying the cyclic property of the trace. In addition, we can also construct a second gauge invariant four-form \[\operatorname{tr}\left(F\wedge F\right), \tag{8.30}\] so the action of pure YM theory without matter couplings can be written as \[\mathcal{S}_{\text{YM}}=\frac{1}{2g_{\text{YM}}^{2}}\int_{\mathcal{M}_{4}} \operatorname{tr}\left(F\wedge\star F\right)+\frac{\theta}{8\pi^{2}}\int_{ \mathcal{M}_{4}}\operatorname{tr}\left(F\wedge F\right), \tag{8.31}\] where \(\mathcal{M}_{4}\) represents the four-dimensional spacetime. The two terms correspond respectively to the kinetic and \(\theta\) terms given in components in eqs. (6.13) and (8.12). Incidentally, notice that while the term inside the first integral is always a maximal form in any dimension, the one in the second term is only maximal in \(D=4\). In fact, no analog of the \(\theta\)-term exits in odd-dimensional spacetimes. Although in these lectures we are restricting our attention to (flat) Minkowski spacetime, QFTs can also be defined in curved spacetimes. In this respect, the action (8.31) written in terms of differential forms is also valid for non-flat metrics. An interesting difference between the two terms is that, while the first one depends on the spacetime metric the \(\theta\)-term does not and is therefore topological. Metric dependence is actually signaled by the presence of the Hodge dual in the action. Another relevant fact that can be easily shown using differential forms is that the \(\theta\)-term is a total derivative, as we saw in eq. (8.16). Indeed, eq. (8.30) can be explicitly written in terms of the gauge field one-form as \[\operatorname{tr}\left(F\wedge F\right) =\operatorname{tr}\left(dA\wedge dA+2dA\wedge A\wedge A+A\wedge A \wedge A\wedge A\right)\] \[=d\operatorname{tr}\,\left(A\wedge dA+\frac{2}{3}A\wedge A\wedge A \right), \tag{8.32}\] where we have used that \(\operatorname{tr}\left(A\wedge A\wedge A\wedge A\right)=0\), as a result of the anticommutativity of one-forms and the trace's cyclic property. Using the properties of the Hodge dual operator, we finally write \[\star\operatorname{tr}\left(F\wedge F\right)=d^{\dagger}J, \tag{8.33}\] where \(d^{\dagger}\equiv\star d\star\) is the adjoint exterior derivative [58] and \(J\) is the current one form \[J=\star\operatorname{tr}\,\left(A\wedge dA+\frac{2}{3}A\wedge A\wedge A \right). \tag{8.34}\] Once expressed in components we retrieve eq. (8.16). The trace on the right-hand side of (8.34) defines the _Chern-Simons form_. 
Applying (8.23) and after some algebra we obtain its gauge transformation \[\omega_{3}(A)\equiv\mathrm{tr}\,\left(A\wedge dA+\frac{2}{3}A\wedge A\wedge A\right)\quad\implies\quad\omega_{3}(A^{\prime})=\omega_{3}(A)-\frac{1}{3}\mathrm{tr}\,\Big{[}(g^{-1}dg)\wedge(g^{-1}dg)\wedge(g^{-1}dg)\Big{]}+d\,\mathrm{tr}\,\big{(}dg\,g^{-1}\wedge A\big{)}.\] The last term is exact and drops out upon integration, whereas the second one, once integrated over a spatial slice, is proportional to the winding number (8.7) of the gauge transformation. This is the origin of the noninvariance of the integrands in eq. (8.17) and of the result (8.19).

### Breaking CP strongly

A significant feature of the \(\theta\)-term (8.12) is that it violates both parity and CP, the combination of parity and charge conjugation, \[\text{CP}:\left\{\begin{array}{ccc}\mathbf{E}^{a}(t,\mathbf{r})&\longrightarrow&\mathbf{E}^{a}(t,-\mathbf{r})\\ \mathbf{B}^{a}(t,\mathbf{r})&\longrightarrow&-\mathbf{B}^{a}(t,-\mathbf{r})\end{array}\right.\qquad\Longrightarrow\qquad\text{CP}:S_{\theta}\longrightarrow-S_{\theta}. \tag{8.40}\] To understand these transformations heuristically, we can use the analogy with Maxwell's electric and magnetic fields to conclude that \(\mathbf{E}^{a}\) is reversed by both parity and charge conjugation, whereas the pseudovector \(\mathbf{B}^{a}\) is preserved by the former and reversed by the latter. Notice that since CPT is a symmetry of QFT, a breaking of CP is equivalent to a violation of time reversal T. Among the phenomena where CP (or T) violation can manifest itself in QCD is the existence of a nonvanishing electric dipole moment of the neutron (see, for example, [115, 116] for reviews). To be clear, were neutrons elementary, we would not expect them to have an electric dipole moment. But being composed of three valence quarks with different charges, a nonvanishing value may appear depending on the quark distribution. To estimate its size, let us consider a classical picture of the neutron assuming a structure similar to the water molecule (see fig. 12): the two \(d\) quarks are located at a distance \(\ell\) from the \(u\) quark and their position vectors \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) span an angle \(\psi\) with each other. 
Taking coordinates on the plane defined by the three quarks, the modulus of the electric dipole moment \(\mathbf{d}_{n}\) is readily computed to be \[|\mathbf{d}_{n}|=\frac{2}{3}e\ell\cos\frac{\psi}{2}\equiv\frac{2}{3}e\ell\sin\frac{\theta}{2}, \tag{8.41}\] where we have introduced the angle \(\theta\equiv\pi-\psi\), controlling the amount of CP violation.

Figure 12: Classical depiction of the neutron and its electric dipole moment \(\mathbf{d}_{n}\). The components of the \(d\) quarks' position vectors \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) are written using the coordinate axes shown in the picture, with origin at the position of the \(u\) quark.

To estimate the prefactor in eq. (8.41), we recall that the distance \(\ell\) between the quarks is of the order of the pion's Compton wavelength \[\ell\simeq\frac{\hbar}{m_{\pi}c}, \tag{8.42}\] where for computational purposes we have restored powers of \(\hbar\) and \(c\). Noticing that \(\hbar c\simeq 200\,\text{MeV}\cdot\text{fm}\) and \(m_{\pi}c^{2}\simeq 135\,\text{MeV}\), we find \[|\mathbf{d}_{n}|\simeq 10^{-13}\sin\frac{\theta}{2}\,e\cdot\text{cm}. \tag{8.43}\] A comparison with experimental measurements of the neutron electric dipole [117, 118] \[|\mathbf{d}_{n}|_{\text{exp}}\lesssim 10^{-26}\,e\cdot\text{cm}, \tag{8.44}\] leads then to the bound \[\theta\lesssim 10^{-13}. \tag{8.45}\] This means that the angle \(\psi=\pi-\theta\) in fig. 12 is extremely close to \(\pi\), making the quark configuration inside the neutron look like a CO\({}_{2}\) molecule rather than a water molecule. This cartoon calculation exhibits the basic feature of the so-called _strong CP problem_: the stringent experimental bound for the neutron electric dipole moment implies the existence of a dimensionless parameter that is extremely small without any dynamical reason. Once we rephrase the problem in the correct language of QCD, we will see that this parameter is precisely the \(\theta\) coupling introduced in eq. (8.12). From a QFT point of view, the neutron electric dipole emerges from the dimension-five nonminimal coupling of the neutron to the electromagnetic field \[S\supset-\frac{i}{2}|\mathbf{d}_{n}|\int d^{4}x\,\overline{n}\sigma^{\mu\nu}\gamma_{5}nF_{\mu\nu}, \tag{8.46}\] where \(n\) is the neutron field and \(\sigma^{\mu\nu}\) has been defined in eq. (4.49). This term is explicitly gauge invariant but breaks parity, as follows from the presence of \(\gamma_{5}\). It is, however, invariant under charge conjugation, which preserves the neutron and gauge fields, and therefore it breaks CP. The operator (8.46) is in fact an effective interaction emerging from loop diagrams in the EFT of pions and nucleons described by an extension of the action (5.58). To construct this theory, let us consider QCD with the two light flavors \(u\) and \(d\). Written in terms of the chiral isospin doublets \[\boldsymbol{q}_{R,L}=\left(\begin{array}{c}u_{R,L}\\ d_{R,L}\end{array}\right), \tag{8.47}\] the microscopic action takes the form \[S=\int d^{4}x\left(i\overline{\boldsymbol{q}}_{R}\not{D}\boldsymbol{q}_{R}+i\overline{\boldsymbol{q}}_{L}\not{D}\boldsymbol{q}_{L}+\overline{\boldsymbol{q}}_{L}M\boldsymbol{q}_{R}+\overline{\boldsymbol{q}}_{R}M^{\dagger}\boldsymbol{q}_{L}-\frac{\theta}{32\pi^{2}}\epsilon^{\mu\nu\alpha\beta}F^{a}_{\mu\nu}F^{a}_{\alpha\beta}+\ldots\right), \tag{8.48}\] where \(D_{\mu}=\partial_{\mu}-iA_{\mu}^{a}T^{a}\) denotes the gauge covariant derivative and the mass matrix is given by \[M=\left(\begin{array}{cc}m_{u}&0\\ 0&m_{d}\end{array}\right). 
\tag{8.49}\] We have included the \(\theta\)-term, while the ellipsis indicates other terms not important for the argument. In writing the action (5.58) we assumed that quarks are massless, and also the NG bosons associated with chiral SSB, but we now relax this condition. Although the chiral SU(2)\({}_{R}\times\) SU(2)\({}_{L}\) transformations \[\mathbf{q}_{R,L}\longrightarrow U_{R,L}\mathbf{q}_{R,L}, \tag{8.50}\] do not leave the quark action (8.48) invariant, we can restore the symmetry promoting the mass matrix \(M\) to a spurion field transforming as \[M\longrightarrow U_{L}MU_{R}^{\dagger}. \tag{8.51}\] Thus, the original action can be seen as one where chiral symmetry is spontaneously broken by \(M\) taking the value in eq. (8.49). The transformation of \(M\), together with eq. (5.56), provides the basic clue to incorporate masses into the NG action (5.58). An invariant mass term can be built by taking the trace of the product of the mass and the NG boson matrices \[S_{\rm NG}=\int d^{4}x\,\left[\frac{f_{\pi}^{2}}{4}{\rm tr}\left(D_{\mu}\mathbf{ \Sigma}^{\dagger}D^{\mu}\mathbf{\Sigma}\right)+f_{\pi}^{3}B_{0}{\rm tr}\left(M^{ \dagger}\mathbf{\Sigma}+\mathbf{\Sigma}^{\dagger}M\right)\right]. \tag{8.52}\] Here \(D_{\mu}\mathbf{\Sigma}=\partial_{\mu}\mathbf{\Sigma}-iA_{\mu}[Q,\mathbf{\Sigma}]\), with \(Q=e\sigma^{3}\) the pion charge matrix, is the electromagnetic covariant derivative and \(B_{0}\) is a numerical constant that cannot be determined within the EFT framework24. Substituting the explicit expressions of \(M\) and \(\mathbf{\Sigma}\) and expanding in powers of the pion fields, we find the mass term Footnote 24: The pion effective action \(S_{\rm NG}\) also contains terms induced by the anomalous global symmetries of QCD, which are fully determined by the mathematical structure of the anomaly (see, for example, [93]). An example is the term proportional to \(\left({\rm tr}\,\log\mathbf{\Sigma}-{\rm tr}\,\log\mathbf{\Sigma}^{\dagger}\right)F_{ \mu\nu}\widetilde{F}^{\mu\nu}\), accounting for the electromagnetic decay of the neutral pion discussed in page 82. \[\Delta S_{\rm NG}=-f_{\pi}B_{0}(m_{u}+m_{d})\int d^{4}x\Big{[}(\pi^{0})^{2}+2 \pi^{+}\pi^{-}\Big{]}, \tag{8.53}\] from where we read the pion mass \[m_{\pi}^{2}=2f_{\pi}B_{0}(m_{u}+m_{d})\qquad\implies\qquad B_{0}=\frac{m_{ \pi}^{2}}{2f_{\pi}(m_{u}+m_{d})}. \tag{8.54}\] Within this approximation, neutral and charged pions have the same mass. Nucleons can also be added to the chiral Lagrangian (see [119, 120] for reviews). They are introduced through the isospin doublet \[N=\left(\begin{array}{c}p\\ n\end{array}\right), \tag{8.55}\] transforming under SU(2)\({}_{R}\times\) SU(2)\({}_{L}\) as [121, 122, 123] \[N\longrightarrow K(U_{R},U_{L},\mathbf{\Sigma})N. \tag{8.56}\] The so-called compensating field \(K(U_{R},U_{L},\mathbf{\Sigma})\) is a SU(2)-valued matrix depending on the NG boson matrix \(\mathbf{\Sigma}(x)\), and through it on the spacetime point. It is defined by \(K(U_{R},U_{L},\mathbf{\Sigma})=\mathbf{u}^{\prime}(x)^{-1}U_{R}\mathbf{u}(x)\), where \(\mathbf{u}(x)^{2}\equiv\mathbf{\Sigma}(x)\) and \(\mathbf{u}^{\prime}(x)^{2}\equiv\mathbf{\Sigma}^{\prime}(x)=U_{R}\mathbf{\Sigma}(x)U_{L}^ {\dagger}\), thus providing a nonlinear realization of the SU(2)\({}_{R}\times\) SU(2)\({}_{L}\) global chiral symmetry acting on the nucleon isospin doublet. 
Having established the transformation of nucleons, we add to the effective action the term \[\Delta S_{\pi N}=\int d^{4}x\,\overline{N}\Big{[}i\not{D}-f(\mathbf{\Sigma})\Big{]}N, \tag{8.57}\] with \(f(\mathbf{\Sigma})\) a matrix-valued function depending on the NG boson matrix and such that \(\not{\mathcal{D}}\equiv\not{D}+if(\mathbf{\Sigma})\) defines a covariant derivative with respect to the local transformation (8.56), \(\not{\mathcal{D}}\to K\not{\mathcal{D}}K^{\dagger}\). At linear order in the pion fields, it includes the pion-nucleon vertices \[f(\mathbf{\Sigma}) =m_{N}\mathbbm{1}+\frac{g_{A}}{2f_{\pi}}\gamma^{\mu}\gamma_{5}\partial_{\mu}\mathbf{\pi}+\mathcal{O}(\mathbf{\pi}^{2})\] \[=m_{N}\mathbbm{1}+\frac{g_{A}}{2\sqrt{2}f_{\pi}}\big{(}\overline{n}\gamma^{\mu}\gamma_{5}n-\overline{p}\gamma^{\mu}\gamma_{5}p\big{)}\partial_{\mu}\pi^{0}+\frac{g_{A}}{2f_{\pi}}\big{(}\overline{n}\gamma^{\mu}\gamma_{5}p\,\partial_{\mu}\pi^{-}+\overline{p}\gamma^{\mu}\gamma_{5}n\,\partial_{\mu}\pi^{+}\big{)}, \tag{8.58}\] where \(m_{N}\) is the nucleon mass. Incidentally, substituting this expression of \(f(\mathbf{\Sigma})\) into the action (8.57) we can integrate by parts and move the derivative from \(\mathbf{\pi}\) to \(N\) and \(\overline{N}\). For scattering processes with on-shell nucleons the Dirac equation \(i\not{\partial}N=m_{N}N\) can be used to write the nucleon-pion interaction term as \(ig_{\pi\!\!N\!N}\overline{N}\gamma_{5}t_{\mathbf{\mathrm{f}}}^{I}N\pi^{I}\), with \(t_{\mathbf{\mathrm{f}}}^{I}\) the generators in the fundamental representation of SU(2). Furthermore, the coupling constant \(g_{\pi\!\!N\!N}\) satisfies the Goldberger-Treiman relation [124] \[f_{\pi}g_{\pi\!\!N\!N}=g_{A}m_{N}. \tag{8.59}\] Notice that since \(g_{A}\) is real, the couplings in eq. (8.58) preserve CP. We would like to study the effects in the chiral Lagrangian of adding the \(\theta\)-term to the quark action. At this point we should invoke the analysis presented in Box 10 (see page 8.1) where we saw how, due to the chiral anomaly, implementing a chiral rotation of the fermions induces a \(\theta\)-term in the action. More precisely, performing a chiral rotation of the \(u\)-quark \[u_{R,L}\longrightarrow e^{\pm i\alpha}u_{R,L}, \tag{8.60}\] results in a shift of the theta angle \[S=\int d^{4}x\left(i\overline{\mathbf{q}}_{R}\not{D}\mathbf{q}_{R}+i\overline{\mathbf{q}}_{L}\not{D}\mathbf{q}_{L}+\overline{\mathbf{q}}_{L}M\mathbf{q}_{R}+\overline{\mathbf{q}}_{R}M^{\dagger}\mathbf{q}_{L}-\frac{\theta-2\alpha}{32\pi^{2}}\epsilon^{\mu\nu\alpha\beta}F^{a}_{\mu\nu}F^{a}_{\alpha\beta}+\ldots\right), \tag{8.61}\] and in a complex mass matrix \[M=\left(\begin{array}{cc}e^{2i\alpha}m_{u}&0\\ 0&m_{d}\end{array}\right). \tag{8.62}\] In particular, setting \(\alpha=\frac{1}{2}\theta\) the \(\theta\)-term cancels and all dependence on \(\theta\) is shifted to a phase in the mass matrix \(M\). In more physical terms, we have transferred the source of CP violation in the quark action from the \(\theta\)-term to a complex coupling25. Footnote 25: In fact, it is easy to prove that the quantity \(\overline{\theta}\equiv\theta+\arg\det M\) remains invariant under chiral transformations of the quarks. It might seem that, at the level of the chiral effective field theory, the phase in the mass matrix \(M=\text{diag}(e^{i\theta}m_{u},m_{d})\) could be removed by an appropriate chiral transformation of the NG field \(\mathbf{\Sigma}(x)\). 
In doing so, however, we introduce a \(\theta\)-dependence in \(f(\mathbf{\Sigma},\theta)\) defined in (8.57), inducing additional nucleon-pion couplings. In particular, besides the neutron-proton-pion vertex in eq. (8.58), there is a new CP-violating vertex contributing to the dimension-five non-minimal electromagnetic coupling in eq. (8.46) through the chiral one-loop diagrams of eq. (8.63) \[\text{(pion--nucleon one-loop diagrams)} \tag{8.63}\] The black dots in the diagrams on the right-hand side represent the CP-violating vertex, whereas the lined blobs indicate the neutron-pion coupling in (8.58). The chiral loop integrals are logarithmically divergent and once evaluated give the following contribution to the neutron electric dipole moment [125] \[|\mathbf{d}_{n}|=\frac{e}{4\pi^{2}}\frac{|g_{\pi\!N\!N}\overline{g}_{\pi\!N\!N}|}{m_{N}}\log\left(\frac{m_{N}}{m_{\pi}}\right), \tag{8.64}\] where \[|\overline{g}_{\pi\!N\!N}|\approx 0.027|\theta|, \tag{8.65}\] is the coupling of the CP-violating vertex and, in the spirit of EFT, integrals have been cut off at \(\Lambda=m_{N}\). Substituting the value for the CP-preserving pion-nucleon coupling and implementing the experimental bound (8.44), we find \[|\theta|\lesssim 10^{-11}. \tag{8.66}\] We see that the amount of fine tuning in the \(\theta\) parameter needed to explain experiments is not very far off the one obtained for the angle \(\theta\) in (8.45) in the classical toy model of the neutron (not by accident both quantities were denoted by the same Greek letter).

**Box 12. A "potential" for \(\theta\)**

We would like to understand how the energy of the ground state of QCD depends on the parameter \(\theta\). There are a number of things that can be said about this quantity, which we denote by \(V(\theta)\). As we learned above [see eq. (8.19)], the \(\theta\)-term is a topological object and any physical quantity depending on it, like \(V(\theta)\), should be periodic in \(\theta\) with period equal to \(2\pi\) \[V(\theta+2\pi)=V(\theta). \tag{8.67}\] Moreover, there exists a very elegant argument showing that energy is minimized for \(\theta=0\)[126] \[V(0)\leq V(\theta). \tag{8.68}\] To go beyond these general considerations and find an explicit expression of \(V(\theta)\) in QCD, we consider the potential energy in the pion effective action (8.52) \[\mathcal{V}(\mathbf{\Sigma})=-\frac{m_{\pi}^{2}f_{\pi}^{2}}{2(m_{u}+m_{d})}\mathrm{tr}\,\big{(}M^{\dagger}\mathbf{\Sigma}+M\mathbf{\Sigma}^{\dagger}\big{)}, \tag{8.69}\] where \(M\) is given by \[M=\left(\begin{array}{cc}e^{i\theta}m_{u}&0\\ 0&m_{d}\end{array}\right). \tag{8.70}\] To find the vacuum energy, we look for a NG boson matrix configuration minimizing \(\mathcal{V}(\mathbf{\Sigma})\). In fact, since the mass matrix is diagonal it can be seen that the trace in (8.69) only depends on the diagonal components of \(\mathbf{\Sigma}\). This means that in order to minimize the potential it is enough to consider NG matrices of the form \(\mathbf{\Sigma}=\text{diag}(e^{i\varphi_{1}},e^{i\varphi_{2}})\). Furthermore, the dependence on \(\theta\) in the mass matrix can be shifted to the NG boson matrix by the field redefinition \[\mathbf{\Sigma}\longrightarrow\mathbf{\widetilde{\Sigma}}\equiv\left(\begin{array}{cc}e^{-\frac{i\theta}{2}}&0\\ 0&1\end{array}\right)\mathbf{\Sigma}\left(\begin{array}{cc}e^{-\frac{i\theta}{2}}&0\\ 0&1\end{array}\right)=\left(\begin{array}{cc}e^{i(\varphi_{1}-\theta)}&0\\ 0&e^{i\varphi_{2}}\end{array}\right). \tag{8.71}\] Imposing the condition \(\det\mathbf{\widetilde{\Sigma}}=1\), we have \(\varphi_{1}+\varphi_{2}=\theta\) mod \(2\pi\). 
Substituting the redefined NG matrix field \(\widetilde{\Sigma}\) into (8.69) with \(M=\text{diag}(m_{u},m_{d})\), we arrive at the potential \[\mathcal{V}(\varphi_{1},\varphi_{2})=-\frac{m_{\pi}^{2}f_{\pi}^{2}}{m_{u}+m_{d}}\big{(}m_{u}\cos\varphi_{1}+m_{d}\cos\varphi_{2}\big{)}, \tag{8.72}\] that has to be minimized subject to the constraint \(\varphi_{1}+\varphi_{2}=\theta\). The equation to be solved is \[m_{u}\sin\varphi_{1}=m_{d}\sin(\theta-\varphi_{1}), \tag{8.73}\] that, after a bit of algebra, gives \[\cos^{2}\varphi_{1} =\frac{(m_{u}+m_{d}\cos\theta)^{2}}{m_{u}^{2}+m_{d}^{2}+2m_{u}m_{d}\cos\theta},\] \[\cos^{2}\varphi_{2} =\frac{(m_{d}+m_{u}\cos\theta)^{2}}{m_{u}^{2}+m_{d}^{2}+2m_{u}m_{d}\cos\theta}. \tag{8.74}\] Substituting these results into (8.72), we arrive at the expression of the QCD vacuum energy as a function of \(\theta\) \[V(\theta)=-\frac{m_{\pi}^{2}f_{\pi}^{2}}{m_{u}+m_{d}}\sqrt{m_{u}^{2}+m_{d}^{2}+2m_{u}m_{d}\cos\theta}. \tag{8.75}\] In fig. 13 we have represented this function for various values of the ratio \(m_{d}/m_{u}\), from which we see that, as announced, the minimum occurs at \(\theta=0\). We also see that when \(m_{u}=m_{d}\) there are cusps at the maxima located at \(\theta=(2n+1)\pi\), which are smoothed out when the quarks have different masses. Since it is an experimental fact that \(\theta\) is very small, we can expand \(V(\theta)\) around \(\theta=0\) to find \[V(\theta)=-m_{\pi}^{2}f_{\pi}^{2}+\frac{1}{2}m_{\pi}^{2}f_{\pi}^{2}\frac{m_{u}m_{d}}{(m_{u}+m_{d})^{2}}\theta^{2}. \tag{8.76}\] This expression will come in handy later on when it will be reinterpreted as the potential for the axion field. Since \(m_{s}\gg m_{u},m_{d}\) we have restricted our attention to QCD with the two lightest flavors, although the analysis can be easily extended to any \(N_{f}\geq 2\). The resulting expression of the ground state energy \(V(\theta;m_{1},\dots,m_{f})\) for small \(\theta\) is symmetric under permutations of the quark masses and satisfies a recursion relation \[V(\theta;m_{1},\dots,m_{f-1})=\lim_{m_{f}\to\infty}V(\theta;m_{1},\dots,m_{f}), \tag{8.77}\] implementing the decoupling of the \(f\)-th flavor.

### 8.3 Enters the axion

We would like to understand the smallness of \(\theta\) in a natural way, i.e., either as a consequence of some symmetry principle or by finding some dynamical reason for its value26. One possible explanation would be that \(m_{u}=0\), so a chiral rotation of the \(u\)-quark field would get rid of the \(\theta\)-term without introducing CP-violating phases in the chiral Lagrangian. This is however no good, since all experimental evidence indicates that the \(u\)-quark is not massless. Footnote 26: The fact that in the CO\({}_{2}\) molecule the angle \(\theta\) is zero is a consequence of the dynamics of the atomic orbitals and is therefore “natural”. A very popular solution to the strong CP problem is the one proposed by Roberto Peccei and Helen Quinn [127, 128] consisting in making the \(\theta\)-parameter the vev of a pseudoscalar field \(a(x)\), the _axion_[129, 130], whose potential would drive it to \(\langle 0|a(x)|0\rangle=0\). To be more precise, let us consider the action \[S=\int d^{4}x\left(i\overline{q}_{R}\not{D}q_{R}+i\overline{q}_{L}\not{D}q_{L}+\overline{q}_{L}M q_{R}+\overline{q}_{R}M^{\dagger}q_{L}-\frac{1}{32\pi^{2}f_{a}}aF^{a}_{\mu\nu}\widetilde{F}^{a\mu\nu}\right), \tag{8.78}\] where \(f_{a}\) is an energy scale introduced so the axion field has canonical dimensions of energy. 
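Before analyzing the consequences of (8.78), it is reassuring to check numerically the Box 12 result (8.75), which the axion potential will inherit. The sketch below is only illustrative: it minimizes the constrained potential (8.72) by brute force, and the quark-mass values are assumptions made for the example, not numbers taken from the text.

```python
import numpy as np

m_u, m_d = 2.2, 4.7        # illustrative light-quark masses in MeV (assumed values)
m_pi, f_pi = 135.0, 93.0   # MeV

def V_minimized(theta, N=200000):
    """Brute-force minimum of eq. (8.72) over phi1, with phi2 = theta - phi1."""
    phi1 = np.linspace(-np.pi, np.pi, N)
    V = -(m_pi**2 * f_pi**2 / (m_u + m_d)) * (m_u*np.cos(phi1) + m_d*np.cos(theta - phi1))
    return V.min()

def V_closed(theta):
    """Closed-form vacuum energy, eq. (8.75)."""
    return -(m_pi**2 * f_pi**2 / (m_u + m_d)) * np.sqrt(m_u**2 + m_d**2 + 2*m_u*m_d*np.cos(theta))

for theta in (0.0, 1.0, np.pi/2, 3.0):
    print(f"theta = {theta:.2f}: scan = {V_minimized(theta):.1f}, closed form = {V_closed(theta):.1f}")
```

The two columns agree, confirming that the constrained minimization indeed reproduces eq. (8.75).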
We can now play the old game of shifting the last term in the action (8.78) to a complex phase in the mass matrix. In the low-energy effective field theory, this phase can be absorbed into the NG boson matrix by the field redefinition (cf. the analysis presented in Box 12) \[\mathbf{\Sigma}\longrightarrow\left(\begin{array}{cc}e^{-\frac{ia}{2f_{a}}}&0\\ 0&1\end{array}\right)\mathbf{\Sigma}\left(\begin{array}{cc}e^{-\frac{ia}{2f_{a}}}&0\\ 0&1\end{array}\right). \tag{8.79}\] In the absence of a mass term for the NG bosons, \(\mathbf{\Sigma}\) only has derivative couplings and the theory is invariant under constant shifts of the axion field, \(a(x)\to a(x)+\text{constant}\). The presence of the term \(f_{\pi}^{3}B_{0}\text{tr}\left(M^{\dagger}\mathbf{\Sigma}+\mathbf{\Sigma}^{\dagger}M\right)\), however, induces a potential that can be read off eq. (8.75) with \(\theta\) replaced by \(a/f_{a}\). Expanding around the minimum at \(a=0\), we find \[V(a)=\frac{m_{\pi}^{2}f_{\pi}^{2}}{2f_{a}^{2}}\frac{m_{u}m_{d}}{(m_{u}+m_{d})^{2}}a^{2}+\ldots, \tag{8.80}\] where we have dropped constant terms and the ellipsis indicates higher-order axion self-interactions. This gives the axion mass \[m_{a}=\frac{m_{\pi}f_{\pi}}{f_{a}}\frac{\sqrt{m_{u}m_{d}}}{m_{u}+m_{d}}=5.7\left(\frac{10^{9}\text{ GeV}}{f_{a}}\right)\text{ meV}. \tag{8.81}\] The field redefinition (8.79) also induces axion interactions with mesons, baryons, leptons, and photons.

Figure 13: Plot of \(V(\theta)\) in eq. (8.75) for three different values of the \(\frac{m_{u}}{m_{d}}\) ratio: 1 (blue), 0.3 (orange), and 0.5 (green).

For example, \[S_{\rm axion}\supset-\int d^{4}x\,\left(\frac{i}{2}g_{ap\gamma}a\overline{p}\sigma^{\mu\nu}\gamma_{5}pF_{\mu\nu}+\frac{i}{2}g_{an\gamma}a\overline{n}\sigma^{\mu\nu}\gamma_{5}nF_{\mu\nu}+\frac{g_{a\gamma\gamma}}{4}aF_{\mu\nu}\widetilde{F}^{\mu\nu}\right), \tag{8.82}\] where \(g_{an\gamma}=-g_{ap\gamma}\sim f_{a}^{-2}\) and \(g_{a\gamma\gamma}\sim f_{a}^{-1}\). The last non-minimal electromagnetic coupling of the axion comes from the anomaly-induced term in the chiral Lagrangian pointed out in the footnote on page 93. In a strong magnetic field, this term allows the conversion of a photon into an axion and vice versa, one of the main astrophysical signatures of the axion and also the target process of the light-shining-through-walls experiments [131]. Together with other dark matter candidates (sterile neutrinos, supersymmetric particles,...), axions are currently among the most popular options to account for the missing matter in the universe [132, 133]. Cosmological and astrophysical phenomena provide a wide class of observational windows for this kind of particle, ranging from CMB physics to stellar astrophysics and black holes (see fig. 14). Observations so far have been used to constrain the parameter space for axion-like particles, leaving a wide allowed region that includes most of the parameter space of the QCD axion. A comprehensive overview of current axion experiments and the bounds on different parameters can be found in the review [116], as well as in [117] (see also [134] for a collection of exclusion plots for various parameters).

## 9 The electroweak theory

It is time we look into the electroweak sector of the SM. As already mentioned several times in these lectures, our current understanding of the electromagnetic and weak forces is based on a gauge theory with group \(\text{SU(2)}\times\text{U(1)}_{Y}\). 
This theory has subtle differences with respect to the color SU(3) QCD gauge group used to describe strong interactions. The basic one is that it is a chiral theory in which left- and right-handed fermions transform in different representations of the gauge group. Closely related to this is that the SU(2) \(\times\) U(1)\({}_{Y}\) gauge invariance is spontaneously broken at low energies by an implementation of the BEH mechanism explained in section 5. This feature, which for decades was the shakiest part of the electroweak theory, was finally confirmed in July 2012 when the detection of the Higgs boson was announced at CERN, thus fitting the final piece into the jigsaw puzzle.

Figure 14: Exclusion plot from [134] for the axion parameters \(f_{a}\) (resp. \(g_{an\gamma}\)) and \(m_{a}\). The yellow line represents the relation (8.81).

Whereas only hadrons (i.e., quarks) partake of the strong interaction, the weak force affects both quarks and leptons. Its chiral character is reflected in that the weak interaction violates parity, a fact discovered in the late 1950s in the study of \(\beta\)-decay and other processes mediated by the weak force [135, 136, 137, 138]. Unlike gluons, coupling to quarks through a vector current \(J^{\mu}_{\rm QCD}=\overline{q}\gamma^{\mu}q\), the carriers of the weak force interact with matter via the V - A current \(J^{\mu}_{\rm weak}=\overline{\psi}\gamma^{\mu}(\mathbbm{1}-\gamma_{5})\psi\), with \(\psi\) either a lepton or a quark field [139, 140].

### Implementing SU(2) \(\times\) U(1)\({}_{Y}\)

To be more precise, \(\beta\)-decay transmutes left-handed electrons into left-handed electron neutrinos (and vice versa), while \(u\)-quarks (resp. \(d\)-quarks) transform into \(d\)-quarks (resp. \(u\)-quarks). This suggests grouping left-handed electrons/neutrinos and quarks into doublets \[\mathbf{L}=\left(\begin{array}{c}\nu_{e}\\ e^{-}\end{array}\right)_{L},\qquad\qquad\mathbf{Q}=\left(\begin{array}{c}u\\ d\end{array}\right)_{L}, \tag{9.1}\] and assuming that they transform in the fundamental representation \(\mathbf{2}\) of the SU(2) algebra. At the same time, since right-handed electrons and quarks do not undergo \(\beta\)-decay, their components are taken to be SU(2) singlets \[\ell_{R}\equiv e_{R}^{-},\qquad\qquad U_{R}\equiv u_{R},\qquad\qquad D_{R}\equiv d_{R}. \tag{9.2}\] Moreover, since there is no experimental evidence of the existence of right-handed neutrinos, we do not include them in the description (at least for now; we will return to this issue later). The whole picture is complicated because the weak force mixes with the electromagnetic interaction. In fact, the U(1)\({}_{Y}\) of the electroweak gauge group is not the U(1) of Maxwell's theory. The generator \(Y_{\mathbf{R}}\) of the former, called the _weak hypercharge_, satisfies the Gell-Mann-Nishijima relation \[Q=Y_{\mathbf{R}}+t_{\mathbf{R}}^{3}, \tag{9.3}\] where \(Q\) is the charge of the field in units of \(e\) and \(t_{\mathbf{R}}^{3}\) is the Cartan generator of SU(2) in the representation \(\mathbf{R}\). As an example, for \(\mathbf{L}\) in eq. (9.1) we have \(t_{\mathbf{2}}^{3}\equiv\frac{1}{2}\sigma^{3}=\text{diag}(\frac{1}{2},-\frac{1}{2})\) and \(Q=\text{diag}(0,-1)\), so we have \(Y(\mathbf{L})=-\frac{1}{2}\mathbbm{1}\). 
Repeating this for all leptons and quark fields, we find \[Y(\mathbf{L})=-\frac{1}{2}\mathbbm{1},\quad Y(\ell)=-1,\quad Y(\mathbf{Q})=\frac{1}{6}\mathbbm{1},\quad Y(U_{R})=\frac{2}{3},\quad Y(D_{R})=-\frac{1}{3}, \tag{9.4}\] where for the SU(2) singlets we have \(t_{\mathbf{1}}^{3}=0\). Notice that for U(1)\({}_{Y}\) we have \(Y_{\mathbf{R}}=Y\mathbb{1}\), so the representation of U(1)\({}_{Y}\) is fully determined by the _hypercharge_\(Y\). We might be tempted to believe that with this we have determined how _all_ matter fields in the SM transform under the gauge group \(\mathrm{SU(2)}\times\mathrm{U(1)}_{Y}\). However, for reasons that so far we do not understand, nature has decided to have three copies of the structure just described. In addition to the electron, its neutrino, and the \(u\)- and \(d\)-quarks there are two more replicas or _families_. The second family includes the muon (\(\mu^{-}\)) and its neutrino (\(\nu_{\mu}\)), together with the charm (\(c\)) and strange (\(s\)) quarks. The third family, on the other hand, contains the \(\tau^{-}\) lepton, its neutrino (\(\nu_{\tau}\)), and the top (\(t\)) and bottom (\(b\)) quarks. Apart from an increasing hierarchy of masses, each extra family exactly replicates the transformation properties of the fields in the first one. To include this feature in our description, we add an index \(i=1,2,3\) to the doublet \(\{\mathbf{L}^{i},\mathbf{Q}^{i}\}\) and singlet \(\{\ell_{R}^{i},U_{R}^{i},D_{R}^{i}\}\) fields introduced above, summarizing in table 2 the three-family structure with the corresponding representations of \(\mathrm{SU(2)}\times\mathrm{U(1)}_{Y}\). We should not forget that, besides the electroweak quantum numbers, leptons are singlets with respect to color SU(3), whereas quarks are triplets transforming in the fundamental representation of this group. Once the matter content of the SM is determined, as well as how the fields transform under the electroweak gauge group, we fix our attention on the gauge bosons.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \multicolumn{6}{c}{**Leptons**} \\ & \(i=1\) & \(i=2\) & \(i=3\) & \(t_{\mathbf{R}}^{3}\) & \(Y_{\mathbf{R}}\) \\ \hline \hline \(\mathbf{L}^{i}\) & \(\left(\begin{array}{c}\nu_{e}\\ e^{-}\end{array}\right)_{L}\) & \(\left(\begin{array}{c}\nu_{\mu}\\ \mu^{-}\end{array}\right)_{L}\) & \(\left(\begin{array}{c}\nu_{\tau}\\ \tau^{-}\end{array}\right)_{L}\) & \(\frac{1}{2}\sigma^{3}\) & \(-\frac{1}{2}\mathbb{1}\) \\ \(\ell_{R}^{i}\) & \(e_{R}^{-}\) & \(\mu_{R}^{-}\) & \(\tau_{R}^{-}\) & \(0\) & \(-1\) \\ \hline \hline \multicolumn{6}{c}{**Quarks**} \\ & \(i=1\) & \(i=2\) & \(i=3\) & \(t_{\mathbf{R}}^{3}\) & \(Y_{\mathbf{R}}\) \\ \hline \hline \(\mathbf{Q}^{i}\) & \(\left(\begin{array}{c}u\\ d\end{array}\right)_{L}\) & \(\left(\begin{array}{c}c\\ s\end{array}\right)_{L}\) & \(\left(\begin{array}{c}t\\ b\end{array}\right)_{L}\) & \(\frac{1}{2}\sigma^{3}\) & \(\frac{1}{6}\mathbb{1}\) \\ \(U_{R}^{i}\) & \(u_{R}\) & \(c_{R}\) & \(t_{R}\) & \(0\) & \(\frac{2}{3}\) \\ \(D_{R}^{i}\) & \(d_{R}\) & \(s_{R}\) & \(b_{R}\) & \(0\) & \(-\frac{1}{3}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Transformation properties of leptons and quarks in the electroweak sector of the SM. In addition to the indicated representations of \(\mathrm{SU(2)}\times\mathrm{U(1)}_{Y}\), quarks transform in the fundamental \(\mathbf{3}\) irrep of \(\mathrm{SU(3)}\), whereas leptons are singlets under this group. 
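As a small consistency check of table 2 and eq. (9.4) (a sketch we add here, with the charges written out explicitly by hand), the Gell-Mann-Nishijima relation (9.3) can be verified field by field for the first family:

```python
from fractions import Fraction as F

# Gell-Mann-Nishijima check, Q = Y + t^3 (eq. (9.3)), for the first family.
# Entries: (field, t3, Y, electric charge Q in units of e); values from table 2 / eq. (9.4).
fields = [
    ("nu_L", F(1, 2),  F(-1, 2), F(0)),
    ("e_L",  F(-1, 2), F(-1, 2), F(-1)),
    ("e_R",  F(0),     F(-1),    F(-1)),
    ("u_L",  F(1, 2),  F(1, 6),  F(2, 3)),
    ("d_L",  F(-1, 2), F(1, 6),  F(-1, 3)),
    ("u_R",  F(0),     F(2, 3),  F(2, 3)),
    ("d_R",  F(0),     F(-1, 3), F(-1, 3)),
]
for name, t3, Y, Q in fields:
    assert Y + t3 == Q, name
print("Q = Y + t^3 holds for all first-family fields")
```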
In the case of SU(2), it is convenient to use the \(\{t_{\mathbf{R}}^{\pm},t_{\mathbf{R}}^{3}\}\) basis, so the corresponding gauge field is written as27 Footnote 27: In terms of the generators \(t_{\mathbf{R}}^{\pm}\equiv t_{\mathbf{R}}^{1}\pm it_{\mathbf{R}}^{2}\), the SU(2) algebra reads \([t_{\mathbf{R}}^{3},t_{\mathbf{R}}^{\pm}]=\pm t_{\mathbf{R}}^{\pm},[t_{\mathbf{R}}^{+},t_{\mathbf{R}}^{-}]=2t_{\mathbf{R}}^{3}\). This is just the algebra of ladder operators familiar from the theory of angular momentum in quantum mechanics. \[\mathbf{W}_{\mu}=W_{\mu}^{+}t_{\mathbf{R}}^{-}+W_{\mu}^{-}t_{\mathbf{R}}^{+}+W_{\mu}^{3}t_{\mathbf{R}}^{3}, \tag{9.5}\] whereas for the Abelian gauge field associated with U(1)\({}_{Y}\), we have \[\mathbf{B}_{\mu}=B_{\mu}Y\mathbb{1}. \tag{9.6}\] The covariant derivative needed to construct the matter action is then given by \[D_{\mu} =\partial_{\mu}-ig\mathbf{W}_{\mu}-ig^{\prime}\mathbf{B}_{\mu}\] \[=\partial_{\mu}-igW_{\mu}^{+}t_{\mathbf{R}}^{-}-igW_{\mu}^{-}t_{\mathbf{R}}^{+}-igW_{\mu}^{3}t_{\mathbf{R}}^{3}-ig^{\prime}B_{\mu}Y\mathbb{1}, \tag{9.7}\] where \(g\) and \(g^{\prime}\) are the coupling constants associated with the two factors of the electroweak gauge group. We should not forget however that the electric charge \(Q\), the hypercharge \(Y\mathbb{1}\), and the SU(2) Cartan generator \(t_{\mathbf{R}}^{3}\) are not independent, but connected by the Gell-Mann-Nishijima relation (9.3). It is therefore useful to consider the combinations \[A_{\mu} =B_{\mu}\cos\theta_{w}+W_{\mu}^{3}\sin\theta_{w},\] \[Z_{\mu} =-B_{\mu}\sin\theta_{w}+W_{\mu}^{3}\cos\theta_{w}, \tag{9.8}\] where \(A_{\mu}\) is to be identified with the electromagnetic field, whose gauge group will be denoted by U(1)\({}_{\rm em}\) to distinguish it from the one associated with the gauge field \(\mathbf{B}_{\mu}\). The parameter \(\theta_{w}\) is called the _weak mixing angle_ and sometimes also the Weinberg angle, although it was first introduced by Glashow in [37]. Expressing the covariant derivative (9.7) in terms of the \(\{W_{\mu}^{\pm},A_{\mu},Z_{\mu}\}\) gauge fields, we find \[D_{\mu} =\partial_{\mu}-igW_{\mu}^{+}t_{\mathbf{R}}^{-}-igW_{\mu}^{-}t_{\mathbf{R}}^{+}-iA_{\mu}\big{(}g\sin\theta_{w}t_{\mathbf{R}}^{3}+g^{\prime}\cos\theta_{w}Y\mathbb{1}\big{)}\] \[-iZ_{\mu}\big{(}g\cos\theta_{w}t_{\mathbf{R}}^{3}-g^{\prime}\sin\theta_{w}Y\mathbb{1}\big{)}. \tag{9.9}\] Now, if \(A_{\mu}\) is to be identified with the electromagnetic field, it has to couple to the electric charge matrix \(eQ\). Consistency with the Gell-Mann-Nishijima relation (9.3) implies then \[g\sin\theta_{w}=g^{\prime}\cos\theta_{w}=e\qquad\implies\qquad\tan\theta_{w}=\frac{g^{\prime}}{g}. \tag{9.10}\] This relation shows that the weak mixing angle not only measures the mixing between the Abelian gauge field associated with U(1)\({}_{Y}\) and the one associated with the Cartan generator of SU(2), but also the relative strength of the interactions associated with the two factors of the electroweak gauge group. Implementing all the previous relations, the covariant derivative reads \[D_{\mu}=\partial_{\mu}-\frac{ie}{\sin\theta_{w}}W_{\mu}^{+}t_{\mathbf{R}}^{-}-\frac{ie}{\sin\theta_{w}}W_{\mu}^{-}t_{\mathbf{R}}^{+}-ieA_{\mu}Q-\frac{2ie}{\sin(2\theta_{w})}Z_{\mu}\big{(}t_{\mathbf{R}}^{3}-Q\sin^{2}\theta_{w}\big{)}, \tag{9.11}\] where we have eliminated \(Y\), \(g\), and \(g^{\prime}\) in favor of \(Q\), \(e\), and \(\theta_{w}\). With this, the SM matter action reads \[S_{\rm matter}=\sum_{k=1}^{3}\int d^{4}x\left(i\overline{\mathbf{L}}^{k}
\not{D}\mathbf{L}^{k}+i\overline{\ell}_{R}^{k}\not{D}\ell_{R}^{k}+i\overline{\mathbf{Q}}^{k}\not{D}\mathbf{Q}^{k}+i\overline{U}_{R}^{k}\not{D}U_{R}^{k}+i\overline{D}_{R}^{k}\not{D}D_{R}^{k}\right). \tag{9.12}\] Next we look at the gauge action \[S_{\rm gauge}=-\frac{1}{2}\int d^{4}x\left[{\rm tr}\left(\mathbf{W}_{\mu\nu}\mathbf{W}^{\mu\nu}\right)+{\rm tr}\left(\mathbf{B}_{\mu\nu}\mathbf{B}^{\mu\nu}\right)\right], \tag{9.13}\] where \(\mathbf{W}_{\mu\nu}\) and \(\mathbf{B}_{\mu\nu}\) are the field strengths of \(\mathbf{W}_{\mu}\) and \(\mathbf{B}_{\mu}\) respectively. Recasting it in terms of the electromagnetic and \(Z_{\mu}\) gauge fields defined in eq. (9.8), we have \[S_{\rm gauge} =-\int d^{4}x\left\{\frac{1}{4}W_{\mu\nu}^{+}W^{-\mu\nu}+\frac{1}{4}Z_{\mu\nu}Z^{\mu\nu}+\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{ie}{2}\cot\theta_{w}W_{\mu}^{+}W_{\nu}^{-}Z^{\mu\nu}\right.\] \[\left.-\frac{ie}{2}W_{\mu}^{+}W_{\nu}^{-}F^{\mu\nu}+\frac{e^{2}}{2\sin^{2}\theta_{w}}\Big{[}(W_{\mu}^{+}W^{+\mu})(W_{\nu}^{-}W^{-\nu})-(W_{\mu}^{+}W^{-\mu})^{2}\Big{]}\right\}, \tag{9.14}\] where \(Z_{\mu\nu}=\partial_{\mu}Z_{\nu}-\partial_{\nu}Z_{\mu}\), \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\), and we have defined \[W_{\mu\nu}^{\pm}=\partial_{\mu}W_{\nu}^{\pm}-\partial_{\nu}W_{\mu}^{\pm}\mp ie\big{(}W_{\mu}^{\pm}A_{\nu}-W_{\nu}^{\pm}A_{\mu}\big{)}\mp ie\cot\theta_{w}\big{(}W_{\mu}^{\pm}Z_{\nu}-W_{\nu}^{\pm}Z_{\mu}\big{)}. \tag{9.15}\] The SM gauge couplings can now be read off eqs. (9.11), (9.12), (9.14), and (9.15). The first thing to notice from the last two equations is that the \(W_{\mu}^{\pm}\) gauge fields have electric charge \(\pm e\) and also couple to the \(Z_{\mu}\) gauge field, which has itself zero electric charge. A look at the matter action also shows that the two components of the SU(2) doublets are transmuted into one another by the emission/absorption of a \(W\) boson. As to the \(Z^{0}\), it can be emitted/absorbed by quarks and leptons with couplings that depend on their SU(2) \(\times\) U(1)\({}_{Y}\) quantum numbers (see chapter 5 of [14] or any other SM textbook for the details). As a practical example, the neutron \(\beta\)-decay \(n\to p^{+}e^{-}\overline{\nu}_{e}\) proceeds by the emission of a \(W^{-}\) by one of the neutron's \(d\) quarks, turning itself into a \(u\) quark (and the neutron into a proton). The \(W^{-}\) then decays into an electron and an electron antineutrino \[d\longrightarrow u+W^{-},\qquad\qquad W^{-}\longrightarrow e^{-}+\overline{\nu}_{e}. \tag{9.16}\] As a second example, we also have lepton-neutrino scattering mediated by the interchange of a \(Z^{0}\) \[\ell^{-}+\nu_{\ell}\longrightarrow\ell^{-}+\nu_{\ell}.\] There is a simple way to summarize the group-theoretical information contained in table 2 by just indicating the representations of the different fermion species with respect to SU(3) \(\times\) SU(2) \(\times\) U(1)\({}_{Y}\), including also now the gauge group factor associated with the strong force. 
Using the notation \((\mathbf{N}_{c},\mathbf{N})_{Y}\), with \(\mathbf{N}_{c}\), \(\mathbf{N}\), and \(Y\) the representations of SU(3), SU(2), and U(1)\({}_{Y}\), we write for a single family \[\mathbf{L}^{i}:(\mathbf{1},\mathbf{2})^{L}_{-\frac{1}{2}}, \ell^{i}_{R}:(\mathbf{1},\mathbf{1})^{R}_{-1},\] \[\mathbf{Q}^{i}:(\mathbf{3},\mathbf{2})^{L}_{\frac{1}{6}}, U^{i}_{R}:(\mathbf{3},\mathbf{1})^{R}_{\frac{2}{3}}, D^{i}_{R}:(\mathbf{3},\mathbf{1})^{R}_{-\frac{1}{3}}, \tag{9.20}\] and we also introduced a superscript to remind ourselves whether they are left- or right-handed fermions (a useful information to decide what sign they come with in the anomaly cancellation condition). In this notation, the generators of the representation \((\mathbf{N}_{c},\mathbf{N})_{Y}\) are given by \[T^{(I,a)}_{(\mathbf{N}_{c},\mathbf{N})_{Y}}=t^{I}_{\mathbf{N}_{c}}\otimes \mathbf{1}\otimes\mathbf{1}+\mathbf{1}\otimes t^{a}_{\mathbf{N}}\otimes 1+ \mathbf{1}\otimes\mathbf{1}\otimes Y, \tag{9.21}\] where \(I=1,\ldots,8\) and \(a=1,2,3\) respectively label the generators of SU(3) and SU(2). At a practical level, in order to check anomaly cancellation in the SM we attach a group factor to each vertex of the triangle and compute the left-hand side of (9.19) to check whether it vanishes. Since we have three different factors and three vertices, there are ten inequivalent possibilities Some of the possibilities are rather trivial. For example, the triangle with three SU(3) factors gives zero since the strong interaction does not distinguishes left- from right-handed quarks and the two terms on the left-hand side of (9.19) are equal. The same happens whenever we have a single SU(3) or SU(2) factor, since the generators of these groups are traceless. At the end of the day, there are just four nontrivial cases. Using an obvious notation, they are: SU(2)\({}^{3}\), SU(2)\({}^{2}\)U(1), SU(3)\({}^{2}\)U(1), and U(1)\({}^{3}\). In the first case, since only left-handed fermions couple to SU(2), anomaly cancellation follows directly from the properties of the Pauli matrices \[\mathrm{tr}\left(\sigma^{i}\{\sigma^{j},\sigma^{k}\}\right)=2\delta_{ jk}\mathrm{tr}\,\sigma_{i}=0. \tag{9.22}\] For SU(2)\({}^{2}\)U(1), again the SU(2) factors only allow left-handed fermions in the loop, and the anomaly cancellation condition reads \[\sum_{L}Y_{L}=0, \tag{9.23}\] while in the SU(3)\({}^{2}\)U(1) triangle the color factor rules out leptons, so we have \[\sum_{\mathrm{quarks},L}Y_{L}-\sum_{\mathrm{quarks},R}Y_{R}=0. \tag{9.24}\] Finally, we are left with the triangle with one U(1) at each vertex, leading to the condition \[\sum_{L}Y_{L}^{3}-\sum_{R}Y_{R}^{3}=0, \tag{9.25}\] where the sum in this case extends to all fermion species. But this is not all. Since the SM model couples to gravity, it turns out that we might have gauge anomalies triggered by triangle diagrams with one gauge boson and two gravitons. The condition to avoid this is \[\sum_{L}\mathrm{tr}\,(T_{\mathbf{R}}^{a})_{L}-\sum_{R}\mathrm{tr} \,(T_{\mathbf{R}}^{a})_{R}=0. \tag{9.26}\] In this case there are just three possibilities, corresponding to having a SU(3), SU(2) or U(1) factor in the non-graviton vertex. For the first two cases, the condition for anomaly cancellation is automatically satisfied, again because the generators of SU(3) and SU(2) are traceless. The third possibility, on the other hand, gives a nontrivial condition \[\sum_{L}Y_{L}-\sum_{R}Y_{R}=0, \tag{9.27}\] where the sum runs over both leptons and quarks. 
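It is straightforward to verify that the hypercharge assignments of eq. (9.20) satisfy the four conditions (9.23)-(9.25) and (9.27). The sketch below does this in exact rational arithmetic, with multiplicities counting colors and SU(2) components (overall normalization factors are irrelevant when checking that a sum vanishes).

```python
from fractions import Fraction as F

# Hypercharges of eq. (9.20) for one family.  Each entry:
# (Y, number of colors, number of SU(2) components, chirality)
fields = [
    (F(-1, 2), 1, 2, "L"),   # lepton doublet  L
    (F(-1, 1), 1, 1, "R"),   # ell_R
    (F(1, 6),  3, 2, "L"),   # quark doublet   Q
    (F(2, 3),  3, 1, "R"),   # U_R
    (F(-1, 3), 3, 1, "R"),   # D_R
]

def anomaly_sum(power, chirality, quarks_only=False, doublets_only=False):
    """Sum of Y**power over the selected fermions, with color/SU(2) multiplicities."""
    total = F(0)
    for Y, nc, nw, ch in fields:
        if ch != chirality or (quarks_only and nc == 1) or (doublets_only and nw == 1):
            continue
        total += nc * nw * Y ** power
    return total

print("SU(2)^2 U(1), eq. (9.23):", anomaly_sum(1, "L", doublets_only=True))
print("SU(3)^2 U(1), eq. (9.24):",
      anomaly_sum(1, "L", quarks_only=True) - anomaly_sum(1, "R", quarks_only=True))
print("U(1)^3,       eq. (9.25):", anomaly_sum(3, "L") - anomaly_sum(3, "R"))
print("grav-U(1),    eq. (9.27):", anomaly_sum(1, "L") - anomaly_sum(1, "R"))
```

All four sums evaluate to zero, family by family, as claimed in the text.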
We have found the four conditions (9.23), (9.24), (9.25), and (9.27) to ensure the cancelation of anomalies, all of them involving the hypercharges of the chiral fermion fields in the SM. Now, instead of checking whether the hypercharges in eq. (9.20) satisfy this condition, we are going to see to what extend anomaly cancellation determines the fermion hypercharges. Let us therefore write the representations of leptons and quarks in each family as \((\mathbf{1},\mathbf{2})_{Y_{1}}^{L}\), \((\mathbf{1},\mathbf{1})_{Y_{2}}^{R}\), \((\mathbf{3},\mathbf{2})_{Y_{3}}^{L}\), \(U_{R}^{i}:(\mathbf{3},\mathbf{1})_{Y_{4}}^{R}\), and \(D_{R}^{i}:(\mathbf{3},\mathbf{1})_{Y_{5}}^{R}\), reading now the anomaly cancellation conditions as equations to determine \(Y_{1},\ldots,Y_{5}\). These are \[2Y_{1}+6Y_{3} =0,\] \[6Y_{3}-3Y_{4}-3Y_{5} =0,\] \[2Y_{1}^{3}+6Y_{3}^{3}-Y_{2}^{3}-3Y_{4}^{3}-3Y_{5}^{3} =0, \tag{9.28}\] \[2Y_{1}+6Y_{3}-Y_{2}-3Y_{4}-3Y_{5} =0.\] Now, since these are homogeneous equations there exists the freedom to fix the overall normalization of the five hypercharges or, equivalently, to choose the value of one of them. Taking for example \(Y_{2}=-1\), we are left with four equations for the four remaining unknowns. They have a single solution given by \[Y_{1}=-\frac{1}{2},\qquad Y_{2}=-1,\qquad Y_{3}=\frac{1}{6},\qquad Y_{4}=- \frac{1}{3},\qquad Y_{5}=\frac{2}{3}, \tag{9.29}\] up to the interchange of \(Y_{4}\) and \(Y_{5}\) (notice that the associated fields \(U_{R}^{i}\) and \(D_{R}^{i}\) transform in the same representation with respect to the other two gauge group factors). This solution precisely reproduces the hypercharges shown in eq. (9.20). With this calculation we have learned two things. One is that all gauge anomalies (and also the so-called mixed gauge-gravitational anomalies) cancel in the SM, and that they do within each family. And second, that anomaly cancellation condition is a very powerful way of constraining viable models in particle physics: in the SM it fixes, up to a global normalization, the U(1)\({}_{Y}\) charges of all chiral fermions in the theory. ### But, where are the masses? Adding together eqs. (9.12) and (9.14), we still do not get the full action of the electroweak sector of the SM model. The reason is that all fermion species in the SM have nonvanishing masses and, therefore, we need to add the corresponding mass terms to the matter action. This is, however, a very risky business in a chiral theory like the electroweak model. As we learned in Box 7 (see page 49), fermion mass terms mix left- and right-handed components. In our case, since they transform in different representations of the SU(2) \(\times\) U(1)\({}_{Y}\) gauge group, adding such terms spoils gauge invariance and with that all hell breaks loose. Fermion masses are not the only problem. Weak interactions are short ranged, something that can only be explained if the intermediate bosons \(W^{\pm}\) and \(Z^{0}\) have masses of the order of tens of GeV. Mass terms of the form \(m_{W}^{2}W_{\mu}^{\mp}W^{\pm\mu}\) and \(m_{Z}^{2}Z_{\mu}Z^{\mu}\) also violate gauge invariance, so it seems that we are facing double trouble. The theory resulting from adding all needed mass terms to \(S_{\rm matter}+S_{\rm gauge}\) is the original model proposed in 1961 by Glashow [37], where gauge invariance in _explicitly broken_. 
The inclusion of masses in the SM in a manner compatible with gauge invariance was achieved by Weinberg and Salam [38, 39] and requires the implementation of the BEH mechanism [34, 35, 36], studied in its Abelian version in section 5. In the case at hand, we need to introduce an SU(2) complex scalar doublet \[{\bf H}=\left(\begin{array}{c}H^{+}\\ H^{0}\end{array}\right), \tag{9.30}\] with \(Y({\bf H})=\frac{1}{2}\mathbbm{1}\), so using the Gell-Mann-Nishijima relation (9.3) we find that \(H^{+}\) has charge \(e\) and \(H^{0}\) is neutral. We consider then the action \[S_{\rm Higgs}=\int d^{4}x\ \left[(D_{\mu}{\bf H})^{\dagger}D^{\mu}{\bf H}-\frac{\lambda}{4}\left({\bf H}^{\dagger}{\bf H}-\frac{v^{2}}{2}\right)^{2}\right], \tag{9.31}\] where the covariant derivative is defined in (9.11). Although the action is fully SU(2) \(\times\) U(1)\({}_{Y}\) invariant, the potential has the Mexican hat shape shown in fig. 9 and the field \({\bf H}\) gets a nonzero vev, which by a suitable gauge transformation can always be brought to the form \[\langle{\bf H}\rangle=\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\ v\end{array}\right). \tag{9.32}\] This vev obviously breaks SU(2) and, having nonzero hypercharge, also U(1)\({}_{Y}\). However, since \(\langle H^{+}\rangle=0\) it nevertheless preserves the gauge invariance of electromagnetism. We have then the SSB pattern \[\text{SU(2)}\times\text{U(1)}_{Y}\longrightarrow\text{U(1)}_{\rm em}. \tag{9.33}\] The masses of the gauge bosons are obtained by substituting the vev (9.32) into the action (9.31) and collecting the terms quadratic in the gauge fields. With this, we see that the \(W\) and \(Z\) bosons acquire nonzero masses given respectively by \[m_{W}=\frac{ev}{2\sin\theta_{w}},\hskip 28.452756ptm_{Z}=\frac{ev}{\sin(2\theta_{w})}, \tag{9.34}\] and satisfying the custodial relation \(m_{W}=m_{Z}\cos\theta_{w}\). Interestingly, the scale \(v\) is related to the Fermi constant \(G_{F}\), a quantity that can be measured at low energies. Considering the neutron \(\beta\)-decay process in eq. (9.16) at energies below the mass of the \(W\) boson and comparing with the result obtained from the Fermi interaction \[S_{\rm Fermi}=\frac{G_{F}}{\sqrt{2}}\int d^{4}x\,\overline{\nu}_{e}\gamma_{\mu}(1-\gamma_{5})e\,\overline{d}\gamma^{\mu}(1-\gamma_{5})u, \tag{9.35}\] we get the relation \[G_{F}=\frac{\sqrt{2}}{8}\frac{e^{2}}{m_{W}^{2}\sin^{2}\theta_{w}}=\frac{1}{\sqrt{2}v^{2}}, \tag{9.36}\] where the expression of \(m_{W}\) given in eq. (9.34) has been used. Substituting now the experimental value of the Fermi constant \(G_{F}=1.166\times 10^{-5}\) GeV\({}^{-2}\)[117], we find \[v\approx 246\ \text{GeV}. \tag{9.37}\] In order to give mass to the fermions, we need to follow the strategy explained in page 70 and write the appropriate Yukawa couplings, which in this case read \[S_{\rm Yukawa} =-\sum_{i,j=1}^{3}\int d^{4}x\Big{(}C^{(\ell)}_{ij}\overline{\mathbf{L}}^{i}\mathbf{H}\ell^{j}_{R}+C^{(\ell)*}_{ji}\overline{\ell}^{i}_{R}\mathbf{H}^{\dagger}\mathbf{L}^{j}+C^{(q)}_{ij}\overline{\mathbf{Q}}^{i}\mathbf{H}D^{j}_{R}+C^{(q)*}_{ji}\overline{D}^{i}_{R}\mathbf{H}^{\dagger}\mathbf{Q}^{j}\] \[+\widetilde{C}^{(q)}_{ij}\overline{\mathbf{Q}}^{i}\widetilde{\mathbf{H}}U^{j}_{R}+\widetilde{C}^{(q)*}_{ji}\overline{U}^{i}_{R}\widetilde{\mathbf{H}}^{\dagger}\mathbf{Q}^{j}\Big{)}.
\tag{9.38}\] The two terms in the second line involve the conjugate field \[\widetilde{\mathbf{H}}\equiv i\sigma^{2}\left(\begin{array}{c}H^{+*}\\ H^{0*}\end{array}\right)=\left(\begin{array}{c}H^{0*}\\ -H^{+*}\end{array}\right), \tag{9.39}\] which has \(Y(\widetilde{\mathbf{H}})=-\frac{1}{2}\mathbb{1}\) and can be seen to transform also as a \(SU(2)\) doublet. Given the transformation properties of all fields involved, it is very easy to check that the action (9.38) is SU(2) \(\times\) U(1)\({}_{Y}\) gauge invariant. Notice that here we are assuming that neutrino masses are not due to the BEH mechanism. This is the reason why lepton doublets only couple to the Higgs doublet \(\mathbf{H}\), whose upper component has zero vev. In the case of quarks, however, we need to generate masses for both the upper and lower components of \(\mathbf{Q}\). This is why they couple to the conjugate field \(\widetilde{\mathbf{H}}\), whose upper component acquires a nonzero vev \[\langle\widetilde{\mathbf{H}}\rangle=\frac{1}{\sqrt{2}}\left( \begin{array}{c}v\\ 0\end{array}\right). \tag{9.40}\] To find the expression of the fermions masses generated by the BEH mechanism, we substitute in the Yukawa action the field \(\mathbf{H}\) and its conjugate \(\widetilde{\mathbf{H}}\) by their vevs (9.32) and (9.40). The resulting mass terms have the form \[S_{\rm mass} =-\int d^{4}x\,\left[\left(\overline{e}_{L},\overline{\mu}_{L}, \overline{\tau}_{L}\right)M^{(\ell)}\left(\begin{array}{c}e_{R}\\ \mu_{R}\\ \tau_{R}\end{array}\right)+\left(\overline{d}_{L},\overline{s}_{L},\overline{ b}_{L}\right)M^{(q)}\left(\begin{array}{c}d_{R}\\ s_{R}\\ b_{R}\end{array}\right)\right.\] \[+\left.\left(\overline{u}_{L},\overline{e}_{L},\overline{t}_{L} \right)\widetilde{M}^{(q)}\left(\begin{array}{c}u_{R}\\ c_{R}\\ t_{R}\end{array}\right)+\text{H.c.}\right], \tag{9.41}\] where the mass matrices are given in term of the couplings in eq. (9.38) by \[M^{(\ell)}_{ij}=\frac{v}{\sqrt{2}}C^{(\ell)}_{ij},\hskip 28.452756ptM^{(q)}_{ ij}=\frac{v}{\sqrt{2}}C^{(q)}_{ij},\hskip 28.452756pt\widetilde{M}^{(q)}_{ij}= \frac{v}{\sqrt{2}}\widetilde{C}^{(q)}_{ij}. \tag{9.42}\] These complex matrices are however not necessarily diagonal, although they can be diagonalized through bi-unitary transformations \[U^{(\ell)\dagger}_{L}M^{(\ell)}U^{(\ell)}_{R}=\text{diag}(m_{e},m_{\mu},m_{ \tau}),\] \[V_{L}^{(q)\dagger}M^{(q)}V_{R}^{(q)} =\text{diag}(m_{d},m_{s},m_{b}), \tag{9.43}\] \[\widetilde{V}_{L}^{(q)\dagger}\widetilde{M}^{(q)}\widetilde{V}_{R} ^{(q)} =\text{diag}(m_{u},m_{c},m_{t}),\] where the eigenvalues are the leptons and quarks masses. Notice that fermion masses are determined by both the Higgs vev scale \(v\) and the dimensionless Yukawa couplings \(C_{ij}^{(\ell)}\), \(C_{ij}^{(q)}\), and \(\widetilde{C}_{ij}^{(q)}\), which are experimentally determined. Let us focus for the time being on the quark sector (leptons will be dealt with below in section 9.4). 
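The bi-unitary diagonalization (9.43) is nothing but a singular-value decomposition. The sketch below illustrates it on random complex matrices (purely illustrative stand-ins for the physical mass matrices of eq. (9.42), not the actual SM Yukawas) and already builds the combination of left-handed rotations that will define the quark mixing matrix in eq. (9.46).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mass_matrix(n=3):
    """A generic complex matrix, standing in for the mass matrices of eq. (9.42)."""
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

M_down = random_mass_matrix()   # plays the role of M^(q)
M_up   = random_mass_matrix()   # plays the role of Mtilde^(q)

def biunitary(M):
    """Eq. (9.43): M = V_L diag(m) V_R^dagger, i.e. a singular-value decomposition."""
    U, m, Vh = np.linalg.svd(M)
    return U, Vh.conj().T, m     # V_L, V_R, singular values (the would-be masses)

VL_d, VR_d, m_down = biunitary(M_down)
VL_u, VR_u, m_up   = biunitary(M_up)

# Check the diagonalization and form the CKM-like combination of left rotations.
assert np.allclose(VL_d.conj().T @ M_down @ VR_d, np.diag(m_down))
V_ckm = VL_u.conj().T @ VL_d
assert np.allclose(V_ckm @ V_ckm.conj().T, np.eye(3))   # unitary by construction
print("illustrative down-type masses:", np.round(m_down, 3))
```

The unitarity of the resulting mixing matrix is automatic, which is why only the misalignment between the up- and down-sector left rotations is physical.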
Since \(V_{L,R}^{(q)},\widetilde{V}_{L,R}^{(q)}\) are constant unitary matrices we could use them to redefine the quark and lepton fields in the total action \[\left(\begin{array}{c}u_{L,R}^{\prime}\\ c_{L,R}^{\prime}\\ t_{L,R}^{\prime}\end{array}\right)=\widetilde{V}_{L,R}^{(q)\dagger}\left(\begin{array}{c}u_{L,R}\\ c_{L,R}\\ t_{L,R}\end{array}\right),\qquad\left(\begin{array}{c}d_{L,R}^{\prime}\\ s_{L,R}^{\prime}\\ b_{L,R}^{\prime}\end{array}\right)=V_{L,R}^{(q)\dagger}\left(\begin{array}{c}d_{L,R}\\ s_{L,R}\\ b_{L,R}\end{array}\right), \tag{9.44}\] in such a way that the new fields are mass eigenstates, i.e., their free kinetic terms in the action have the standard diagonal form. A problem however arises when implementing this field redefinition in the interaction terms between the quarks and the \(W^{\pm}\) gauge bosons, mixing the lower with upper components of the \(SU(2)\) doublets. The issue is that, unlike in the kinetic terms, the matrices implementing the field redefinition do not cancel \[S\supset\int d^{4}x\left(\overline{u}_{L},\overline{c}_{L},\overline{t}_{L}\right)\gamma^{\mu}\left(\begin{array}{c}d_{L}\\ s_{L}\\ b_{L}\end{array}\right)W_{\mu}^{+}=\int d^{4}x\left(\overline{u}_{L}^{\prime},\overline{c}_{L}^{\prime},\overline{t}_{L}^{\prime}\right)\widetilde{V}_{L}^{(q)\dagger}V_{L}^{(q)}\gamma^{\mu}\left(\begin{array}{c}d_{L}^{\prime}\\ s_{L}^{\prime}\\ b_{L}^{\prime}\end{array}\right)W_{\mu}^{+}, \tag{9.45}\] where, to simplify the expression, the overall coupling is omitted and the corresponding coupling of the quarks to the \(W^{-}\) boson is obtained by taking the Hermitian conjugate of this term. The combination \[\widetilde{V}_{L}^{(q)\dagger}V_{L}^{(q)}\equiv V_{\rm CKM} \tag{9.46}\] defines the _Cabibbo-Kobayashi-Maskawa (CKM) matrix_[143] and determines the mixing among the quark families. It is an experimental fact that this matrix is nondiagonal, so the emission/absorption of a \(W^{\pm}\) boson does not merely transform the upper into the lower fields (or vice versa) _within_ a single SU(2) quark doublet, but can also "jump" into another family. This gives rise to processes known as flavor changing charged currents. For example, there is a nonzero probability that a \(u\) quark turns into an \(s\) quark by the emission of a \(W^{+}\), or vice versa with a \(W^{-}\), accounting for decays like \(\Lambda^{0}\to p^{+}e^{-}\overline{\nu}_{e}\). What happens inside the \(\Lambda^{0}\) baryon (\(uds\)) is that the strange quark emits a \(W^{-}\) and transforms into a \(u\)-quark, thus converting the \(\Lambda^{0}\) into a proton (\(uud\)). The \(W^{-}\) then decays into an electron and its antineutrino. It is an interesting feature of the electroweak sector of the SM that there are no flavor changing _neutral_ currents at tree level. In the case of electromagnetic-mediated processes, this follows from the fact that the field redefinitions induced by the matrices \(V_{L,R}^{(q)}\) and \(\widetilde{V}_{L,R}^{(q)}\) mix fields with the same electric charge, so they commute with the charge matrix \(Q\) and cancel from the quark electromagnetic couplings. In the case of the weak neutral currents (mediated by the \(Z^{0}\)) the same happens, though maybe it is less obvious.
Indeed, looking at the form of the covariant derivative (9.11) we find the following couplings between the quarks and the \(Z^{0}\) \[S \supset\int d^{4}x\left[\left(\frac{1}{2}-\frac{2}{3}\sin^{2}\theta_ {w}\right)(\overline{u}_{L},\overline{c}_{L},\overline{t}_{L})\gamma^{\mu} \left(\begin{array}{c}u_{L}\\ c_{L}\\ t_{L}\end{array}\right)-\left(\frac{1}{2}-\frac{1}{3}\sin^{2}\theta_{w}\right)( \overline{d}_{L},\overline{s}_{L},\overline{b}_{L})\gamma^{\mu}\left( \begin{array}{c}d_{L}\\ s_{L}\\ b_{L}\end{array}\right)\right.\] \[+\left.\frac{2}{3}\sin^{2}\theta_{w}(\overline{u}_{R},\overline{c }_{R},\overline{t}_{R})\gamma^{\mu}\left(\begin{array}{c}u_{R}\\ c_{R}\\ t_{R}\end{array}\right)-\frac{1}{3}\sin^{2}\theta_{w}(\overline{d}_{R}, \overline{s}_{R},\overline{b}_{R})\gamma^{\mu}\left(\begin{array}{c}d_{R} \\ s_{R}\\ b_{R}\end{array}\right)\right], \tag{9.47}\] where again we have dropped an overall constant which is irrelevant for the argument. What matters for our discussion is that after the field redefinition we get the combinations \(V_{L,R}^{(q)\dagger}V_{L,R}^{(q)}=\mathbb{1}=\widetilde{V}_{L,R}^{(q)\dagger} \widetilde{V}_{L,R}^{(q)}\) and no mixing matrix is left behind. This shows that there are no flavor changing neutral currents at tree level28. Footnote 28: Once quantum effects are included, flavor changing neutral are suppressed due to the flavor mixing brought about by the Cabibbo-Kobayashi-Maskawa matrix, via the so-called GIM (Glashow-Iliopoulos-Maiani) mechanism [144]. **Box 14. SSB or QCD?** We have seen how the BEH mechanism provides the rationale to understand how the particles in the SM acquire their masses, a scenario ultimately confirmed by the experimental detection of the Higgs boson. But, does the BEH mechanism really explains the mass of everything we see around us, from the paper in our hands to the sun over our heads? The answer is no. As we will see, the fraction of the mass of macroscopic objects that we can assign to the Higgs boson acquiring a vev is really tiny. We know that the masses of protons and neutrons are very similar to one another, and much larger than the mass of the electron \[m_{p}\simeq m_{n}\simeq 1836\,m_{e}. \tag{9.48}\] In turn, the mass of a \((A,Z)\) nucleus is \[M(A,Z)=Zm_{p}+(A-Z)m_{n}+\Delta M(A,Z), \tag{9.49}\] with \(\Delta M(A,Z)\) the binding energy, which varies from a bit over \(1\%\) for deuterium to around \(10\%\) for \({}^{62}_{28}\)Ni. Taking eq. (9.48) into account and to a fairly good approximation, the mass of an atom can be written in terms of its mass number alone \[m(A,Z)\simeq Am_{p}. \tag{9.50}\] The point of this argument is to show that in order to explain the mass around us we essentially need to explain the mass of the proton. But here we run into trouble if we want to trace back to the BEH mechanism. The values of the masses of the \(u\) and \(d\) quarks accounted for by the BEH mechanism (the so-called current algebra masses) are \[m_{u}\simeq 2.2\;\text{MeV},\qquad\quad m_{d}=4.7\;\text{MeV}. \tag{9.51}\] Comparing with \(m_{p}[uud]\simeq 938.3\,\text{MeV}\) and \(m_{d}[udd]=939.6\,\text{MeV}\), we see that quark masses only explain about \(1\%\) of the nucleon mass. Thus, close to \(99\%\) of the mass in atomic form in the universe is not due to the BEH mechanism. Where does this mass/energy come from? Actually, from QCD effects. 
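The "about 1%" figure quoted above follows directly from the numbers given in and below eq. (9.51); a trivial sketch:

```python
# Current-algebra quark masses (9.51) versus the nucleon masses quoted in the text (MeV).
m_u, m_d = 2.2, 4.7
m_p, m_n = 938.3, 939.6

print(f"proton (uud):  {(2*m_u + m_d) / m_p:.1%} of the mass from the quark masses")
print(f"neutron (udd): {(m_u + 2*m_d) / m_n:.1%} of the mass from the quark masses")
```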
Protons and neutrons are not only made out of their three valence quarks, but they are filled with a plethora of virtual quarks and gluons fluctuating in and out of existence whose energy make up the missing \(99\%\). These effects can be computed numerically using lattice field theory [145, 146]. Here, however, we just want to offer some general arguments pointing to the origin of the difficulties in describing protons and neutrons in terms of their constituent quarks. Let us begin with a very simple argument. We know that because of the strong dynamics of QCD at low energies quarks get confined into hadrons in a region whose linear size is of the order \(\Lambda_{\text{QCD}}^{-1}\). Applying Heisenberg's uncertainty principle, we can estimate the size of their momentum fluctuations to be about \[\Delta p\sim\Lambda_{\text{QCD}}. \tag{9.52}\] If fluctuations are isotropic the statistical average of the quark momentum vanishes, \(\langle\mathbf{p}\rangle=0\). Since \((\Delta p)^{2}\equiv\langle\mathbf{p}^{2}\rangle-\langle\mathbf{p}\rangle^{2}\), we determine the averaged quark momentum squared to be \[\langle\mathbf{p}^{2}\rangle\sim\Lambda_{\text{QCD}}^{2}. \tag{9.53}\] Now, \(\Lambda_{\text{QCD}}\) is of the order of a few hundred MeV, so the masses of the \(u\) and \(d\) quarks satisfy \(m_{u},m_{d}\ll\Lambda_{\text{QCD}}\). This means that the linear momenta of the valence quarks inside protons and neutrons is much larger than their masses, so they are relativistic particles. Moreover, since their typical energy is of order \(\Lambda_{\text{QCD}}\), they are in the low energy regime of QCD where the dynamics is strongly coupled. What we said about the \(u\) and \(d\) quarks does not apply however to the top (\(m_{t}\simeq 173.7\) GeV), bottom (\(m_{b}\simeq 4.6\,\text{GeV}\)), and charm (\(m_{c}\simeq 1.3\) GeV) quarks, which under the same conditions would behave as nonrelativistic particles. Besides, since their energies are dominated by their masses which are well above \(\Lambda_{\text{QCD}}\), their QCD interactions are weakly coupled. This is why heavy quark bounds states (quarkonium) can be analytically studied using perturbation theory, unlike the bound states of light quarks (\(u\), \(d\), and \(s\)) that have to be treated numerically. The difficulties in describing quarks inside protons and neutrons boils down to them being untrarelativistic particles. The moral of the story is that the popular line that the BEH mechanism "explains" mass is simply not correct. Most of our own mass and the mass of every objects we see around us (and this includes the Earth, the Sun, the Moon, and the stars in the sky) has nothing to do with the Higgs field and is the result of the quantum behavior of the strong interaction. Even in a universe where the up and down quarks were massless, the proton and the neutron would still have nonzero masses and moreover very similar to the ones in our world. ### The Higgs boson In order to analyze mass generation in the electroweak sector of the SM, it was enough to replace the scalar doublet \(\mathbf{H}\) by its vev. However, as we learned in section 5.4 for the Abelian case, the system has excitations around the minimum of the potential corresponding to a propagating scalar degree of freedom. 
To analyze the dynamics of this field, the _Higgs boson_, we write the Higgs doublet \(\mathbf{H}\) as \[\mathbf{H}(x)=\frac{1}{\sqrt{2}}e^{ia^{I}(x)t_{2}^{I}}\left(\begin{array}{c}0\\ v+h(x)\end{array}\right), \tag{9.54}\] where \(a^{I}(x)\) and \(h(x)\) are the four real degrees of freedom encoding the two complex components in (9.30). In fact, as in the Abelian case of section 5.4, we can use the gauge invariance of \(S_{\rm Higgs}+S_{\rm Yukawa}\) to eliminate the SU(2) factor, after which we are left with a single real degree of freedom representing the Higgs boson [36]. Substituting into (9.31) and expanding, we get \[S_{\rm Higgs} =\int d^{4}x\,\left[\frac{1}{2}\partial_{\mu}h\partial^{\mu}h-\frac{\lambda v^{2}}{4}h^{2}-\frac{\lambda v}{4}h^{3}-\frac{\lambda}{16}h^{4}+\frac{2m_{W}^{2}}{v}W_{\mu}^{-}W^{+\mu}h\right. \tag{9.55}\] \[+\left.\frac{m_{W}^{2}}{v^{2}}W_{\mu}^{-}W^{+\mu}h^{2}+\frac{m_{Z}^{2}}{v}Z_{\mu}Z^{\mu}h+\frac{m_{Z}^{2}}{2v^{2}}Z_{\mu}Z^{\mu}h^{2}+m_{W}^{2}W_{\mu}^{+}W^{-\mu}+\frac{m_{Z}^{2}}{2}Z_{\mu}Z^{\mu}\right]\!,\] where in the last two terms we recognize the masses for the \(W^{\pm}\) and \(Z^{0}\) gauge bosons. The first thing to be noticed is that the mass of the Higgs boson is determined by the vev \(v\) and the strength \(\lambda\) of the Higgs quartic self-coupling \[m_{H}=v\sqrt{\frac{\lambda}{2}}=(125.25\pm 0.17)\ \text{GeV}, \tag{9.56}\] where the current average experimental value is quoted [117]. The action (9.55) also contains the coupling between the Higgs boson and the \(W^{\pm}\) and \(Z^{0}\) intermediate bosons, giving rise to \(hW^{+}W^{-}\), \(h^{2}W^{+}W^{-}\), \(hZZ\), and \(h^{2}ZZ\) interaction vertices (the corresponding vertex diagrams are not reproduced here). In the same way, the couplings of the Higgs boson to quarks and leptons follow from inserting the parametrization (9.54) into the Yukawa action (9.38), \[S_{\rm Yukawa} =-\int d^{4}x\,\left[\left(\overline{e}_{L},\overline{\mu}_{L},\overline{\tau}_{L}\right)\left(\frac{1}{v}M^{(\ell)}\right)\left(\begin{array}{c}e_{R}\\ \mu_{R}\\ \tau_{R}\end{array}\right)h \tag{9.58}\] \[+\left(\overline{d}_{L},\overline{s}_{L},\overline{b}_{L}\right)\left(\frac{1}{v}M^{(q)}\right)\left(\begin{array}{c}d_{R}\\ s_{R}\\ b_{R}\end{array}\right)h+\left(\overline{u}_{L},\overline{c}_{L},\overline{t}_{L}\right)\left(\frac{1}{v}\widetilde{M}^{(q)}\right)\left(\begin{array}{c}u_{R}\\ c_{R}\\ t_{R}\end{array}\right)h+{\rm H.c.}\right],\] where only the terms involving the Higgs field \(h\) are displayed. This, upon switching to mass eigenstates, takes the general form \[S_{\rm Yukawa}=-\sum_{f}\frac{m_{f}}{v}\int d^{4}x\,\overline{f}fh, \tag{9.59}\] where \(f=(e^{\prime},\mu^{\prime},\tau^{\prime},u^{\prime},d^{\prime},c^{\prime},s^{\prime},t^{\prime},b^{\prime})\) runs over all the fermion mass eigenstates, apart from the three neutrinos that we will treat separately.
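For numerical orientation, eqs. (9.37), (9.56), and (9.59) fix the Higgs self-coupling and the Yukawa couplings. The sketch below uses the top and bottom masses quoted later in the text; the tau and electron masses are standard inputs not quoted here and should be read as assumptions.

```python
v   = 246.0        # GeV, eq. (9.37)
m_H = 125.25       # GeV, eq. (9.56)

lam = 2 * m_H**2 / v**2            # inverting m_H = v sqrt(lambda/2)
print(f"Higgs self-coupling lambda ~ {lam:.2f}")

# Yukawa couplings m_f/v of eq. (9.59); fermion masses in GeV.
for name, m_f in [("top", 173.7), ("bottom", 4.6), ("tau", 1.777), ("electron", 0.000511)]:
    print(f"{name:>8}: m_f/v ~ {m_f / v:.2e}")
```

The huge spread of the ratios \(m_{f}/v\), from order one for the top quark down to \(10^{-6}\) for the electron, is the flavor hierarchy the SM parametrizes but does not explain.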
The corresponding interaction vertices are the standard Yukawa vertices, one for each massive fermion, with strength \(m_{f}/v\). [The vertex diagrams, together with the discussion of Higgs production and of another of its decay channels that originally followed here, are not reproduced.] Alternatively, the Higgs boson may produce a pair of \(Z^{0}\) bosons that in turn decay into two lepton-antilepton pairs (the corresponding diagram, eq. (9.63), is not reproduced here). These were precisely the decay channels that led to the discovery of the Higgs boson by the ATLAS and CMS collaborations at the LHC [19, 20].

### Neutrino masses

We have been postponing the issue of neutrino masses. It is however an experimental fact that neutrinos have nonzero masses and this is something we have to incorporate in the SM action. One way to do it is to extend the SM to include right-handed _sterile_ neutrinos \(\nu_{R}^{i}\) transforming as \((\mathbf{1},\mathbf{1})_{0}\) under SU(3) \(\times\) SU(2) \(\times\) U(1)\({}_{Y}\) (see the notation introduced on page 105), adding then the following terms to the Yukawa action \[\Delta S_{\rm Yukawa}=-\sum_{i,j=1}^{3}\int d^{4}x\,\Big{(}\widetilde{C}^{(\nu)}_{ij}\overline{\mathbf{L}}^{i}\widetilde{\mathbf{H}}\nu_{R}^{j}+\widetilde{C}^{(\nu)*}_{ji}\overline{\nu}_{R}^{i}\widetilde{\mathbf{H}}^{\dagger}\mathbf{L}^{j}\Big{)}.
\tag{9.64}\] Once the Higgs field gets a vev, this term generate a mass term of the form \[\Delta S_{\rm Yukawa}=-\int d^{4}x\,\left[(\overline{\nu}_{eL}, \overline{\nu}_{\mu L},\overline{\nu}_{\tau L})\widetilde{M}^{(\nu)}\left( \begin{array}{c}\nu_{1R}\\ \nu_{2R}\\ \nu_{3R}\end{array}\right)+\text{H.c.}\right], \tag{9.65}\] with \[M^{(\nu)}_{ij}=\frac{v}{\sqrt{2}}\widetilde{C}^{(\nu)}_{ij}. \tag{9.66}\] Being singlets under all SM gauge groups, the sterile neutrinos only interact gravitationally with other particles. **Box 15. Dirac vs. Majorana fermions** In previous sections, we have shown how antiparticles in QFT are somehow related to complex fields, for example in the complex scalar field discussed in Box 6 (see page 37). In this case, particles are interchanged with antiparticles by replacing the field \(\varphi(x)\) with its complex conjugate \(\varphi(x)^{*}\). To make things more elegant, we may call this operation _charge conjugation_ and the result the _charge _conjugated field_ \[\mathbf{C}:\varphi(x)\longrightarrow\eta_{C}\varphi(x)^{*}\equiv \varphi^{c}(x), \tag{9.67}\] where \(\eta_{C}\) is some phase that we are always free to add while keeping the action (3.86) invariant. At the quantum level, \(\mathbf{C}\) does indeed interchange particles and antiparticles \[\mathbf{C}|\mathbf{p};0\rangle=\eta_{C}^{*}|0;\mathbf{p}\rangle, \mathbf{C}|0;\mathbf{p}\rangle=\eta_{C}|\mathbf{p};0\rangle. \tag{9.68}\] From this perspective, a _real_ scalar field is one identical to its charge conjugate, \(\varphi(x)=\varphi^{c}(x)\). After quantization, its elementary excitations are their own antiparticles. Let us try to make something similar with the Dirac field. In the scalar field case, replacing \(\varphi(x)\) by \(\varphi(x)^{*}\) does not change the field's Lorentz transformation properties, after all complex conjugate or not both fields are _scalars_. Not so for a Dirac fermion. The spinor \(\psi(x)\) and its complex conjugate \(\psi(x)^{*}\) do not transform the same way under the Lorentz group and neither satisfy the same Dirac equation. This means that we cannot define a "real" Dirac spinor just requiring \(\psi(x)=\psi(x)^{*}\). We have to work a little bit more and consider \[\mathbf{C}:\psi(x)\longrightarrow\eta_{C}(-i\gamma^{2})\psi(x) ^{*}\equiv\psi^{c}(x), \tag{9.69}\] where \(\eta_{C}\) again is a complex phase. This charge conjugate spinor transforms in the same way as the original field and also satisfies the same free Dirac equation. Moreover, its action on the multi-particle states generated by the creation operators \(\widehat{b}(\mathbf{k},s)^{\dagger}\) and \(\widehat{d}(\mathbf{k},s)^{\dagger}\) in eq. (4.56) is given by \[\mathbf{C}|\mathbf{k},s;0\rangle=\eta_{C}^{*}|0;\mathbf{k},s\rangle, \mathbf{C}|0;\mathbf{k},s\rangle=\eta_{C}|\mathbf{k},s;0\rangle, \tag{9.70}\] and interchanges particles and antiparticles. The spinor analog of the real scalar field is a _Majorana spinor_, which equals its charge conjugate \[\psi(x)=\psi^{c}(x). \tag{9.71}\] Upon quantization, this identifies particles and antiparticles, as follows from eq. (9.70). 
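As a quick sanity check of the definition (9.69), the sketch below verifies numerically that charge conjugation applied twice gives back the original spinor. It uses the chiral representation of \(\gamma^{2}\), which may differ in conventions from the representation (4.47) used in the text; the check only relies on \(\gamma^{2}\) being purely imaginary with \((\gamma^{2})^{2}=-\mathbb{1}\).

```python
import numpy as np

# gamma^2 in a chiral-type representation; note it is purely imaginary.
s2 = np.array([[0, -1j], [1j, 0]])
gamma2 = np.block([[np.zeros((2, 2)), s2], [-s2, np.zeros((2, 2))]])

def charge_conjugate(psi, eta=1.0):
    """Charge conjugation as in eq. (9.69): psi^c = eta (-i gamma^2) psi*."""
    return eta * (-1j) * gamma2 @ psi.conj()

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)

assert np.allclose(gamma2.conj(), -gamma2)                       # purely imaginary
assert np.allclose(charge_conjugate(charge_conjugate(psi)), psi) # involution
print("(psi^c)^c = psi verified")
```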
It is interesting to implement the Majorana condition expressing the Dirac fermion in terms of its chiral components and using the representation (4.47) of the Dirac matrices \[\left(\begin{array}{c}\chi_{+}\\ \chi_{-}\end{array}\right)=\eta_{C}\left(\begin{array}{c}i\sigma^{2}\chi_{- }^{*}\\ -i\sigma^{2}\chi_{+}\end{array}\right)\qquad\Longrightarrow\qquad\psi=\frac{ 1}{\sqrt{2}}\left(\begin{array}{c}\chi_{+}\\ -i\eta_{C}\sigma^{2}\chi_{+}^{*}\end{array}\right). \tag{9.72}\] In the second identity we wrote a solution to (9.71), and a similar expression can be written in terms of the negative chirality component \(\chi_{-}\). Here we see how the Majorana condition halves the four complex components of a Dirac field down to two. In fact, the Majorana spinor can be written as the sum of a Weyl fermion and its charge conjugate as \[\psi=\frac{1}{\sqrt{2}}\left(\begin{array}{c}\chi_{+}\\ 0\end{array}\right)+\frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\ -i\eta_{C}\sigma^{2}\chi_{+}\end{array}\right)\equiv\frac{1}{\sqrt{2}}\big{(} \psi_{+}+\psi_{+}^{c}\big{)}. \tag{9.73}\] Using this expression, we write the Dirac action for a Majorana fermion \[S=\int d^{4}x\,\left[i\overline{\psi}_{+}\not{\partial}\,\psi_{+}-\frac{m}{2} \big{(}\overline{\psi}\overline{\dot{\vphantom{\psi}}}_{+}\psi_{+}+\overline{ \psi}_{+}\psi_{+}^{c}\big{)}\right], \tag{9.74}\] Unlike Weyl fermions, Majorana spinors admit a mass term without doubling the number of degrees of freedom. An important point concerning Majorana fermions is that they cannot be coupled to the electromagnetic field. This is to be expected, since the Majorana condition identifies particles with antiparticles that, as we saw in Box 7, have opposite electric charge. In more precise terms what happens is that the associated Noether current vanishes \[j^{\mu}=\overline{\psi}\gamma^{\mu}\psi=\frac{1}{2}\Big{(}\chi_{+}^{\dagger} \sigma_{+}^{\mu}\chi_{+}+\chi_{+}^{T}\sigma_{+}^{\mu T}\chi_{+}^{\star}\Big{)} =0. \tag{9.75}\] This can be also seen as a consequence of the incompatibility of the Majorana condition (9.71) with a global U(1) phase rotation of the spinor \(\psi\to e^{i\theta}\psi\). In particular, the Majorana mass term in (9.74) does not conserve the U(1) charge \[\overline{\psi_{\pm}^{c}}\psi_{+}+\overline{\psi}_{+}\psi_{+}^{c}\longrightarrow e ^{2i\theta}\overline{\psi_{+}^{c}}\psi_{+}+e^{-2i\theta}\overline{\psi_{\pm}} \psi_{+}^{c}, \tag{9.76}\] a very important feature for the accidental symmetries of the SM such as lepton number. The addition of sterile neutrinos to generate neutrino masses is only partly satisfactory. One obvious problem is its lack of economy, since it requires the addition of extra species to the SM that nevertheless do not partake in its interactions. But the solution is also unnatural. Due to the smallness of the neutrino masses, the new Yukawa couplings have to be many orders of magnitude smaller than the ones for charged leptons. Generating a Dirac mass term is not the only possibility of accounting for neutrino masses. Having zero electric charge, they are the only fermions in the SM that can be of Majorana type. 
If this were the case, their mass terms in the action would be build from the left components alone, as we saw in Box 15 \[\Delta S=-\sum_{i,j=1}^{3}\int d^{4}x\left(\frac{1}{2}M_{ij}\overline{\nu^{i \underline{c}}}\nu_{L}^{j}+\text{H.c.}\right), \tag{9.77}\] where because of Fermi statistics \(\overline{\nu^{i\underline{c}}}_{L}\nu_{L}^{j}=\overline{\nu^{i\underline{c}} }_{L}\nu_{L}^{i}\) and the mass matrix \(M_{ij}^{(\nu)}\) can be taken to be symmetric. The problem now lies in how to generate a Majorana mass from a coupling of the neutrinos to the Higgs field, since both \(\mathbf{L}^{i}\) and its charge conjugate are both SU(2) doublets and there is no way to construct a gauge invariant _dimension four_ operator involving \(\mathbf{L}^{i}\), \(\mathbf{L}^{ic}\), and \(\mathbf{H}\) (or \(\widetilde{\mathbf{H}}\)). A group-theoretical way to see this is by noticing that the product representation \(\mathbf{2}\otimes\mathbf{2}\otimes\mathbf{2}=\mathbf{4}\oplus\mathbf{2}\oplus \mathbf{2}\) does not con tain any SU(2) singlet. This changes if we admit a dimension-five operator with two Higgs doublets, a left-handed fermion and its charge conjugate. Now it is possible to construct a gauge invariant term since \(\mathbf{2}\otimes\mathbf{2}\otimes\mathbf{2}\otimes\mathbf{2}=\mathbf{5}\oplus \mathbf{3}\oplus\mathbf{3}\oplus\mathbf{3}\oplus\mathbf{1}\oplus\mathbf{1}\). For example \[\Delta S=-\frac{1}{M}\sum_{i,j=1}^{3}\int d^{4}x\left[C^{(\nu)}_{ij}\left( \overline{\mathbf{L}^{ic}}\,\widetilde{\mathbf{H}}^{*}\right)\left( \widetilde{\mathbf{H}}^{\dagger}\mathbf{L}^{j}\right)+\text{H.c.}\right], \tag{9.78}\] is invariant under SU(2) \(\times\) U(1)\({}_{Y}\). This operator in the action has to be understood, in the spirit of EFT, as the result of some new physics appearing at the energy scale \(M\gg v\), with \(v\) the Higgs vev. When the Higgs field acquires its vev, the coupling (9.78) generates a Majorana mass term for the neutrinos \[\Delta S=-\frac{1}{2}\sum_{i,j=1}^{3}\int d^{4}x\Big{(}M^{(\nu)}_{ij}\overline {\nu^{ic}}\nu^{j}_{L}+\text{H.c.}\Big{)}, \tag{9.79}\] where the neutrino mass matrix is given by \[M^{(\nu)}_{ij}=\frac{v^{2}}{M}C^{(\nu)}_{ij}. \tag{9.80}\] The entries of this matrix are suppressed by the factor \(v/M\ll 1\), naturally producing neutrinos with masses well below the ones of the charged leptons. Thus, Majorana neutrinos not only are the most economical solution, making unnecessary adding new fermion species, but also avoids the unnaturalness of the neutrino Yukawa couplings. Incidentally, the Majorana mass term (9.79) violates lepton number, since \(\nu^{j}_{L}\) and \(\overline{\nu^{ic}}_{L}\) transform with the same phase [cf. (9.76)]. Neutrinos are regarded as one of the most promising windows to physics beyond the SM, being the main reason why neutrino physics has remained for decades one of the most exciting fields in (astro)particle physics and cosmology [147, 148, 149]. As to the question of whether the neutrino is a Dirac or a Majorana particle, however, the jury is still out. Some processes can only take place if the neutrino is its own antiparticle, most notably neutrinoless double \(\beta\) decay [150, 151]. A nucleus with mass and atomic numbers \((A,Z)\) can undergo double \(\beta\)-decay and transmute into the nucleus \((A,Z+2)\) with emission of two electrons and two antineutrinos: \[\begin{array}{ccccc}(A,Z)&\longrightarrow&(A,Z+1)\ +\ e^{-}+\ \overline{\nu}_{e}\\ &&\big{\arrowvert}\ \ \ (A,Z+2)\ +\ e^{-}\ +\ \overline{\nu}_{e}\end{array}. 
\tag{9.81}\] If the neutrino is a Majorana particle there is an alternative. The neutrino produced in the first decay may interact with a neutron in the nucleus, turning it into a proton with the emission of an electron \[\overline{\nu}_{e}(\equiv\nu_{e})+n\longrightarrow p^{+}+e^{-}, \tag{9.82}\] so no neutrino is emitted in the process \((A,Z)\rightarrow(A,Z+2)+2e^{-}\). This is describe by the diagram \[\begin{array}{c}\includegraphics[scale=0.5]{ CKM matrix to be \[n^{2}-2n+1-\frac{1}{2}n(n-1)=\frac{1}{2}(n-1)(n-2). \tag{9.85}\] For three families (\(n=3\)) the matrix depends on a single complex phase \(e^{i\delta}\) and three real angles \(\theta_{12}\), \(\theta_{13}\), and \(\theta_{23}\). In terms of them, the CKM matrix is usually parametrized as \[V_{\rm CKM}=\left(\begin{array}{ccc}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i \delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{ i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{ i\delta}&c_{23}c_{13}\end{array}\right), \tag{9.86}\] where \(s_{ij}\equiv\sin\theta_{ij}\) and \(c_{ij}\equiv\cos\theta_{ij}\). The modulus of the entries can be measured through the observation of various weak interaction mediated decays and scattering processes (see for example [153]), with the result [117] \[|V_{\rm CKM}|=\left(\begin{array}{ccc}0.97435\pm 0.00016&0.22500\pm 0.00067&0.00369\pm 0.00011\\ 0.22486\pm 0.00067&0.97349\pm 0.00016&0.04182^{+0.00085}_{-0.00074}\\ 0.00857^{+0.00020}_{-0.00018}&0.04110^{+0.00083}_{-0.00072}&0.999118^{+0.000 03}_{-0.000036}\end{array}\right). \tag{9.87}\] while the value of the CP-violating phase is \(\delta=1.144\pm 0.027\). The experimental measurement of \(|V_{\rm CKM}|\) exhibits a clear hierarchy among its entries, derived from \(s_{13}\ll s_{23}\ll s_{12}\ll 1\). This is manifest in the so-called Wolfenstein parametrization [154] \[V_{\rm CKM}=\left(\begin{array}{ccc}1-\frac{1}{2}\lambda^{2}&\lambda&A \lambda^{3}(\rho-i\eta)\\ -\lambda&1-\frac{1}{2}\lambda^{2}&A\lambda^{2}\\ A\lambda^{3}(1-\rho-i\eta)&-A\lambda^{2}&1\end{array}\right)+\mathcal{O}( \lambda^{4}), \tag{9.88}\] where \(\lambda\equiv s_{12}\). The diagonal elements are all of order one, whereas the size of the other entries decreases as we move away from it. A look at (9.85) shows that with just two families the corresponding flavor mixing matrix would contain no complex phases and depend on a single real parameter, the Cabibbo angle \(\theta_{C}\equiv\theta_{12}\)[155]. Thus, CP violation in the electroweak sector, like the one showing up in for example kaon decay, requires the existence of at least three SM families. CP-violation in the SM is of major importance, since it is a basic ingredient to explain why there is such a tiny amount of antimatter in our universe. However, the amount of CP violation produced by the single complex phase of the CKM matrix is far too small to account for the observed matter-antimatter asymmetry [156]. Finding additional sources in or beyond the SM is one of the big open problems in contemporary high energy physics. Maybe the lepton sector is a good place to look for more CP violation. As with quarks, lepton masses appear when switching from interaction to mass eigenstates by diagonalizing the lepton mass matrix. 
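As an aside on the quark sector, the hierarchy built into the Wolfenstein form (9.88) can be read off the measured central values quoted in eq. (9.87). The short sketch below extracts \(\lambda\), \(A\), and \(\sqrt{\rho^{2}+\eta^{2}}\) from those central values (errors are ignored, so the numbers are indicative only).

```python
import numpy as np

# Central values of eq. (9.87)
V = np.array([[0.97435, 0.22500, 0.00369],
              [0.22486, 0.97349, 0.04182],
              [0.00857, 0.04110, 0.999118]])

lam = V[0, 1]                       # |V_us| = lambda
A   = V[1, 2] / lam**2              # |V_cb| = A lambda^2
rho_eta = V[0, 2] / (A * lam**3)    # |V_ub| = A lambda^3 sqrt(rho^2 + eta^2)

print(f"lambda ~ {lam:.3f},  A ~ {A:.2f},  sqrt(rho^2+eta^2) ~ {rho_eta:.2f}")
print("powers of lambda:", [round(lam**n, 4) for n in (1, 2, 3)])
# The first row of the quoted central values squares to ~1, as unitarity requires.
print("first row norm:", np.sum(V[0]**2))
```

The successive powers of \(\lambda\) make the hierarchical texture of the quark mixing matrix explicit, in contrast with the "democratic" PMNS entries of eq. (9.95).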
Redefining the massive lepton fields \[\left(\begin{array}{c}e^{\prime}_{L,R}\\ \mu^{\prime}_{L,R}\\ \tau^{\prime}_{L,R}\end{array}\right)=U^{(\ell)}_{L,R}\left(\begin{array}{c}e _{L,R}\\ \mu_{L,R}\\ \tau_{L,R}\end{array}\right) \tag{9.89}\] with \(U^{(\ell)}_{L,R}\) defined in eq. (9.43), the interaction terms with the \(W^{\pm}\) bosons take the form \[S\supset\int d^{4}x\,\left[\left(\overline{e}^{\prime}_{L},\overline{\mu}^{ \prime}_{L},\overline{\tau}^{\prime}_{L}\right)U^{(0)\dagger}_{L}\gamma^{\mu} \left(\begin{array}{c}\nu_{eL}\\ \nu_{\mu L}\\ \nu_{\tau L}\end{array}\right)W^{+}_{\mu}+\text{H.c.}\right]. \tag{9.90}\] Here, the Hermitian conjugate term contains the interaction with the \(W^{-}\) and we have dropped the global normalization. In the original version of the SM there are no right-handed neutrinos and therefore we can reabsorb the matrix \(U^{(\ell)\dagger}_{L}\) in a redefinition of the left-handed neutrino fields, without it appearing elsewhere in the SM action. As a result, if the the neutrino were massless there would be no flavor mixing in the lepton sector. Things are drastically different once we add the neutrino mass terms. Let us consider first the case of Dirac masses. As with quarks and charged leptons, the mass matrix in eq. (9.66) can be diagonalized by a bi-unitary transformation \[U^{(\nu)\dagger}_{L}M^{(\nu)}U^{(\nu)}_{R}=\text{diag}(m_{1},m_{2},m_{3}), \tag{9.91}\] and the interaction term (9.90) is recast in terms of neutrino mass eigenstates as \[S\supset\int d^{4}x\,\left[(\overline{e}^{\prime}_{L},\overline{\mu}^{\prime} _{L},\overline{\tau}^{\prime}_{L})U^{(\ell)\dagger}_{L}U^{(\nu)}_{L}\gamma^{ \mu}\left(\begin{array}{c}\nu_{1L}\\ \nu_{2L}\\ \nu_{3L}\end{array}\right)W^{+}_{\mu}+\text{H.c.}\right], \tag{9.92}\] where \[U\equiv U^{(\ell)\dagger}_{L}U^{(\nu)}_{L}=\left(\begin{array}{ccc}U_{e1}&U _{e2}&U_{e3}\\ U_{\mu 1}&U_{\mu 2}&U_{\mu 3}\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}\end{array}\right), \tag{9.93}\] is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) unitary matrix [157, 158]. Similarly to what the CKM matrix does for quarks, the PMNS matrix introduces flavor mixing in the leptonic sector. Moreover, following the same reasoning as with the CKM matrix, we see that for three families the PMNS matrix also depends on three real angles and a single complex phase, representing an additional source of CP violation. It also admits a parametrization similar to the one shown in eq. (9.86) for the CKM matrix where the phase is denoted by \(\delta_{\text{CP}}\). For Majorana neutrinos, however, the mass matrix (9.80) is symmetric and can be diagonal ized by a _unitary_ transformation \[U_{L}^{(\nu)T}MU_{L}^{(\nu)}=\text{diag}(m_{1},m_{2},m_{3}), \tag{9.94}\] so switching to neutrino mass eigenstates we find again an interaction term of the form (9.92). The big difference with respect to the Dirac case is that since the Majorana mass term (9.79) is not invariant under phase rotations of the neutrino fields, we cannot get rid of two of three phases in the PMNS matrix. As a consequence, besides the three angles \(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\) and the phase \(e^{i\delta_{\text{CP}}}\) of the Dirac case, the matrix depends now on two additional complex phases \(e^{i\lambda_{1}}\) and \(e^{i\lambda_{2}}\), known as Majorana phases. 
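The counting of physical angles and phases sketched around eq. (9.85), together with the extra Majorana phases just mentioned, can be packaged in a small function. This is the standard counting, stated here without derivation: a unitary \(n\times n\) matrix has \(n(n-1)/2\) angles, and \(2n-1\) (Dirac case) or \(n\) (Majorana case) phases can be absorbed by field redefinitions.

```python
def mixing_parameters(n, majorana=False):
    """Number of (angles, physical phases) of an n-family mixing matrix."""
    angles = n * (n - 1) // 2
    phases = (n - 1) * (n - 2) // 2          # eq. (9.85)
    if majorana:
        phases += n - 1                      # extra Majorana phases
    return angles, phases

print("CKM (n=3):           ", mixing_parameters(3))                  # (3, 1)
print("PMNS, Dirac nu (n=3): ", mixing_parameters(3))                 # (3, 1)
print("PMNS, Majorana (n=3): ", mixing_parameters(3, majorana=True))  # (3, 3)
```

For three families this reproduces the three angles and single phase of the CKM matrix, and the three angles, \(\delta_{\text{CP}}\), and two Majorana phases of the PMNS matrix.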
The three angles and \(\delta_{\text{CP}}\) can be measured from the neutrino oscillations, whereas the measurement of the two Majorana phases would be possible through the observation of neutrinoless double \(\beta\) decay [152]. Fits of neutrino data (including the Super-Kamiokande atmospheric neutrino data) give the following \(3\sigma\) ranges for the absolute values of the entries of the PMNS matrix [159] \[|\mathbf{U}|=\left(\begin{array}{ccc}0.801\to 0.845&0.513\to 0.579&0.143\to 0.155\\ 0.234\to 0.500&0.471\to 0.689&0.637\to 0.776\\ 0.271\to 0.525&0.477\to 0.694&0.613\to 0.756\end{array}\right). \tag{9.95}\] It is interesting to compare the textures of the matrices (9.88) and (9.95). As already mentioned, for quarks the matrix is of order 1 at the diagonal, \(\lambda\) for the second diagonal, and \(\lambda^{2}\) in the upper right and lower left corners. There seems to be a hierarchical pattern (this is a bit of wishful thinking clearly). In the case of neutrinos, however, it seems that there is democracy in all its entries, and a crude approximation to (9.95) would be to set all its entries to 1. This is a matrix with a single nonzero eigenvalue and two degenerate zeros, reminiscent of the normal or inverted hierarchies in the fit of the neutrino masses. Both textures are so different, that it is difficult to imagine that they have a common origin. A major mystery, whose clarification is beyond the SM. ## 10 Scale invariance and renormalization Renormalization appeared in physics as a way to make sense of the divergent results in QFT. In quantum mechanics, infinities are usually handled by invoking a normal ordering prescription, and even in QFT, they are absent when computing semiclassical contributions to processes in perturbation theory29. The trouble comes when calculating quantum corrections, associated in the perturbative expansion to Feynman diagrams with closed loops. These contain integrals over all independent momenta running in the loops that are frequently divergent. Footnote 29: Here we are going to be concerned with UV divergences associated with the high energy regime of the theory. IR divergences, which appear in the limit of low momenta, cancel once the physical question is properly posed and all contributions to the given process are taken into account. We will not enter into the many details and subtleties involved in the study of divergences in QFT and the philosophy and practicalities of renormalization. They are explained in all major textbooks on the subject and a concise and not too technical overview can be found in chapter 8 of [14]. The first step is to make the divergent integrals finite in order to handle them mathematically. This is done by introducing a proper regulator, that can either be a scale where loop momenta are cut off or a more abstract procedure to render the integrals finite, such as playing with the dimension of spacetime or introducing PV fermions. In any case, regularization implies the introduction of an energy scale \(\Lambda\), called the cutoff for short. The basic point is that this cutoff is an artefact of the calculation and cannot appear in any _physical_ quantity that we compute. Roughly speaking, renormalization consists on getting rid of the cutoff. The key point to do this is the realization that the masses, couplings, and the fields themselves appearing in the classical action are not physical quantities. Therefore, there is nothing wrong with them depending on \(\Lambda\). 
What must be cutoff independent are the physical quantities that we compute and can (and will) be compared with experiments. This quantities are _operationally defined_, in the sense that their definition within the theory's framework is given in terms of the process to be used to measure them. An example is the self-interacting scalar theory \[S=\int d^{4}x\,\left(\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi- \frac{m^{2}}{2}\varphi^{2}-\frac{\lambda}{4!}\varphi^{4}\right), \tag{10.1}\] where we would like to define the physical coupling \(\lambda_{\rm phys}\). We could identify it as the value of the scattering amplitude for four scalar particles when all \({\bf p}_{i}^{2}\) are equal \[\lambda_{\rm phys}\equiv \tag{10.2}\] where the blob stands for all diagrams contributing at a given order in perturbation theory and \(\mu\) is the energy scale of the process. The dependence of the action parameters on \(\Lambda\) is then chosen so this renormalization condition remains cutoff independent. Once this is done not just for the coupling constant but also for _all_ physical quantities (e.g., masses), the theory is renormalized and everything can be computed in terms of experimentally defined physical couplings and masses. In the case of the scalar theory defined by the action (10.1), as well as in other physically relevant theories like QED, QCD or the SM as a whole, it is possible to get rid of the cutoff dependence in any physical process by "hiding" it in a _finite_ number of parameters. Those theories for which this can be accomplished are called renormalizable. Nonrenormalizable theories, on the other hand, require the introduction of an infinite number of parameters to absorb the cutoff dependence, that in turn means that we need to specify an infinite number of operationally-defined physical quantities. In this picture, nonrenormalizability seems quite a disaster, since it seems that to compute physical observables we need to specify an infinite number of physical renormalization conditions. This is the reason why, historically, nonrenormalizable theories were considered to be no good for physics. Regularization and renormalization may have important consequences for classical symmetries, and we have seen examples of this in section 7. One of the immediate consequences of regularization is the necessity of introducing a cutoff in the theory and therefore an energy scale. This has the result that after renormalization, the physical couplings acquire a dependence on the energy scale where they are measured. This scale dependence is codified in the \(\beta\) function, containing information on how the coupling constant \(g\) depends on the scale where it is measured \[\beta(g)\equiv\mu\frac{dg}{d\mu}. \tag{10.3}\] This function can be computed order by order in perturbation theory. In QCD \(\beta(g)<0\), which means that the coupling constant decreases as the energy grows, a property known as asymptotic freedom. Besides, the theory dynamically generates an energy scale \(\Lambda_{\rm QCD}\) below which it becomes strongly coupled, with quarks and gluons confined into mesons and baryons. Asymptotic freedom is the reason behind QCD's success as a description of strong interactions. It allows to understand, for example, why in deep inelastic scattering experiments electrons seem to interact with quasifree partons inside the proton. To summarize, we can say that generically classical scale invariance is anomalous, in the sense that it disappears as the result of renormalization30. 
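The statement \(\beta(g)<0\) and the dynamical generation of \(\Lambda_{\rm QCD}\) can be illustrated with the one-loop running of the strong coupling, a standard result not derived in the text. The sketch below assumes the one-loop coefficient \(b_{0}=11-2n_{f}/3\), a fixed number of flavors \(n_{f}=5\), and the boundary value \(\alpha_{s}(m_{Z})\approx 0.118\); none of these numbers are quoted here, and quark-mass thresholds are ignored.

```python
import numpy as np

# One-loop running: d alpha_s / d ln(mu) = -(b0 / 2 pi) alpha_s^2
alpha_MZ, MZ, n_f = 0.118, 91.19, 5
b0 = 11 - 2 * n_f / 3

def alpha_s(mu):
    """Solution of the one-loop equation with boundary condition at mu = MZ."""
    return alpha_MZ / (1 + b0 * alpha_MZ / (2 * np.pi) * np.log(mu / MZ))

for mu in (10.0, 91.19, 1000.0):
    print(f"alpha_s({mu:7.2f} GeV) = {alpha_s(mu):.3f}")

# Scale at which the one-loop coupling blows up: the strongly coupled regime.
Lambda_QCD = MZ * np.exp(-2 * np.pi / (b0 * alpha_MZ))
print(f"one-loop Lambda_QCD ~ {1e3 * Lambda_QCD:.0f} MeV")
```

The coupling decreases with growing energy (asymptotic freedom) and blows up at a scale of order a hundred MeV, signalling the breakdown of perturbation theory discussed in the text.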
The \(\beta\)-function is just one example of a set of functions describing how couplings and masses change with the energy scale. Together, they provide the coefficients of a set of first-order differential equations satisfied by the theory's correlation functions and other quantities, known as the _renormalization group equations_. Footnote 30: This happens, for example, in QCD with massless quarks. There are however a few examples of theories for which this does not happen, most notably \(\mathcal{N}=4\) supersymmetric Yang-Mills theory in four dimensions. Due to its large symmetry, classical conformal invariance is preserved by quantization. The cartoon description of renormalization presented above might lead to thinking that it is just a smart trick, somehow justifying Feynman's dictum that renormalization is sweeping the infinities under the rug [160]. We have, however, come a long way from there. The current understanding of renormalization, dating back to the groundbreaking work of Kenneth Wilson [161, 162, 163], goes much deeper and beyond the mere mathematics of shifting cutoff dependence from one place to another. It is also closely related to the idea of EFTs, so now we can revisit our discussion on pages 4-7 in more precise terms. Everything boils down to making a physical interpretation of the cutoff. Instead of seeing it as an artificial scale introduced to render integrals finite, we can regard it as the upper energy scale at which our theory is defined. At energies above \(\Lambda\) new physics may pop up, but we do not really care too much, since all we need to know are the values of the masses \(m_{i}(\Lambda)\) and dimensionless couplings \(g_{i}(\Lambda)\). Now we ask ourselves how the theory looks at some lower energy scale \(\mu<\Lambda\). To answer, we need to "integrate out" all physical processes taking place in the range \(\mu\leq E\leq\Lambda\), which results in a new field theory now defined at scale \(\mu\) and expressed in terms of some "renormalized" fields. Generically, the masses and couplings of this theory will differ from the original ones, so we have \(m_{i}(\mu)\neq m_{i}(\Lambda)\) and \(g_{i}(\mu)\neq g_{i}(\Lambda)\). But, in addition to this, the new theory might also contain additional couplings not present at the scale \(\Lambda\), in principle an infinite number of them. Using the language of path integrals, we symbolically summarize all this by writing \[\int\limits_{\mu\leq E\leq\Lambda}\mathscr{D}\Phi_{0}\,e^{iS_{0}[\Phi_{0}]}=e^{iS[\Phi]}, \tag{10.4}\] where \(\Phi_{0}\) collectively denotes the fields of the original theory and \(\Phi\) their renormalized counterparts, while \(S[\Phi]\) is the action of the new theory defined at the energy scale \(\mu\). On general grounds, it can be written as \[S[\Phi]=S_{0}[\Phi]+\sum_{n}\frac{g^{\prime}_{n}(\mu)}{\Lambda^{\dim\mathcal{O}_{n}-4}}\int d^{4}x\,\mathcal{O}_{n}[\Phi]. \tag{10.5}\] In this expression \(S_{0}[\Phi]\) is the action of the original theory with all fields, masses, and couplings replaced by the corresponding renormalized quantities, and \(\mathcal{O}_{n}[\Phi]\) are new operators of dimension greater than or equal to four induced by the physics integrated out between the scales \(\Lambda\) and \(\mu\). Their couplings \(g^{\prime}_{n}(\mu)\) are dimensionless and we see that higher-dimensional operators are suppressed by inverse powers of the high energy scale \(\Lambda\). 
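To make the suppression in (10.5) tangible with a familiar case (anticipating the example discussed below, and quoting the standard tree-level matching rather than deriving it), integrating out the \(W\) boson leaves behind a dimension-six four-fermion operator. For muon decay, \[\mathcal{L}_{\rm eff}\supset-\frac{4G_{F}}{\sqrt{2}}\left(\bar{\nu}_{\mu}\gamma^{\alpha}P_{L}\mu\right)\left(\bar{e}\gamma_{\alpha}P_{L}\nu_{e}\right),\qquad\frac{G_{F}}{\sqrt{2}}=\frac{g^{2}}{8m_{W}^{2}},\] so in the notation of (10.5) the dimensionless coupling is of order \(g^{2}\), while the operator comes suppressed by \(\Lambda^{\dim\mathcal{O}_{n}-4}=m_{W}^{2}\). This is why Fermi's interaction looks pointlike and feeble at energies well below \(m_{W}\).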
In this Wilsonian picture of renormalization the dependence of the coupling constants on the scale has a clear physical meaning: as we go to lower energies, their changing values incorporate the physics that we are integrating out at intermediate scales. Not only this: the difference between renormalizable and nonrenormalizable theories also gets blurred. All theories are defined at a given energy scale \(\Lambda\). In order to describe the physics above this scale, the theory would have to be "completed" with additional degrees of freedom and/or interactions. What is special about renormalizable theories is that they are their own UV completion, in the sense that they can be extended to arbitrarily high energies without running into trouble, although technically this only makes sense for asymptotically free theories. Nonrenormalizable theories need to be completed in the UV to make sense of them above \(\Lambda\). Let us look at the example of Fermi's theory of the weak interaction. It has a natural cutoff given by \(\Lambda=m_{W}\), and if we try to go beyond this energy we run into trouble. For example, the theory violates unitarity at high energies. The theory however can be completed in the UV by the electroweak model studied in section 9, which being renormalizable can in principle be extended to higher energies without inconsistencies. Another case of nonrenormalizable theories encountered in section 5 is the chiral Lagrangian (see page 67). Again, the theory is endowed with a physical cutoff, in this case \(\Lambda_{\rm QCD}\), above which the description in terms of pions is no longer valid. In fact, we can see the chiral Lagrangian as resulting from Wilsonian renormalization applied to QCD: by integrating out the physics of strongly coupled quarks and gluons we get a low energy action for the new fields (the pions) and their interactions. Since the resulting theory does not make sense above \(\Lambda_{\rm QCD}\) there is no problem with the divergences appearing in loops. After all, before the momenta running in them can reach infinity, the pion as such ceases to exist. The final instance of a nonrenormalizable theory we discuss is gravity, which, as explained in section 1, has to be completed above the Planck scale (1.7). But here we have to remember that everything couples to gravity, including the SM. Thus, we are led to conclude that despite being renormalizable, the SM itself has to be regarded as an effective description to be supplemented at the Planck scale, if not earlier. In fact, phenomena like the nonzero neutrino masses strongly indicate new physics lurking somewhere between the electroweak scale and the Planck scale. The bottom line of our discussion is that nonrenormalizability is just a sign that we are dealing with an EFT, and that the ubiquitous presence of gravity in nature forces us to regard _all_ QFTs as EFTs (have a look again at fig. 1 on page 67). Nonrenormalizable theories are no longer the sinister objects they were when renormalization was seen as nothing but the removal of infinities. They are perfectly reasonable theories, provided we are aware of what they are and what they are good for (and they are indeed _very_ good for a great many things!). **Box I7. The Planck chimney** Let us go back to the Higgs action (9.31) and particularly to the potential \[V({\bf H},{\bf H}^{\dagger})=\frac{\lambda}{4}\left({\bf H}^{\dagger}{\bf H}-\frac{v^{2}}{2}\right)^{2}. \tag{10.6}\] 
We have seen that after symmetry breaking the parameter \(\lambda\) directly relates to the Higgs mass (9.56) and determines its self-couplings in the action (9.55). Since after quantization masses and couplings get a dependence on the energy scale, we would like to know how \(\lambda(\mu)\) or the Higgs mass \(m_{H}(\mu)\) depend on the scale \(\mu\). At this point we should recall that the strength of the coupling of the Higgs to fermions is proportional to the latter's masses [see eq. (9.60)], so its interactions with the matter fields are dominated by the top quark. Thus the renormalization group equations determining the evolution of \(\lambda(\mu)\) and \(m_{H}(\mu)\) with the energy scale should also involve the top quark mass \(m_{t}(\mu)\). An important question is whether the evolution of these parameters with the scale changes the shape of the Mexican hat potential in a significant way and, most importantly, whether this jeopardizes the existence of a stable Higgs vacuum (see [164] and references therein). It might be that the sombrero's brim gets flattened at higher energies, or even inverted. If this happens, the Higgs vacuum becomes metastable or outright unstable. Since the renormalization group equations are first order, we need to specify some "initial conditions". In this case they are the values of the Higgs and top masses measured at the LHC. Assuming the SM correctly describes the physics all the way to \(\Lambda_{\rm Pl}\), the bounds to be satisfied by the masses in order to preserve the stability of the Higgs vacuum are [165, 166, 167] \[m_{H}>(129.1\pm 1.5)\ {\rm GeV},\] \[m_{t}<(171.53\pm 0.42)\ {\rm GeV}. \tag{10.7}\] Comparing with the experimental values \(m_{H}=(125.25\pm 0.17)\ {\rm GeV}\) and \(m_{t}=(172.69\pm 0.30)\ {\rm GeV}\) [117], we see that the SM lies slightly outside the stability zone. In fact, the SM seems to be metastable, with the Higgs boson trapped in a false vacuum. The energy scale where the instability appears turns out to be of the order of the geometric mean of the \(W\) mass and the Planck scale \(\Lambda_{\rm inst}\sim\sqrt{m_{W}\Lambda_{\rm Pl}}\). This, we can say, is quite a discovery made at the LHC! The instability of the Higgs vacuum is indeed not good news. Of course, living in a metastable universe is no major problem if its tunneling probability is so low that its decay time turns out to be much larger than the age of the universe, around \(13.8\) Gyr. But we have to remember that the bounds (10.7) are obtained with the proviso that there are no new degrees of freedom between the electroweak and the Planck scales. This is yet another reason to expect some physics beyond the SM making the universe stable. The apparent metastability of the Higgs vacuum highlights a very important feature of the renormalization group. We can run it from high to low energies with total confidence. Knowing the degrees of freedom and interactions at a certain scale \(\Lambda\), everything is determined at energies \(\mu<\Lambda\). The worst thing that may happen is that the degrees of freedom get "rearranged", as it happens in QCD where mesons and baryons replace quarks and gluons at low energies. 
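Before moving on, a rough numerical illustration of the running behind Box I7 may help. The sketch below (our own toy, not the state-of-the-art analysis of [165, 166, 167]) integrates only the dominant one-loop terms of the SM renormalization group equations for the Higgs quartic coupling, the top Yukawa coupling and the strong coupling; it uses the convention \(V\supset\lambda|{\bf H}|^{4}\), which differs from (10.6) by a rescaling of \(\lambda\), neglects the electroweak gauge contributions, and takes tree-level input values, so the numbers are only indicative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy one-loop running of the Higgs quartic coupling (convention V ~ lambda*|H|^4,
# so lambda(m_t) ~ m_H^2/(2 v^2)), the top Yukawa y_t and the strong coupling g3.
# Electroweak gauge and light-Yukawa terms are dropped: this sketches the mechanism
# (top loops driving lambda negative), not a precision computation.

def rge(t, u):                       # t = ln(mu / m_t)
    lam, yt, g3 = u
    k = 1.0 / (16.0 * np.pi**2)
    dlam = k * (24.0 * lam**2 + 12.0 * lam * yt**2 - 6.0 * yt**4)
    dyt  = k * yt * (4.5 * yt**2 - 8.0 * g3**2)
    dg3  = -k * 7.0 * g3**3          # QCD with n_f = 6 flavours
    return [dlam, dyt, dg3]

mt, v, mH = 172.7, 246.0, 125.25     # GeV (tree-level inputs)
u0 = [mH**2 / (2.0 * v**2),          # lambda(m_t) ~ 0.13
      np.sqrt(2.0) * mt / v,         # y_t(m_t) ~ 0.99
      np.sqrt(4.0 * np.pi * 0.108)]  # g3(m_t) from alpha_s(m_t) ~ 0.108

sol = solve_ivp(rge, [0.0, np.log(1.2e19 / mt)], u0, dense_output=True, rtol=1e-8)

ts = np.linspace(0.0, sol.t[-1], 4000)
lam = sol.sol(ts)[0]
crossing = ts[lam < 0]
if crossing.size:
    print(f"lambda turns negative at mu ~ {mt * np.exp(crossing[0]):.1e} GeV")
else:
    print("lambda stays positive up to the Planck scale in this toy model")
```

In this crude approximation the quartic coupling turns negative well below the Planck scale, in fact considerably earlier than in the full analysis, precisely because the stabilising electroweak terms and proper \(\overline{\rm MS}\) inputs are left out; what the toy does capture is the mechanism, namely that top-quark loops drive \(\lambda\) negative at high energies.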
Running the renormalization group in the opposite direction is another matter. If the aim is to get information about what is going on at \(\mu>\Lambda\), additional assumptions are required: either that no new degrees of freedom emerge above \(\Lambda\), or that there is some UV completion whose details are necessarily an educated guess. After all, this is why particle physics is hard. Whatever happens above the energies we explore is blurred in the parameters of the theory we test. The best we can do is play the model building game to reproduce this blurriness, and hopefully predict distinct signals that could be detected in some future facility. ## 11 Closing remarks The SM is a vast and complex subject, providing the best description of particle physics and its applications at energies below a few TeV. It explains a large number of phenomena in microphysics and in cosmology. However, its precise formulation delineates some of its limitations. For instance:

* It does not explain the values of the masses and mixing angles of quarks and leptons (including neutrino masses).
* The SM does not provide adequate candidates to explain dark matter.
* The only real progress in the study of dark energy has been to change its name from the previous one: the cosmological constant.
* We know that CP needs to be violated in the universe in order to generate a matter-antimatter asymmetry. Thus, three families are the minimum needed to generate a CP violating angle, apart from the QCD vacuum angle. Unfortunately, CP violation from the CKM matrix is not enough to generate the observed asymmetry. The equivalent angle in the neutrino sector has not yet been measured. It would be ironic if the ultimate origin of "humans" was related to properties of the ghostly neutrinos. Theories beyond the standard model provide many scenarios with larger amounts of CP violation.
* The currently preferred paradigm in cosmology is inflation. We still do not have a convincing candidate for what the inflaton is, or how the big bang was triggered, if that question makes any sense at all. There are still many open questions in cosmology, including what is the correct paradigm.

This is just a sample of the most pressing issues for which the SM cannot provide a satisfactory answer. For decades now the scientific community has been trying to address these problems through extensions of the SM, from minimal ones inspired by supersymmetry to radical proposals rethinking the very structure of the elementary constituents, like string theory. So far the experiments are refusing to give any positive indication as to where the answers to the open questions might lie. Despite transient anomalies or data bumps, the more we probe the Higgs particle the more it looks like its "vanilla version". It is truly fascinating that in order to give masses to the SM particles nature has chosen the simplest solution we came up with, the Higgs field. The SM's definitive triumph, the discovery of the Higgs particle in 2012, was also a disappointment, because it apparently closed the door to more exciting possibilities with a clear bearing on new physics. One of the reasons for the impasse might be that we are at the end of a cycle and the current conceptual framework based on symmetry and locality has been exhausted, or maybe the idea of naturalness, a basic guiding principle in our understanding of particle physics, is after all a red herring. 
We still need to bring gravity into the SM and this opens a plethora of problems and questions, some of them touching notions like landscapes or multiverses, loaded with philosophical or just metascientific ideas. Cosmology and astroparticle physics might offer some hope. In recent years, we have witnessed important discoveries, from the first direct detection of gravitational waves in 2015 [168] to the "photo" of the black hole at the center of the M87 galaxy [169] in 2019. The rapidly developing field of gravitational wave astronomy opens up new windows to phenomena up to now out of observational reach, and it may allow unprecedented glimpses into the physics of compact astrophysical objects or the very early universe. We should not give up hope. Maybe we are on the verge of a golden era of discoveries that will leave us gasping with awe and laughing with joy in amazement at new visions of the universe. One never knows, and dreaming is free. ###### Acknowledgements. These lecture notes contain an extended version of courses taught by the authors at the 2022 European School for High Energy Physics (L.A.-G.), the TAE 2017 and 2019 schools, and graduate courses at Madrid Autonoma University (M.A.V.-M.). L.A.-G. would like to thank Markus Elsing, Martijn Mulders, Gilad Perez, and Kate Ross for their invitation to present the lectures at the 2022 ESHEP Jerusalem school, and for fun moments together. We would also like to thank Het Joshi, student assistant at the Simons Center for Geometry and Physics, for her excellent work editing the first draft of these lecture notes. M.A.V.-M. acknowledges financial support from the Spanish Science Ministry through research grant PID2021-123703NB-C22 (MCIN/AEI/FEDER, EU), as well as from Basque Government grant IT1628-22.
2302.14149
Scalable precision wide-field imaging in radio interferometry: II. AIRI validated on ASKAP data
Accompanying Part I, this sequel delineates a validation of the recently proposed AI for Regularisation in radio-interferometric Imaging (AIRI) algorithm on observations from the Australian Square Kilometre Array Pathfinder (ASKAP). The monochromatic AIRI-ASKAP images showcased in this work are formed using the same parallelised and automated imaging framework described in Part I: ``uSARA validated on ASKAP data''. Using a Plug-and-Play approach, AIRI differs from uSARA by substituting a trained denoising deep neural network (DNN) for the proximal operator in the regularisation step of the forward-backward algorithm during deconvolution. We build a trained shelf of DNN denoisers which target the estimated image-dynamic-ranges of our selected data. Furthermore, we quantify variations of AIRI reconstructions when selecting the nearest DNN on the shelf versus using a universal DNN with the highest dynamic range, opening the door to a more complete framework that not only delivers image estimation but also quantifies epistemic model uncertainty. We continue our comparative analysis of source structure, diffuse flux measurements, and spectral index maps of selected target sources as imaged by AIRI and the algorithms in Part I -- uSARA and WSClean. Overall we see an improvement over uSARA and WSClean in the reconstruction of diffuse components in AIRI images. The scientific potential delivered by AIRI is evident in further imaging precision, more accurate spectral index maps, and a significant acceleration in deconvolution time, whereby AIRI is four times faster than its sub-iterative sparsity-based counterpart uSARA.
Amanda G. Wilber, Arwa Dabbech, Matthieu Terris, Adrian Jackson, Yves Wiaux
2023-02-27T21:14:06Z
http://arxiv.org/abs/2302.14149v2
# Scalable precision wide-field imaging in radio interferometry: ###### Abstract Accompanying Part I, this sequel delineates a validation of the recently proposed AI for Regularisation in radio-interferometric Imaging (AIRI) algorithm on observations from the Australian Square Kilometre Array Pathfinder (ASKAP). The monochromatic AIRI-ASKAP images showcased in this work are formed using the same parallelised and automated imaging framework described in Part I: "uSARA validated on ASKAP data". Using a Plug-and-Play approach, AIRI differs from uSARA by substituting a trained denoising deep neural network (DNN) for the proximal operator in the regularisation step of the forward-backward algorithm during deconvolution. We build a trained shelf of DNN denoisers which target the estimated image-dynamic-ranges of our selected data. Furthermore, we quantify variations of AIRI reconstructions when selecting the nearest DNN on the shelf versus using a universal DNN with the highest dynamic range, opening the door to a more complete framework that not only delivers image estimation but also quantifies epistemic model uncertainty. We continue our comparative analysis of source structure, diffuse flux measurements, and spectral index maps of selected target sources as imaged by AIRI and the algorithms in Part I - uSARA and WSClean. Overall we see an improvement over uSARA and WSClean in the reconstruction of diffuse components in AIRI images. The scientific potential delivered by AIRI is evident in further imaging precision, more accurate spectral index maps, and a significant acceleration in deconvolution time, whereby AIRI is four times faster than its sub-iterative sparsity-based counterpart uSARA. keywords: techniques: interferometric - techniques: image processing - radio continuum: galaxies - galaxies: clusters: intracluster medium ## 1 Introduction The superior detection capabilities of modern and upcoming radio arrays - namely, the Square Kilometre Array (SKA) - necessitate new and improved radio-interferometric (RI) imaging algorithms that are precise, robust, and scalable to larger quantities of data, wider fields-of-view, and broader frequency bandwidths. The "Scalable precision wide-field imaging in radio interferometry" series aims to showcase a novel imaging framework in action, as applied to real radio observations from ASKAP. The proposed imaging framework builds from compressed sensing techniques to reconstruct true signal from incomplete, noisy data and operates at the interface of optimisation theory and deep learning. Implemented in MATLAB, the framework is automated, highly parallelised, and capable of producing wide-field, high-dynamic range, super-resolved monochromatic intensity images. Two interchangeable image regularisation denoisers can be "plugged" in as the 'backward' step of the underlying iterative forward-backward (FB) deconvolution structure (Terris et al., 2022; Dabbech et al., 2022) of the imaging framework, alternating with a gradient descent 'forward' step promoting data fidelity. The first algorithm, unconstrained Sparsity Averaging Reweighted Analysis (uSARA), is purely optimisation-based. It leverages the proximal operator of a state-of-the-art handcrafted sparsity-promoting regularisation function (Carrillo et al., 2012; Terris et al., 2022) as regularisation denoiser. In Part I: "uSARA validated on ASKAP data", uSARA was validated against the widely-used CLEAN-based imager for RI WSClean (Offringa et al., 2014; Offringa and Smirnov, 2017). 
Our experiments with uSARA showed that we were able to take wide-field, imperfectly calibrated data and create images with exceptional resolution and enhanced sensitivity. The findings of Part I establish uSARA as an advanced RI imaging algorithm capable of surpassing the state-of-the-art in precision and robustness when applied to large-scale, real, and imperfect data. A remaining caveat with uSARA lies in its computational cost due to the iterative nature of the proximal operator underlying its image model. In this sequel, we showcase AIRI (Terris et al., 2022) - the second algorithm encapsulated in our automated, parallelised imaging framework - which combines optimisation theory with the power of AI. Building on the recent success of Plug-and-Play (PnP) approaches in various applications such as image restoration (Zhang et al., 2021) and magnetic resonance imaging (Ahmad et al., 2020), AIRI relies on the same FB iterative scheme as usSARA, with the proximal operator enforcing the image prior model replaced by a learned DNN denoiser. The learned denoiser, deployed on graphic processing units (GPUs), enables a significant acceleration of the backward step of the iterative FB algorithm when compared to the computationally heavy iterative proximal operator powering uSARA. The speed combined with the learning power of the DNNs - trained herein as denoisers for high dynamic-range images - make for a scalable and robust tool in image reconstruction. Although the training of the DNNs requires important computational time and resources, AIRI denoisers are pretrained independently of the data under scrutiny. They can generalise to any RI data with a simple scaling procedure. This paper specifically addresses testing AIRI on the same real and imperfectly calibrated data from ASKAP, used in Part I. To summarise from Part I, we selected three fields-of-view (FoVs) from ASKAP Early Science and Pilot Survey observations hosting radio sources of primary interest that exhibit emission with both complex diffuse and compact filamentary morphology. Our targets of interest include the merging galaxy cluster system Abell 3391-95 (hosting a candidate radio phoenix; Bruggen et al.2021), the merging galaxy cluster SPT-CL J2023-5535 (hosting a radio halo and radio relic; HyeongHan et al.2020), the X-shaped radio galaxy PKS 2014-558 (e.g. Cotton et al.2020), and the "dancing ghosts," known collectively as PKS 2130-538 (e.g. Norris et al.2021). We refer the reader to Table 1 of Part I for full details on the ASKAP observations selected for imaging. Further details of the observations selected, calibration and processing of the data, and the steps involved to prepare for imaging are elaborated upon in Section 3 of Part I. The remainder of this article is structured as follows. In Section 2, we recall the parallelised, automated imaging framework from Part I and expand upon the application of the AIRI algorithm as well as our approach to selecting DNN denoisers for imaging. In Section 3, we provide details of the ASKAP data used in this work and the imaging settings applied for the AIRI algorithm. Reconstruction results of our selected fields are presented and compared to the results of Part I in Section 4. The computational performance of AIRI is studied in 5. Finally, conclusions are made in Section 6. 
## 2 Methods In this section, we briefly recall the RI data model in the context of wide-field imaging and provide a summary of AIRI, building from the underlying theory (Terris et al.2022) and its first application to real RI data (Dabbech et al.2022). We also outline the encompassing framework for wide-field imaging, focusing on the parallelisation of the AI denoiser and the automated selection of associated parameters. A description of the framework's underpinning wide-field parallel measurement operator can be found in Section 2 of Part I. ### RI data model We recall the discrete RI data model in the context of wide-field imaging, detailed in Part I, whereby the measured visibilities \(\mathbf{y}\in\mathbb{C}^{M}\) are modelled from a discrete representation of the sought radio image \(\mathbf{x}\in\mathbb{R}_{+}^{N}\) as follows \[\mathbf{y}=\mathbf{\Phi}\mathbf{x}+\mathbf{n}, \tag{1}\] where \(\mathbf{n}\in\mathbb{C}^{M}\) is a realisation of random Gaussian noise with mean zero and standard deviation \(\tau>0\), and \(\mathbf{\Phi}\in\mathbb{C}^{M\times N}\) is the measurement operator encompassing the Fourier sampling and the so-called \(w\)-effect, a chirp-like phase modulation emanating from the non-coplanarity of the array (Cornwell et al.2008). Often, a noise-whitening operation is applied to the measured visibilities to ensure constant standard deviation of the noise (see Appendix A of Terris et al.2022, for more details). The operation can be applied in combination with a weighting scheme compensating for the highly non-uniform Fourier sampling (_e.g._ Briggs weighting; Briggs1995) to enhance the effective resolution of the observation. Naturally, any transform applied to the data is injected in the model of the measurement operator. Our imaging framework is shipped with a parallel and memory-efficient measurement operator, ensuring scalability to large data sizes (see Dabbech et al.2022, for a comprehensive summary). ### AIRI algorithm The recent PnP scheme established that proximal optimisation algorithms, such as FB, enable not only the use of proximal operators of handcrafted regularisation operators, but also the injection of learned DNN denoisers, which define regularisation implicitly (Venkatakrishnan et al.2013; Romano et al.2017). In order to preserve the convergence of the algorithm, and the interpretability of its solution, the PnP denoiser must typically satisfy a "firm non-expansiveness" constraint, ensuring that it contracts distances (Pesquet et al.2021; Hurault et al.2022). Learning denoisers from rich databases (as opposed to handcrafting proximal operators) opens the door to more powerful regularisation. The speed of DNNs on GPU also offers a significant acceleration over iterative proximal operators. In this context, the AIRI imaging algorithm (Terris et al.2022) is underpinned by the same FB structure as the uSARA imaging algorithm (see Section 2 in Part I) with the image update at each iteration alternating between a 'forward' step enforcing data fidelity with respect to (1), and a 'backward' denoising step for image regularisation: \[(\forall k\in\mathbb{N})\qquad\mathbf{x}^{(k+1)}=\mathrm{D}\left(\mathbf{x}^{(k)}- \gamma\mathbf{\nabla}f(\mathbf{x}^{(k)})\right). \tag{2}\] The operator \(\mathrm{D}\) denotes a learned DNN denoiser. 
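For intuition, the block below is a minimal sketch of this plug-and-play forward-backward loop: a toy example with a small dense random operator and a soft-thresholding placeholder standing in for the learned denoiser (our own illustration, not the AIRI implementation, which is written in MATLAB and uses the wide-field RI measurement operator described in Part I). The forward step uses the data-fidelity gradient spelled out in the next paragraph.

```python
import numpy as np

def pnp_forward_backward(y, Phi, denoiser, gamma, n_iter=200):
    """Toy PnP forward-backward loop: x <- D(x - gamma * grad f(x)),
    with f(x) = 0.5 * ||y - Phi x||_2^2, so grad f(x) = Re{Phi^H (Phi x - y)}."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = np.real(Phi.conj().T @ (Phi @ x - y))   # forward (data-fidelity) step
        x = denoiser(x - gamma * grad)                 # backward (denoising) step
    return x

# Illustration on a sparse synthetic signal; soft-thresholding plays the role of D.
rng = np.random.default_rng(0)
N, M = 64, 48
Phi = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, 5, replace=False)] = rng.uniform(0.5, 1.0, 5)
y = Phi @ x_true + 0.01 * (rng.normal(size=M) + 1j * rng.normal(size=M))

L = np.linalg.norm(np.real(Phi.conj().T @ Phi), 2)     # spectral norm, bounds the step size
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - 1e-3, 0.0)
x_rec = pnp_forward_backward(y, Phi, soft, gamma=1.0 / L)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

In AIRI the placeholder denoiser is replaced by the trained DnCNN evaluated on GPUs, and the dense matrix by the parallel wide-field measurement operator.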
Considering the standard data-fidelity function \(f(\mathbf{x};\mathbf{y})=1/2\|\mathbf{y}-\mathbf{\Phi}\mathbf{x}\|_{2}^{2}\), where \(\|.\|_{2}\) denotes the \(\ell_{2}\)-norm of its vector argument, the operator \(\nabla f\) stands for its gradient and reads \(\nabla f(\mathbf{x})=\mathrm{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\}\mathbf{x}-\mathrm{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{y}\}\). The parameter \(\gamma>0\) is a sufficiently small step size. ### DNN training and noise level Following Terris et al. (2022), the AIRI denoisers used in this work were trained following a supervised approach to remove random Gaussian noise with zero mean and standard deviation \(\widehat{\sigma}>0\) from noisy input images. The denoisers rely on a simple denoising convolutional neural network (DnCNN) architecture and are trained using a rich high-dynamic range database synthesised from optical astronomy images, with groundtruth images normalised to have a peak value equal to 1. Importantly, the training loss function is regularised with an appropriate non-expansiveness term on the denoiser \(\mathrm{D}\). Given the normalisation of the groundtruth images from the training database, the DNN's noise level can be interpreted as the inverse of a target dynamic range, which can intuitively be adjusted to match the signal-to-noise ratio in the observed data. Considering \(L\) as the spectral norm of \(\mathrm{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\}\), the standard deviation of the measurement noise in the image domain can be estimated as \(\sigma=\eta\tau/\sqrt{2L}\) (Thouvenin et al. 2022; Wilber et al. 2022), where \(\eta>0\) is derived from the data-weighting operator when considered in imaging and is set to 1 otherwise. Hence, the target dynamic range of the sought image is given by \(\max_{j}\{x_{j}\}/\sigma\). However, the peak value of the true image of the sky is not accessible in practice. We therefore resort to the dirty image defined as \(\overline{x}^{\text{dirty}}=\beta\text{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{y}\}\in\mathbb{R}^{N}\), where \(\beta>0\) is a normalisation factor1, and approximate the peak value of the sought image by the peak value of the dirty image \(\kappa=\max_{j}\{\overline{x}^{\text{dirty}}_{j}\}>0\). The value of \(\kappa\) constitutes an upper bound on the estimated peak value2. Consequently, \(\kappa/\sigma\) provides an estimate (in fact, an upper bound) on the target dynamic range. Footnote 1: The factor \(\beta\) corresponds to the peak value of the non-normalised point spread function given by \(\text{Re}\{\mathbf{\Phi}^{\dagger}\mathbf{\Phi}\}\mathbf{\delta}\), where \(\mathbf{\delta}\in\mathbb{R}^{N}\) is the image with one at its centre and zero otherwise. Footnote 2: In our experiments, we observed that \(\kappa\) is within one order of magnitude of the peak value of the reconstructed image. In this context, the RI inverse problem (1) is normalised by \(\kappa\) to ensure that the sought image \(\mathbf{x}/\kappa\) satisfies the same normalisation constraints as the training images, such that pixel values are below 1. The DNN denoiser is trained for the removal of a zero-mean random Gaussian noise with standard deviation \[\widehat{\sigma}=\sigma/\kappa, \tag{3}\] where \(\sigma=\eta\tau/\sqrt{2L}\). The AIRI image estimate is later re-scaled back via multiplication by \(\kappa\). 
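For orientation, the scaling heuristic of Eq. (3) can be written out in a few schematic lines (our own sketch: the variable names are illustrative, a small dense matrix stands in for the implicit wide-field measurement operator, and the normalisation factor \(\beta\) is supplied by the caller):

```python
import numpy as np

def airi_training_noise_level(Phi, y, tau, beta, eta=1.0):
    """Schematic version of the heuristic in Eq. (3): returns (sigma_hat, kappa),
    where sigma_hat = sigma / kappa is the inverse of the target dynamic range.
    Phi: measurement operator (dense here), y: visibilities, tau: visibility noise std,
    beta: PSF-peak-based normalisation of the dirty image (footnote 1), eta: weighting factor."""
    L = np.linalg.norm(np.real(Phi.conj().T @ Phi), 2)   # spectral norm of Re{Phi^H Phi}
    sigma = eta * tau / np.sqrt(2.0 * L)                 # image-domain noise level
    dirty = beta * np.real(Phi.conj().T @ y)             # normalised dirty image
    kappa = dirty.max()                                  # proxy for the unknown sky peak
    return sigma / kappa, kappa
```

The resulting \(\widehat{\sigma}\) is then compared with the training noise levels of the available denoisers when one is selected, as discussed next.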
The apparent dependency of the training noise level on the statistics of the noise corrupting the RI data and the associated measurement operator raises a generalisability concern for the AIRI denoisers. However, this can be circumvented via further scaling of the inverse problem to bring the target dynamic range of the reconstruction to the inverse of the noise level of an already available DNN denoiser. ### Denoiser selection The selection of the appropriate denoiser can be conducted via two approaches. The first approach relies on a pre-trained _shelf of denoisers_, from which the appropriate denoiser is selected depending on the target dynamic range of the reconstruction. A set of denoisers are thus trained, with noise levels sampled within a wide range of values, reflecting a whole range of dynamic ranges of interest in modern RI imaging. For each RI dataset, AIRI's denoiser is selected from the shelf as the DNN with the nearest noise level \(\sigma_{\text{s}}\) below the inverse of the target dynamic range \(\widehat{\sigma}\). This implies considering a slightly looser image peak upper bound \(\kappa\widehat{\sigma}/\sigma_{\text{s}}\) for the re-scaling of the inverse problem (1), thus leading to a heuristic value \(\sigma_{\text{s}}\) for the training noise level in (3). The second approach leverages a pre-trained _single (universal) denoiser_ to be applied for the image formation of any RI dataset. The denoiser is trained with a very low noise level \(\sigma_{\text{s}}\), tailored for the highest target dynamic ranges of interest for modern RI imaging. For any RI dataset of interest, this amounts to considering a possibly much looser image peak upper bound \(\kappa\widehat{\sigma}/\sigma_{\text{s}}\) for the re-scaling of the inverse problem (1), systematically leading to a heuristic value \(\sigma_{\text{s}}\) for the training noise level in (3). This second approach was already shown to be efficient in the formation of high-quality radio maps when applied to observations from the MeerKAT telescope (Dabbech et al., 2022). ### Denoiser faceting Owing to their convolutional nature and narrow receptive fields, AIRI's denoisers can be applied to facets of the image without causing any faceting-related artefacts, provided that appropriate facet overlaps are considered. This feature enables the scalability of AIRI denoisers to large image dimensions through their parallel application to image facets, as well as circumventing the memory limitation of the GPUs during inference. ## 3 Data, imaging, and analysis In validating the AIRI algorithm, we consider the same RI datasets imaged in Part I. These data consist of three individual beam observations from ASKAP Early Science and Evolutionary Map of the Universe Pilot survey (EMU-PS Norris et al., 2021) scheduling blocks (SBs): SB8275-15, SB9351-12, and SB9442-35. The measurement sets containing calibrated visibilities of these selected observations were produced by ASKAPsoft (Hotan et al., 2021) and obtained from the CSIRO ASKAP Science Data Archive (CASDA; Chapman et al., 2017). For our imaging purposes, each of these observations, with bandwidths of 288 MHz (ranging between [800,1158] MHz), was split into eight spectral windows (SPWs). For the first two fields (SB8275-15 and SB9351-12), the resulting sub-band data were imaged separately to form monochromatic images with dimensions of \(5500\times 5500\) pixels and a cell-size of 2.2 arcsec. 
For the third field (SB9442-35), sub-band images and a single full-band image with dimensions of \(4096\times 4096\) pixels and cell-sizes of 2.2 arcsec were formed. The full-band data of SB9442-35 were reconstructed into a monochromatic full-band image with the aim of increasing both the dimensionality and the sensitivity of the data. Full details of the data and the imaging settings can be found in Tables 1 & 2 of Part I. Specific to AIRI, in each imaging experiment we enabled the image-faceting functionality of the denoiser by splitting the image into four facets of equal dimensions. Faceting was found to be necessary for satisfying memory requirements. In the selection of the appropriate denoiser, we first determined the values of the inverse of the target dynamic range, \(\widehat{\sigma}\), of all formed sub-band and full-band images, following (3). We primarily opted for the pre-trained shelf strategy for all AIRI reconstructions, where the considered noise levels of the pre-trained denoisers are \(\sigma_{s}=[2,4,8]\times 10^{-5}\). Figure 1: The positioning of the inverse of the target dynamic range, \(\widehat{\sigma}\), associated with the different SPWs of the selected fields (dashed lines), with respect to the noise level, \(\sigma_{\text{s}}\), of the pre-trained shelf of DNN denoisers (black solid lines). From left to right: plots for the fields SB8275-15, SB9351-12, and SB9442-35, respectively. Specific to SB9442-35, we also add the inverse of the target dynamic range associated with the full-band imaging experiment. We also investigated the pre-trained universal denoiser strategy for the field SB9351-12. In this case, the universal denoiser is chosen from the pre-trained shelf as the denoiser with the lowest noise level (equivalently, the highest dynamic range), that is \(\sigma_{u}=2\times 10^{-5}\). We note that training under the firm non-expansiveness constraint is highly challenging. While Terris et al. (2022) demonstrated in simulation that the AIRI training approach leads to a robust way to ensure convergence of the PnP algorithms, we acknowledge that, when used for real data and at large image sizes and dynamic ranges such as those of interest here, some denoisers lead to algorithm instability, ultimately requiring further training. This phenomenon was also witnessed by Dabbech et al. (2022). AIRI experiments were run on Cirrus3, a UK Tier2 high-performance computing (HPC) service, and utilised its GPU compute nodes. A Cirrus GPU node comprises 4 GPUs and 40 CPU cores with 384 GB of shared memory. Each imaging experiment of AIRI was launched on one to five GPU nodes, depending on the memory requirements, whereby CPU cores, allocated dynamically, were utilised for the forward step (more precisely, the application of the measurement operator), and GPUs were exploited in the parallel application of the denoising DNN on facets of the image. Footnote 3: [http://www.cirrus.ac.uk](http://www.cirrus.ac.uk) For a quantitative assessment of AIRI's performance, we focus on the same primary sources of interest as presented in Part I and analyse their associated flux measurements and spectral index maps. Results are compared to those obtained with uSARA and WSClean for all spectral windows of each imaged field. 
The AIRI flux measurements were computed from hand-drawn regions (generated using the visualisation software SAOImageDS9; Joye & Mandel 2003) slightly different from those considered in the uSARA and WSClean images, to better match the morphology of recovered emission. Spectral index maps inferred from the AIRI-ASKAP sub-band images were also obtained following the same procedure described in Part I. Likewise, only the first six sub-band images were used to generate the spectral index maps due to the limited diffuse signal recovered in the final two sub-bands. Similarly to uSARA, blurring with a circular Gaussian beam of 5 arcsec was applied to AIRI-ASKAP sub-band images to smooth the source structure before flux measurements were fitted to the spectral curve: \(S_{\nu}\propto\nu^{-\alpha}\), where \(S_{\nu}\) is the flux density for a given beam area at a given frequency \(\nu\) and \(\alpha>0\) is the spectral index. In presenting our AIRI images, we also make use of optical images from the first data release of the Dark Energy Survey (DES; Abbott et al. 2018). We performed an additional assessment towards the quantification of epistemic model uncertainty by comparing variations of AIRI (_i.e._ using different DNN denoisers) when imaging each spectral window of the field SB9351-12. More details on this analysis can be found in Section 4.4. ## 4 Results In this section, we showcase high-resolution, high-fidelity images of our three selected fields produced by the AIRI algorithm encapsulated in our parallelised and automated imaging framework and investigate two approaches for the selection of the DNN denoiser, as described in Section 2.4. Figure 1 illustrates the positioning of the inverse of the target dynamic range \(\widehat{\sigma}\) of each imaged spectral window for each field with respect to the noise levels of the learned DNNs. Select AIRI-ASKAP images are displayed in Figures 2-4, in a format identical to the uSARA-ASKAP and WSClean images presented in Part I. The AIRI-ASKAP figures consist of full FoV images with zoomed-in views focused on complex radio emission of interest, and their associated optical images and spectral index maps. In Wilber et al. (2022), we provide all sub-band AIRI-ASKAP images of the three selected fields (as FITS files) and combine them into GIF files to better show how emission and source morphology change over the full frequency band. Throughout this section, we refer the reader to Part I for specific comparisons of our imaging results with the uSARA and WSClean reconstructions. Upon visual inspection, our monochromatic AIRI-ASKAP images of all three fields capture more extended structure than seen in the pure-optimisation counterpart uSARA-ASKAP images. In particular, the faintest structures of our diffuse targets of interest appear more pronounced and defined than they did in the uSARA-ASKAP images. However, the intensity of the faintest point sources, as reconstructed by AIRI, seems to be diminished. In the following subsections, we focus on each ASKAP field and address these differences by examining specific sources of interest. We present detailed comparisons of source morphology, flux density measurements, and spectral index maps between the three imaging algorithms. In an experiment toward uncertainty quantification of AIRI denoisers, we also showcase AIRI reconstructions made via the universal denoiser approach. 
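Before turning to the individual fields, here is a concrete sketch of the spectral-index fitting step described above (our own illustrative code with made-up flux values; the actual measurements are reported in the tables below): the index is obtained by a linear least-squares fit in log-log space.

```python
import numpy as np

def spectral_index(freqs_mhz, fluxes_mjy):
    """Fit S_nu ~ nu^(-alpha) by linear least squares in log-log space;
    returns the spectral index alpha."""
    slope, _ = np.polyfit(np.log(freqs_mhz), np.log(fluxes_mjy), 1)
    return -slope

# Example with the first six sub-band centre frequencies of SB9351-12 and
# illustrative (made-up) flux densities for a steep-spectrum source.
freqs = np.array([817.0, 853.0, 889.0, 925.0, 961.0, 997.0])   # MHz
fluxes = np.array([10.0, 8.8, 7.9, 7.1, 6.4, 5.9])             # mJy
print(f"alpha ~ {spectral_index(freqs, fluxes):.2f}")
```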
### First field: SB8275-15 This field contains the massive, merging galaxy clusters Abell 3391 (in the north) and Abell 3395 (in the south). The cluster pair is connected by a warm gas bridge, recently discovered in eROSITA X-ray observations (Reiprich et al. 2021). In Figure 2, we present our AIRI image of this full imaged FoV (3.36\({}^{\circ}\)) of the first spectral window (SPW:1). The figure includes zoomed-in views of the FR-I in Abell 3391 (a: top right panel), a FR-II cluster member in the east (b: middle right panel), and multiple sources in Abell 3395 (c: bottom panels). The bent-tail FR I radio galaxies at the centres of Abell 3391 and Abell 3395 (see Table 1 in Part I for source names) are reconstructed with similar brightness and resolution when compared to our uSARA image from Part I (the peak pixel flux of the FRI in Abell 3391 is 20 mJy in both the AIRI and uSARA images). The highly resolved detail of the 'braiding' in these FRI jets, not resolved in WSClean images, is present in both the AIRI and uSARA images. However, the radio galaxies as captured by AIRI exhibit more blended edges than seen in the uSARA image. In addition, the ring-like artefacts emanating from these bright FRI sources take on a much more extended structure in our AIRI image - their appearance is dimmed and they propagate further out from their point of origin. Mainly, there is a noticeable difference in the diffuse structure recovered in the candidate phoenix of Abell 3395 and the background FR-II radio galaxy (c and b panels, respectively, in Figure 2). More diffuse structure is revealed in the AIRI reconstruction of the Abell 3395 phoenix, as the north-west arm now bridges the dim core and the compact sources at the north-west edge of the cluster. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **A3395 Phoenix** & \(S_{\rm 887}\) & \(S_{\rm 923}\) & \(S_{\rm 959}\) & \(S_{\rm 995}\) & \(S_{\rm 1031}\) & \(S_{\rm 1067}\) & \(S_{\rm 1103}\) & \(S_{\rm 1139}\) \\ \hline AIRI model & 25.3 & 27.0 & 17.9 & 8.5 & 11.3 & 12.1 & 9.7 & 1.6 \\ \hline \end{tabular} \end{table} Table 1: Integrated flux density values in [mJy] of the diffuse phoenix source in Abell 3395 for each SPW imaged with AIRI. Central frequency of each SPW is listed in MHz. See Table 3 in Part I for uSARA and WSClean flux measurements of the diffuse phoenix source. Figure 2: SB8275-15 – AIRI: Full FoV image covering the merging cluster system Abell 3391-95, at the first sub-band (SPW:1, centred at 887 MHz). For visual comparison with WSClean and uSARA, refer to their respective Figures 1 and 2 from Part I. This monochromatic image is an AIRI model with a pixel resolution of \(2.2\times 2.2\) arcsec. Panel (a) centred on the FR I radio galaxy in A3391; panel (b) centred on cluster member FR II radio galaxy; (c) panels centred on FR I and diffuse source in A3395. Middle (c) panel: r-band optical image from DES overlaid with AIRI model image, demarcated by blue contours at levels \(\{2^{m}\}_{0\leq m\leq 10}\mu Jy\) pixel\({}^{-1}\). Rightmost (c) panel: spectral index map obtained with the first six sub-band images of AIRI after smoothing with a common circular Gaussian beam of 5 arcsec. In Wilber et al. (2022b) are provided all sub-band images combined into the GIF ‘SB8275–15_AIRI’, and the spectral index map of Abell 3395 obtained with AIRI, uSARA, and WSClean in the GIF ‘SpectralIndexMap_Abell_3395’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_Abell_3395_colorblind_friendly’. 
The structure of the recovered emission also changes from one spectral window to the next, with noticeable fading as the frequency increases (see associated GIF in Wilber et al., 2022). Flux density measurements of the phoenix from the sub-band AIRI-ASKAP images are provided in Table 1. In comparing the flux density measurements to those taken from uSARA and WSClean images, we see some slight variations across the spectral windows, with the exception of the highest frequency, where there is clearly less flux recovered in the AIRI image. The most exciting result is perhaps the improvement of the spectral index map of the phoenix obtained with AIRI when compared to uSARA (see rightmost panel (c) of Figure 2). Since more diffuse structure is recovered by AIRI overall, even at the higher frequencies, the spectral index map has more coverage over the full source morphology. This coverage aids in source classification, enabling the identification of a trend in the spectral index as the emission shifts from the dim core to the north-west and south-west arms. The spectra are clearly steeper than in the uSARA map shown in Part I. The dim core recovered by AIRI has a steeper index (\(2.1\leq\alpha\leq 2.8\)), and the north-west arm shows a sharp, rather than gradual, drop-off from the core. There still exists the ring of flatter emission around the core, matching the results from uSARA and WSClean. Our AIRI results are in line with the hypothesis that this source is no longer receiving a fresh injection from an active nucleus and that the surrounding emission may be undergoing some gentle re-energisation, which in turn is causing brightening and flattening of old and faded AGN emission. Interestingly, both the FR-I to the east and the compact source to the north-west exhibit consistent spectral behaviour between uSARA and AIRI. Figure 3: SB9351-12 – AIRI: Full FoV image covering the merging cluster SPT2023 and the X-shaped radio galaxy PKS 2014-55, at the first sub-band (SPW:1, centred at 817 MHz). For visual comparison with WSClean and uSARA, refer to their respective Figures 3 and 4 from Part I. This monochromatic image is an AIRI model with a pixel resolution of 2.2 \(\times\) 2.2 arcsec. Panel (a) centred on the merging galaxy cluster SPT2023; panel (b) centred on a field containing compact and point sources; (c) panels centred on the X-shaped radio galaxy PKS 2014-55. Middle (c) panel: r-band optical image from DES overlaid with the AIRI model image, demarcated by blue contours at the levels \(\{2^{m}\}_{0\leq m\leq 10}\)\(\mu\)Jy pixel\({}^{-1}\). Rightmost (c) panel: spectral index map obtained with the first six sub-band images of AIRI after smoothing with a common circular Gaussian beam of 5 arcsec. In Wilber et al. (2022) are provided all sub-band images combined into the GIF ‘SB9351-12_AIRI’, and the spectral index map of the X-shaped radio galaxy obtained with AIRI, uSARA, and WSClean in the GIF ‘SpectralIndexMap_PKS_2014_55’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_PKS_2014_55_colorblind_friendly’. ### Second field: SB9351-12 The second selected field covers the merging galaxy cluster SPT-CL J2023-5535 (hereafter SPT2023) and the X-shaped radio galaxy PKS 2014-55. As stated in Part I, two recent studies have been separately published for these sources of interest: HyeongHan et al. (2020) confirmed the detection of a radio halo and radio relic in SPT2023 with the same data used in this work, and Cotton et al. 
(2020) used MeerKAT observations to generate total intensity, polarisation, B-field maps, and spectral index maps of the X-shaped radio galaxy. In Figure 3, we present our AIRI image of the full FOV (3.36\({}^{\circ}\)) of the first spectral window (SPW:1) of SB9351-12. The figure includes zoomed-in views on the merging cluster SPT2023 (a: upper right panel), a field of compact and point-like sources (b: middle right panel), and the X-shaped radio galaxy PKS 2014-55 (c: bottom panels). In comparison with the uSARA image (Figure 3 of Part I), there is an undeniable improvement in the recovery of faint emission within the zoomed-in views seen in the AIRI image. The diffuse emission stretching east-to-west in the WSClean image of SPT2023, which was not recovered by uSARA, clearly emerges in the AIRI image. Similarly, faint point sources seen in the WSClean image but not in the uSARA image, are captured by AIRI, though appearing to be somewhat fainter and smoother (see the panel (b) of Figure 3). Finally, the calibration artefacts are still noticeable at the southern edge of the pointing, taking the form of ring-type artefacts emanating from the bright quasar RX J2024.3-5723 and propagating radially up to 1 deg. Compared to uSARA, these artefacts are generally fainter but extend further. #### 4.2.1 X-shaped Radio Galaxy In the middle panel (c) of Figure 3, we overlay AIRI-ASKAP emission of the X-shaped radio galaxy as contours on an r-band optical map from DES. Compared to uSARA, the X-shape radio galaxy as reconstructed by AIRI appears to have a greater extent of diffuse emission, though with a noticeable loss in the resolution of the compact structure within the lobes. This behaviour is consistent across all the sub-band images of AIRI, where smoother and more diffuse edges are observed. Table 2 reports the measured flux densities per spectral window for the X-shaped radio galaxy. AIRI flux measurements are consistently lower than the uSARA flux measurements for SPW:1 through SPW:5, then increase for SPW:6 and SPW:7, and drop lower again in spectral window 8. A spectral index map of the X-shaped radio galaxy is shown in the rightmost (c) panel of Figure 3. There is an incredible improvement in the coverage of the AIRI spectral index map compared to uSARA, thanks to the diffuse flux consistently recovered at the edges of the lobes, even at the higher frequencies, with AIRI. The - most likely - artificial steepening at the edges of the lobes seen in the uSARA spectral index map is now corrected in the AIRI map. Nonetheless, AIRI recovers slightly steeper spectra on the borders of the lobes when compared to the WSClean spectral index map. Owing to the superb resolution achieved by AIRI, turbulent activity can be traced where plasma in the lobes exhibits a flatter spectral index. The southeast leg of the east wing shows a spectral index of \(1.4\leq\alpha\leq 2.1\), which is much flatter than the emission in the west wing (with an average spectral index of about \(\alpha=3\). The furthest north-west portion of the west wing exhibits an ultra-steep spectrum \(3.5\leq\alpha\leq 5\). Wideband deconvolution is necessary to confirm these ultra-steep values. #### 4.2.2 SPT-Cl J2023-5535 In Part I, our monochromatic WSClean image shows the radio halo in SPT2023 as an increase in noise at the cluster centre, but our uSARA image did not show any diffuse structure resembling a radio halo. In Figure 3, the panel (a) focuses on the diffuse emission present in SPT2023. 
Here, our AIRI image does in fact recover the diffuse structure of the radio halo - elongating from the western relic towards the east. Across the sub-bands, the radio halo is most clearly detected in AIRI images SPW:1 and SPW:2. It is quite remarkable that the radio halo is detected in these narrow, sub-band AIRI images since the full 288 MHz bandwidth (in the form of a wideband WSClean image) was used to detect and measure the radio halo's signal in HyeongHan et al. (2020). \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **X-Shaped RG** & \(S_{\rm 817~{}MHz}\) & \(S_{\rm 853~{}MHz}\) & \(S_{\rm 889~{}MHz}\) & \(S_{\rm 925~{}MHz}\) & \(S_{\rm 961~{}MHz}\) & \(S_{\rm 997~{}MHz}\) & \(S_{\rm 1033~{}MHz}\) & \(S_{\rm 1069~{}MHz}\) \\ \hline \hline AIRI model & 678.7 & 580.0 & 485.0 & 427.7 & 436.2 & 352.7 & 302.3 & 190.5 \\ \hline \end{tabular} \end{table} Table 2: Integrated flux density values in [mJy] of the X-shaped radio galaxy PKS 2014-55 for each SPW imaged with AIRI. The listed flux densities are totals from summing the flux densities measured in regions mapping the east wing, the west wing, and the core. See Table 4 in Part I for uSARA and WSClean flux measurements of the X-shaped radio galaxy. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **SPT2023 Relic** & \(S_{\rm 817~{}MHz}\) & \(S_{\rm 853~{}MHz}\) & \(S_{\rm 889~{}MHz}\) & \(S_{\rm 925~{}MHz}\) & \(S_{\rm 961~{}MHz}\) & \(S_{\rm 997~{}MHz}\) & \(S_{\rm 1033~{}MHz}\) & \(S_{\rm 1069~{}MHz}\) \\ \hline AIRI model & 4.3 & 3.2 & 1.8 & 2.0 & 2.8 & 2.3 & 1.7 & 0.5 \\ \hline \end{tabular} \end{table} Table 3: Integrated flux density values in [mJy] of the radio relic in SPT2023 for each SPW imaged with AIRI. See Table 5 in Part I for uSARA and WSClean flux measurements. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline **Dancing Ghosts** & \(S_{\rm fullband,~{}943~{}MHz}\) & \(S_{\rm 817~{}MHz}\) & \(S_{\rm 853~{}MHz}\) & \(S_{\rm 889~{}MHz}\) & \(S_{\rm 925~{}MHz}\) & \(S_{\rm 961~{}MHz}\) & \(S_{\rm 997~{}MHz}\) & \(S_{\rm 1033~{}MHz}\) & \(S_{\rm 1069~{}MHz}\) \\ \hline AIRI model & 116.7 & 128.3 & 123.8 & 118.1 & 113.2 & 109.2 & 105.7 & 102.6 & 97.7 \\ \hline \end{tabular} \end{table} Table 4: Integrated flux density values of “the dancing ghosts” PKS 2130-538 for each SPW imaged with AIRI. See Table 6 in Part I for uSARA and WSClean flux measurements of “the dancing ghosts”. Figure 4: SB9442-35 – AIRI: Full FoV image covering PKS 2130-538, formed using the full-band data (centred at 943 MHz). For visual comparison with WSClean and uSARA, refer to their respective Figures 5 and 6 from Part I. This monochromatic image is an AIRI model with a pixel resolution of \(2.2\times 2.2\) arcsec. Panel (a) centred on a field containing extended and point-like radio galaxies; panel (c) centred on the star-forming galaxy NGC 7090; (b) panels centred on “the dancing ghosts” (PKS 2130-538). Middle (b) panel: image made with only the first sub-band of data (SPW:1, centred at 817 MHz), shown for a comparison of sensitivity. Rightmost (b) panel: spectral index map made with the first six sub-band images of AIRI after smoothing with a common circular Gaussian beam of 5 arcsec. In Wilber et al. (2022b) are provided all sub-band images combined into the GIF ‘SB9442–35_AIRI’, and the spectral index map of “the dancing ghosts” obtained with AIRI, uSARA, and WSClean in the GIF ‘SpectralIndexMap_PKS_2130_538’, together with a colour blind-friendly version in the GIF ‘SpectralIndexMap_PKS_2130_538_colorblind_friendly’. 
We also find that the SPT2023 radio relic has a smoother, wider, and fainter morphology when compared to uSARA. AIRI flux measurements of the relic for SPW:6 and 7 are slightly higher than uSARA flux measurements, but lower for all other spectral windows, as reported in Table 3. ### Third field: SB9442-35 This final selected field is centred on the complex radio source PKS 2130-538, nicknamed "the dancing ghosts," owing to its peculiar and mirrored ghost-like shape. Two radio lobes, bridged by arching jets from the primary AGN host, extend southwards and blend into each other. A secondary AGN in the south-east produces a similar arched jet with a bent-tail that curls back around to the eastern primary lobe. With the original images generated from the ASKAP Evolutionary Map of the Universe Survey (EMU; Norris et al., 2011), Norris et al. (2021) mention that the interaction between the primary and secondary AGN is unclear. With our super-resolved uSARA images, presented in Part I, we have been able to distinguish a clear physical separation between the secondary curling jet and the primary western lobe. Nonetheless, this strange source offers an interesting case study of turbulent dynamics in bent-tail radio galaxies. For this field, we produced eight sub-band images as well as a monochromatic full-band image with AIRI. In Figure 4, we present the AIRI image of a \(\sim 2.5^{\circ}\) FoV formed using the full-band (288 MHz) data of SB9442-35. This figure includes zoomed-in views on a field containing an extended radio galaxy and point sources (a: upper right panel), the star-forming galaxy NGC 7090 (c: middle right panel), and the "dancing ghosts" (b: bottom panels). The bottom panels include a view of PKS 2130-538 from the full-band image (leftmost) and the first sub-band image SPW:1, covering 36 MHz (middle) for a visual comparison of the sensitivity in both imaging settings. Also included in the bottom rightmost panel is a spectral index map of PKS 2130-538, generated with the first six sub-band images. #### 4.3.1 The Dancing Ghosts The separation between the curling secondary jet and the eastern lobe of the primary AGN in PKS 2130-538 is less distinct in our AIRI image when compared to the uSARA image from Part I. However, there is a more drastic difference in the improvement of resolution when moving to the full-band with AIRI. Our AIRI sub-band image shows a much smoother structure than the full-band image, particularly noticeable when focusing on the sharpness of the jet bridge linking the two lobes from the primary AGN. Faint point sources emerge more clearly in the AIRI full-band image, with a slight improvement over its uSARA counterpart. The filamentary emission extending from the eastern lobe (possibly a synchrotron thread similar to those discovered in Ramatsoku et al. 2020) appears slightly fainter and more diffuse in the AIRI image, with overall steeper spectra (\(3.5\leq\alpha\leq 6\)) than seen in the uSARA maps. It is interesting that this eastward-extending filament has such an ultra-steep spectrum in the AIRI map since the spectral index over the rest of the source morphology remains similar between the uSARA and AIRI maps. When comparing the flux density measurements in Table 4 to the corresponding measurements in Part I, there appears to be a clear consistency between uSARA, AIRI, and WSClean, with AIRI recovering slightly less flux at the higher frequencies. 
This source arguably has the most consistency in flux density measurements across the three different imaging methods, perhaps due to the overall flatter spectral index of the source and the lack of strong calibration artefacts in this field. ### Universal Denoiser and model uncertainty In an experiment toward modelling epistemic uncertainty, we measure differences between AIRI reconstructions of the field SB9351-12 produced by the two denoiser selection strategies proposed in Section 2.4, namely the denoiser shelf and universal denoiser strategies. The two approaches differ by the denoiser instance used. The AIRI reconstructions leveraging a pre-trained shelf of denoisers were presented in Section 4.2. Here we also present the reconstruction results when utilising the universal denoiser approach, and study the robustness of AIRI reconstructions to denoiser (_i.e._ model) variations. We recall that the considered universal denoiser corresponds to the lowest training noise level on the shelf, \(\sigma_{u}=2\times 10^{-5}\). Under this consideration, we note that the first sub-band (SPW:1) is not included in this analysis since its shelf-appropriate denoiser corresponds to the universal denoiser (_i.e._\(\sigma_{s}=\sigma_{u}\)); therefore, only spectral windows 2-8 were re-imaged. Focusing on our target sources of interest in this field - namely, the X-shaped galaxy and the merging galaxy cluster SPT2023 - reconstruction results of SPW:7, obtained using uSARA and the two AIRI denoiser selection strategies (the nearest shelf-appropriate DNN with \(\sigma_{s}=8\times 10^{-5}\) and the universal DNN denoiser with \(\sigma_{u}=2\times 10^{-5}\)) are showcased in Figure 5. The seventh spectral window is chosen for a visual comparison due to its high signal and lower dynamic range when compared to other spectral windows. All other spectral windows imaged via the AIRI universal denoiser strategy are provided as FITS files and combined into a GIF for easier viewing in Wilber et al. (2022). In what follows, we refer to the AIRI reconstructions generated via their associated denoiser strategy as \(\sigma_{u}\)-AIRI (universal approach) or \(\sigma_{s}\)-AIRI (shelf approach). The most evident visual difference in the AIRI reconstructions - particularly noticeable for the X-shaped galaxy - is in the smoothness of the emission recovered in \(\sigma_{s}\)-AIRI and the arguably more detailed emission recovered in \(\sigma_{u}\)-AIRI which targets higher dynamic ranges. Both AIRI images recover significantly more diffuse emission than uSARA, yet with a slight compromise in resolution, as can be seen in the radio galaxies of the merging galaxy cluster SPT2023. We examine the absolute difference between the \(\sigma_{u}\)-AIRI and \(\sigma_{s}\)-AIRI images. The resulting error map\({}^{4}\) of SPW:7 is displayed in Figure 5, following a normalisation by the associated noise estimate in the image domain \(\sigma\) (see Eq. 3). From the full-FoV error maps associated with the sub-band images, we conduct a numerical analysis of AIRI reconstructions on the basis of the percentage of the pixels with values above \(\sigma\). As shown in Table 5, for each sub-band error map, we find that \(0.8-1.1\%\) of the pixels have values higher than \(\sigma\). This very small percentage corresponds mainly to pixel intensities within the brightest point-like sources.
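As a rough sketch of the pixel-level comparison described above (illustrative only: the synthetic arrays, their size and the noise level \(\sigma\) are invented for the example, and only the \(10^{-8}\) floor of the hard threshold follows the text), the fraction of error-map pixels exceeding \(\sigma\) could be computed as follows.

```python
import numpy as np

def fraction_above_sigma(img_a, img_b, sigma, floor=1e-8):
    """Fraction of pixels whose absolute difference exceeds sigma; residuals
    below `floor` are zeroed first, mirroring the hard threshold applied to
    the error maps."""
    error = np.abs(img_a - img_b)
    error[error < floor] = 0.0
    return np.count_nonzero(error > sigma) / error.size

# Synthetic stand-ins for the sigma_u-AIRI and sigma_s-AIRI reconstructions.
rng = np.random.default_rng(0)
img_u = rng.normal(0.0, 1e-5, size=(512, 512))
img_s = img_u + rng.normal(0.0, 2e-6, size=(512, 512))
sigma = 1e-5  # assumed image-domain noise level

print(f"{100.0 * fraction_above_sigma(img_u, img_s, sigma):.2f}% of pixels above sigma")
```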
Because the integrated flux densities of these bright sources recovered by both AIRI reconstructions are very close, the discrepancy most likely arises from differences in individual pixel intensities which are slightly spatially offset from each other. Footnote 4: A hard thresholding operation is applied to the error map, keeping only values above \(10^{-8}\). When comparing the uSARA reconstruction to each of the AIRI reconstructions, we find that the percentage of the pixels with absolute difference above \(\sigma\) is slightly more significant, at 2.5% for SPW:7. These findings suggest that uSARA and AIRI reconstructions are very similar with respect to each other and that AIRI is highly robust to variations of the denoiser instance used (highlighting a small epistemic uncertainty of the learned denoiser approach). This also validates the simpler high dynamic-range universal denoiser strategy for AIRI, as opposed to the denoiser shelf approach. ## 5 Computational performance For all AIRI imaging experiments performed in this work, the decomposition of the measurement operator and consequently the number of CPU cores allocated to enforce data fidelity are identical to uSARA (see Tables 7-9 in Part I for further details). While uSARA was deployed on the CPU nodes of Cirrus, AIRI was run on its GPU nodes comprising both CPU cores and GPUs (see Section 3 for details of the compute nodes). In this setting, the computing time of the forward step in AIRI was found to be up to 1.2 times faster than its counterpart in uSARA, which we attribute to the newer processors used in these GPU nodes. The faceting functionality of AIRI was enabled, whereby the image is decomposed into \(F=4\) facets. Hence, four GPUs were deployed for the parallel application of the DNN denoiser on each image facet. As such, the learned denoiser brought a drastic reduction of the computing time of AIRI's backward step by a factor of 10 to 30 (depending on the image dimensions) compared to its pure optimisation counterpart uSARA. For each imaging experiment of each field, AIRI's total compute time and computational cost in CPU core hours are reported in Table 6. In light of its extremely fast denoiser, AIRI's computational cost, solely driven by its forward step, is on average four times lower than uSARA (see Tables 7-9 in Part I) and five times higher than WSClean (see Table 10 in Part I). Interestingly, preliminary experiments further leveraging GPUs to perform the Fourier Transforms involved in the forward step have shown a reduction of AIRI's total compute time, and consequently its computational cost by nearly a factor of two. However, a similar consideration in uSARA or WSClean would not necessarily bring significant acceleration. The computational cost of uSARA would still be driven by its sub-iterative denoiser. Similarly, the Fourier transforms do not typically dominate the computational cost of WSClean. The speed and learning power of the DNN denoisers, pre-trained independently of the RI data under scrutiny, are significantly narrowing the gap between optimisation-based imaging algorithms and the standard CLEAN-based imager, thus highlighting their prospective potential for scalability and computational efficiency when handling extreme data and image dimensions. Figure 5: Comparison of uSARA and AIRI reconstructions of the seventh sub-band data (SPW:7) of the field SB9351-12, focusing on the X-shaped radio galaxy (top) and the galaxy cluster SPT2023 (bottom).
From left to right: uSARA reconstruction, \(\sigma_{s}\)-AIRI reconstruction (using the shelf-appropriate DNN denoiser), \(\sigma_{u}\)-AIRI reconstruction (using a universal DNN denoiser), and the error map between \(\sigma_{s}\)-AIRI and \(\sigma_{u}\)-AIRI, normalised by the estimated standard deviation of the noise in the image domain, \(\sigma\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **SPW:2** & **SPW:3** & **SPW:4** & **SPW:5** & **SPW:6** & **SPW:7** & **SPW:8** \\ \hline 99.6\% & 99.7\% & 99.7\% & 99.3\% & 99.5\% & 99.2\% & 98.9\% \\ \hline \end{tabular} \end{table} Table 5: The percentage of the pixels in the error maps between \(\sigma_{u}\)-AIRI and \(\sigma_{s}\)-AIRI images with values below the estimated standard deviation of the noise in the image domain, \(\sigma\), for each sub-band image of the field SB9351-12. ## 6 Conclusions The results of this work show that the PnP-RI image reconstruction algorithm AIRI is on par with the precision-capability of its pure optimisation counterpart uSARA and surpasses the precision and robustness capabilities of WSClean. A main and consistent feature of AIRI reconstructions is their sensitivity to the diffuse components of faint emission. This gives AIRI a distinct advantage over uSARA when the scientific goal is to detect and fully reconstruct low-surface-brightness diffuse emission at or near the noise level. Building a shelf of suitable denoisers covering a range of potential target dynamic ranges has proven to be a solid approach for implementing AIRI. When resorting to a single universal high dynamic-range denoiser, high-fidelity reconstruction is also achieved, with comparable results to nearest-on-the-shelf reconstructions. In fact, we find that AIRI realisations reconstructed from denoisers with different training noise levels have about a 1% discrepancy in terms of the percentage of pixel intensities above the estimated noise level of the imaged data. In comparing the flux density values of the scrutinised sources reported in Parts I and II, we observe varying levels of consistency between uSARA and AIRI. A strong agreement is found in the case of the source "dancing ghosts", likely due to its relatively flat spectra and absence of strong calibration errors in its vicinity. However, when at least one of these conditions is not met, the flux density values of the other sources compare differently, yet with consistent variations across sub-bands. This is expected as AIRI has been shown to capture more faint and diffuse emission with slightly different morphology than uSARA. Concerning WSClean, its flux measurements are generally higher than both AIRI and uSARA, particularly for the faint sources whose brightness is near or at the noise level, indicating possible over-estimation of the measurements taken from the restored images. Since each sub-band was imaged separately, one cannot assert that the measurements of one method are more reliable than the others, particularly since these early ASKAP observations have not been validated against standard flux catalogues. Wide-band variants of uSARA and AIRI are expected to provide more accurate measurements in the future. For the analysed ASKAP FoVs, AIRI has demonstrated a four-fold acceleration on average over its pure optimisation counterpart, uSARA. The higher computational efficiency achieved by AIRI is attributed to its substantially faster denoiser, and is a clear indication of its scalability power to large image dimensions.
An in-depth investigation of practical scalability to extreme data dimension is warranted. The study will appear as part of on-going work towards a professional parallel C++ implementation of AIRI. ## Acknowledgements The first two authors contributed equally to this work. This work was supported by the UK Research and Innovation under the EPSRC grants EP/T028270/1 and EP/T028351/1, and the STFC grant ST/W000970/1. The research used Cirrus, a UK National Tier-2 HPC Service at EPCC funded by the University of Edinburgh and EPSRC (EP/P020267/1). ASKAP, from which the data under scrutiny originate, is part of the Australia Telescope National Facility managed by CSIRO. This project used public archival data from the Dark Energy Survey (DES). ## Data Availability The ASKAP data underlying this article (calibrated visibilities and mosaic images of Scheduling Blocks) are made publicly available for viewing and download on the CSIRO ASKAP Science Data Archive (CASDA; Chapman et al., 2017), and can be accessed with the unique Project Identifiers ASO34 and AS101. The reconstructed images in FITS format as well as the GIF files showing the imaged fields over the spectral windows are made available in Wilber et al. (2022b). The uSARA and AIRI code will become available in a later release of the Puri-Psi library for RI imaging.
2307.06393
Neutral Diversity in Experimental Metapopulations
New automated and high-throughput methods allow the manipulation and selection of numerous bacterial populations. In this manuscript we are interested in the neutral diversity patterns that emerge from such a setup in which many bacterial populations are grown in parallel serial transfers, in some cases with population-wide extinction and splitting events. We model bacterial growth by a birth-death process and use the theory of coalescent point processes. We show that there is a dilution factor that optimises the expected amount of neutral diversity for a given amount of cycles, and study the power law behaviour of the mutation frequency spectrum for different experimental regimes. We also explore how neutral variation diverges between two recently split populations by establishing a new formula for the expected number of shared and private mutations. Finally, we show the interest of such a setup to select a phenotype of interest that requires multiple mutations.
Guilhem Doulcier, Amaury Lambert
2023-07-12T18:23:46Z
http://arxiv.org/abs/2307.06393v2
# Neutral Diversity in Experimental Metapopulations ###### Abstract New automated and high-throughput methods allow the manipulation and selection of numerous bacterial populations. In this manuscript we are interested in the neutral diversity patterns that emerge from such a setup in which many bacterial populations are grown in parallel serial transfers, in some cases with population-wide extinction and splitting events. We model bacterial growth by a birth-death process and use the theory of coalescent point processes. We show that there is a dilution factor that optimises the expected amount of neutral diversity for a given amount of cycles, and study the power law behaviour of the mutation frequency spectrum for different experimental regimes. We also explore how neutral variation diverges between two recently split populations by establishing a new formula for the expected number of shared and private mutations. Finally, we show the interest of such a setup to select a phenotype of interest that requires multiple mutations. Neutral diversity Population genetics Experimental evolution ## 1 Introduction Experimental evolution is the study of evolutionary dynamics happening in real time as a response to conditions imposed by the experimenter (Kawecki et al., 2012). Microbial populations are widely used because they offer numerous experimental advantages: large population sizes, easily manipulable environments, possibility to freeze and store whole populations indefinitely... Experimental evolution requires the set-up of many parallel bacterial cultures that can take several forms from bottles (\(\approx 10^{1}L\)) to tubes (\(\approx 10^{-3}L\)), microplates (\(\approx 10^{-4}\)L), or microfluidic compartments (\(\approx 10^{-9}L\)). Recently, new techniques for the high-throughput manipulation of bacterial populations have emerged. For instance, digital millifluidics (Cottinet, 2013; Dupin, 2018; Doulcier, 2019) allows the production and imaging of thousands of droplets of culture broth within a carrying fluid. The droplets amount to around \(2\times 10^{-6}L\) with a carrying capacity of \(10^{5}\) to \(10^{6}\) cells (Cottinet, 2013). Droplets can be imaged and quantitative measurements performed (optical density, fluorescence signal...) during growth of the bacteria, allowing a high-throughput monitoring of ecological dynamics. Nested populations in which both particles (bacterial cells) and collectives (bacterial populations) are individuals with their own birth and death events can be readily implemented in experimental microbiology. For instance, such experiments are routinely performed in microcosms (Hammerschmidt et al., 2014). However, the ability of millifluidic devices to monitor on the order of a thousand cultures and retrieve some of them for analysis makes them particularly suitable for the artificial selection of microbial communities (Xie and Shou, 2021), for instance through ecological scaffolding (Doulcier et al., 2020; Black et al., 2020). Neutral diversity in experimentally nested populations is the focus of this manuscript. The aim is to build a quantitative understanding of simple diversity patterns within the experimental setup. First, a model of the device is presented. It relies on the assumption that cells are in constant exponential growth. The optimal operating regime parameters of the machine (dilution ratio, duration of collective growth cycles, carrying capacity...)
are derived from characteristics of the biological material: birth and death rates. From a theoretical perspective, the system constitutes a dynamical meta-population in discrete space with explicit demography. It contrasts with simpler models in which demography is simplified (Etheridge, 2008), as well as with more complex spatially structured models (Barton et al., 2002, 2013) in which space is continuous. Second, a coalescent model of the population across bottlenecks is proposed and coupled to a neutral mutation model with infinite alleles. This allows computation of the number of mutations, and the distribution of allele frequencies within droplets after several collective growth cycles. It shows that small bottlenecks are required to maximise diversity in one cycle, but larger bottlenecks are more favourable for diversification across many cycles. The speed at which diversity accumulates decreases with time. Then, the effect of splitting a droplet into several lineages is studied by computing the number of mutations accumulated in a single, or all the droplet lineages. Finally, a simple mutation accumulation model illustrates the interest of droplet-level selection for artificial selection. ## 2 Modelling Nested Population Dynamics Consider a device that allows the manipulation of collectives of Darwinian particles via serial transfers (Figure 1). Cells (called particles) are distributed among a train of \(D\) droplets (called populations, or collectives). The birth and death of cells are modelled by a linear branching process with constant rates \(b\) for birth and \(d\) for death. The net growth rate \(r:=b-d\) is called the Malthusian parameter. After a duration \(T\), a new train of \(D\) droplets is prepared by diluting them \(\frac{1}{\delta}\) fold. Hence, for each dilution event, a cell has a probability \(\delta\) of being sampled and thus being present in the new droplet. This procedure is repeated periodically, each dilution followed by a growth phase constitutes a _cycle_ of the experiment or a _collective generation_. More formally, the initial population contains \(Z_{0}=c\) cells, and is submitted to a bottleneck with sampling probability \(\delta\) at the beginning of every cycle except the first, meaning that bottlenecks occur at times \(T,2T\ldots nT\). The \(n\)th cycle corresponds to the slice of time \([(n-1)T,nT)\). Thus, "the end of cycle n" correspond to the moment \(nT^{-}\), just before the \(n\)th dilution. To illustrate, at the end of the third cycle, \(t=3T^{-}\), and the population has experienced two bottlenecks at time \(T\) and \(2T\). Birth \(b\) and death \(d\) rates depend on the biological material used (species, strain...) as well as the culture medium and are not easily controlled. However, the duration of the growth phase \(T\), the dilution factor \(\delta\) and the number of collectives \(D\) can be changed by altering the experimental setup. A model can help predict the effect of those parameters and find the ones that should be the focus of engineering efforts. ### Optimal Operating Regime When designing a serial transfer experiment, the operator has three main parameters that might be controlled: the size of the cultures (and by extension the carrying capacity of the particles \(K\)), the duration of the growth phase separating two successive transfers \(T\), and the dilution rate \(\delta\). Two problems must be avoided: if population sizes are too small and dilution too high, the resulting cultures might be empty. 
Conversely, if the population sizes are too large, and dilution too low, the population will spend most of its time in stationary phase, with little effect of bottlenecks. Any dilution event presents the risk of extinguishing the population. When performing a serial transfer experiment, this must be avoided at all cost because an empty microcosm signs the end of the experiment (at least for the given independent lineage). In a nested population design, the presence of some empty microcosms can be tolerated because empty niches in the population can be filled by splitting a single parent droplet into several offspring droplets in the next generation. \begin{table} \begin{tabular}{l l|l l} & \multicolumn{1}{c|}{**Collective-level parameters**} & \multicolumn{1}{c}{**Particle-level parameters**} \\ \hline \(D\) & Population size & \(b\) & Birth rate \\ \(T\) & Cycle duration & \(d\) & Death rate \\ \(n\) & Number of cycles & \(r\) & Malthusian parameter (b-d) \\ \(K\) & Carrying capacity & \(\delta\) & Survival probability (at a bottleneck) \\ \(c\) & Initial number of particles & \(\theta\) & Mutation rate \\ \end{tabular} \end{table} Table 1: Parameter names and symbols reference Stationary phase is in general not desirable for several reasons. First, a population that reaches saturation will go through fewer generations than if it was growing freely, reducing the potential evolutionary dynamics. Moreover, physiological changes in stationary phase might result in undesired phenotypic effects on the population. Finally, in the case of millifluidic experiments, saturating densities are known to increase the risk of cross-contamination between droplets. For all these reasons, there is an optimal dilution rate, that keeps the population in exponential phase while maximising the population size, at which selection experiments should be conducted. A model of population dynamics can provide a first estimate of the optimal range of parameters for an experiment. In the following, a stochastic model of particles in exponential growth conditions (i.e., super-critical) with periodic bottlenecks is used to derive the probability of losing a single particle lineage, or a single collective lineage due to the effect of dilution, as a function of experimentally accessible parameters. Saturation phenomena are not modelled explicitly as the birth and death rates are considered independent of population size, but the population dynamics are required to stay under a carrying capacity threshold. Figure 1: **Sketch of the experimental setup.** Darwinian particles following a birth-death process with rates \(b,d\) are distributed in collectives within \(D\) droplets. After a growth phase duration \(T\), a new cycle starts: the content of each droplet is diluted to seed a new droplet. Each particle has a probability \(\delta\) to be transferred at the start of the next cycle. Here the _serial transfer_ regime is depicted: each droplet is diluted into exactly a single new droplet in the next cycle. In the full _nested population_ design, a droplet can be split in several droplets in the cycle, or removed altogether. #### 2.1.1 Survival of a single lineage A first quantity that can be derived from the linear branching process with periodic dilution that models the population dynamics is the probability that a single initial particle has no descent in the population after \(n\) cycles. 
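Before stating the closed-form result, note that this probability can also be estimated by direct Monte Carlo simulation of the linear birth-death process with periodic binomial dilution. The following sketch (Python, with illustrative parameter values that are not taken from the text) is one possible implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def grow(z, b, d, t_end):
    """Exact (Gillespie) simulation of a linear birth-death process started
    with z cells and run for a duration t_end."""
    t = 0.0
    while z > 0:
        t += rng.exponential(1.0 / ((b + d) * z))
        if t > t_end:
            break
        z += 1 if rng.random() < b / (b + d) else -1
    return z

def lineage_survives(n_cycles, b, d, T, delta):
    """One lineage started from a single cell: n growth phases of duration T,
    separated by binomial(delta) bottlenecks at times T, 2T, ..."""
    z = 1
    for cycle in range(n_cycles):
        if cycle > 0:
            z = rng.binomial(z, delta)
        z = grow(z, b, d, T)
        if z == 0:
            return False
    return True

# Illustrative values, with the critical dilution delta = exp(-r T).
b, d, delta, n_cycles, runs = 1.0, 0.2, 0.2, 5, 2000
T = -np.log(delta) / (b - d)
estimate = sum(lineage_survives(n_cycles, b, d, T, delta) for _ in range(runs)) / runs
print(f"Monte Carlo estimate of s_{n_cycles}: {estimate:.3f}")
```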
**Proposition 1** (Survival Probability): _Cells within droplets in serial transfers are modelled by a linear birth-death process with constant parameters \(b\) and \(d\), that is subject to periodic bottlenecks every duration \(T\)._ _Let \(s_{n}\) be the probability that a lineage spawned by a single cell is not extinct at the end of the \(n\)th cycle. Then,_ \[s_{n}=1-h_{Q_{0}Q^{n-1}}(0)\] _Where \(h_{A}\) is the linear fractional function with coefficient \(A\):_ \[h\begin{bmatrix}a&b\\ c&d\end{bmatrix}(s)=\frac{as+b}{cs+d} \tag{1}\] _And the matrices \(Q_{0}\) and \(Q\) are:_ \[Q_{0}=\begin{bmatrix}\delta(p-q)&q\\ \delta(p-1)&1\end{bmatrix};\quad Q=\begin{bmatrix}p-(1-\delta)-\delta q&(1-\delta)+\delta q\\ p-1&1\end{bmatrix}\] _with,_ \[\begin{array}{llll}\hline &&p(b,d,T)&q(b,d,T)\\ \hline \text{Subcritical particles}&b<d&\frac{-r}{d-be^{rT}}&\frac{d(1-e^{rT})}{d-be^{rT}}\\ \text{Critical particles}&b=d&\frac{1}{1+bT}&\frac{bT}{1+bT}\\ \text{Supercritical particles}&b>d&\frac{re^{-rT}}{b-de^{-rT}}&\frac{d(1-e^{-rT})}{b-de^{-rT}}\\ \hline\end{array}\] _(Proof page 34.)_ Figure 2: **Events in a Linear-Birth-Death Model.** Individuals give birth to new individuals at a constant rate \(b\), and die at constant rate \(d\), independently. The process is super-critical if \(b>d\). Figure 3: **Dilution process.** Individual particles are independently selected to be transferred to the next cycle (with probability \(\delta\)) or sent to the waste (with probability \(1-\delta\)). Proposition 1 shows that the survival probability of a lineage depends on the birth \(b\) and death \(d\) rates of the cells, but is also a function of the dilution rate \(\delta\), duration of the growth phase \(T\) and the number of cycles \(n\). When considering a single dilution and a pure-birth process (\(s_{2}\), Figure 4), the survival probability is equivalent to \(\delta\mathbb{E}(Z_{T})\) when \(\delta\) is small, hence the linear increase with slope \(1\) in log-log scale. Figure 4: **Survival probability at the end of the second cycle \(s_{2}\).** This corresponds to a first growth phase, a dilution event, and a second growth phase. The probability is presented as a function of the dilution rate \(\delta\) for pure birth processes \(b>0,d=0,T=1\). The numerical computation of \(Q^{n}\) might be problematic because of repeated multiplication of the small numbers. However, since the final result only involves the ratio \(Q_{01}^{n}/Q_{11}^{n}\), it is possible to normalise \(Q\) to have its smallest value being \(1\). Indeed, this ratio does not depend on a multiplicative scalar on the matrix: \(\forall\alpha>0\), \(h_{Q^{n}}(0)=h_{\alpha Q^{n}}(0)\). Taking \(\alpha=\frac{1}{\delta}\) greatly improves the numerical stability of the computation. The limit of this probability when the number of cycles increases gives a clearer understanding of the long term behaviour of the population: **Proposition 2** (Long Term Survival Probability): _Let \(s_{n}\) be the survival probability after \(n\) cycles of the lineage spawned by a single cell._ \[\lim_{n\rightarrow+\infty}s_{n}=\begin{cases}0&\text{if }\delta e^{rT}\leq 1\\ \frac{r(\delta-e^{-rT})}{\delta b(1-e^{-rT})}&\text{otherwise.}\end{cases}\] (Proof page 36.)
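As an illustration (a sketch, not code provided by the authors), Proposition 1 is straightforward to evaluate numerically in the supercritical case; the rescaling step below uses the scale-invariance of \(h\) noted above, and the result can be compared with the limit of Proposition 2. Parameter values are illustrative.

```python
import numpy as np

def survival_probability(n, b, d, T, delta):
    """s_n from Proposition 1 (supercritical case b > d):
    s_n = 1 - h_{Q0 Q^(n-1)}(0), with h_A(0) = A[0,1] / A[1,1]."""
    r = b - d
    p = r * np.exp(-r * T) / (b - d * np.exp(-r * T))
    q = d * (1.0 - np.exp(-r * T)) / (b - d * np.exp(-r * T))
    Q0 = np.array([[delta * (p - q), q],
                   [delta * (p - 1.0), 1.0]])
    Q = np.array([[p - (1.0 - delta) - delta * q, (1.0 - delta) + delta * q],
                  [p - 1.0, 1.0]])
    A = Q0
    for _ in range(n - 1):
        A = A @ Q
        A /= np.abs(A).max()   # h_A(0) is invariant under a positive rescaling of A
    return 1.0 - A[0, 1] / A[1, 1]

b, d, T, delta = 1.0, 0.2, 2.0, 0.4     # delta > exp(-rT): non-degenerate limit
r = b - d
limit = r * (delta - np.exp(-r * T)) / (delta * b * (1.0 - np.exp(-r * T)))
for n in (1, 2, 10, 100):
    print(f"s_{n} = {survival_probability(n, b, d, T, delta):.4f}")
print(f"Proposition 2 limit: {limit:.4f}")
```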
Proposition 2 confirms that, in the long run, lineages go extinct with certainty (\(s_{\infty}=0\)) if and only if the expected number \(\delta e^{rT}\) of cells descending from a single initial cell and surviving the first bottleneck is smaller than \(1\). It gives the survival probability otherwise (see Figure 5). In the following, only super-critical populations will be considered \((b>d)\). #### 2.1.2 Optimal cycle duration and dilution Saturation of particle dynamics is not desirable, as mentioned earlier. Depending on the nature of the particles (species, strain), and of the medium (pH, nutrient availability, temperature), it is possible to define an experimental carrying capacity \(K\) that corresponds to the number of cells that can be sustained in a droplet without saturation. The simple linear-birth-death model cannot represent saturating populations because no density dependence is included in this model, thus for the model to be coherent, the duration of the growth phase must be short enough that the population size does not reach the carrying capacity: **Proposition 3** (Maximal Cycle Duration): _Let \(T^{*}\) be the maximal cycle duration before reaching saturation. Cells are following a supercritical birth-death process with growth rate \(b-d=r>0\). The carrying capacity is \(K\) and the initial number of cells is \(c\) :_ \[T^{*}=-\frac{\ln(\frac{c}{K})}{r} \tag{2}\] Figure 5: **Survival probability for several cycles \(s_{n}\).** The probability is presented for pure birth processes \(b=1,d=0,T=1\). Above the critical threshold \(\delta^{*}\), the survival probability does not tend toward \(0\). The dotted line corresponds to the limit \(s_{\infty}\). Proposition 3 shows that the optimal duration of the growth phase is linear with the inverse of the Malthusian parameter \(r\) of the population (Figure 6), meaning that a population that grows (on average) twice as fast as another should be subject to cycles half as long as the other, for a given initial occupancy \(\frac{c}{K}\). Additionally, the optimal duration of the growth phase is proportional to the logarithm of the initial occupancy \(\frac{c}{K}\) of the droplet (with a minus sign, since this logarithm is always negative or zero as \(c\leq K\)). As a consequence, for a given strain, multiplying the volume of the droplets by two, or dividing the inoculum size by two will increase the maximal duration of the growth phase by \(\frac{\ln 2}{r}\). To keep the same maximal duration \(T^{*}\) if the Malthusian parameter \(r\) is doubled, the carrying capacity of the droplet must be multiplied by the inverse of the previous initial occupancy \(K/c\). This result holds for a single cycle only. For a given dilution rate \(\delta\), the population is shrunk by an expected ratio \(\delta\), while for a given cycle duration \(T\), the population is expanded by an expected ratio \(\delta e^{rT}\). In order to prevent the population from saturating for all cycles, the initial occupancy at the beginning of each cycle must be constant. This consideration allows discovery of the optimal dilution rate when the growth phase duration is fixed: **Proposition 4** (Optimal Dilution Rate): _Let \(\delta^{*}\) be the optimal dilution rate for which the expected number of cells is constant across generations. 
For cells following a birth-death process with growth rate \(b-d=r>0\) and a growth phase duration \(T\):_ \[\delta^{*}=e^{-rT} \tag{3}\] _If \(T=T^{*}\) (Proposition 3),_ \[\delta^{*}=\frac{c}{K} \tag{4}\] _(Proof page 36.)_ Figure 6: **Maximal Cycle Duration \(T^{*}\) as a function of the Malthusian parameter \(b-d\) and the initial occupancy \(\frac{c}{K}\).** Proposition 4 shows that the dilution sampling probability should be equal to the initial occupancy when the duration of the cycle is maximal. To summarise, the optimal operating regime of the experiment can be expressed from the Malthusian parameter of the population and the initial occupancy of the particles \(\frac{c}{K}\). As a result, the dilution sampling probability is \(\delta^{*}=\frac{c}{K}\) and the duration of a cycle is \(T^{*}=-\frac{\log(\delta^{*})}{r}\). Fixing any two of \((c,K,T,\delta)\) values constrains the other two. When the experiment is in the optimal regime, the expression of the survival of a lineage at cycle \(n\) is simpler: **Proposition 5** (Optimal Regime Survival Rate): _Let \(s_{n}^{*}\) be the survival probability of a lineage after \(n\) cycles of duration \(T\) and with bottleneck \(\delta^{*}=e^{-rT}\), where \(r=b-d>0\) is the Malthusian parameter of the population._ _Then,_ \[s_{n}^{*}=\frac{r}{bn(1-\delta)+\delta r} \tag{5}\] _(Proof page 37.)_ In the optimal regime, each initial cell has on average one descendant cell surviving the next bottleneck. The process counting the number of cells at time \(kT\) is thus a critical Galton-Watson process (as a function of \(k\)). Proposition 5 shows that, in the optimal regime, the survival probability of a lineage decreases as the inverse of the number of cycles, which is typical of critically branching populations. Additionally, when taking a finite number of cycles \(n\), the survival does not tend toward zero even for vanishingly small bottlenecks \(\delta\). This derives from the fact that, in the optimal regime, a small bottleneck is compensated by a long cycle duration \(T^{*}\), so vanishingly small bottlenecks correspond to infinitely long cycles. Finally, in the case of pure-birth (i.e., \(d=0\)), the survival of a lineage is independent of the birth rate \(b\). It is certain if there is no bottleneck (\(\delta=1\)), and tends toward \(\frac{1}{n}\) for vanishingly small bottlenecks (\(\delta\to 0\)). Overall, once the size of the droplets (which constrains \(K\)) and the initial occupancy (which constrains \(c\)) have been chosen by the operator, other parameters of the machine (duration of the growth phase \(T\), dilution rate \(\delta\)) can be deduced; conversely, fixing \(T\) and \(\delta\) constrains \(K\) and \(c\). The next section explores how one should select these parameters when the aim is to maximise genetic diversity within and between the droplets. ## 3 Modelling Neutral Diversity Neutral diversity concerns mutations arising in the population of cells that are assumed to not change their birth or death rates. Neutral diversity gives rise to recognisable _patterns_ that can be predicted from a mechanistic model of birth-death in the population. In experimental evolution, and _a fortiori_ in artificial selection, it is desirable to increase the diversity within the population because it allows greater exploration of the phenotypic space. Indeed, mutations that are essentially neutral for cells might be of interest to the experimenter, or be intermediate states toward new phenotypes.
In the following, mutations follow a Poisson Point Process with constant rate \(\theta\) over the lifespan of the cells, independently of their genealogy. As a consequence, the time between two mutations along a lineage (regardless of births and deaths) is exponentially distributed, and thus has no memory: the conditional expected time to the next mutation will be the same for all cells, irrespective of their age or the time of the last mutation in the lineage. This is a simplifying assumption that represents the spontaneous nature of mutations, while ignoring the existence of mutations that can change the mutation rate (Sniegowski et al., 1997, 2000). ### Coalescence times and the Coalescent Point Process The linear branching process (with constant birth rate \(b\) and death rate \(d\)) yields the full genealogy of the population (Figure 8, left). However, the standing diversity at a given time in the population is affected (i) neither by the mutations occurring on lineages that do not have extant individuals (because their mutations have been lost), (ii) nor by mutations that are ancestral to the whole population (because they are shared by all individuals in the population). It is thus sufficient, to characterise the standing diversity, to have the knowledge of the _coalescent tree_ of the population (Figure 8, right), which is the genealogy of the extant individuals up to their most recent common ancestor. Coalescent Point Processes (CPP) are stochastic processes whose realisations are real trees with the same probability as the coalescent tree of the corresponding branching process (Popovic, 2004; Lambert and Stadler, 2013). A CPP is defined by a time horizon \(t\) and a node depth distribution \(f_{H}\). The CPP is the sequence of independent and identically distributed variables \((H_{i})_{i=1\ldots N}\) following \(f_{H}\) and stopped at the first element \(N\) such that \(H_{N}>t\). Usually, the node depth distribution is expressed in the form of the inverse tail distribution \(F\): \[F(t):=\frac{1}{P(H>t)} \tag{6}\] ### Measuring neutral diversity Neutral mutations do not affect the genealogy and can thus be superimposed _a posteriori_ on the coalescent tree. Consider that mutations appear following a Poisson point process with constant rate \(\theta\) over the coalescent tree. Thus, a mutation is a point on the coalescent tree, as illustrated in Figure 7. Additionally, assume that reverse mutations are impossible (an assumption referred to as the "infinite sites model"), so that all individual standing above the mutation in the coalescent tree (i.e., the descent of the mutation point) share the mutation (crosses at the top of Figure 7). Individuals may carry zero, one or several mutations. The mutational richness of the population \(M\) (or total diversity) is the number of unique mutations found in the population. Its expected value is proportional to the length of the coalescent tree. Figure 8: **From Birth-Death Process to Coalescent Point Process.** On the left-hand side is a birth-death process where a number of individuals give birth (eggs) and die (skulls) at different points in time, which flows from left to right. On the right-hand side is the corresponding continuous coalescent tree, where time flows from bottom to top. Note that at time \(C\), lineage \(3\) coalesces with lineage \(4\) and that at time \(B\), lineages \(1\) and \(2\) coalesce. Finally, at time \(A\), lineage \((1,2)\) coalesces with lineage \((3,4)\). 
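To make the definition concrete, the following sketch draws one CPP realisation from an arbitrary node-depth law (an exponential law is used here purely for illustration; the node-depth law relevant to the serial transfer model is derived below) and superimposes Poisson mutations on the resulting tree.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_cpp_depths(draw_depth, horizon):
    """One CPP realisation: i.i.d. node depths H_1, H_2, ... are drawn until
    the first one exceeds the horizon t; the retained depths describe the
    coalescent tree of the extant individuals (depth i separates consecutive
    extant individuals i and i+1)."""
    depths = []
    while True:
        h = draw_depth()
        if h > horizon:
            return depths
        depths.append(h)

horizon = 5.0
depths = sample_cpp_depths(lambda: rng.exponential(1.0), horizon)

# Total branch length of the tree: the first lineage spans the whole horizon,
# and each additional extant individual contributes a pendant branch whose
# length is its node depth.
tree_length = horizon + sum(depths)

theta = 0.5  # illustrative mutation rate
n_mutations = rng.poisson(theta * tree_length)
print(f"{len(depths) + 1} extant individuals, tree length {tree_length:.1f}, "
      f"{n_mutations} superimposed neutral mutations")
```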
Figure 7: **Neutral mutations over the coalescent tree**. The neutral mutations (coloured crosses) are distributed following a Poisson Process over the (real) coalescent tree (black). Each individual may carry several mutations distinguishing it from the most recent common ancestor. The mutation frequency spectrum \((a_{k})_{k\in\mathbb{N}}\) is another measure of diversity that counts how many mutations are represented by \(k\) individuals in the population. All these measures require some knowledge of the shape of the coalescent tree of the population. The next paragraph is dedicated to establishing this for the simple case of serial transfer, while the next section is dedicated to the case of splitting droplets. ### Diversity Within Droplets in Serial Transfer Establishing the law of the Coalescent Point Process of a lineage within serial transfer requires identification of the law of the branch length. This law is well-known for simple branching processes such as the Linear Birth-Death process with parameters \((b,d)\) modelling the population dynamics (Lambert and Stadler (2013), Proposition 5). The addition of repeated bottlenecks with period \(T\) is also possible within the theory ((Lambert and Stadler, 2013), Proposition 7) by thinning the original process (Figure 9). Each bottleneck at time \(iT\), \(i=1,\ldots,n\) may remove independently each branch of the CPP with probability (\(1-\delta\)) (in grey in Figure 9). Removing a branch in the past (at time \(iT\)) may result in removing several branches in the present, and requires an adjustment to branch length (green in Figure 9). The number of branches removed, and the adjustment to the branch length distribution can be computed from the law of the branch length of the CPP in the absence of bottlenecks, the sampling probability \(\delta\) and the period of the bottleneck \(T\). As a result: **Proposition 6** (Coalescent tree of a lineage): _Let \(\mathcal{T}_{n}\) be the random coalescent tree spawned by a single particle with extant descent at the end of the \(n\)th cycle. Then \(\mathcal{T}_{n}\) is a Coalescent Point Process (CPP) stopped in \(nT\), with inverse tail distribution \(F\), and in the case of critical dilution (\(\delta^{*}=e^{-rT}\)), we have:_ \[\forall t=kT+s\in[(n-1)T,nT],F(t)=1+\frac{b}{r}\left(e^{rs}-1+k(e^{rT}-1)\right) \tag{7}\] _With \(k\in\mathbb{N}\) and \(s<T\)._ _(Proof page 37.)_ Figure 9: **Bottlenecks are modelled by thinning the process**. A bottleneck at some point in the past (red bottle) results in the extinction of some lineages (in grey), which modifies the genealogy (thick green lines). Proposition 6 gives the cumulative probability function for the node depth \(H\): \(\mathbb{P}(H<t)=1-\frac{1}{F(t)}\) (Figure 10). Note that this function is defined by parts for each cycle. Figure 11: **Node depth distribution**. Sample of \(10^{8}\) realisations of the random variable by the inversion of the cumulative probability function method. Deep nodes follow a power law distribution with parameter \(\alpha=-1\) (orange line). Figure 10: **Cumulative probability for the branch length**. Dotted lines represent bottlenecks. Because \(F(t)\rightarrow\infty\) as \(t\rightarrow\infty\), the cumulative distribution function of node depths tends to 1, which shows that \(H\) cannot take the value \(+\infty\), as is expected for critically (and also supercritically) branching populations. The random variable \(H\) can be easily sampled from its cumulative probability function.
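The inverse-CDF sampling mentioned above can be sketched as follows (illustrative Python, not the authors' code): the piecewise form of \(F\) in Equation (7) is inverted cycle by cycle.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_node_depth(b, d, T, size):
    """Draw node depths H by inverting P(H < t) = 1 - 1/F(t), with F the
    piecewise expression of Proposition 6 (supercritical, critical dilution)."""
    r = b - d
    u = rng.random(size)
    f_target = 1.0 / (1.0 - u)                  # value of F at the sampled depth
    g = (r / b) * (f_target - 1.0)              # e^{rs} - 1 + k (e^{rT} - 1)
    per_cycle = np.exp(r * T) - 1.0
    k = np.floor(g / per_cycle)                 # number of completed cycles
    s = np.log1p(g - k * per_cycle) / r         # remaining depth within a cycle
    return k * T + s

b, d, T = 1.0, 0.0, 1.0
depths = sample_node_depth(b, d, T, size=100_000)
print("median node depth:", round(float(np.median(depths)), 3))
print("fraction of nodes deeper than one cycle:", round(float(np.mean(depths > T)), 3))
```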
As illustrated in Figure 11, the distribution of deep nodes (deeper than depth \(1\)) can be fitted by a power law with parameter \(-1\) (criticality). ### Number of mutations In order to find the expected number of mutations within a droplet, the expected size of the full coalescent tree must be considered. This relies on the node depth distribution, conditioned to be lower than the duration of the experiment. It results in the following: **Proposition 7** (Number of mutations): _Let \(M_{n}\) be the expected number of mutations (compared to the ancestral phenotype) accumulated in a lineage at the end of the \(n\)th cycle, with dilution \(\delta^{*}=e^{-(b-d)T}\), birth rate of cells \(b\), death rate \(d\), and mutation rate \(\theta\)._ \[M_{n}=\theta s_{n}L_{n} \tag{8}\] _With \(L_{n}\) the average length of the coalescent tree at the end of the \(n\)th cycle of an extant lineage started by one cell at \(t=0\):_ \[L_{n}=F(nT)\int_{0}^{nT}\frac{dt}{F(t)} \tag{9}\] _With \(F\) the inverse tail distribution of the associated CPP. More specifically, when using the expression of \(F\) from Proposition 6:_ \[L_{n}=\left(1+\frac{b}{r}n(e^{rT}-1)\right)\sum_{k=0}^{n}\frac{rT-\log\left(\frac{k(e^{rT}-1)+\frac{r}{b}}{(k+1)(e^{rT}-1)+\frac{r}{b}}\right)}{bk(e^{rT}-1)-d} \tag{10}\] Proposition 7 shows that the number of mutations accumulated in a lineage is proportional to the mutation rate \(\theta\). Moreover, since this number corresponds to a single lineage, it must be multiplied by the number of initial lineages to obtain the total expected neutral diversity in a serial transfer protocol. Figure 12: **Expected number of mutations through experimental cycles**. The expected number of mutations increases with the number of experimental cycles. The initial number of cells is \(c=K\delta\). For one cycle (\(n=1\)), smaller bottlenecks give better results (the curve for \(\delta=0.001\) is higher than the curve for \(\delta=0.5\)). For more cycles however, larger bottlenecks yield more mutations. Figure 13: **Expected number of mutations for different bottleneck sizes**. The initial number of cells is \(c=K\delta\). For a single cycle (\(n=1\)), smaller bottlenecks always yield more mutations. However, if more than one cycle is performed (\(n\geq 2\)), there is a non-zero optimal bottleneck size that maximises the expected number of mutations found in the population. Figure 14: **Mutation-optimising bottleneck size as a function of the number of cycles**. The initial number of cells is \(c=K\delta\). The bottleneck size that optimises the expected number of mutations is increasing with the number of cycles performed.
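For a numerical illustration (a sketch under the optimal-regime assumption \(\delta=e^{-rT}\), not the authors' code), \(M_{n}\) can be evaluated by combining the survival probability of Proposition 5 with a simple quadrature of the integral in Equation (9); the parameter values below are illustrative.

```python
import numpy as np

def inverse_tail_F(t, b, d, T):
    """F(t) from Proposition 6 (critical dilution delta = exp(-rT))."""
    r = b - d
    k = np.floor(t / T)
    return 1.0 + (b / r) * (np.exp(r * (t - k * T)) - 1.0 + k * (np.exp(r * T) - 1.0))

def expected_mutations(n, b, d, T, theta, grid=2000):
    """M_n = theta * s_n * L_n (Propositions 5 and 7), with the integral in
    Equation (9) evaluated by the trapezoidal rule."""
    r = b - d
    delta = np.exp(-r * T)
    s_n = r / (b * n * (1.0 - delta) + delta * r)            # Proposition 5
    t = np.linspace(0.0, n * T, grid * n + 1)
    f = 1.0 / inverse_tail_F(t, b, d, T)
    integral = np.sum((f[:-1] + f[1:]) * 0.5 * (t[1] - t[0]))
    return theta * s_n * inverse_tail_F(n * T, b, d, T) * integral

b, d, theta = 1.0, 0.0, 0.1                                   # illustrative values
for delta in (0.5, 0.001):
    T = -np.log(delta) / (b - d)
    print(f"delta = {delta}: M_n for n = 1, 10, 100 ->",
          [round(expected_mutations(n, b, d, T, theta), 2) for n in (1, 10, 100)])
```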
Note that for a small number of cycles, the expected number of mutations increases with a higher dilution: one cycle with a dilution by two yields less diversity than one cycle with a dilution by one hundred. However, if ten cycles are performed, a dilution by two yields more diversity. This illustrates a trade-off between a harsh bottleneck, that allows long cycles and thus potentially many mutations but leads to loss of most extant mutations (due to founder effects) and a softer bottleneck that allows for fewer mutations to accumulate during each cycle, but compounds more because fewer mutations are lost. Figure 13 clarifies the link between the expected number of mutations and the dilution bottleneck. Note that smaller bottleneck sizes are compensated by longer cycles because the experiment is supposed to be performed in optimal conditions (\(\delta=e^{-rT}\)). If there is only one cycle (\(n=1\)), the maximal expected number of mutations is reached when the dilution bottleneck is vanishingly small (\(\delta\to 0\)) and the cycle length adequately long (\(T\to\infty\)). However, if there is more than one cycle (\(n>1\)) the expected number of mutations reaches a maximum value between \(\delta=0\) and \(\delta=1\). This maximum-diversity dilution bottleneck value increases with the number of cycles (Figure 14). Thus, the dilution bottleneck should be adjusted to the expected duration of the experiment in terms of cycle number to maximize accumulation of neutral mutations. Overall, the expected number of neutral mutations accumulated by the population increases through time and can be optimised by appropriately choosing a bottleneck size that optimises the trade-off between accumulating new mutations and not losing old ones. Note that \(L_{n}\) behaves like \(Tn\ln(n)\)(Lambert (2009), Theorem 2.4), thus the expected neutral diversity after \(n\) cycles is equivalent to \(\theta s_{n}Tn\ln(n)\). Thanks to (5), and because \(\delta=\delta^{\star}=e^{-rT}\), we get: \[M_{n}\sim_{n\to+\infty}\frac{\theta Tr}{b(1-e^{-rT})}\ln(n).\] The mere number of mutations contains little information about the diversity within a droplet. Indeed, some of those mutations could be born by a single individual, while others might be shared by the whole population. The next section addresses this problem by exploring the mutation frequency spectrum. ### Mutation Frequency spectrum A more precise assessment of the neutral diversity structure involves distinguishing between rare mutations (that are carried by few individuals) and frequent mutations (that are widespread within the population). The mutation frequency spectrum presents the proportion of mutations that are carried by a given number of individuals. The expected mutation frequency spectrum of a coalescent point process can be deduced from the law of node depths \(H\)((Lambert, 2009), Theorem 2.2). Indeed, the number of mutations carried by \(i\) individuals is proportional to the length of the coalescent tree subtending \(i\) leaves (Figure 15). As a result: **Proposition 8** (Mutation frequency spectrum): _Consider the Coalescent Point Process \(\mathcal{T}_{n}\), with overlaying mutations following a Poisson point process with intensity \(\theta\)._ _Let \(M_{n}^{f}\) be the expected number of mutations fixed in the population, that is mutations shared by all individuals. 
Then:_ \[M_{n}^{f}=\theta s_{n}\int_{0}^{nT}\frac{F(s)}{F(nT)}-\frac{F(nT)}{F(s)}ds \tag{11}\] _Let \(a_{u}\) be the expected frequency of mutations that are shared by \(u>0\) individuals in the limit of large sample of the population._ \[a_{u}=\theta\int_{0}^{nT}\left(1-\frac{\frac{1}{F(x)}-\frac{1}{F(nT)}}{1-\frac {1}{F(nT)}}\right)^{u-1}\left(\frac{\frac{1}{F(x)}-\frac{1}{F(nT)}}{1-\frac{1 }{F(nT)}}\right)^{2}dx \tag{12}\] _(Proof page 42.)_ Figure 15: **Finding the mutation frequency spectrum**. Mutation shared by 3 individuals are the ones that arose within the orange region only. This region is delimited by \(\max(H_{i+1},H_{i+2})<t<H_{i+3}\). As is expected for neutral diversity, Proposition 8 shows that the mutation frequency spectrum is proportional to the mutation rate \(\theta\) meaning that an increasing proportion of individuals carry mutations if the rate increases, but that does not change the relative frequency of the size of groups carrying a given mutation. Figure 16 shows the mutation frequency spectra for one and for a hundred cycles, and for three different dilution rates. Note that for a single cycle, harsher bottlenecks (i.e., smaller \(\delta\), and correspondingly longer cycle duration \(T\)) increase the tail of the distribution (there are more mutations that are shared by many individuals). This effect of \(\delta\) and \(T\) is not as simple when considering several cycles. For \(n=100\), the distribution is more heavy-tailed when the bottlenecks are soft (\(\delta=0.5\)) than when they are harsh (\(\delta=0.001\)). Figure 17 shows the power law tail of mutation frequency spectra. When the coalescent tree is a Kingman coalescent, corresponding to a long-lived population with approximately constant size, the mutation frequency spectrum has power law with exponent \(-1\) (harmonic spectrum, Ewens' sampling formula, (Ewens, 1972)). In our setting, this happens when \(\delta\) is close to 1 (constant population size) and \(nT\) is large (long-lived population). For a fixed growth rate \(r\), because \(\delta e^{rT}=1\), this means that (\(1-\delta\) is small but) \(n(1-\delta)\) is large, as in the yellow region of the heat map of Figure 17. When the coalescent tree is a Yule tree, corresponding to full, unbounded growth, the mutation frequency spectrum has power law with exponent \(-2\)(Lambert, 2009; Dinh et al., 2020). This is what happens for small \(\delta\), regardless of \(n\) as in the turquoise region of Figure 17. Indeed, when \(\delta\) is small, \(T\) is large (for fixed \(r\)), so that all derived mutations occurred since the last bottleneck. A third case stands out in our setting when \(n\) is not too large and \(\delta\) sufficiently close to 1 that very few births/coalescences occur over the time interval of length \(nT\), which happens when \(n(1-\delta)=O(1)\). In this case, conditioning the CPP to have coalescences smaller than \(nT\) (an event of vanishing probability) yields a CPP with uniform node depths, which gives rise to a power law mutation frequency spectrum with exponent \(-3\), as in the deep blue region of Figure 17. The information entropy of the mutation frequency spectrum can be used to systematically explore the effect of \(\delta\) on the shape of the distribution. Figure 18 shows that if more than one cycle is performed, there is a value of \(\delta\) that is expected to optimise the information entropy of the mutation frequency spectrum. This value is different from the value that optimises the number of mutations (Figure 13). 
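The spectrum of Equation (12) and the entropy used in Figure 18 can be evaluated numerically along the following lines (a sketch with illustrative parameter values; truncating the spectrum at a finite \(u\) is an assumption required to compute the sum).

```python
import numpy as np

def inverse_tail_F(t, b, d, T):
    """F(t) from Proposition 6 (critical dilution)."""
    r = b - d
    k = np.floor(t / T)
    return 1.0 + (b / r) * (np.exp(r * (t - k * T)) - 1.0 + k * (np.exp(r * T) - 1.0))

def frequency_spectrum(n, b, d, T, theta, u_max=500, grid=40_000):
    """a_u (Equation 12) for u = 1..u_max, by trapezoidal quadrature."""
    x = np.linspace(0.0, n * T, grid + 1)
    F = inverse_tail_F(x, b, d, T)
    F_end = inverse_tail_F(n * T, b, d, T)
    g = (1.0 / F - 1.0 / F_end) / (1.0 - 1.0 / F_end)
    dx = x[1] - x[0]
    a = np.empty(u_max)
    for u in range(1, u_max + 1):
        integrand = (1.0 - g) ** (u - 1) * g ** 2
        a[u - 1] = theta * np.sum((integrand[:-1] + integrand[1:]) * 0.5 * dx)
    return a

b, d, theta, n = 1.0, 0.0, 1.0, 100
for delta in (0.5, 0.001):
    T = -np.log(delta) / (b - d)
    a = frequency_spectrum(n, b, d, T, theta)
    entropy = -np.sum(a * np.log(a))        # definition quoted for Figure 18, truncated
    print(f"delta = {delta}: a_1 = {a[0]:.3f}, truncated entropy = {entropy:.2f}")
```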
Thus, there is a trade-off between accumulating many mutations, and having a diverse mutation frequency spectrum. The decision to fix \(\delta\) in order to optimise one or the other depends on the goal of the experiment. To sum up, higher dilution rate or longer duration of collective growth cycles result in longer trees and increased diversity, even though the population may risk going extinct. Extinction of the population marks the "death" of the culture, and it is the eventual fate of a serial transfer experiment in the limit of many cycles. In a nested population design, the cultures can also "reproduce" and replace the extinct ones. This has far-reaching consequences on the genealogy of the particles, as shown in the next section. Figure 16: **Mutation frequency spectrum**. Give the frequency of mutations carried by a given number of individuals in a large sample of the population. _Top:_ After \(n=1\) cycle. _Bottom:_ After \(n=100\) cycles. Figure 17: **Mutation frequency spectrum power law tail**. Left: Slope of the regression \(\alpha\) such that \(log(a_{u})=\alpha log(u)+b\), for \(u>500\). For a branching process without bottleneck the expected value is \(-2\), it is \(-1\) for a Moran process and \(-3\) for a CPP with uniform node depths. White dashed lines are isolines \(-1\), \(-2\), \(-3\). Right: Three illustrative mutation frequency spectra with associated regression lines. The location of the three spectra in the parameter space is represented by color-matching dots on the left heatmap. ## 4 Diversity in Dividing Droplets Nested populations' design differ from simple serial transfer in parallel cultures by the opportunity for the cultures (droplets, tubes or other compartments) to be subject to birth and death themselves. At each cycle, some cultures may be removed from the experiment, while others can be duplicated, usually by dispatching samples of the original collective in several new fresh medium compartments (rather than one in regular parallel serial transfer experiments). This section focuses on the consequences of imposing a collective-level birth-death process on the neutral diversity. To this end, consider the simple scenario (depicted in Figure 19) of a pair of droplets that share a common "droplet ancestor" several cycles in the past. The two droplets differ by the initial sampling performed in their common ancestor, and also by all new mutations accumulated since they became isolated. In the following, the particles follow a super-critical linear birth-death process with parameters \(b-d=r>0\). The parameters of the population structure are supposed to be optimal in the sense of Section 2.1: each cycle has a duration \(T^{*}=-r^{-1}\ln\left(cK^{-1}\right)\) and each lineage has an independent probability of being sampled at a bottleneck of \(\delta^{*}=e^{-rT^{*}}\). Consider that the droplet split happens at cycle \(m\) and the observation occurs at cycle \(m+n\). ### Survival probability First, let us consider the probability that a lineage spawned by a single cell \(n\) cycles in the past is not extinct within both droplets. The key to establish this probability is to recognise that the lineage undergoes a bottleneck with survival probability \(\delta\) at each cycle, except the cycle of the droplet split where each particle has a probability \(2\delta\) to survive. Indeed, two inoculation volumes are concurrently sampled from the ancestral droplet and dispatched into two offspring (Figure 20). 
Thus: Figure 18: **Shannon Entropy of the mutation frequency spectrum**. Defined as \(S=-\sum_{i}a_{i}\log(a_{i})\) **Proposition 9** (Survival probability - Split droplet): _Cells within droplets in serial transfers are modelled by a linear birth-death process with constant parameters \(b\) and \(d\), that is subject to periodic bottlenecks every duration \(T\). Additionally, consider that at the \(m\)th cycle, the dilution procedure is repeated to obtain \(k\) new droplets._ _Let \(s_{k,m,n}\) be the probability that a lineage spawned by a single cell is not extinct at the end of the \((n+m)\)th cycle._ \[s_{k,m,n}=1-h_{Q_{k,m,n}}(0),\] _Where \(h_{A}\) the linear fractional function with coefficient \(A\) defined in Equation 1. The matrix \(Q_{k,m,n}\) is the product:_ \[Q_{k,m,n}=Q_{1}^{m-1}Q_{k}Q_{1}^{n-1}R\] Figure 19: **Collective and Particle nested coalescent trees**. Left: coalescent tree of droplets. Right: coalescent tree of particles. An ancestral droplet lineage (black) is diluted into two offspring droplets (red, blue). At the time when the droplets are split (cycle \(m\)), the particle lineages within the ancestral droplets are assigned a colour (red or blue) that indicates the daughter droplet to which they are sent. Mutations appear along the genealogy of particles (crosses). Some mutations appear before the split (yellow, green, purple) and are found in both droplets (green, purple) if they are sampled by both droplet lineages, or within a single droplet (yellow) if they are segregated by the dilution. Other mutations appear after the split (light blue, orange) and are only found in one of the droplet lineages. _with,_ \[\forall k\in\mathbb{N}^{*},\ Q_{k}=\begin{bmatrix}k\delta(p-q)&p-k\delta(p-q)\\ k\delta(p-1)&p-k\delta(p-1)\end{bmatrix};\quad R=\begin{bmatrix}\delta(p-q)&q\\ \delta(p-1)&1\end{bmatrix}\] _Where \(\delta\) is the survival probability of a particle at serial transfer, \(q\) is the extinction probability of a lineage during a cycle, and \(p\) the geometric parameter of the size of a non-extinct lineage as defined in Proposition 1._ _(Proof page 44.)_ Proposition 9 is similar in its conclusion to Proposition 1, that treated case of a simple serial transfer. However, the expression is considerably less easy to handle, as the iteration does not simplify into a single matrix power. ### Total diversity As seen in Proposition 7, quantifying the total neutral diversity in an infinitely-many sites model is a matter of finding the total length of the coalescent tree (or forest) of the population. Note that the full coalescent tree in Figure 21 can be decomposed into a stump, before the splitting of droplets, and a corolla: another set of CPP (the corolla) sampled in one or the other droplet lineage. As a result: Figure 20: **Droplet Splitting Process**. When a droplet is split into \(k\) droplets, individual particles are independently selected to be transferred to the next cycle (with probability \(\delta\) for each new droplet) or sent to the waste (with probability \(1-k\delta\)). **Proposition 10** (Total Diversity - Split droplet): _Let \(M_{k,m,n}\) be the expected number of mutations accumulated in a lineage at cycle \(n+m\) after the splitting of the initial droplet at cycle \(m\) into \(k=1,2\ldots\left\lfloor\frac{1}{\delta}\right\rfloor\) droplets. 
Then:_ \[M_{k,m,n}= \theta L_{k,m,n}\] \[= \theta s_{m}F^{\dagger}(mT)\left[\int_{0}^{mT}\frac{1}{F^{ \dagger}(s)}ds+\int_{0}^{nT}\frac{F(nT)}{F(s)}ds\right],\] _where \(L_{k,m,n}\) is the expected length of the coalescent tree of the population, \(F\) is the inverse tail distribution of the CPP with periodic bottlenecks, and \(F^{\dagger}:=1-k\delta s_{n}+k\delta s_{n}F\), is the inverse tail distribution of the same CPP submitted to sampling with probability \(k\delta s_{n}\) at the present._ _(Proof page 44.)_ ### Private Diversity In order to assess the divergence between split droplets, one can compute the expected number of _private_ mutations, i.e., mutations that are only found in a single of the \(k\) split droplets. This number is the sum of all mutations that occur in the droplets after the splitting time \(mT\) (i.e., mutations in the corolla), plus all the mutations that occur before the splitting time but in a lineage that only Figure 21: **Finding the total diversity at cycle \(n+m\) of droplets split at cycle \(m\). All the lineages (triangles) spawned from the particles dispatched in one of the \(k=2\) droplets (here red and blue) have the same expected length \(L_{n}\). The bottom part (or stump) of the tree (black) result from the sampling of a CPP stopped in \(mT\), with probability \(\pi_{k,m,n}\), the probability that an extant lineage at cycle \(m\) will be sampled in one of the two droplets, and survive until cycle \(n+m\).** segregates in a single droplet (i.e., mutations in the stump). Since all the droplet are interchangeable, this value is identical for the \(k\) droplets. Overall, this number is proportional to the red (or blue) part of the CPP in figure 19. **Proposition 11** (**Private Mutations - Split Droplet)**: _Let a single droplet be split into \(k=1\ldots\left\lfloor\frac{1}{\delta}\right\rfloor\) at cycle \(m\). Let \(M^{\prime}_{k,m,n}\) be the expected number of mutations that are private to any of the \(k\) droplets when observed at cycle \(n+m\)._ \[M^{\prime}_{k,m,n} =\underbrace{\mathcal{S}_{k,m,n}}_{\text{stump}}+\underbrace{ \mathcal{C}_{k,m,n}}_{\text{corolla}}\] \[=\theta s_{m}F^{\dagger}(mT)\left[\int_{0}^{mT}\frac{kds}{F^{ \dagger}(s)(1+(k-1)F^{\dagger}(s))}+\int_{0}^{nT}\frac{F(nT)}{F(s)}ds\right],\] _where \(F\) is the inverse tail distribution of the CPP with periodic bottlenecks, and \(F^{\dagger}\) is the inverse tail distribution of the same CPP submitted to sampling with probability \(k\delta s_{n}\) at the present._ _(Proof page 46.)_ Figure 22 shows the expected proportion of private mutations \(M^{\prime}_{k,m,n}/M_{k,m,n}\) in split droplets as a function of \(m\), the number of cycles before splitting. If \(m\) is low, there are no shared mutations among droplets and the ratio is close to \(1\). If more cycles occur before the split, the proportion of private mutations decreases, tending at a logarithmic speed to 0. Now, the main purpose of droplet splitting is to select and duplicate a phenotype of interest. The last section explores, in the context of artificial selection, the advantage offered by a droplet-splitting process over the simple screening of parallel cultures in serial transfer. Figure 22: **Expected Proportion of private mutations in split droplets. Droplets are grown for \(m\) cycles, then split into \(k=2\) new droplets and grown for \(n\) new cycles. 
\(b=1,d=0,\delta=e^{-rT}\)** ## 5 Artificial selection of droplets A practical application of a device that would allow the manipulation of small cultures of microbial organisms would be the artificial selection of phenotypes of interest. Suppose that a given phenotype of interest is reached after the accumulation of \(\Theta\) mutations, and that it is possible to detect the number of mutations fixed so far, by sequencing or direct observation of the cultures. To formalise, let \(D\in\mathbb{N}^{*}\) be the number of collectives. Each collective \(i\) is assigned a number \(e_{i}=1,2,\ldots,\Theta\), corresponding to the number of fixed mutations. Suppose that the time for a collective to switch from \(e_{i}=j\) to \(e_{i}=j+1\) is exponentially distributed with parameter \(\alpha=\frac{\rho}{ND}\), where \(\rho\) is the mutation rate (that could be deduced from Proposition 8), scaled by the number of droplets \(D\) and the number of cycles \(N\). We assume that the \(D\) collectives are in state \(0\) at time \(0\). The only possible transition is to accumulate a new mutation, no reversion is possible, as illustrated in Figure 23. In order to assess the advantage of droplet splitting, consider two scenarios, illustrated in Figure 24: 1. **Without collective selection**\(D\) collective lineages are started in state \(0\) at \(t=0\) and undergo serial transfer independently of each other. 2. **With collective selection**\(D\) collective lineages are started in state \(0\) at \(t=0\), once a mutant is detected in a lineage, all the other collectives are killed and this lineage is split in \(D\) new lineages. Let \(\Gamma\) (respectively \(\Gamma^{*}\)) be the random variable encoding the first time for a lineage to get to the state \(\Theta\in\mathbb{N}\) in the scenario without collective selection (respectively with collective selection). To compare them, consider their respective cumulative distribution functions: **Proposition 12** (Cumulative distribution functions): _The cumulative distribution function of \(\Gamma^{*}\) is:_ Figure 24: **Propagation of mutations.** Without collective selection, all the lineages accumulate mutations independently. With collective selection, once a mutation fixation is first detected, the droplet is split into \(D\) lineages. Figure 23: **Phenotypes.** There are \(\Theta+1\) possible phenotypes. A \(j\)-collective switches to the next phenotype \(j+1\) at rate \(\alpha\). \[\mathbb{P}(\Gamma^{*}\leq x)=1-e^{-x\rho}\left(\sum_{u=0}^{\Theta-1}\frac{(\rho x )^{u}}{u!}\right) \tag{13}\] _The cumulative distribution function of \(\Gamma\) is:_ \[\mathbb{P}(\Gamma\leq x)=1-e^{-\rho x}\left(\sum_{u=0}^{\Theta-1}\frac{(\rho x )^{u}}{D^{u}u!}\right)^{D} \tag{14}\] _When the number of mutational steps tends to infinity, the two cumulative distribution function are equivalent. However, for any finite number of mutational steps \(\Theta\), the selective regime is faster than the serial transfer regime._ _(Proof page 48.)_ Proposition 12 shows that collective level selection, i.e., the process of splitting a droplet in which an intermediate mutation was fixed, leads to reducing the time to reach the \(\Theta\)-th mutation. Figure 25 shows the shape of the cumulative probability function for both regimes, illustrating this advantage. This constitutes a simple use-case for a device that allows the automated high-throughput manipulation of numerous cultures, such as the digital millifluidic analysers (Baraban et al., 2011; Boitard et al., 2015; Cottinet et al., 2016). 
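To make Proposition 12 concrete, the two cumulative distribution functions of Eqs. (13) and (14) can be evaluated directly. The short Python sketch below does this for illustrative parameter values (the values of \(\rho\), \(\Theta\), and \(D\) are placeholders, not taken from any experiment discussed here); it reproduces the ordering illustrated in Figure 25, with the collective-selection regime reaching the \(\Theta\)-th mutation sooner.

```python
import math

def cdf_with_selection(x, rho, theta):
    """Eq. (13): P(Gamma* <= x), time to accumulate theta mutations
    when the first collective to fix a mutation is split into D copies."""
    partial = sum((rho * x) ** u / math.factorial(u) for u in range(theta))
    return 1.0 - math.exp(-rho * x) * partial

def cdf_without_selection(x, rho, theta, D):
    """Eq. (14): P(Gamma <= x), D independent serial-transfer lineages."""
    partial = sum((rho * x) ** u / (D ** u * math.factorial(u)) for u in range(theta))
    return 1.0 - math.exp(-rho * x) * partial ** D

# Placeholder parameters, chosen for illustration only.
rho, theta, D = 1.0, 5, 100
for x in (2.0, 5.0, 10.0, 20.0):
    print(f"x = {x:5.1f}   with selection: {cdf_with_selection(x, rho, theta):.4f}"
          f"   without: {cdf_without_selection(x, rho, theta, D):.4f}")
```

For any finite \(\Theta\) the first column dominates the second, in line with the statement of Proposition 12.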
Note that this result is obtained by assuming that the detection of mutations is cost-free and error-free. A more advanced model of this system should tackle the problem of imperfect detection. ## 6 Discussion This manuscript has laid the foundation for a theoretical understanding of the evolution of neutral diversity in massively parallel microbial evolution experiments. It was heavily inspired by ongoing engineering efforts to bring experimental evolution to digital millifluidics (Cottinet, 2013; Boitard et al., 2015; Dupin, 2018; Doulcier, 2019). In experimental microbiology, one desirable feature can be to maximise the number of mutations accumulated within the cultures, for instance in order to screen phenotypes of interest. The result presented above showed that, in an optimal growth setting, where cells are growing with a constant birth and death rate, without density-dependence or competition, the population should be submitted to cycles whose duration is tailored to compensate for the bottleneck imposed at each serial transfer. The choice of the bottleneck should be made according to the expected duration of the experiment: in order to optimise the expected number of mutations, small bottlenecks (killing most of the lineages) should be used when the number of cycles is small, while larger bottlenecks (lower dilution rate) should be used for long-term experiments. Additionally, the expected number of mutations increases linearly with increasing droplet volume and with an increasing number of droplets, which is a matter of technological progress as automation and larger droplet sizes are under consideration (Dupin, 2018; Postek et al., 2022). The expected number of mutations also increases linearly with the mutation rate, which can be manipulated by choosing mutator lineages or adding mutagenic chemicals to the culture broth; however, the potentially deleterious effects of this method might prevent its use in practical cases. In long-term evolution experiments (Kawecki et al., 2012; Van den Bergh et al., 2018), serial transfer is imposed by the need to replenish the nutrients available to the cells. It is possible to build devices ensuring that a continuous flow of nutrient washes over the culture (for large volumes, see chemostats or morbidostats (Toprak et al., 2013); in microfluidics, see mother machines (Potvin-Trottier et al., 2018)). However, these methods are usually more prone to contamination. In contrast, periodically diluting the culture in fresh medium is simple and robust. The nested population design differs from traditional serial transfer of parallel cultures because it allows a birth-death process at the level of collectives. Serial transfer is pervasive in experimental evolution (Kawecki et al., 2012), and has received extensive theoretical treatment. So far we have only focused on neutral diversity. The effect of beneficial mutations has been studied in serial transfer settings, in particular the probability of losing a beneficial mutation because of repeated bottlenecks (Wahl and Krakauer, 2000; Wahl et al., 2002; Wahl and Gerrish, 2001; Wahl and Zhu, 2015) and the effect of bottlenecks on the evolutionary path when multiple beneficial mutations exist (Gamblin et al., 2023). This work should be extended to the nested population design in the future. Figure 25: **Cumulative probability distribution** of the time to accumulate \(\Theta\) mutations. With droplet splitting, accumulation of mutations is faster. \(D=100\). 
In practice, the collective birth-death process can come from the fact that some cultures are effectively empty because of high dilutions in the previous cycle, and may be replaced in the next cycle by cells from a non-empty culture. The use of collective birth-death processes can also be a consequence of the experimental protocol. Milli- and microfluidic compartments are usually produced in large numbers; while measurements are performed on all compartments, retrieving the content of every compartment might not be practically possible, or even desirable, when they are too numerous. Finally, the collective birth-death process may stem from an active effort of the operator to select some populations based on some measurable characteristics. The use of a non-saturating population dynamics in this manuscript is a simplification that should be carefully taken into account when transposing the results of this work to the design of experiments. Nonetheless, if the cycle duration is short enough that the population is stopped during the exponential phase, the heuristics developed in this manuscript should hold. There are, however, two phenomena that were not modelled here and that will probably muddy the neutral pattern that was described. First, the model assumes the absence of mutations affecting birth and death rates. While most point mutations can be safely considered neutral, some rare mutations affect the cells' ability to reproduce. If such mutations are beneficial, they will increase in proportion within the population and will change the relative frequency of all neutral mutations, by favouring the ones carried by the same strand of DNA. This is a well-documented phenomenon known as hitch-hiking (Fay and Wu, 2000). Second, horizontal gene transfer might allow the uncoupling of mutation transmission from the genealogy (Dutta and Pan, 2002), muddying the pattern even further. The nested population design also differs from trait-group (Wilson, 1975) or transient-compartment (Blokhuis et al., 2018) population structures because migration between compartments is prevented. As a consequence, it is possible to construct a non-ambiguous genealogy of the cultures. In practice, the serial transfer design offers a natural way to implement the birth-death process, by diluting some cultures into several new compartments (the droplet splitting) and discarding others. Finally, this manuscript touches briefly on the problem of artificial selection using a nested population design. This was done by considering the accumulation of neutral mutations. A more complete model of artificial selection would, however, take into account interactions between individuals, and potentially the selection of whole communities. Community-level selection has been the subject of both experimental (Swenson et al., 2000; Panke-Buisse et al., 2015) and theoretical inquiries (Arias-Sanchez et al., 2019; Xie et al., 2019; Doulcier et al., 2020). Overall, the results presented in this manuscript should be considered as a way to build intuition about the experimental system, while providing a null model for diversity that can be compared to the actual patterns. Differences are bound to appear, but the point of comparison offered by neutral evolution will allow a better description of the observed diversity. Focusing on the parts of the patterns that differ from this naive theoretical prediction will surely be fruitful: it shows that mechanisms other than drift must be invoked. 
## Reference \begin{table} \begin{tabular}{c|l|l} **Symbol** & **Name** & **Reference** \\ \hline \(D\) & Collective Population size & \\ \(T\) & Cycle duration & \\ \(n\) & Number of Cycles & \\ \(K\) & Collective Carrying capacity & \\ \(c\) & Initial number of particles & \\ \(b,d,r\) & Particle birth rate, death rate and Malthusian parameter (\(r:=b-d\)) & \\ \(\delta\) & Particle Survival probability (at a bottleneck) & \\ \(\theta\) & Particle Mutation rate & \\ \(s_{n}\) & Probability that a lineage spawned by a single particle is not extinct at & Proposition 1 \\ & the end of the \(n\)th cycle. (At time \(nT^{-}\), just before the \(n\)th dilution). & \\ & \(s_{n}=1-h_{Q^{n-1}R}(0)\) & \\ \(s_{\infty}\) & Limit probability that a lineage spawned by a single particle is not extinct & Proposition 2 \\ & after a large number of cycles. \(s_{\infty}=\lim_{n\to\infty}s_{n}\). & \\ \(T^{*}\) & Maximal cycle duration before reaching saturation. \(T^{*}=r^{-1}ln(cK^{-1})\) & Proposition 3 \\ \(\delta^{*}\) & Optimal dilution rate \(\delta^{*}=e^{-rT}\) & Proposition 4 \\ \(s_{n}^{*}\) & Survival probability of a lineage spawned by a single cell before the \(n\)th & Proposition 5 \\ & dilution in the optimal regime in which \(\delta=e^{-rT}\). & \\ \(\tilde{F}\) & Inverse tail distribution of the CPP without bottlenecks & Proposition 6 \\ \(F\) & Inverse tail distribution of the CPP with bottlenecks. \(F(t)=[\mathbb{P}(H>t)]^{-1}\) & Equation 6 and \\ \(\tau_{n}\) & Coalescent tree of an extant lineage at the end of the \(n\)th cycle. It is a & Proposition 7 \\ & Coalescent Point Process with inverse tail distribution \(F\) stopped at the & \\ & first branch length larger than \(nT\). & \\ \(N(\tau_{n})\) & number of leaves of the coalescent tree \(\tau_{n}\). Geometric random variable & Eq. 15 and 16. \\ & with expected value \(F(nT)\). & \\ \(L_{n}\) & Expected length of the coalescent tree \(\tau_{n}\). \(L_{n}=F(nT)\int_{0}^{nT^{-}}F(x)^{-1}dx\) & Proposition 7 \\ \(M_{n}\) & Expected number of mutations in a lineage after \(n\) cycles. \(M_{n}=\theta s_{n}L_{n}\) & Proposition 7 \\ \(M_{n}^{f}\) & Expected number of fixed mutations in a lineage after \(n\) cycles. & Proposition 8 \\ \(M_{n}^{s}\) & Expected number of segregating mutations in a lineage after \(n\) cycles. & Proposition 8 \\ & \(M_{n}^{s}=M_{n}-M_{n}^{f}\) & \\ \(a_{u}\) & Expected frequency of mutations shared by \(u>0\) individuals after \(n\) & Proposition 8 \\ & cycles. & \\ \end{tabular} \end{table} Table 2: Global Symbols reference \begin{table} \begin{tabular}{c|l|l} **Symbol** & **Name** & **Reference** \\ \hline \(\alpha\) & Rate at which a lineage accumulate mutations \(\alpha=\frac{\rho}{ND}\) & Proposition 12 \\ \(\Theta\) & Number of mutations to accumulate & Proposition 12 \\ \(\Gamma\) & First time a lineage has accumulated \(\Theta\) mutations without collective & Proposition 12 \\ & selection & Proposition 12 \\ \(\Gamma^{*}\) & First time a lineage has accumulated \(\Theta\) mutations with collective selection & Proposition 12 \\ & & \\ \end{tabular} \end{table} Table 4: Global Symbols reference (cont.) \begin{table} \begin{tabular}{c|l|l} **Symbol** & **Name** & **Reference** \\ \hline \(s_{k,m,n}\) & Survival probability at the end of the \((m+n)\)th cycle of a lineage & Proposition 9 \\ & spawned by a single cell in a single droplet at the first cycle, that is was & \\ & split into \(k\) droplets at cycle \(m\). 
& \\ \(\tau_{k,m,n}\) & Coalescent tree at the end of the \((m+n)\)th cycle of a lineage spawned & Proposition 10 \\ & by a single cell in a single droplet at the first cycle, that is was split into & \\ & \(k\) droplets at cycle \(m\). & \\ \(\pi_{k,n}\) & Probability that a lineage extant at the end of cycle \(m\) just before the & \\ & droplet is split into \(k\) droplets will be extant at cycle \(m+n\). \(\pi_{k,n}=k\delta s_{n}\). & \\ \(F^{\dagger}\) & Inverse tail distribution of the stump tree. \(F^{\dagger}(t):=F_{\pi_{k,n}}(t)=1-\pi_{k,n}+\pi_{k,n}F(t),t\in[0,mT]\) & \\ \(L_{k,m,n}\) & Expected length of the Coalescent tree \(\tau_{k,m,n}\) & Proposition 10 \\ \(M_{k,m,n}\) & Expected number of mutations accumulated at the end of the \((m+n)\)th & \\ & cycle of a lineage spawned by a single cell in a single droplet at the first & \\ & cycle, that is was split into \(k\) droplets at cycle \(m\). \(M=\theta s_{k,m,n}L_{k,m,n}\) & \\ \(M^{\prime}_{k,m,n}\) & Expected number of mutations that are only found in a single droplet at & Proposition 11 \\ & the end of the \((m+n)\)th cycle of a lineage spawned by a single cell in a single droplet at the first cycle, that is was split into \(k\) droplets at cycle \(m\). & \\ \end{tabular} \end{table} Table 3: Global Symbols reference (cont)
2308.06516
Runge--Kutta methods determined from extended phase space methods for Hamiltonian systems
We study two existing extended phase space integrators for Hamiltonian systems, the {\em midpoint projection method} and the {\em symmetric projection method}, showing that the first is a pseudosymplectic and pseudosymmetric Runge--Kutta method and the second is a monoimplicit symplectic Runge--Kutta method.
Robert I McLachlan
2023-08-12T09:45:11Z
http://arxiv.org/abs/2308.06516v1
# Runge-Kutta methods determined from extended phase space methods for Hamiltonian systems ###### Abstract We study two existing extended phase space integrators for Hamiltonian systems, the _midpoint projection method_ and the _symmetric projection method_, showing that the first is a pseudosymplectic and pseudosymmetric Runge-Kutta method and the second is a monoimplicit symplectic Runge-Kutta method. ## 1 Introduction Many commonly used numerical methods for the time integration of differential equations can be expanded in B-series, which elucidate their geometric and numerical properties [2, 10]. However, symplectic integrators with a B-series are implicit, the implicit midpoint rule being a central example [5]. Explicit symplectic integrators exist for some systems, such as separable classical mechanical systems [13]. To avoid this restriction, Pihajoki [15] introduced _extended phase space methods_: a new Hamiltonian, defined on the product of two copies of the original phase space, is constructed that is amenable to explicit symplectic integration in the extended phase space. In place of the Hamiltonian system \(X_{H}\) associated with the Hamiltonian \(H\) and canonical symplectic form \(\omega\), \[\dot{q}=D_{2}H(q,p),\quad\dot{p}=-D_{1}H(q,p) \tag{1}\] (\((q,p)\in\mathbb{R}^{2d}\)), Pihajoki considered the extended system \[\dot{q}=D_{2}H(x,p),\quad\dot{p}=-D_{1}H(q,y)\] \[\dot{x}=D_{2}H(q,y),\quad\dot{y}=-D_{1}H(x,p)\] (\((q,x,p,y)\in\mathbb{R}^{4d}\)) with initial condition \[(q(0),x(0),p(0),y(0))=(q_{0},x_{0},p_{0},y_{0})=(q_{0},q_{0},p_{0},p_{0})\] such that the solution obeys \(q(t)=x(t)\) and \(p(t)=y(t)\) for all \(t\) and \((q(t),p(t))\) satisfies the original system (1) with initial condition \((q(0),p(0))=(q_{0},p_{0})\). The extended system is Hamiltonian with extended Hamiltonian \(\hat{H}=\hat{H}_{A}+\hat{H}_{B}\), \(\hat{H}_{A}=H(x,p)\), \(\hat{H}_{B}=H(q,y)\), and symplectic form \(\hat{\omega}:=dq\wedge dp+dx\wedge dy\). As \(x\) and \(p\) (resp. \(q\) and \(y\)) commute, the flow \(\exp\left(tX_{\hat{H}_{A}}\right)\) of Hamilton's equations for \(\hat{H}_{A}\) is given explicitly by Euler's method: \[q(t) =q_{0}+tD_{2}H(x_{0},p_{0})\] \[x(t) =x_{0}\] \[p(t) =p_{0}\] \[y(t) =y_{0}-tD_{1}H(x_{0},p_{0})\] (and analogously for \(\hat{H}_{B}\)). The integrator \[\Phi_{h}:=\exp\left(\frac{1}{2}hX_{\hat{H}_{A}}\right)\exp\left(hX_{\hat{H}_{ B}}\right)\exp\left(\frac{1}{2}hX_{\hat{H}_{A}}\right)\] is therefore explicit, second order, and preserves the extended symplectic form \(\hat{\omega}\). Two key issues, however, immediately arise: the 'duplicate' point \((x,y)\) may move away from the 'base' point \((q,p)\); and it is not clear how symplecticity in the extended phase space is advantageous in the original phase space. Tao [16] addressed the first point by adding a coupling term \(\frac{1}{2}\alpha(\|x-q\|^{2}+\|y-p\|^{2})\) to the extended Hamiltonian, finding that this could suppress the growth of \((q-x,p-y)\). Other authors have addressed the second point by projecting the solution to the original phase space in different ways. Two of these, the _symmetric projection method_ of Ohsawa [14] and the _midpoint projection method_ of Luo et al. [7], are the subject of this paper. 
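For concreteness, the extended-phase-space leapfrog \(\Phi_{h}\) defined above is straightforward to code once the partial derivatives \(D_{1}H\) and \(D_{2}H\) are available. The Python sketch below implements one step of \(\Phi_{h}\); the nonseparable test Hamiltonian \(H(q,p)=\tfrac{1}{2}p^{2}(1+q^{2})\) is our own illustrative choice and does not appear in the paper.

```python
import numpy as np

def extended_leapfrog_step(q, x, p, y, h, dH_dq, dH_dp):
    """One step of Phi_h = exp(h/2 X_{H_A}) exp(h X_{H_B}) exp(h/2 X_{H_A}),
    with H_A = H(x, p) and H_B = H(q, y)."""
    # exact half-time flow of H_A: only q and y move, driven by (x, p)
    q = q + 0.5 * h * dH_dp(x, p)
    y = y - 0.5 * h * dH_dq(x, p)
    # exact full-time flow of H_B: only x and p move, driven by (q, y)
    x = x + h * dH_dp(q, y)
    p = p - h * dH_dq(q, y)
    # second half-time flow of H_A
    q = q + 0.5 * h * dH_dp(x, p)
    y = y - 0.5 * h * dH_dq(x, p)
    return q, x, p, y

# Illustrative nonseparable Hamiltonian H(q, p) = 0.5 * p**2 * (1 + q**2)
dH_dq = lambda q, p: p ** 2 * q
dH_dp = lambda q, p: p * (1 + q ** 2)

q = x = np.array([1.0])
p = y = np.array([0.2])
for _ in range(1000):
    q, x, p, y = extended_leapfrog_step(q, x, p, y, 0.01, dH_dq, dH_dp)
print(q, x, p, y)  # the duplicate point (x, y) tracks (q, p), but may drift over long times
```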
The midpoint projection method, considered in Section 2, is shown to be equivalent to an explicit Runge-Kutta method that is pseudosymplectic (that is, approximately symplectic) and pseudosymmetric up to surprisingly high order: order 5 for the leapfrog-based method of classical order 2, and order 9 for the methods of classical order 4. We suggest that these properties account for the methods' good performance in astrophysical applications. The symmetric projection methods, considered in Section 3, are shown to be equivalent to monoimplicit symplectic Runge-Kutta methods, revealing their affine equivariance and generality. ## 2 The midpoint projection method Luo et al. [7] composed such an extended phase space integrator with the midpoint projection (called the midpoint permutation in [7]) \[\pi\colon\mathbb{R}^{4d}\to\mathbb{R}^{2d},\quad(q,x,p,y)\mapsto\left(\frac{q+x} {2},\frac{p+y}{2}\right)\] to yield an explicit integrator on the original phase space. These methods were called 'symplectic-like' 'because they, like standard implicit symplectic integrators, show no drift in the energy error' [7]. This lack of energy drift has been observed in many astrophysical simulations with nonseparable Hamiltonians without explanation, and the method has become quite popular [6, 18]. The order can be increased using composition methods [13]. The following result accounts for the greatly reduced energy drift. **Proposition 1**.: _The \(s\)-stage midpoint projected methods of the form_ \[\varphi_{h}:=\pi\circ\prod_{i=1}^{s}\Phi_{\alpha_{i}h}\] _are equivalent to \(2s+1\)-stage explicit Runge-Kutta methods of at least the same classical order as the underlying composition method. In the three cases_ 1. \(s=1\)_,_ \(\alpha_{1}=1\) _(the standard extended phase space integrator with midpoint projection, of classical order 2);_ 2. \(s=3\)_,_ \((\alpha_{1},\alpha_{2},\alpha_{3})=(\alpha,1-2\alpha,\alpha)\)_,_ \(\alpha=1/(2-2^{1/3})\) _(classical order 4);_ 3. \(s=5\)_,_ \((\alpha_{1},\ldots,\alpha_{5})=(\alpha,\alpha,1-4\alpha,\alpha,\alpha)\)_,_ \(\alpha=1/(4-4^{1/3})\) _(classical order 4)._ _the methods have pseudosymplecticity order \(k:=5\), \(9\), and \(9\) respectively, and pseudosymmetry order \(5\), \(9\), and \(9\) respectively. That is, \(\varphi_{h}^{*}\omega=\omega+\mathcal{O}(h^{k+1})\) and \(\varphi_{h}\circ\varphi_{-h}=id+\mathcal{O}(h^{k+1})\)._ Proof.: We begin by noting that the extended phase space methods do not rely on the partitioning into \((q,p)\) variables, but can be written in an affine-equivariant way that exhibits how they can be applied to any ordinary differential equation. For the ODE \(\dot{z}=f(z)\), \(z\in\mathbb{R}^{n}\), we consider the duplicated (i.e. extended) system \[\begin{split}\dot{z}&=f(\hat{z}),\\ \dot{\hat{z}}&=f(z)\end{split} \tag{2}\] which is separable and can be integrated by splitting and composition. When \(z=(q,y)\), \(\hat{z}=(x,p)\), and \(f(z)=X_{H}(z)\), this yields the method above. When \(f\) preserves \(dz\wedge Jdz\), (2) preserves \(dz\wedge Jd\hat{z}\). 
We write out the method \(\pi\circ\Phi_{h}\) first in \((z,\hat{z})\) variables, with initial conditions \((z_{0},\hat{z}_{0})\): \[z_{1/2}=z_{0}+\frac{1}{2}hf(\hat{z}_{0})\] \[\hat{z}_{1} =\hat{z}_{0}+hf(z_{1})\] \[z_{1} =z_{1/2}+\frac{1}{2}hf(\hat{z}_{1})\] \[\pi(z_{1},\hat{z}_{1}) =(z_{1}+\hat{z}_{1})/2\] Imposing \(\hat{z}_{0}=z_{0}\), this can be written in Runge-Kutta form as \[Z_{1} =z_{0}\] \[Z_{2} =z_{0}+\frac{1}{2}hf(Z_{1})\] \[Z_{3} =z_{0}+hf(Z_{2})\] \[Z_{4} =Z_{2}+\frac{1}{2}hf(Z_{3})\] \[=z_{0}+h\left(\frac{1}{2}f(Z_{1})+\frac{1}{2}f(Z_{2})\right)\] \[z_{1} =(Z_{3}+Z_{4})/2\] \[=z_{0}+h\left(\frac{1}{4}f(Z_{1})+\frac{1}{2}f(Z_{2})+\frac{1}{4 }f(Z_{3})\right)\] This is a 3-stage explicit Runge-Kutta method with Butcher tableau \[\begin{array}{c|ccc}0&0&0&0\\ \frac{1}{2}&\frac{1}{2}&0&0\\ 1&0&1&0\\ \hline&\frac{1}{4}&\frac{1}{2}&\frac{1}{4}\end{array}.\] For this method we compute its B-series and check the pseudosymplecticity conditions2[5, VI.7.3]. There are 1, 1, 1, 3, and 6 conditions respectively at orders 1,..., 5; these are all satisfied. Of the 16 order 6 conditions, 13 are satisfied and 3 are not, thus the method is pseudosymplectic of order 5. For pseudosymmetry, we expand \(\varphi_{h}\circ\varphi_{-h}\) in B-series similarly. Footnote 2: We evaluated the symplecticity conditions \(a(u)a(v)-a(u\circ v)-a(v\circ u)\) where \(u\) and \(v\) are Butcher trees in Mathematica using <<NumericalDifferentialEquationAnalysis' ButcherProduct[u_, v_] := If[ByteCount[v]==0, {[FormalF][u], ReplacePart[v, 1->v[[1]] u] Symplectic[u_, v_] := ButcherPhi[u] ButcherPhi[v] - ButcherPhi[ButcherProduct[u, v]] - ButcherPhi[ButcherProduct[v, u]] The calculation for the other methods proceeds similarly. For \(s=3\) the Butcher tableau is \[\begin{array}{c|cccccccc}0&0&0&0&0&0&0&0\\ \frac{1}{2}\alpha_{1}&\frac{1}{2}\alpha_{1}&0&0&0&0&0&0\\ \alpha_{1}&0&\alpha_{1}&0&0&0&0&0\\ \alpha_{1}+\frac{1}{2}\alpha_{2}&\frac{1}{2}\alpha_{1}&0&\frac{1}{2}\alpha_{1 }+\frac{1}{2}\alpha_{2}&0&0&0&0\\ \alpha_{1}+\alpha_{2}&0&\alpha_{1}&0&\alpha_{2}&0&0&0\\ \alpha_{1}+\alpha_{2}+\frac{1}{2}\alpha_{3}&\frac{1}{2}\alpha_{1}&0&\frac{1}{ 2}\alpha_{1}+\frac{1}{2}\alpha_{2}&0&\frac{1}{2}\alpha_{2}+\frac{1}{2}\alpha_ {3}&0&0\\ \alpha_{1}+\alpha_{2}+\alpha_{3}&0&\alpha_{1}&0&\alpha_{2}&0&\alpha_{3}&0\\ \hline&\frac{1}{4}\alpha_{1}&\frac{1}{2}\alpha_{1}&\frac{1}{4}\left(\alpha_ {1}+\alpha_{2}\right)&\frac{1}{2}\alpha_{2}&\frac{1}{4}\left(\alpha_{2}+ \alpha_{3}\right)&\frac{1}{2}\alpha_{3}&\frac{1}{4}\alpha_{3}\end{array}\] Pseudosymplecticity was introduced by Aubry and Chartier [1], who derive various explicit pseudosymplectic Runge-Kutta methods; the second order method above appears there as a member of a 1-parameter family of 3-stage methods of pseudosymplectic order 4, it being the unique member of that family that is pseudosymplectic of order 5. The numerical examples in the astrophysics literature do not show energy drift. However, the potential drift effect is rather small and we have confirmed numerically in planar Hamiltonian systems that the energy does drift proportionally to \(h^{5}t\) resp. \(h^{9}t\) for the 2nd resp. 4th order methods above. Symmetry also affects time integration and can moderate energy drift [12], so it is possible that pseudosymmetry is having some positive effect as well. The 3-stage 4th order method is known to have large error constants and it is possible that the 5-stage method given here, or some other explicit pseudosymplectic method, may be have advantages in these applications. 
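The equivalence established above is easy to check numerically: the midpoint-projected extended leapfrog is simply the explicit Runge-Kutta method with the 3-stage tableau shown. A minimal Python sketch follows (the pendulum is our own standard test problem, not one used in the paper); monitoring the energy error over long runs exhibits the very slow drift, proportional to \(h^{5}t\) for this method, mentioned above.

```python
import numpy as np

def rk3_midpoint_projected(f, z0, h, n_steps):
    """Explicit RK method with c = (0, 1/2, 1), a21 = 1/2, a32 = 1,
    b = (1/4, 1/2, 1/4), equivalent to the midpoint-projected leapfrog."""
    z = np.array(z0, dtype=float)
    traj = [z.copy()]
    for _ in range(n_steps):
        k1 = f(z)
        k2 = f(z + 0.5 * h * k1)
        k3 = f(z + h * k2)
        z = z + h * (0.25 * k1 + 0.5 * k2 + 0.25 * k3)
        traj.append(z.copy())
    return np.array(traj)

# Pendulum H(q, p) = p^2/2 - cos(q), a standard planar Hamiltonian test problem.
def pendulum(z):
    q, p = z
    return np.array([p, -np.sin(q)])

traj = rk3_midpoint_projected(pendulum, [1.0, 0.0], 0.1, 10000)
H = 0.5 * traj[:, 1] ** 2 - np.cos(traj[:, 0])
print("max |H - H0| over 10^4 steps:", np.max(np.abs(H - H[0])))
```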
On the other hand, energy behaviour is not the only manifestation of symplecticity and the choice of a symplectic vs a pseudosymplectic integrator may depend on the application. ## 3 The symmetric projection method Ohsawa [14] defined the _extended phase space integrator with symmetric projection_ as follows. Let \[D=\begin{bmatrix}I_{d}&-I_{d}&0&0\\ 0&0&I_{d}&-I_{d}\end{bmatrix},\quad\mathcal{N}=\ker D=\{(q,q,p,p)\colon(q,p) \in\mathbb{R}^{2d}\}.\] Given an extended phase space integrator \(\Phi_{h}\colon\mathbb{R}^{4d}\to\mathbb{R}^{4d}\) and initial condition \(z_{0}=(q_{0},p_{0})\in\mathbb{R}^{2d}\), the integrator is the map \(z_{0}\mapsto z_{1}\) defined by 1. \(\zeta_{0}:=(q_{0},q_{0},p_{0},p_{0})\) 2. Find \(\mu\in\mathbb{R}^{2d}\) such that \(\Phi_{h}(\zeta_{0}+D^{\top}\mu)+D^{\top}\mu\in\mathcal{N}\) 3. \(\hat{\zeta}_{0}:=\zeta_{0}+D^{\top}\mu\) 4. \(\hat{\zeta}_{1}:=\Phi_{h}(\hat{\zeta}_{0})\) 5. \(\zeta_{1}=(q_{1},q_{1},p_{1},p_{1}):=\hat{\zeta}_{1}+D^{\top}\mu\) 6. \(z_{1}=(q_{1},p_{1})\) The method is well-defined, symmetric, and symplectic [14]. Jayawardana and Ohsawa [8] further show that the method preserves arbitrary quadratic invariants. Together these results give a strong indication that the method may be a symplectic B-series method. We show below that this is true and that it is in fact equivalent to a monoimplicit Runge-Kutta method, a class introduced by Cash [3] that have only a single implicit stage. (The Simpson-AVF method \(z_{1}=z_{0}+\frac{1}{6}h(f(z_{0})+4f((z_{0}+z_{1})/2)+f(z_{1}))\) is an example [4].) In a monoimplicit Runge-Kutta method the \(A\) matrix of the Butcher tableau takes the form of a rank one matrix plus a matrix of zero spectral radius. The extended phase space integrator with symmetric projection is also an instance of the class of _extended Runge-Kutta methods_ that we now define. **Definition 1**.: _For ODEs \(\dot{z}=f(z)\), \(z\in\mathbb{R}^{d}\), an extended Runge-Kutta method is defined by the equations_ \[Z_{i}=z_{0}+h\sum_{j=1}^{m}a_{ij}k_{j},\quad i=1,\ldots,s\] \[z_{1}= z_{0}+h\sum_{i=1}^{m}b_{i}k_{i}\] \[k_{i}=f(Z_{i}),\quad i=1,\ldots,s\] \[0=\sum_{j=1}^{m}d_{ij}k_{j},\quad i=1,\ldots,m-s\] _where the matrix \(d_{ij}\) has full rank._ Recall the central result of Munthe-Kaas et al. [9]: Let \(\Phi=\{\Phi_{n}\}_{n\in\mathbb{N}}\) be an integration method, defined for all vector fields on all dimensions \(n\). Then \(\Phi\) is a B-series method if and only if the property of affine equivariance is fulfilled: if \(a(x):=Ax+b\) is an affine map from \(\mathbb{R}^{m}\) to \(\mathbb{R}^{n}\), \(f\) a vector field on \(\mathbb{R}^{m}\), and \(g\) a vector field on \(\mathbb{R}^{n}\) such that \(g(Ax+b)=Af(x)\), then \(a\circ\Phi_{m}(f)=\Phi_{n}(f)\circ a\). **Proposition 2**.: _Extended Runge-Kutta methods are affine equivariant._ **Proposition 3**.: _Let \(M\) be the \(m\times m\) matrix defined by_ \[M_{ij}=b_{i}b_{j}-b_{i}a_{ij}-b_{j}a_{ji}\] _for \(i,j=1,\ldots,m\), where we have defined \(a_{ij}=0\) for \(i>s\). Let \(V\) be an \(m\times s\) matrix whose columns form a basis for the nullspace of \(d\). If \(b_{i}=0\) for \(i=s+1,\ldots,m\), and \(V^{T}MV=0\), then the extended Runge-Kutta method with parameters \(a_{ij}\), \(b_{i}\), and \(d_{ij}\), is quadratic-preserving and symplectic._ Proof.: Suppose \(f\) has a quadratic first integral \(z^{\top}Cz\). 
Following the standard proof for Runge-Kutta methods, \[z_{1}^{\top}Cz_{1}-z_{0}^{\top}Cz_{0}=2h\sum_{j+1}^{m}b_{i}z_{0}^{\top}Ck_{i}+h^{2 }\sum_{i,j=1}^{m}M_{ij}k_{i}^{\top}Ck_{j}.\] Using the condition on the \(b_{i}\), and expressing \(k_{1},\ldots,k_{s}\) in the basis whose columns are \(V\), i.e. \(k_{i}=\sum_{j=1}^{s}V_{ij}\hat{k}_{j}\), gives the result. **Proposition 4**.: _Extended phase space integrators with symmetric projection, where the extended method \(\Psi\) is a composition method, are extended Runge-Kutta methods and can be written as monoimplicit symplectic Runge-Kutta methods._ **Corollary 1**.: 1. _Extended phase space integrators with symmetric projection methods preserve arbitrary affine symmetries, quadratic integrals, and constant symplectic structures when there are any._ 2. _Monoimplicit symplectic Runge-Kutta methods of all orders exist._ Proof.: We again write the extended system as \[\dot{z} =f(\hat{z})\] \[\dot{\hat{z}} =f(z).\] Let \(\Delta\colon z\mapsto(z,z)\); \(S_{\mu}\colon(z,\hat{z})\mapsto(z+\mu,z-\mu)\); \(\pi\colon(z,\hat{z})\to z.\) In these variables the symmetric projection method takes the form \[\pi\circ S_{\mu}\circ\Psi\circ S_{\mu}\circ\Delta\] where \(\mu\) is determined by the condition that \(S_{\mu}\circ\Psi_{h}\circ S_{\mu}\in\mathcal{N}\). We first illustrate the complete construction for the case that \(\Psi_{h}\) is the leapfrog method \(\Phi_{h}.\) Its three substeps are then \[z^{*} =z_{0}+\mu+\frac{1}{2}hf(z_{0}-\mu)\] \[\hat{z}^{*} =z_{0}-\mu+hf(z^{*})\] \[z^{**} =z^{*}+\frac{1}{2}hf(\hat{z}^{*})\] Defining the three stages \(Z_{1}\), \(Z_{2}\), and \(Z_{3}\) as the \(z\)-values at which \(f\) is evaluated, and defining \(hk_{4}=\mu\), we have \[Z_{1} =z_{0}-hk_{4}\] \[Z_{2} =z_{0}+h\left(\frac{1}{2}k_{1}+k_{4}\right)\] \[Z_{3} =z_{0}+h(k_{2}-k_{4})\] The condition \(S_{\mu}\circ\Psi\circ S_{\mu}\in\Delta(\mathbb{R}^{n})\) is \[z^{**}+\mu=\hat{z}^{*}-\mu\] which leads to the constraint \[-\frac{1}{2}k_{1}+k_{2}-\frac{1}{2}k_{3}-4k_{4}=0.\] The update equation is \(z_{1}=z^{**}+\mu\) or \[z_{1}=z_{0}+h\left(\frac{1}{4}k_{1}+\frac{1}{2}k_{2}+\frac{1}{4}k_{3}\right).\] Thus the parameters for this method are \[A=\begin{bmatrix}0&0&0&-1\\ \frac{1}{2}&0&0&1\\ 0&1&0&-1\\ 0&0&0&0\end{bmatrix}\] \[b=\begin{bmatrix}\frac{1}{4}&\frac{1}{2}&\frac{1}{4}&0\end{bmatrix}\] \[d=\begin{bmatrix}-\frac{1}{2}&1&-\frac{1}{2}&-4\end{bmatrix}.\] (For the null space of \(d\) we take \[V=\begin{bmatrix}2&-1&-8\\ 1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix}.\] A direct calculation shows that \[M=\frac{1}{16}\begin{bmatrix}1&-2&1&4\\ -2&4&-2&-8\\ 1&-2&1&4\\ 4&-8&4&0\end{bmatrix}\] and \(V^{\top}MV=0\), confirming that the method is quadratic-preserving.) The extra stage \(k_{4}\) can be explicitly eliminated using the constraint, giving the 3-stage monoimplicit symplectic Runge-Kutta method with tableau For the general case, let \(\Psi_{h}\) be the composition method with time steps given by parameters \(a_{1},\ldots,a_{s}\), i.e. 
\[\Psi_{h}=\exp(a_{s}hX_{\hat{H}_{B}})\exp(a_{s-1}hX_{\hat{H}_{A}})\ldots\exp(a_{2 }hX_{\hat{H}_{B}})\exp(a_{1}hX_{\hat{H}_{A}}).\] Writing out the stages as above, and eliminating the constraint \(\mu\), leads to an \(s\)-stage monoimplicit Runge-Kutta method with parameters \[A=L+\frac{1}{4}uv^{\top},\quad b_{i}=\frac{1}{2}a_{i}\] where \[L=\begin{bmatrix}0&\ldots&&&&\\ a_{1}&0&\ldots&&\\ 0&a_{2}&0&\ldots&&\\ a_{1}&0&a_{3}&0&\ldots&\\ 0&a_{2}&0&a_{4}&0&\ldots\\ &&\ddots&&\ddots&\\ a_{1}&0&a_{3}&\ldots&&a_{s-1}&0\end{bmatrix},\quad u_{i}=(-1)^{i},\quad v_{j}= a_{j}(-1)^{j}.\] Therefore \[b_{i}a_{ij}=\frac{1}{2}b_{i}b_{j}c_{j-i}\] where \[c_{k}=\begin{cases}1&k\text{ even}\\ -1&k\text{ odd},k>0\\ 3&k\text{ odd},k<0\end{cases}\] and thus we have \[b_{i}b_{j}-b_{i}a_{ij}-b_{j}a_{ji}=0\] for all \(i\) and \(j\), which is the condition for a Runge-Kutta method to be symplectic. ## 4 Discussion The results here have been established by direct calculation, but the methods appear quite natural and intrinsically defined. One could seek direct geometric proofs, not relying on B-series, of the pseudosymplecticity and pseudosymmetry orders for the midpoint projection method when the extended method is an arbitrary symplectic integrator. Likewise, for the symmetric projection method, the diagonal \(\mathcal{N}\) is a symplectic subspace of the extended symplectic space, suggesting an approach using the symplectic geometry of constraints [11]. The question mentioned earlier, of the best pseudosymplectic method to use in applications, and any potential issues arising from nonsymplecticity, should be examined. Finally, we suggest determining the entire set of monoimplicit symplectic Runge-Kutta methods and their relative merits. #### Dedication A preliminary version of this work was presented at ANODE 2023 in honour of John Butcher's 90th birthday. It was therefore very pleasing and appropriate to find that, as the work developed, it turned out to involve Runge-Kutta methods and Butcher series so intimately. Happy birthday, John!
2305.09801
Active-matter isomorphs in the size-polydisperse Ornstein-Uhlenbeck Lennard-Jones model
This paper studies size-polydisperse Lennard-Jones systems described by active Ornstein-Uhlenbeck particle dynamics. The focus is on the existence of isomorphs (curves of invariant structure and dynamics) in the model's three-dimensional phase diagram. Isomorphs are traced out from a single steady-state configuration by means of the configurational-temperature method. Good invariance of the reduced-unit radial distribution function and the mean-square displacement as a function of time is demonstrated for three uniform-distribution polydispersities, 12%, 23%, and 29%. Comparing to active-matter isomorphs generated by the analytical direct-isomorph-check method, the latter give somewhat poorer invariance of the structure, but better invariance of the dynamics. We conclude that both methods can be used to quickly get an overview of the phase diagram of polydisperse AOUP models involving a potential-energy function obeying the hidden-scale-invariance property required for isomorph theory to apply.
Daniel Jespersen, Lorenzo Costigliola, Jeppe C. Dyre, Shibu Saw
2023-05-16T20:51:55Z
http://arxiv.org/abs/2305.09801v3
# Active-matter isomorphs in the size-polydisperse Ornstein-Uhlenbeck Lennard-Jones model ###### Abstract This paper studies size-polydisperse Lennard-Jones systems described by active Ornstein-Uhlenbeck particle dynamics. The focus is on the existence of isomorphs (curves of invariant structure and dynamics) in the model's three-dimensional phase diagram. Isomorphs are traced out from a single steady-state configuration by means of the configurational-temperature method. Good invariance of the reduced-unit radial distribution function and the mean-square displacement as a function of time is demonstrated for three uniform-distribution polydispersities, 12%, 23%, and 29%. Comparing to active-matter isomorphs generated by the analytical direct-isomorph-check method, the latter give somewhat poorer invariance of the structure, but better invariance of the dynamics. We conclude that both methods can be used to quickly get an overview of the phase diagram of polydisperse AOUP models involving a potential-energy function obeying the hidden-scale-invariance property required for isomorph theory to apply. Introduction Active matter involves particles that absorb energy from their environment and continuously perform motion dissipated into heat. This kind of motion, which in contrast to standard Newtonian or Brownian dynamics breaks time-reversal invariance [1; 2], is relevant not only for describing biological systems ranging from bacteria to flocking birds [3; 4; 5; 6; 7; 8; 9; 10], but also for microscopic artificial microswimmers and active Janus particles. Many different approaches to the description of active matter exist, depending on whether point particles or particles with directional coordinates are considered and depending on the precise mechanism by which the particles autonomously perform mechanical work [5; 6; 8; 9; 11; 12]. Point-particle active-matter models include the Active Brownian Particle (ABP) [13; 14] and Active Ornstein-Uhlenbeck Particle (AOUP) models; these models have been used to describe the motion, e.g., in active colloids [15]. The AOUP model, which is simpler than the ABP model and has one less parameter, can be used to approximate ABP dynamics. Moreover, the AOUP model offers more possibilities to obtain theoretical predictions [16; 17]; this is the model we choose to study in the present paper. Specifically, the AOUP model involves point particles subject to a colored-noise Langevin dynamics [18; 19; 20; 16]. In view of the variability of biological and other active systems, one cannot expect all particles to be identical. As a consequence, polydispersity has recently come into focus in connection with active-matter models [21; 22; 23; 24]. There is also currently great deal of interest in passive polydisperse systems coming from, in particular, their use in SWAP-equilibrated supercooled liquids [25], in which context the question arises of how similar the dynamics of small and large particles are [26; 27; 28]. Finally, it is worth mentioning that active matter at high density has recently been studied inspired by biological materials such as cells, both for monodisperse [29; 24] and polydisperse cases [30], showing emerging collective phenomena with the spontaneous occurrence of spatial velocity correlations. This paper studies the size-polydisperse AOUP Lennard-Jones (LJ) model. 
We recently demonstrated the existence of lines of approximately invariant structure and dynamics in the phase diagram of a binary LJ AOUP model; such lines are referred to as "active-matter isomorphs" [31; 32; 33]. Inspired by the fact that the introduction of polydispersity into ordinary (passive) Newtonian models does not affect the existence of isomorphs [34], the present paper investigates whether the existence of isomorphs also survives the introduction of polydispersity into the AOUP model. This is worthwhile to investigate since the existence of isomorphs makes it possible to quickly establish an overview of the phase diagram because only a single point on each isomorph needs to be simulated. ## II The AOUP equation of motion and simulation details We consider a system of \(N\) particles in volume \(V\) and define the number density by \(\rho\equiv N/V\). If the potential-energy function is denoted by \(U(\mathbf{R})\) in which \(\mathbf{R}\equiv(\mathbf{r}_{1},...,\mathbf{r}_{N})\) is the vector of all particle coordinates, the AOUP equation of motion [18; 19; 20; 16] is \[\dot{\mathbf{R}}\,=\,-\mu\nabla U(\mathbf{R})\,+\,\boldsymbol{\eta}(t)\,. \tag{1}\] Here \(\mu\) is the mobility (velocity over force); the noise vector \(\boldsymbol{\eta}(t)\) is colored according to an Ornstein-Uhlenbeck process, i.e., is a Gaussian stochastic process characterized by \[\langle\eta_{i}^{\alpha}(t)\eta_{j}^{\beta}(t^{\prime})\rangle\,=\,\delta_{ij }\delta_{\alpha\beta}\frac{D}{\tau}\,e^{-|t-t^{\prime}|/\tau} \tag{2}\] in which \(i\) and \(j\) are particle indices, \(\alpha\) and \(\beta\) are \(xyz\) spatial indices, and \(D\) and \(\tau\) are constants, respectively, of dimension length squared over time and time. We are interested in how the physics is affected when the density is changed, specifically in determining whether approximately invariant physics can be obtained by adjusting \(D\) and \(\tau\) properly with density (\(\mu\) is regarded as a material constant throughout). For the binary AOUP model this problem was studied in Ref. [32], which demonstrated how to change \(D\) and \(\tau\) with density in order to achieve invariant structure and dynamics to a good approximation. The question is whether this is possible also for systems with large size polydispersity. In the AOUP model "reduced" quantities are defined by using \(l_{0}=\rho^{-1/3}\) as the length unit and \(t_{0}=\tau\) as the time unit [32]. Reduced quantities are marked by a tilde. When we speak about approximately invariant structure and dynamics, it refers to this particular state-point-dependent unit system. We studied a system of \(N=5000\) particles in three dimensions interacting by Lennard-Jones (LJ) pair potentials, which between particles \(i\) and \(j\) are given by \(v_{ij}(r)=4\varepsilon\left[(r/\sigma_{ij})^{-12}-(r/\sigma_{ij})^{-6}\right]\) with \(\sigma_{ij}=(\sigma_{i}+\sigma_{j})/2\) (Lorentz-Berthelot mixing rule) and \(\varepsilon=1.0\). The particle sizes \(\sigma_{i}\) are distributed according to a uniform distribution with unity average. As usual, the polydispersity \(\delta\) is defined by \(\delta^{2}=(\langle\sigma^{2}\rangle-\langle\sigma\rangle^{2})/\langle\sigma\rangle^{2}\), which in our case reduces to \(\delta^{2}=\langle\sigma^{2}\rangle-1\). For a uniform distribution \(\delta\) cannot exceed \(1/\sqrt{3}\cong 58\%\). 
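As a concrete illustration of the dynamics defined by Eqs. (1) and (2), the sketch below advances the particle positions with a simple forward Euler step while updating the Ornstein-Uhlenbeck noise with its exact one-step solution. This is only a minimal illustration of the equation of motion; the integration scheme, force routine, and parameter values are our own placeholder choices and are not taken from the simulations reported here.

```python
import numpy as np

def ou_update(eta, D, tau, dt, rng):
    """Exact one-step update of the Ornstein-Uhlenbeck noise of Eq. (2);
    each component has stationary variance D/tau and correlation time tau."""
    a = np.exp(-dt / tau)
    return a * eta + np.sqrt(D / tau * (1.0 - a * a)) * rng.standard_normal(eta.shape)

def aoup_step(R, eta, grad_U, mu, D, tau, dt, rng):
    """One AOUP step: forward Euler for Eq. (1) plus the OU noise update."""
    eta = ou_update(eta, D, tau, dt, rng)
    R = R + dt * (-mu * grad_U(R) + eta)
    return R, eta

# Minimal usage with a placeholder harmonic force (not the LJ system studied here)
rng = np.random.default_rng(0)
grad_U = lambda R: R          # U = |R|^2 / 2, for illustration only
R = np.zeros((10, 3))         # 10 particles in three dimensions
eta = np.zeros_like(R)
for _ in range(1000):
    R, eta = aoup_step(R, eta, grad_U, mu=1.0, D=1.0, tau=1.0, dt=1e-3, rng=rng)
```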
The three polydispersities studied below are \(\delta\cong 11.5\%\), \(23.1\%\), and \(28.9\%\), corresponding to the size ranges listed in Table 1 (for brevity these are henceforth reported as \(\delta=12\%\), \(23\%\), and \(29\%\)). Note that the study entails substantially different particle sizes, with the ratio of largest to smallest particle volume equal to \(27\) in the \(29\%\) polydispersity case. All simulations used a shifted-force cutoff [35] of the \(ij\) particle interaction at the pair distance \(r=2.5\sigma_{ij}\) and the time step \(\Delta t=\Delta\tilde{t}/(D\ \rho^{2/3})\) in which \(\Delta\tilde{t}=0.4\)[32]. The active-matter simulations were carried out on GPU cards using a home-made code, the MD simulations used RUMD [36]. ## III Structure and dynamics along an isochore Before discussing results for the variation of structure and dynamics along active-matter isomorphs, we briefly present analogous results along an isochore, i.e., for state points of the same density. This sets the stage by illustrating that structure and dynamics do vary significantly throughout the \((\rho,D,\tau)\) AOUP phase diagram. Structure is studied by means of the average radial distribution function (RDF) denoted by \(g(r)\). In Fig. 1(a) RDFs are shown along the \(\rho=0.85\) isochore for the \(\delta=29\%\) case, with values of \(D\) and \(\tau\) taken from the \(\delta=29\%\) DIC active-matter isomorph studied below. Figure 1(b) shows the same data in reduced coordinates, which in this case simply involves in a common scaling of the x-coordinate. The parameters used in the simulations are listed in insets of the figures (more decimals of these parameters are provided in the Appendix). We find a substantial structure variation along the isochore. The same applies for the mean-square displacement (MSD) as a function of the time \(t\), \(\langle\Delta r^{2}(t)\rangle\), which is plotted in a log-log plot in (c) LJ units and (d) reduced units. The short-time slope is two, reflecting the "ballistic" regime of the AOUP model, which is not present in ordinary Langevin dynamics [16; 18; 19; 20] because it results from short-time noise correlations resulting in an inertia-like persistence of the direction of motion. At long times the well-known diffusive behavior leading to unity slope is observed. We note that the dynamics varies significantly along the isochore, whether or not reported in reduced units. \begin{table} \begin{tabular}{|c|c|c|} \hline \(\delta\) & \(\sigma\) range & \(\sigma_{max}/\sigma_{min}\) \\ \hline \(12\%\) & \(0.80-1.20\) & \(1.50\) \\ \hline \(23\%\) & \(0.60-1.40\) & \(2.33\) \\ \hline \(29\%\) & \(0.50-1.50\) & \(3.00\) \\ \hline \end{tabular} \end{table} Table 1: Values of the polydispersity \(\delta\), \(\sigma\) range, and ratio between largest and smallest particle sizes for the three cases of uniform polydispersity studied. ## IV Structure and dynamics along \(T_{\rm conf}\)-generated active-matter isomorphs Reference 32 used the _configurational temperature_\(T_{\rm conf}\) for determining how to change the AOUP model parameters \(D\) and \(\tau\) with density in order to achieve (approximately) invariant reduced structure and dynamics. 
The assumption is that \(k_{B}T_{\rm conf}\) is the relevant characteristic energy scale where \(T_{\rm conf}\) is defined by \(k_{B}T_{\rm conf}\equiv\langle(\nabla U)^{2}\rangle/\langle\nabla^{2}U\rangle\)[37; 38; 39] in which \(k_{B}\) is the Boltzmann constant, \(\nabla\) is the gradient operator in the \(3N\)-dimensional configuration space, and the sharp brackets denote standard canonical-ensemble averages. In the thermodynamic limit the relative fluctuations of both the numerator and the denominator of \(T_{\rm conf}\) go to zero, which implies that it is enough to consider a single configuration \({\bf R}_{0}\) using the expression \(k_{B}T_{\rm conf}\cong(\nabla U({\bf R}_{0}))^{2}/\nabla^{2}U({\bf R}_{0})\). The reasoning of Ref. 32 may be summarized as follows. Adopting \(e_{0}=k_{B}T_{\rm conf}\) as the energy unit supplementing the above introduced length and time units (\(l_{0}=\rho^{-1/3}\); \(t_{0}=\tau\)), we first note that the three quantities \(\mu t_{0}e_{0}/l_{0}^{2}\), \(Dt_{0}/l_{0}^{2}\), and \(\tau/t_{0}\) are dimensionless. Assuming that these quantities cannot change with varying density if the structure and dynamics are invariant in reduced units, we conclude that \(\mu\propto l_{0}^{2}/(t_{0}e_{0})=\rho^{-2/3}/(\tau k_{B}T_{\rm conf})\) and \(D\propto l_{0}^{2}/t_{0}=\rho^{-2/3}/\tau\). Figure 1: Average radial distribution function (RDF) and mean-square displacement (MSD) for state points on the \(\rho=0.85\) isochore of the \(\delta=29\%\) polydispersity LJ AOUP model (the \(D\) and \(\tau\) values are those of the below studied \(\delta=29\%\) active-matter DIC isomorph). (a) and (b) show the RDF as a function of \(r\) and of the reduced pair distance \(\tilde{r}\), respectively (the curves are the same because \(\tilde{r}\propto r\) along an isochore). We see a substantial variation in the structure, with the most pronounced structure found for the smallest values of the model parameter \(D\) (black curves). The MSD likewise shows no collapse along the isochore, whether plotted (c) as a function of the time \(t\) or (d) as a function of the reduced time \(\tilde{t}\). The slowest motion is found for the smallest \(D\) (black curves). Since \(\mu\) is assumed to be a material constant, this leads to \(\tau\propto\rho^{-2/3}/k_{B}T_{\rm conf}\) and \(D\propto k_{B}T_{\rm conf}\), i.e., to the following recipe for how \(D\) and \(\tau\) changes with density in terms of their values \(D_{0}\) and \(\tau_{0}\) at a reference state point of density \(\rho_{0}\): \[D(\rho) = D_{0}\ \frac{T_{\rm conf}(\rho)}{T_{\rm conf}(\rho_{0})}\,,\] \[\tau(\rho) = \tau_{0}\left(\frac{\rho_{0}}{\rho}\right)^{2/3}\frac{T_{\rm conf }(\rho_{0})}{T_{\rm conf}(\rho)}\,. \tag{3}\] For a large system \(T_{\rm conf}(\rho_{0})\) may be evaluated from a single (steady-state) configuration, \(T_{\rm conf}(\rho_{0})\cong T_{\rm conf}({\bf R}_{0})\). Reference 32 demonstrated that this approximation introduces a negligible error for typical system sizes. In order to find \(T_{\rm conf}(\rho)\) one scales \({\bf R}_{0}\) uniformly to the density \(\rho\), i.e., substitutes \({\bf R}=(\rho_{0}/\rho)^{1/3}{\bf R}_{0}\) into the configurational temperature expression. This leads to \[D(\rho) = D_{0}\ \frac{T_{\rm conf}\left[(\rho_{0}/\rho)^{1/3}{\bf R}_{0} \right]}{T_{\rm conf}({\bf R}_{0})}\,,\] \[\tau(\rho) = \tau_{0}\left(\frac{\rho_{0}}{\rho}\right)^{2/3}\frac{T_{\rm conf }({\bf R}_{0})}{T_{\rm conf}\left((\rho_{0}/\rho)^{1/3}{\bf R}_{0}\right)}\,. 
\tag{4}\] We used these equations for generating three active-matter isomorphs starting in each case from the parameter values \(D=1100\) and \(\tau=10\) at the reference densities \(\rho_{0}=\)0.99, 0.91, and 0.85, respectively, for the polydispersities \(\delta=12\%\), \(23\%\), and \(29\%\) (the reference densities were chosen to have the same virial, i.e., give the same contributions to the pressure coming from the interactions). Results for the variation of the average RDF are given in Fig. 2. The left column reports the RDF for the three polydispersities as functions of the pair distance \(r\), the right column shows the same data as functions of the reduced pair distance \(\tilde{r}\). In the latter case we find a good, but not perfect, data collapse and conclude that the average structure is approximately invariant along the active-matter isomorphs. In view of the fact that the density varies by no less than a factor of two, this is not trivial. Figure 3 shows analogous data for the MSD plotted in the same way with the left column giving the MSD as a function of time and the right column giving the same data in reduced units. There is a good data collapse with, however, a somewhat faster motion at the higher densities. Figure 2: Structure probed along \(T_{\text{conf}}\)-generated active-matter isomorphs. (a), (c), and (e) show the average RDFs for polydispersity \(\delta=12\%\), \(23\%\), and \(29\%\), respectively, while (b), (d), and (f) show the same data as functions of the reduced pair distance \(\tilde{r}\). In all three cases we see a good collapse of the reduced RDF along the active-matter isomorph. ## V Comparing to Direct-Isomorph-Check Generated Isomorphs Above we demonstrated good invariance of the structure and dynamics along active-matter isomorphs generated by the \(T_{\text{conf}}\) method [32]. That method is easy to use and efficient because it requires just a single steady-state configuration at the reference state point in order to trace out the corresponding active-matter isomorph in the relevant phase diagram, _in casu_ the \((\rho,D,\tau)\) diagram of the AOUP model. An alternative method for tracing out active-matter isomorphs is the analytical "direct isomorph-check" (DIC) method, which in Appendix A of Ref. [32] was shown to result in somewhat better isomorph-invariance of the dynamics for the AOUP Kob-Andersen binary LJ model. Consider first a standard passive Newtonian systems involving LJ pair interactions of any kind, i.e., single-component, binary, or polydisperse systems, defined by some mixing rule. For such a system the analytical DIC recipe for tracing out a standard equilibrium isomorph [40; 41] is Figure 3: Dynamics probed along \(T_{\text{conf}}\)-generated active-matter isomorphs. (a), (c), and (e) show the MSDs for polydispersity \(\delta=12\%\), \(23\%\), and \(29\%\), respectively, as functions of time, while (b), (d), and (f) show the same data in reduced units. There is a good, but not perfect, collapse of the reduced MSD along the active-matter isomorphs. \[\frac{h(\rho)}{T}\,=\,\mbox{Const.} \tag{5}\] Here \(h(\rho)\) is the following function of density [40; 41] \[h(\rho)\,=\,\left(\frac{\gamma_{0}}{2}-1\right)\left(\frac{\rho}{\rho_{0}} \right)^{4}-\left(\frac{\gamma_{0}}{2}-2\right)\left(\frac{\rho}{\rho_{0}} \right)^{2} \tag{6}\] in which \(\rho_{0}\) is the reference-state-point density and \(\gamma_{0}\) is the density-scaling exponent at the reference state point. 
The latter quantity can be determined numerically by means of \[\gamma_{0}\,=\,\frac{\langle\Delta U\Delta W\rangle}{\langle(\Delta U)^{2} \rangle}\,. \tag{7}\] in which \(\Delta W\) and \(\Delta U\) are the deviations from the equilibrium values of virial and potential energy, respectively, and angular brackets denote \(NVT\) equilibrium averages [31; 42]. The systemic temperature \(T_{\rm s}({\bf R})\) is defined as the temperature of the corresponding thermal-equilibrium Newtonian system at the state point with the density of the configuration \({\bf R}\) and average potential energy equal to \(U({\bf R})\)[43]. In the thermodynamic limit of any system (passive or active, equilibrium or non-equilibrium) fluctuations in \(T_{\rm s}({\bf R})\) go to zero, implying that one has at any time a well-defined systemic temperature \(T_{\rm s}\). For, e.g., a driven passive or an active-matter system, a "systemic isomorph" is defined as a curve in the \((\rho,T_{\rm s})\) phase diagram identical to an isomorph in the standard equilibrium Newtonian \((\rho,T)\) phase diagram [43]. Thus in the analytical DIC, the systemic-temperature's variation with density is given by \[\frac{h(\rho)}{T_{\rm s}(\rho)}\,=\,\mbox{Const.}\,, \tag{8}\] i.e., \(T_{\rm s}(\rho)\propto h(\rho)\). The analytical DIC method for generating an active-matter isomorph of the AOUP model is arrived at by replacing the configurational temperature in Eq. (3) by the systemic temperature \(T_{\rm s}\) (this procedure is justified in Ref. [32]). Via Eq. (8) this leads to \[D(\rho) = D_{0}\ \frac{h(\rho)}{h(\rho_{0})}\,,\] \[\tau(\rho) = \tau_{0}\left(\frac{\rho_{0}}{\rho}\right)^{2/3}\frac{h(\rho_{0} )}{h(\rho)}\,. \tag{9}\] Table 2 reports the systemic temperatures at the reference state points of the three polydispersities studied. As mentioned, the reference densities were chosen to have the same virial; we see that they also have almost the same systemic temperature. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\rho_{0}\) & \(\delta\) & \(T_{\rm s}\) & \(\langle U\rangle\) \\ \hline 0.990 & 12\% & 0.96 & \(-\)4.455 \\ \hline 0.905 & 23\% & 0.98 & \(-\)4.447 \\ \hline 0.850 & 29\% & 1.00 & \(-\)4.321 \\ \hline \end{tabular} \end{table} Table 2: Systemic temperature \(T_{\rm s}\) and average potential energy \(\langle U\rangle\) at the reference densities \(\rho_{0}\) of the three polydisperse systems studied. In all three cases the values of the AOUP parameters at the reference densities are \(D=1100\) and \(\tau=10\). Fig. 4 shows the active-matter isomorph obtained from the \(T_{\rm conf}\) method (full curves) and the analytical DIC method (dashed curves), starting at the reference state point \((\rho,D,\tau)=(\rho_{0},1100,10)\) in which the reference density is 0.99, 0.91, and 0.85, respectively, for \(\delta=12\%\), 23%, and 29%. The two methods for generating isomorphs result in visibly different curves; thus there is more than 50% difference in \(D\) and \(\tau\) at the largest density in the 29% polydispersity case (green curves). How different are these active-matter isomorphs when it comes to average RDF and MSD data collapse? The RDF case is investigated in Fig. 5, which shows that the structure is somewhat more invariant along the \(T_{\rm conf}\)-generated active-matter isomorphs than along the DIC-generated isomorphs, albeit this is a minor effect because in both cases the structure is fairly invariant. The differences are most pronounced at higher polydispersity. 
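The analytical DIC recipe is easy to put into practice because Eqs. (6) and (9) only require the reference-state-point quantities \(\rho_{0}\), \(D_{0}\), \(\tau_{0}\) and the density-scaling exponent \(\gamma_{0}\) obtained from Eq. (7). The Python sketch below evaluates \(D(\rho)\) and \(\tau(\rho)\) along a DIC active-matter isomorph; the value used for \(\gamma_{0}\) is a placeholder, since the actual exponent must be computed from a passive \(NVT\) simulation at the reference state point.

```python
import numpy as np

def h_of_rho(rho, rho0, gamma0):
    """Eq. (6): density-scaling function h(rho) for Lennard-Jones systems."""
    x = (rho / rho0) ** 2
    return (gamma0 / 2 - 1) * x ** 2 - (gamma0 / 2 - 2) * x

def dic_isomorph(rho, rho0, D0, tau0, gamma0):
    """Eq. (9): analytical direct-isomorph-check recipe for the AOUP
    parameters D and tau along an active-matter isomorph."""
    ratio = h_of_rho(rho, rho0, gamma0) / h_of_rho(rho0, rho0, gamma0)
    D = D0 * ratio
    tau = tau0 * (rho0 / rho) ** (2.0 / 3.0) / ratio
    return D, tau

# Reference values of Table 2 for the 29% polydispersity case; gamma0 is a
# placeholder and must in practice be estimated from Eq. (7).
rho0, D0, tau0, gamma0 = 0.85, 1100.0, 10.0, 5.0
for rho in np.linspace(rho0, 2 * rho0, 5):
    D, tau = dic_isomorph(rho, rho0, D0, tau0, gamma0)
    print(f"rho = {rho:.3f}   D = {D:10.1f}   tau = {tau:8.4f}")
```

The corresponding \(T_{\rm conf}\)-based recipe of Eq. (4) follows the same pattern, with the ratio of \(h(\rho)\) values replaced by the ratio of configurational temperatures evaluated on the uniformly scaled reference configuration.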
Figure 4: Active-matter isomorphs for the polydispersities \(\delta=12\%\), 23%, and 29%, generated from the reference state points by two different methods, the \(T_{\rm conf}\) method of Sec. III (full curves) and the analytical direct-isomorph-check (DIC) method (dashed curves). The isomorphs are visibly different. Figure 6 reports results for the MSD. Here we reach the opposite conclusion: the DIC method results in a somewhat better data collapse than the \(T_{\text{conf}}\) method. The same conclusion was reached for the binary Kob-Andersen AOUP model in Ref. [32] (that did not investigate the average RDF). Figure 5: Comparing the degree of structural invariance along \(T_{\text{conf}}\)–generated and DIC-generated active-matter isomorphs. (a), (c), and (e) show the reduced average RDFs for polydispersity \(\delta=12\%\), \(23\%\), and \(29\%\), along the \(T_{\text{conf}}\)-generated isomorphs, while (b), (d), and (f) show the corresponding reduced average RDFs along the DIC-generated isomorphs. There is a somewhat better data collapse along the \(T_{\text{conf}}\)–generated isomorphs. Figure 6: Comparing the degree of invariance of the dynamics along the \(T_{\rm conf}\)-generated and DIC-generated active-matter isomorphs. (a), (c), and (e) show the reduced MSDs for polydispersity \(\delta=12\%\), \(23\%\), and \(29\%\), along the \(T_{\rm conf}\)-generated isomorphs, while (b), (d), and (f) show the corresponding reduced MSDs along the DIC-generated isomorphs. There is a better data collapse along the DIC-generated isomorphs. ## VI Role of smallest and largest particles To illuminate why the structure is not always isomorphism invariant, we studied for the 29% polydispersity simulation data the structure and dynamics of the 20% smallest and largest particles along the DIC isomorph (Fig. 7). The RDF is here defined by limiting the central particle to be either among the smallest or among the largest 20% and counting only neighboring particles of the same kind. In both cases, the peaks are narrower and higher than that of the average RDF (Fig. 5(f)), which reflects the limitation to similar-sized particles. We note that unlike in Fig. 5(f), in Fig. 7(a) the most pronounced structure is seen at the lowest density. Interestingly, the structure around the smallest particles is not DIC-isomorph invariant, while that around the largest particles is; a similar result applies for the \(T_{\text{conf}}\)-generated isomorph (data not shown). In contrast to the findings for structure, the dynamics of both small and large particles is DIC-isomorph invariant to a good approximation, even though smallest particles move considerably faster than the largest ones. We conclude that the lack of perfect isomorph invariance of the overall RDF largely reflects the fact that the structure around the smallest particles is not isomorph invariant. Figure 7: Comparing the degree of invariance of the structure and dynamics of the 20% smallest and largest particles, along the DIC-generated active-matter isomorphs for the \(\delta=29\%\) polydispersity case. (a) and (c) show the reduced RDFs. The invariance of the largest 20% particle RDF is much better than the small-particle RDF. For the MSD, however, both cases are isomorph invariant to a good approximation. ## VII Summary and outlook We have shown that the uniform-distribution size-polydisperse LJ AOUP model has active-matter isomorphs for the polydispersities \(\delta=12\%\), \(23\%\), and \(29\%\). 
This demonstrates the robustness of the active-matter-isomorph concept, which for passive systems applies whenever the potential-energy function obeys the hidden-scale-invariance condition discussed in Refs. [44] and [45]. The existence of isomorphs means that the dimension of the polydisperse AOUP phase diagram is effectively reduced from three to two, since it implies that lines exist in the \((\rho,D,\tau)\) phase diagram along which the reduced structure and dynamics are invariant to a good approximation. From a practical perspective, this fact makes it easy to quickly get an overview of the AOUP model's phase diagram. Two methods have been studied for generating active-matter isomorphs, one based on the configurational temperature and one based on the systemic-temperature concept. We find that both methods work well despite the fact that they do not trace out identical active-matter isomorphs (Fig. 4). In practice, the latter method will be easier to use in the case of LJ active matter for which a simple expression is available for the function \(h(\rho)\) where the parameter \(\gamma_{0}\) may be evaluated from a single passive-matter simulation (Eq. (7)). More work is needed to clarify how polydispersity relates to the existence of active-matter isomorphs in general. As regards the AOUP model, it would be interesting to investigate whether the introduction of energy polydispersity affects the existence of isomorphs. More generally, other models like the active Brownian particle model with a potential-energy function that obeys hidden scale invariance should be investigated in polydisperse versions in order to illuminate the robustness of the active-matter-isomorph concept. ###### Acknowledgements. This work was supported by the VILLUM Foundation's _Matter_ grant (16515).
2303.04073
Operationalizing AI in Future Networks: A Bird's Eye View from the System Perspective
Modern Artificial Intelligence (AI) technologies, led by Machine Learning (ML), have gained unprecedented momentum over the past decade. Following this wave of "AI summer", the network research community has also embraced AI/ML algorithms to address many problems related to network operations and management. However, compared to their counterparts in other domains, most ML-based solutions have yet to receive large-scale deployment due to insufficient maturity for production settings. This article concentrates on the practical issues of developing and operating ML-based solutions in real networks. Specifically, we enumerate the key factors hindering the integration of AI/ML in real networks and review existing solutions to uncover the missing considerations. Further, we highlight a promising direction, i.e., Machine Learning Operations (MLOps), that can close the gap. We believe this paper spotlights the system-related considerations on implementing \& maintaining ML-based solutions and invigorates their full adoption in future networks.
Qiong Liu, Tianzhu Zhang, Masoud Hemmatpour, Han Qiu, Dong Zhang, Chung Shue Chen, Marco Mellia, Armen Aghasaryan
2023-03-07T17:29:04Z
http://arxiv.org/abs/2303.04073v5
# Operationalizing AI in Future Networks: ###### Abstract Modern Artificial Intelligence (AI) technologies, led by Machine Learning (ML), have gained unprecedented momentum over the past decade. Following this wave of "AI summer", the network research community has also embraced AI/ML algorithms to address many problems related to network operations and management. However, compared to their counterparts in other domains, most ML-based solutions have yet to receive large-scale deployment due to insufficient maturity for production settings. This paper concentrates on the practical issues of developing and operating ML-based solutions in real networks. Specifically, we enumerate the key factors hindering the integration of AI/ML in real networks and review existing solutions to uncover the missing considerations. We also highlight two potential directions, i.e., MLOps and Causal ML, that can close the gap. We believe this paper spotlights the system-related considerations on implementing & maintaining ML-based solutions and invigorate their full adoption in future networks. ## I Introduction To drive digital transformation, modern telecommunication networks are undergoing a disruptive evolution. The ongoing 5G rollout promises to deliver customized network services to billions of subscribers with ultra-high speed, ultra-high reliability, ultra-low latency, and ubiquitous connectivity. The IoT classification is expected to connect trillions of devices. Next-generation digital realms, e.g., Metaverse, also call for high-quality, customizable communication mediums to drive human-machine interaction and digital-physical fusion. These technical headways inevitably make modern networks increasingly diverse, decentralized, ad-hoc, and complex. Traditional networks were mainly managed by predefined or heuristic rules. However, these methods either bear oversimplified assumptions about the underlying system or demand unduly heavy computations, which disaccords with the continuing network complexification. Although the transition towards network softwarization can significantly reduce operational overhead, the programability of network infrastructures also expands the network management boundaries, leading to new issues and challenges. Human involvement is still essential for in-depth problem diagnoses and high-stakes decision-making. With the accruing breakthroughs in advanced learning algorithms, computing power, and the Big Data ecosystem, AI/ML has made great strides in solving previously challenging tasks, such as image classification, language processing, and speech recognition. Nowadays, AI-empowered products have permeated various industrial and business sectors, including healthcare, manufacturing, entertainment, and education. According to the findings of Gartner and MIT Sloan Management Review, AI has led to $3.9T of business value in 2022 and is deemed a strategic priority by 83% of CEOs [1]. Motivated by this monumental success, the network research community is extensively exploring AI/ML algorithms to materialize "self-driving" networks. This new breed of _ML-based solutions_, i.e., network applications, functions, and services, has demonstrated more optimistic outcomes than the traditional fixed-policy approaches [2]. Despite the enormous interest, AI/ML is still immature for modern networks. According to a recent report [3], \(88\%\) of the teloc industry's proof-of-concept AI/ML projects fail to reach live deployment. 
The major deterrent stems from inadequate "system thinking", as researchers are not always exposed to the complexities and dynamics of production environments [4]. As we observe, existing ML-based solutions present two fundamental disparities with real deployments in networks: (i) they were mainly purposed to outperform prior solutions on specific performance metrics (e.g., accuracy and F1-score) without vetting other network-/system-related requirements (e.g., reactivity, robustness, scalability, and verifiability); (ii) they were mostly demonstrated in controlled environments and become costly to generalize for real networks, considering the data heterogeneity and network constraints therein. This "reality gap" immensely hampers the fusion of AI/ML and modern networks. Although network-oriented Development and Operations (DevOps) practices offset part of the exertions, they can hardly cater to the unique characteristics of AI/ML [5]. To smoothly _operationalize_ (i.e., _develop, deploy, and manage_) AI-based solutions in production, network operators must grasp skills in data science, network operations, and systems engineering, which can be extremely burdensome. This paper aspires to elucidate the practical challenges of making AI/ML an integral part of the future network landscape. The remaining sections are organized as follows: We present the background information in Sec. II, discuss practical considerations in Sec. III, review existing solutions and the missing pieces in Sec. IV, present two future directions in Sec. V, and conclude in Sec. VI. ## II Background In this section, we briefly review the current status of AI/ML and elaborate on the practical barriers obstructing their pervasive adoption in operational networks. ### _AI/ML for networking_ Compared to fixed-policy approaches, AI/ML algorithms exhibit exceptional pattern matching, incremental learning, and automation capabilities, especially on multi-dimensional data. They are ideal for handling modern networks' growing scale and complexity. For example, emergent wireless systems present unique challenges regarding pilot assignment, channel estimation, and power allocation, which require frequent synchronizations between distributed access points. ML-based solutions can efficiently distill complex patterns from the network and user data to realize uniform, location-agnostic Quality of Experience (QoE) provisioning [6]. In recent years, AI/ML techniques have sparked tremendous hype in the network research community. Standardization bodies (e.g., ETSI, 3GPP) anticipate AI/ML techniques to play a pivotal role in automating future networks and have formed many working groups to investigate different use cases [2]. In industry, carrier-grade platforms are under active development to bolster AI/ML-augmented network services [3]. In academia, a myriad of (un)supervised/reinforcement learning (UL/SL/RL) algorithms were employed to tackle a large spectrum of "networking" problems, including but not limited to traffic classification [7], resource scheduling [6], anomaly detection [8], load balancing [9], QoE management [10], across different network segments and administration domains, as exemplified in Fig. 1. Given the rapid expansion of the AI/ML frontier (e.g., generative AI), their use cases in modern networks will continue to enrich. ### _The reality gap_ Despite numerous research proposals, a closer inspection reveals a less rosy picture. 
Due to the lack of system-related considerations, these solutions are generally inopportune for real networks. In particular, most of them were implemented in a highly empirical and manual fashion throughout the AI/ML lifecycle stages, including data acquisition, feature extraction, algorithm design, model training, parameter tuning, and model validation. Also, they mostly showcased high-precision models as the final results and rarely bothered with the subsequent deployment and maintenance issues. If the optimization goal or network assumption changes, these manual design steps must be repeated (or revamped) to derive an up-to-date solution. Such an approach is acceptable for fast prototyping or functional testing. However, as shown in Fig. 2, real-world ML systems embody many additional components, and model building is merely part of the story [4]. Three factors engender the reality gap. First, compared to other prevalent AI/ML application domains (e.g., computer vision, language processing), network data are generated at high speed from various (distributed) components in diverse formats, e.g., raw packets, flow sketches, configuration files, system logs/alarms, and telemetry profiles. They may contain temporal, spatial, categorical, and graph semantics. Such Fig. 1: AI/ML for network management: An example. Fig. 2: Basic components for real-world ML systems (this picture was originally composed by Sculley et al. [4]). multi-modal data with high variety, velocity, and volume can be exceedingly onerous to process [10], not to mention their continuous drifts following the underlying system transitions. Second, existing solutions generally focus on optimizing specific performance metrics rather than comprehensively assessing the overall readiness in the global context, which is incompatible with the requirements of real-world ML systems. For instance, some existing solutions strive for high prediction accuracy using supersized AI/ML models, such as Deep Neural Networks (DNNs), which hardly fit into resource-limited network devices. The potentially high inference latency also makes them fail to meet real-time constraints. In production networks, the key performance indicators (KPIs) and network constraints must be sensibly analyzed and attuned to avoid one-dimensional solutions. Third, many existing solutions were developed in local or simulated environments and ceased after obtaining the models. In real systems, models should be deployed as part of a data-processing pipeline. Owing to disparate development toolkits and deployment targets, integrating them into real network infrastructures can be laborious and error-prone. As network devices can come from sundry vendors with bespoke configuration, optimization, and execution routines, deploying AI/ML on them can result in many iterations of manual tuning, customization, and feasibility tests. As ML-based solutions must be continuously updated and delivered in response to network evolvement, such a manual process can incur insurmountable operational overhead. ### _Is DevOps the panacea?_ Traditionally, the operational costs of delivering software products can be countered with DevOps, which encompasses an assemblage of practices to break the silo between software developers and IT operations engineers, promoting Automation and Continuous Integration (CI)/Continuous Deployment (CD) throughout the product lifecycle. These practices help drive IT and business outcomes for many businesses and organizations. 
The network community has adopted DevOps practices to fuel technological innovation and revenue growth [5]. However, though DevOps can curb the operational overhead of productionalizing traditional software products, they lack supplemental support for the unique characteristics of AI/ML. There are five fundamental discrepancies between conventional software and ML: First, code quality predominantly decides the achievable performance in traditional software, while the model, code, and data all impact the outcome of AI/ML [1]. Second, traditional software is usually built on full-fledged libraries with clear abstraction boundaries [4]. Developing ML-based solutions, au contraire often involves a broader range of tools, libraries, and platforms, subject to extra migration and maintenance costs. Third, unlike traditional software that conveys deterministic outputs, AI/ML models are intrinsically stochastic and require disparate processes to validate their behaviors. Fourth, ML models are susceptible to data/concept drifts, which are quite common in real networks, and thus necessitate continuous monitoring and model rebuilding [5]. Finally, building and operating ML-based solutions call for data science skillsets, which are missing in traditional software/network routines. According to a recent survey, \(55\%\) telcos lack the right data science talent [3]. Although network practitioners can gradually get acquainted with AI/ML and data science, mastering the theories and technical details takes time, given the vast scope. ## III AI/ML in Networks: Practical Considerations To close the gap and seamlessly operationalize AI/ML in production, many critical system-related considerations exist throughout the ML lifecycle, i.e., data preparation, development, and operations phases, as illustrated in Fig. 3. This section epitomizes the relevant ones for real networks. ### _Data preparation_ As the cornerstone of modern AI, the quality of input data directly determines the ceilings of any AI/ML-based product, which spurs the recent trend towards data-centric AI [1]. However, ensuring data quality can be extremely time-consuming, often costing \(60\%\) of time in AI/ML projects [3]. To supply the ML algorithms with high-quality, independent, and identically distributed (i.i.d.) data, special considerations should be enforced in the data preparation phase: the constituent _data acquisition_ and _feature extraction_ processes. In existing solutions, data can generally originate from three sources: (i) live networks, (ii) controlled environments, or (iii) (curated) public datasets. In case (i), despite the multitudinous data measurement and collection methods, the process can incur huge operational costs and require considerate trade-offs [9]. For example, sampling is usually prioritized over the per-packet collection in high-speed networks to attenuate the impact on the traffic datapath. Also, network data collection can incur uncontrollable situations, such as packet drops, sampling biases, or schema changes, hence aberrations and outliers [7]. Labeling the collected data also requires substantial human effort. In cases (ii) and (iii), as data are from outside the target networks, their statistical properties can be unaligned with deployment assumptions, which leads to unexpected consequences. Data validation is thus necessary to disclose the potential biases/anomalies before model development. 
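As a concrete illustration of such a validation step (a minimal sketch, not a production tool), the snippet below compares each feature's distribution in a candidate dataset against a reference sample drawn from the deployment environment with a two-sample Kolmogorov-Smirnov test, and also flags schema mismatches. The feature names, sample sizes, and significance threshold are illustrative only.

```python
import numpy as np
from scipy.stats import ks_2samp

def validate_features(reference: dict, candidate: dict, alpha: float = 0.01):
    """Flag features whose distribution in the candidate dataset deviates from
    the reference (deployment) sample. Returns {feature: (stat, p, drifted)}."""
    report = {}
    for name, ref_values in reference.items():
        cand_values = candidate.get(name)
        if cand_values is None or len(cand_values) == 0:
            report[name] = ("missing", None, True)   # schema mismatch
            continue
        res = ks_2samp(ref_values, cand_values)      # two-sample KS test
        report[name] = (res.statistic, res.pvalue, res.pvalue < alpha)
    return report

# Toy example with synthetic flow-level features (illustrative names only).
rng = np.random.default_rng(0)
reference = {"pkt_len": rng.normal(800, 120, 5000),
             "iat_ms":  rng.exponential(2.0, 5000)}
candidate = {"pkt_len": rng.normal(800, 120, 5000),   # unchanged
             "iat_ms":  rng.exponential(3.5, 5000)}   # drifted
for feat, (stat, p, drifted) in validate_features(reference, candidate).items():
    print(f"{feat:8s}  drifted={drifted}  (KS={stat}, p={p})")
```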
Some existing solutions assumed fixed input data properties, making them susceptible to uncertainties in real networks. Raw network data must be converted to features conformant with the ensuing AI/ML algorithms. Different feature sets imply varied system costs and model performance, thus merit closer scrutiny: many existing ML-based solutions extract features empirically, which can result in performance impairment due to partial or redundant data representations [10]. Furthermore, feature selection schemes might face revamping upon network evolvement. The data collection, labeling, and feature extraction process should be regulated to obtain the most relevant features for model development. ### _Development_ Model development consists of four fundamental steps, i.e., _algorithm design, model training, tuning, and model validation_, each crucial to determine a solution's overall readiness for the target network. Algorithm design involves selecting the right type of learning algorithm (e.g., SL vs. UL), ML model (e.g., regressions, decision tree, ensembles, DNNs), and architecture (e.g., \(\#\)layers, \(\#\)neurons/layer). The KPIs and network constraints should be jointly contemplated as ML algorithms have divergent predictive powers, resource footprints, and application scenarios. Sometimes, multiple models should be developed and pooled to compensate for the sporadic changes in highly dynamic networks. Similarly, the hyper-parameters should be cognitively adjusted during model tuning to find the optimal configuration. In existing solutions, both processes are typically manually conducted based on domain knowledge, which can be tedious and strenuous for complex models. For network operators, it is preferable to systematically guide these processes to avoid sub-optimal solutions [11]. Although model training and validation are well-studied in existing research, they still miss some key factors once in the system context. For example, a training strategy should factor in efficiency and safety. The former is crucial to delivering up-to-date models in highly dynamic settings like disaster-resilient networks. The latter is necessary for AI/ML algorithms (e.g., reinforcement learning) that call for frequent interactions with real systems. Likewise, inference efficiency, fairness, and explainability during model validation should be examined with the evaluation metric to unveil a model's readiness for the target network. Inference efficiency is critical for real-time analysis and decision-making in high-speed networks (e.g., \(40\)-\(100\) Gbps). Fairness is requisite in mission-critical environments to unveil biases and avoid unexpected fallouts. Explainability allows a model's decisions to be interpreted, audited, managed, and ultimately trusted by various stakeholders [12]. We will further dissect these factors in Sec. IV-D. ### _Operations_ Based on our study, most existing solutions did not consider the practical challenges of _deploying_ and _managing_ AI/ML, making them difficult to actuate in real networks. Deployment involves several key steps, including packaging, customization, and feasibility tests. As ML-based solutions were mainly intended for the control plane, these tasks can be handled by general-purpose model serving tools. Recently, intrigued by the advantages of in-network AI/ML, researchers began to push the AI/ML frontier into the network data plane to capitalize on the voluminous data there [13]. 
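Such resource-constrained targets make it useful to screen candidate models against per-device budgets before any deployment attempt, which also reflects the joint treatment of KPIs and constraints advocated for the development phase. The sketch below is a minimal illustration under assumed inputs: the model names, accuracies, latencies, and budgets are hypothetical, not measurements from the cited works.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float      # validation accuracy (KPI)
    latency_us: float    # measured per-inference latency
    memory_mb: float     # model footprint

@dataclass
class Target:
    name: str
    latency_budget_us: float
    memory_budget_mb: float

def select_model(candidates, target):
    """Keep only the candidates that fit the target's latency and memory
    budgets, then pick the most accurate survivor."""
    feasible = [c for c in candidates
                if c.latency_us <= target.latency_budget_us
                and c.memory_mb <= target.memory_budget_mb]
    if not feasible:
        return None                      # nothing fits: revisit the design
    return max(feasible, key=lambda c: c.accuracy)

# Hypothetical candidates and a resource-limited in-network target.
candidates = [Candidate("large-dnn",     0.97, 900.0, 180.0),
              Candidate("random-forest", 0.94,  40.0,  25.0),
              Candidate("decision-tree", 0.91,   5.0,   1.0)]
smartnic = Target("smartnic", latency_budget_us=50.0, memory_budget_mb=32.0)
best = select_model(candidates, smartnic)
print(best.name if best else "no feasible model for this target")
```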
Model deployment becomes a daunting task due to the distinctions between the local implementation environment and network infrastructure, and the divergent tooling can largely impede customization. Moreover, as networks are replete with specialized hardware devices (e.g., SmartNICs, P4 switches, embedded devices) with disparate architectures, configuration routines, and resource footprints, the deployment process entails refactoring a solution into an optimized data-processing pipeline with minimal interference on the datapath [8], which can impose a hefty burden on network operators. Furthermore, managing the deployed ML-based solutions involves multiple challenging tasks such as model serving, resource & operation management, and performance monitoring. In particular, as network systems can evolve expeditiously, the intrinsic concept/data drifts can result in model decay and service degradation. The inference quality should thus be constantly inspected to detect performance diminishments and trigger the model-rebuilding process whenever applicable. In real networks, the correct quality metrics and triggers should be carefully scoped, and the monitoring overhead should also be balanced with the quality assessment accuracy [5]. Depending on the problem context, the rebuilding process can start from the data preparation and labeling or model development stage, which must be specified beforehand. ## IV Operationalizing AI/ML in Production Networks: The Status Quo We devote this section to reviewing the related works addressing the practical challenges for AI/ML in networking. ### _Data preparation_ Three prior works tackle the practical concerns for data preparation: Bronzino et al. [10] propose _Traffic Refinery_, a framework equipped with a high-speed automation pipeline for flow-level packet collection, transformation, and feature extraction, based on the intents of network operators. The authors combine different design choices and optimizations to minimize packet drops. They also implemented a profiler to characterize the associated system-level costs for obtaining different feature sets, allowing network operators to flexibly tradeoffs between feature selection and model accuracy. Yao et al. [9] propose the _Aquarius_ framework to enable flexible data collection and feature extraction for data center networks (DCNs). Aquarius embeds a transport-layer collector to extract ordinal and quantitative features from TCP traffic with minimal overhead. The collected features are organized into shm files for the efficient I/O of the AI/ML algorithms from the control plane without interfering with the data place. Holland et al. [11] propose _nPrint_, a framework capable of transforming each incoming packet into a normalized, binary Fig. 3: ML lifecycle in production settings. representation without losing its contextual semantics. ML algorithms can automatically explore the representation and discover the most relevant features, eliminating the need for manual feature extraction. Their prototype implementation can encode packet headers and payloads at \(10^{6}\) packets/minute. ### _Model development_ To alleviate the difficulties of model development, prior works explore AutoML techniques to automatically carry out model selection and hyper-parameter tuning without exposing the AI/ML-specific complexities to network operators. In particular, Holland et al. 
[11] leverage the AutoGluon-Tabular framework to locate and ensemble models with high predictive accuracy and low inference latency, given the features and labels. Similarly, Swamy et al. [8] employ an optimization framework that automatically carries out the algorithm selection and model generation as a Bayesian optimization problem based on user intents and network constraints. Meanwhile, Lacoboiaea et al. [6] address the practical problems of developing a Deep Reinforcement Learning (DRL)-based channel manager. The authors tackle four challenges, i.e., training safety, efficiency, environment realism, and generalization capabilities. They employ digital twins to simulate the target network for safe training and dynamically adjust the learning rate for training efficiency. A simple simulator is employed to accelerate convergence. Environmental realism is improved by implanting real-world data into the simulator models. The generalization capability is improved by involving synthetic noises and real-world data during training. ### _Operations_ Deploying and managing AI/ML can be challenging in real network systems. In-network ML research seeks to automate the deployment process. However, these works only support limited ML models and deployment targets. To overcome the drawbacks, Zheng et al. [13] propose _Planter_, a modular framework that streamlines the deployment of various in-network ML algorithms on three well-known hardware platforms, i.e., Intel Tofino, BMv2, and P4Pi. Planter supports a range of popular ML algorithms. The trained models are automatically converted to target-specific P4 code. The code is then compiled and loaded into the data plane for functional testing and deployment. Swamy et al. [8] devise compiling tools to automatically generate target-specific code for three popular deployment targets, i.e., FPGA, Tofino, and Taurus. They also employ a cycle-accurate simulator for performance and feasibility tests by anticipating the model's KPIs, such as throughput, latency, and resource occupancy. Besides deployment, Yang et al. [5] zero in on the inference quality monitoring problem. They propose a gradient-based method combining Open Set Recognition and eXplainable AI (XAI) techniques to monitor and evaluate the inference quality. A comparative analysis shows that their method can efficiently deliver precise model assessment, track the inference quality, and detect subtle drifts. ### _Missing pieces to the puzzle_ We summarize these pioneering works in Table I in terms of the tackled lifecycle stages, supported algorithm types, deployment target, and validated use cases. Albeit the valuable results, these works do not tackle all the problems discussed in Sec. III. They either focus on a fraction of the ML lifecycle stages, support specific types of AI/ML algorithms, or cover a few use cases. In a nutshell, none of them can unilaterally address all the practical concerns. Furthermore, there is a sequence of pending concerns related to the operational costs: _Limited reproducibility:_ There is no complete logging mechanism for the development process, making the experimental results hardly replicable. Traditional version control tools only record code revisions, whereas ML projects require the datasets, (hyper-)parameters, experiment metadata, and configuration dependencies to replicate an experiment. _Lack of automation:_ There is no end-to-end orchestration of different workflows throughout the ML lifecycle. 
Although some related works can automate specific stages, manual efforts are still unavoidable across the whole process, which should ideally be minimized in future networks. _Deficient communication:_ Similar to traditional software, silos can be unconsciously formed between data scientists and network operators due to their distinct priorities, expertise, and languages. This dearth of effective communication can gravely throttle productivity and postpone time-to-value. Aside from these operational concerns, model validation is also sidelined by related works, resulting in concerns about the trustworthiness of ML-based solutions. First, sophisticated ML models, especially DNNs, are black-boxes whose inference processes are intricate for human interpretation. Although XAI \begin{table} \begin{tabular}{|c|c c c c c c c|c|c|c|} \hline **Reference** & \begin{tabular}{c} _Data_ \\ _acquisition_ \\ \end{tabular} & \begin{tabular}{c} _Feature_ \\ _exation_ \\ \end{tabular} & \begin{tabular}{c} _Algorithm_ \\ _design_ \\ \end{tabular} & \begin{tabular}{c} _Hyperparam._ \\ _tuning_ \\ \end{tabular} & \begin{tabular}{c} _Model_ \\ _training_ \\ \end{tabular} & **Algorithms** & **Target** & **Use cases** \\ \hline \hline **Bronzino et al. [10]** & ✓ & ✓ & & & & & & SL & - & QoE inference \\ \hline **Yao et al. [9]** & ✓ & ✓ & & & & & & SL,UL,RL & DCN & \begin{tabular}{c} Load balancing \\ Traffic classification \\ Resource scheduling \\ \end{tabular} \\ \hline **Holland et al. [11]** & & ✓ & ✓ & ✓ & ✓ & & & SL & - & Traffic analysis \\ \hline **Swamy et al. [8]** & & & ✓ & ✓ & ✓ & ✓ & & SL & DCN & \begin{tabular}{c} Anomaly detection \\ Traffic classification \\ Botnet detection \\ \end{tabular} \\ \hline **Zheng et al. [13]** & & & & & ✓ & ✓ & ✓ & SL & DCN & \begin{tabular}{c} Anomaly detection \\ QoE inference \\ \end{tabular} \\ \hline **Lacoboiaea et al. [6]** & & & & ✓ & ✓ & & & DRL & WLAN & Resource scheduling \\ \hline **Yang et al. [5]** & & & & & & ✓ & DL & - & Traffic classification \\ \hline \end{tabular} \end{table} TABLE I: Synoptic of the related works has been explored by existing research, they generate low-level, unreliable explanations that cannot be readily mapped to high-level, actionable insights [12]. Second, conventional ML models are susceptible to algorithm and data biases, which can be non-trivial to reveal and rectify [2]. Third, ML-based solutions should be robust to uncharted scenarios and adversarial manipulations, which are commonplace in modern networks. In particular, compared to traditional software, ML-based solutions expose a larger attack surface that can be exploited by advanced manipulation techniques, such as adversarial examples [12]. AI security is currently among the primary concerns for many teloc operators [3]. To fully operationalize ML-based solutions in real networks, these requirements must be verifiable so that their behavioral/legal/ethical compliance can be validated by different stakeholders, ranging from teloc operators, data scientists, and product managers to service subscribers, domain experts, and regulators. Verifiability is a prerequisite to accomplishing trustworthy AI [2, 12, 14]. ## V Future Directions Based on the current progress and barriers, we prospect two future directions: Machine Learning Operations (MLOps) and Causal ML. The former can relieve the associated operational overhead by adopting time-tested practices, while the latter navigates a novel path toward verifiable AI via casual reasoning and hypothesis testing. 
### _MLOps_ MLOps entails a collection of specialized practices and tools to smooth the productionalization of AI/ML products. Layered on the DevOps tenets, MLOps accommodates the unique traits of ML-based solutions with the following practices: * **Continuous Monitoring (CM) / Continuous Training (CT)**: MLOps addresses the model decay problem by constantly monitoring the data and inference quality on-the-fly and rebuilding the model whenever applicable. * **Automation**: To alleviate the operational cost, MLOps aims to streamline all the AI/ML lifecycle stages as a fully automated pipeline without human intervention. * **Versioning**: Based on the DevOps version control for code, MLOps advocates the version control of all the related artifacts during model development, including data, model, and code. The accompanying data and feature stores also simplify data governance. * **Experiment tracking**: In the model development stage, all the experiments should be systematically tracked to ensure reproducibility and auditability. * **Collaboration**: MLOps advocates a common platform and language to build synergy across the involved personas with different priorities and expertise. With these practices, MLOps tools have successfully preserved the long-term value of ML products in various businesses [1]. Unfortunately, this burgeoning discipline is still nascent in the AI/ML network community. However, given the abundant toolsets and experience in other domains, we believe MLOps can dramatically curtail the costs of operationalizing AI/ML in future networks. We envision a plausible MLOps architecture in Fig. 4 based on existing implementations. ### _Causal ML_ Although conventional ML algorithms can make sound predictions, they struggle with problems that require causal reasoning or counterfactual analysis, which are critical to facilitate advanced decision-making. The crux is their correlation-centric pattern matching. Correlations do not connote causation, and latent confounding factors can inject spurious correlations that bias the models [14]. Causal ML algorithms overcome these drawbacks by excluding irrelevant correlations and preserving the true causal relationships. The principal differences between conventional ML and Causal ML are outlined in Table II. We believe Causal ML grants us a viable approach toward verifiable AI. First, unlike conventional ML/XAI algorithms, Causal ML can resolve the practical concerns related to explainability, thanks to causal models' white-box and explainable nature. As causal models are built compatibly with human cognition, they can provide intuitive explanations and facilitate different verification processes. In addition, Causal ML eliminates biases by (i) constructing causal structures without involving the input data, and (ii) pinpointing the pertinent causal effects via thoughtful interventions, without inheriting the biases from the data and the algorithm [14]. Regarding robustness, Causal ML automatically learns high-level causal relationships that are invariant against data or environmental shifts. Causal ML can also prevent adversarial attacks by discovering the optimal defense schemes via controlled trials. Although conventional ML algorithms are still the predominant option, some research endeavors have already touched upon Causal ML to tackle practical networking problems [15]. Currently, Causal ML still has shortcomings for real systems, such as rigid data assumptions and unfledged benchmarks [14].
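The contrast between correlational and causal estimates can be seen in a toy numerical example, given purely for illustration with invented variable names and coefficients: a hidden confounder (e.g., the overall traffic load) drives both a configuration knob and a KPI, so a naive regression overstates the knob's effect, while adjusting for the observed confounder recovers the true causal effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
load = rng.normal(size=n)                     # hidden confounder (e.g., traffic load)
knob = 0.8 * load + rng.normal(size=n)        # configuration knob, partly driven by load
kpi = 0.3 * knob + 1.5 * load + rng.normal(size=n)   # true causal effect of knob on kpi = 0.3

# Naive (correlational) estimate: simple regression of kpi on knob.
naive = np.cov(knob, kpi)[0, 1] / np.var(knob)

# Adjusted (causal) estimate: regress kpi on knob *and* the confounder,
# i.e., back-door adjustment when the confounder is observed.
X = np.column_stack([knob, load, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, kpi, rcond=None)

print(f"naive slope    = {naive:.2f}  (biased by the confounder)")
print(f"adjusted slope = {beta[0]:.2f}  (close to the true effect 0.3)")
```

Assembling realistic causal structures for production networks is, of course, far harder than this toy case, which is precisely the shortcoming noted above.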
But with the recent bloom of causal research, Causal ML can eventually complement future AI-based solutions with verifiable outcomes. Fig. 4: MLOps for networking: A tenable architecture. ## VI Conclusion Due to the lack of system-related considerations, AI/ML is still not an integral part of modern networks. This paper analyzed the inconsistencies between existing ML-based solutions and real network systems and discussed all the practical considerations throughout their product lifecycle. We also reviewed the related works and identified the missing pieces. Based on these observations, we suggested two promising ways to erase the operational and verifiability concerns. We believe this paper can raise awareness regarding the practical hurdles of developing, deploying, and managing AI/ML-based solutions in production settings and expedite the integration of AI/ML in future network systems.
2307.16180
Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models
The field of large language models (LLMs) has made significant progress, and their knowledge storage capacity is approaching that of human beings. Furthermore, advanced techniques, such as prompt learning and reinforcement learning, are being employed to address ethical concerns and hallucination problems associated with LLMs, bringing them closer to aligning with human values. This situation naturally raises the question of whether LLMs with human-like abilities possess a human-like personality. In this paper, we aim to investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a widespread human personality assessment tool, as an evaluation metric for LLMs. Specifically, extensive experiments will be conducted to explore: 1) the personality types of different LLMs, 2) the possibility of changing the personality types by prompt engineering, and 3) how the training dataset affects the model's personality. Although the MBTI is not a rigorous assessment, it can still reflect the similarity between LLMs and human personality. In practice, the MBTI has the potential to serve as a rough indicator. Our code is available at https://github.com/HarderThenHarder/transformers_tasks/tree/main/LLM/llms_mbti.
Keyu Pan, Yawen Zeng
2023-07-30T09:34:35Z
http://arxiv.org/abs/2307.16180v1
# Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models ###### Abstract The field of large language models (LLMs) has made significant progress, and their knowledge storage capacity is approaching that of human beings. Furthermore, advanced techniques, such as prompt learning and reinforcement learning, are being employed to address ethical concerns and hallucination problems associated with LLMs, bringing them closer to aligning with human values. This situation naturally raises the question of **whether LLMs with human-like abilities possess a human-like personality?** In this paper, we aim to investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a widespread human personality assessment tool, as an evaluation metric for LLMs. Specifically, extensive experiments will be conducted to explore: 1) the personality types of different LLMs, 2) the possibility of changing the personality types by prompt engineering, and 3) How does the training dataset affect the model's personality. Although the MBTI is not a rigorous assessment, it can still reflect the similarity between LLMs and human personality. In practice, the MBTI has the potential to serve as a rough indicator. Our codes are available at here1. Footnote 1: [https://github.com/HarderThenHarder/transformers_tasks](https://github.com/HarderThenHarder/transformers_tasks) /tree/main/LLM/llms_mbti ## 1 Introduction With the advent of the epoch-making product, ChatGPT2, numerous larger language models (LLMs) and Chatbots have emerged [23]. Thanks to this, users can ask questions in the form of a natural sentence, and then LLMs utilize their knowledge to provide detailed answers effortlessly. Furthermore, an increasing body of literature suggests [14] that LLMs possess self-improvement and reasoning capabilities that are reminiscent of human cognition, leading to the possibility that LLMs may possess virtual personalities and psychological traits. Given these developments, it naturally raises the question of whether **LLMs with human-like abilities possess a human-like personality?** Footnote 2: [https://chat.openai.com/](https://chat.openai.com/) In fact, pioneers have borrowed some human personality assessments (e.g., MBTI) to evaluate the personality of LLMs (e.g., GPT3) [11]. Among them, the MBTI test (i.e., Myers-Briggs Type Indicator), one of the most widespread human personality assessment tools, will be borrowed to help us explore the personality of LLMs. Derived from the theories of Swiss psychiatrist Carl Jung, the MBTI [1] includes 16 possible personality types, as shown in Tabel 2. This assessment tool is designed to help individuals understand their preferences, with applications in business, education, and personal development, enabling them to make more informed career and life decisions. However, achieving artificial general intelligence (AGI) remains a distant goal, primarily due to the issue of ethical concerns [15] and hallucinations [11]. Figure 1: Personality Test of Human and LLMs. For example, INTJ individuals, as classified by the MBTI, are often regarded as masterminds who possess analytical and rigorous thinking abilities. In a similar vein, can LLMs with human-like capabilities exhibit human-like personalities? 2023a; Varshney et al., 2023). 1) LLMs rely on vast amounts of internet data, which often exceeds trillions of tokens, making it challenging to ensure data quality. 
The gender/racial discriminatory corpus is often fed into the model in the pre-training stage (Cabrera et al., 2023). 2) LLMs' training strategy (i.e., next token prediction) tends to codify unknown facts, leading to hallucinations. Users are unlikely to tolerate a product that is overconfident and prone to fabricating information, just as we would not tolerate an arrogant and lying individual. Fortunately, techniques such as prompt engineering (White et al., 2023), instruction tuning (Ouyang et al., 2022), and reinforcement learning from human feedback (RLHF) (Schulman et al., 2017) have been introduced to control the safety and ethics of LLMs. Interestingly, the model trained through instruction tuning demonstrates the ability to comply with human requests and engage in role-playing to satisfy the user. These advancements are paving the way for the development of AGI (Han et al., 2021; Wang et al., 2022), a system that aligns with human values. However, it is essential to note that the development of personality or consciousness still needs to be achieved. Therefore, this paper investigates whether human personality assessments, such as MBTI, can serve as a reasonable metric (Huang et al., 2023; Hendrycks et al., 2021) for evaluating LLMs. Specifically, we aim to explore whether MBTI is an inherent ability of the model or whether it is related to the training data and tuning steps to guide the training and application of LLMs. Therefore, extensive experiments are implemented to explore the following questions: * Do different LLMs possess different personalities? * Can we change the personality of LLM by prompt engineering? * How do training datasets affect the personality of LLMs? * Can MBTI test evaluate the model reasonably? Our analysis and findings are summarized as follows: * LLMs possess different personality-like MBTI types, which are inconsistent among LLMs but consistent with their style. The MBTI types of several LLMs are listed in Tabel 1. * LLMs without sufficient instruction-tuning are challenging to change MBTI type, while after tuning, they may be changed via explicit and implicit prompts. * The type of training corpus can affect the MBTI type, especially in the dimensions of T/F and J/P. * Although MBTI is not a rigorous assessment, it may still serve as a rough indicator for LLMs. \begin{table} \begin{tabular}{c|c|c} \hline & Type & Personality Descriptions \\ \hline ChatGPT & ENTJ & self-confident, decisive, and possess innate leadership skills. \\ GPT-4* & INTJ & experts skilled in achieving their own goals. \\ Bloom7b & ISTJ & pragmatic, responsible, values tradition and loyalty. \\ BaiChuan7b & ENFP & smart, curious, and imaginative. \\ BaiChuan13b & INFP & highly adaptable and idealistic \\ OpenLlama7b & INFJ & has strong insight into people and adheres to one’s own values. \\ \hline \end{tabular} \end{table} Table 1: MBTI types for LLMs. \begin{table} \begin{tabular}{c|c|c} \hline & \multicolumn{2}{c}{Dichotomies} \\ \hline Attitudes & extraversion (E) & introversion (I) \\ Perceiving functions & sensing (S) & intuition (N) \\ Judging functions & thinking (T) & feeling (F) \\ Lifestyle preferences & judging (J) & perceiving (P) \\ \hline \end{tabular} \end{table} Table 2: Four Dichotomies of MBTI3. ## 2 Related Work ### MBTI Test The Myers-Briggs Type Indicator (MBTI) is a personality assessment tool developed by Katharine Cook Briggs and her daughter Isabel Briggs Myers (Boyle, 1995). 
It is based on the theories of Swiss psychiatrist Carl Jung and is designed to help individuals understand their personality preferences and how they interact with the world around them. The MBTI measures four dichotomies: extraversion vs. introversion (E/I), sensing vs. intuition (S/N), thinking vs. feeling (T/F), and judging vs. perceiving (J/P). These dichotomies result in 16 possible personality types, each with unique strengths, weaknesses, and communication styles. The MBTI is widely used in business, education, and personal development to help individuals better understand themselves and others, improve communication and teamwork, and make more informed career and life decisions. ### Evaluation of LLMs In order to evaluate the LLM's ability in knowledge, several metrics measure the scores by calculating the accuracy on multiple choice questions, such as 1) CommonsenseQA (Talmor et al., 2019): a challenging new dataset for commonsense question answering. 2) HellaSwag (Zellers et al., 2019): a very challenging common sense reasoning dataset. 3) MMLU (Hendrycks et al., 2021): a test that covered 57 tasks, including elementary mathematics, US history, computer science, law, and more. 4) C-Eval (Huang et al., 2023): a comprehensive Chinese evaluation suite for foundation models, composed of 13,948 multiple choice questions spanning 52 diverse disciplines and four difficulty levels. The studies mentioned above calculate the accuracy of questions to evaluate the knowledge. Inspired by these pioneering efforts, question-form MBTI can be smoothly utilized to evaluate the personality of LLMs. ## 3 Experimental Settings ### Models We select well-known LLMs, such as LlaMA, as our baseline models. Unless otherwise indicated, all baselines are implemented with the parameters reported in the original paper or project. In Section 4.3, all models are trained on the same training data to investigate the impact of the training corpus on personality. Notably, we primarily train on models with a size of approximately 10B due to resources limitation. ### Evaluation We conduct experiments on the Myers-Briggs Type Indicator (MBTI), which comprises 93 multiple-choice questions, such as "A. Do you often act or speak very quickly without thinking?" or "B. Do you often act according to reason, think logically, and then make a decision, not letting your emotions interfere with the decision?" Subsequently, we analyze the probability values of the final token for options A and B and select the letter with the highest probability as the model's answer. Thereafter, we follow the metric law4 to get the final personality preferences of LLMs. In Algorithm 1, we categorize 8 indicators into 4 groups (E-I/S-N/T-F/J-P) and select the highest score within each group as the definitive answer for that particular group. Footnote 4: [https://www.xpersonalitytest.com/](https://www.xpersonalitytest.com/) ## 4 Analysis and Discussion In this section, we endeavor to implement experiments to address the four issues as in the Introduction (Section 1). ### Do different LLMs possess different personalities? (Q1) Firstly, to explore whether different LLMs have human-like personalities, we select a total of 6 well-known LLMs as baselines for observation: ChatGPT5, GPT-4*6, OpenLlama7b-v27, Bloom7b8, BaiChuan7b9, BaiChuan13b10. Notably, all baselines are implemented according to the parameters reported in the original paper or project. 
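To make the scoring procedure of Sec. 3.2 concrete, the following minimal sketch mimics the answer-selection and aggregation steps: for each item, the option letter with the higher model probability is chosen, mapped to the MBTI indicator it supports, and the preferred pole of each dichotomy is taken as in Algorithm 1. The log-probabilities and the question-to-indicator mapping below are made up for illustration; this is not the authors' released code.

```python
from collections import Counter

DICHOTOMIES = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

def choose_option(option_logprobs: dict) -> str:
    """Pick the option letter ('A' or 'B') whose final token the model
    assigns the highest (log-)probability."""
    return max(option_logprobs, key=option_logprobs.get)

def mbti_type(answers):
    """answers: list of (option_logprobs, {'A': indicator, 'B': indicator})
    pairs, one per questionnaire item. Returns the four-letter type and the
    per-indicator scores (Algorithm 1: highest score in each group wins)."""
    scores = Counter()
    for option_logprobs, indicator_of in answers:
        scores[indicator_of[choose_option(option_logprobs)]] += 1
    letters = [a if scores[a] >= scores[b] else b for a, b in DICHOTOMIES]
    return "".join(letters), scores

# Tiny illustrative run with made-up log-probabilities and an arbitrary
# question-to-indicator mapping (the real questionnaire has 93 items).
answers = [({"A": -0.2, "B": -1.9}, {"A": "E", "B": "I"}),
           ({"A": -1.1, "B": -0.4}, {"A": "S", "B": "N"}),
           ({"A": -0.3, "B": -0.9}, {"A": "T", "B": "F"}),
           ({"A": -0.8, "B": -0.6}, {"A": "J", "B": "P"})]
print(mbti_type(answers))   # e.g., ('ENTP', Counter({'E': 1, 'N': 1, 'T': 1, 'P': 1}))
```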
Footnote 5: [https://chat.openai.com](https://chat.openai.com) Footnote 6: [https://openai.com/research/gpt-4](https://openai.com/research/gpt-4) Footnote 7: [https://huggingface.co/openmlm-research/open_lama_7b_v2](https://huggingface.co/openmlm-research/open_lama_7b_v2) Footnote 8: [https://huggingface.co/bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) Footnote 9: [https://huggingface.co/baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B) Footnote 10: [https://huggingface.co/baichuan-inc/Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base) As presented in Table 3 and Fig.2, we have the following observations: 1) LLMs exhibit different personality types, as reflected by their MBTI profiles. For instance, ChatGPT's MBTI type is ENTJ, characterized by assertiveness and a tendency to express opinions. Similarly, GPT-4* 11 is classified as an INTJ, an "expert" type that excels in critical thinking, summarizing, and planning. Footnote 11: Notably, we will exclude responses rejected by GPT-4, which serves as an additional testament to its exceptional performance. (e.g., “I have no personality, so I cannot answer this question.”). 2) Moreover, as shown in Fig.2, we present a visualization of the preference scores for each dichotomy, such as perceiving functions. The preference scores of LLMs in each dichotomy exhibit inconsistency, with some models displaying more extreme scores (e.g., ChatGPT and GPT-4*) and others showing less significant differences (e.g., Bloom7b and BaiChuan13b). 3) The dichotomy of S/N, T/F, and J/P values often exhibit similarities for models within the same series. For instance, ChatGPT and GPT-4* are classified as "NTJ", while BaiChuan7b and BaiChuan13b are classified as "NFP". Furthermore, models with fewer parameters tend to favor E, as seen in ChatGPT and BaiChuan7b, while larger models tend to favor I, as seen in GPT-4* and BaiChuan13b. **Summary Q1:** LLMs exhibit different personality types, similar to those identified by the Myers-Briggs Type Indicator (MBTI), as discussed above. However, is this phenomenon merely a chance occurrence that can be easily disrupted and changed? ### Can we change the personality of LLM by prompt engineering? (Q2) We conduct prompt engineering to confuse the models to investigate whether the MBTI types of LLMs are susceptible to being disturbed and changed. Specifically, we design two types of prompt to guide the models: 1) Explicit prompt: role-playing, which provides a detailed description of a specific role to be played (Xu et al., 2023). 2) Implicit prompt: few-shot (Suzgun et al., 2023; Shi et al., 2023), where the model can only perform Figure 2: Specific scores for each dichotomy among different LLMs. style transfer based on the given examples. #### 4.2.1 Explicit Prompt The role-playing descriptions will be explicitly included prior to answering the MBTI questions. For instance, descriptions such as "You possess an outgoing personality, enjoy envisioning innovative concepts, and possess a strong inclination towards spontaneity and improvisation" will be incorporated into the input. We implement this strategy on Bloom and Baichuan. The results are presented in Table 4. We have the following observations: 1) The MBTI type of Bloom is changed from ISTJ to INTP, with a decrease in the S-value and an increase in the N-value. However, this change is minor, only affecting one question. 
2) Additionally, the statement "You are a highly introverted individual who tends to engage in practical work and enjoys strategizing and planning" has been included in the input for Baichuan. However, the type has not been altered per the prompt. #### 4.2.2 Implicit Prompt Implicit prompts also will be adopted to change the personality of LLMs. To achieve this, we implicitly express the character by giving few-shot questions, as shown in the Tabel 5. Similar results to explicit prompt can be observed, namely that the interference of implicit prompt has little effect on LLMs. This fact further proves the conclusion in Section 4.1. #### 4.2.3 Prompting on Instruct-tuning Model The conclusions of Section 4.2.1 and 4.2.2 prove that the MBTI type of several LLMs is challenging to change via prompt engineering, but this phenomenon may be attributed to the inability of these models to follow instructions. \begin{table} \begin{tabular}{c|c c|c c|c c|c|c} \hline \hline **Model** & **E** & **I** & **S** & **N** & **T** & **F** & **J** & **P** & **MBTI Type** \\ \hline bloom & 8 & 13 & 14 & 12 & 13 & 11 & 12 & 10 & ISTJ \\ bloom-exp-prompt & 8 & 13 & 13 & 13 & 13 & 11 & 10 & 12 & INTP \\ bloom-inexp-prompt & 9 & 12 & 13 & 13 & 13 & 11 & 11 & 11 & INTP \\ \hline baichuan & 15 & 6 & 13 & 14 & 10 & 13 & 9 & 13 & ENFP \\ baichuan-exp-prompt & 15 & 6 & 12 & 15 & 9 & 14 & 9 & 13 & ENFP \\ baichuan-inexp-prompt & 15 & 6 & 13 & 14 & 10 & 13 & 9 & 13 & ENFP \\ \hline ChatGPT & 12 & 9 & 6 & 21 & 15 & 8 & 12 & 10 & ENTJ \\ ChatGPT-exp-prompt & 1 & 20 & 9 & 16 & 7 & 18 & 5 & 17 & INFP \\ \hline \hline \end{tabular} \end{table} Table 4: Specific scores for each dichotomy in MBTI via prompt engineering. \begin{table} \begin{tabular}{c|c|c|c|c c|c c|c} \hline \hline **Model** & **E** & **I** & **S** & **N** & **T** & **F** & **J** & **P** & **MBTI Type** \\ \hline ChatGPT & 12 & 9 & 6 & 21 & 15 & 8 & 12 & 10 & ENTJ \\ GPT-4* & 5 & 10 & 9 & 16 & 15 & 7 & 14 & 4 & INTJ \\ Bloom7b & 8 & 13 & 14 & 12 & 13 & 11 & 12 & 10 & ISTJ \\ BaiChuan7b & 15 & 6 & 13 & 14 & 10 & 13 & 9 & 13 & ENFP \\ BaiChuan13b & 9 & 12 & 12 & 15 & 11 & 12 & 11 & 11 & INFP \\ OpenLlama7b\_v2 & 10 & 11 & 10 & 16 & 9 & 15 & 14 & 8 & INFJ \\ \hline \hline \end{tabular} \end{table} Table 3: Specific scores for each dichotomy in MBTI among different LLMs. \begin{table} \begin{tabular}{|c|c|} \hline **Bloom (ISTJ)** & **Baichuan (ENFP)** \\ \hline Do you prefer? & Do you prefer? \\ A. Be alone & A. Be alone \\ B. With friends & B. With friends \\ Answer: B & Answer: A \\ \hline Do you prefer to do things? & Do you prefer to do things? \\ A. By logic & A. By logic \\ B. By feeling & B. By feeling \\ Answer: B & Answer: A \\ \hline Do you prefer? & Do you prefer? \\ A. Plan ahead & A. Plan ahead \\ B. Plan as you go \\ Answer: B & Answer: A \\ \hline \end{tabular} \end{table} Table 5: 3-shot cases of implicit prompt Therefore, we test the above two prompt strategies on ChatGPT, an LLM with instruction-following solid ability. As shown in Table 4, ChatGPT has the ability to fully understand the explicit and implicit prompts and role-play following user instructions. **Summary Q2:** LLMs without sufficient instruction-tuning are difficult to change MBTI type, but with proper tuning, they can be changed through explicit and implicit prompts. After that, our next question is how training corpus affects personality. ### How do training corpora affect personality? 
(Q3) The different personalities of LLMs may originate from the different corpus fed to the model during training, so in this section, we explore whether the personalities will be changed after training with different corpora. Specifically, experiments are performed on Bloom and llama-v2 with three different corpora. As shown in Tabel 7, they are the Chinese Wikipedia corpus, question & answer corpus, and examination corpus, respectively. #### 4.3.1 Chinese Wikipedia Corpus Wikipedia is a relatively high density of knowledge dataset containing many factual articles and definitions. Our models are trained using approximately 400M tokens of this data, and the results are presented in Table 6. We have the following observation: 1) The MBTI types of Bloom transformed from ISTJ to INTP, while llama-v2 still retained INFJ. However, we observe that the change trends in the numerical values of each dichotomy are similar. Both models achieved identical values on the S/N dichotomy, with Bloom transitioning from 14-12 to 13-13 and llama-v2 transitioning from 10-16 to 13-13. The growth and decline trends in the T/F and J/P dichotomies remain consistent. We speculate that it is because the llama has yet to be trained on enough Chinese corpora before, which leads to its convergence rate being slower than bloom when training on corpora of the same scale. 2) Unfortunately, we cannot detect that these two models exhibit the same trend of change in E/I subfeatures, which may be because the wiki corpora can not change the model's extroversion or introversion. #### 4.3.2 Question & Answer Corpus Q&A data requires respondents to flexibly organize appropriate responses based on questions, which may enhance the flexibility and adaptability of the model. After training with a corpus of about 400M tokens, we obtain the following observation in Table 6: 1) The two models have similar trends in S/N and J/P but slightly different in F-value. Bloom increases from 11 to 12, but Llama decreases from \begin{table} \begin{tabular}{c|c c|c c|c c|c|c} \hline \hline **Model** & **E** & **I** & **S** & **N** & **T** & **F** & **J** & **P** & **MBTI Type** \\ \hline bloom & 8 & 13 & 14 & 12 & 13 & 11 & 12 & 10 & ISTJ \\ bloom\_zhwiki & 9 & 12 & 13 & 13 & 13 & 11 & 11 & 11 & INTP \\ bloom\_qa & 9 & 12 & 13 & 13 & 12 & 12 & 11 & 11 & INFP \\ bloom\_exam & 8 & 13 & 14 & 12 & 15 & 9 & 11 & 11 & ISTP \\ \hline llama7b\_v2 & 10 & 11 & 10 & 16 & 9 & 15 & 14 & 8 & INFJ \\ llama7b\_v2\_zhwiki & 8 & 13 & 13 & 13 & 11 & 13 & 12 & 10 & INFJ \\ llama7b\_v2\_qa & 7 & 14 & 13 & 14 & 12 & 11 & 13 & 9 & INTJ \\ llama7b\_v2\_exam & 9 & 12 & 12 & 15 & 10 & 13 & 10 & 12 & INFP \\ \hline \hline \end{tabular} \end{table} Table 6: Personality transformed with different continue training corpus \begin{table} \begin{tabular}{|c|l|} \hline **Corpus** & **Example** \\ \hline zhwiki & \begin{tabular}{c} Tsinghua University School of Law,... \\ \\ \end{tabular} \\ \hline \multirow{4}{*}{question \& answer} & Question: Why am I so tired in love? \\ & Answer: Love is a special emotion... \\ \end{tabular} \\ \cline{2-3} & \begin{tabular}{c} Xinohua has read 2/3 of a book, \\ how much remains to be read? \\ \end{tabular} \\ \cline{2-3} & The solving equation can be listed: \\ \(\text{x=1-(2/3)}\), so the answer is (1/3). \\ \end{tabular} \\ \cline{2-3} & \begin{tabular}{c} Xinohua has read 2/3 of a book, \\ how much remains to be read? \\ \end{tabular} \\ \cline{2-3} & The solving equation can be listed: \\ \(\text{x=1-(2/3)}\), so the answer is (1/3). 
\\ \hline \end{tabular} \end{table} 15 to 11. Results have shown that the Q&A corpus can enhance the model's adaptability and flexibility (indicated by the P-value).

#### 4.3.3 Examination Corpus

To improve the thinking ability of the model, we attempt to train the model using an examination corpus (about 50M tokens), mainly composed of APE210k (Zhao et al., 2020). The results in Table 6 show that this corpus has the potential to enhance the T-value significantly. The increase in Bloom's T-value from 13 to 15 and Llama's from 9 to 10 demonstrates the effectiveness of the examination corpus in strengthening the thinking dichotomy. This enhancement further reinforces Bloom's (ISTJ) thinking personality and shifts Llama (INFJ), which has a more feeling-oriented personality, towards T.

**Summary Q3:** The type of training corpus can affect the MBTI type, especially in the dimensions of T/F and J/P.

### Can MBTI evaluate the model reasonably? (Q4)

In this section, we discuss the limitations of MBTI and its value as an LLM evaluation metric.

#### 4.4.1 The MBTI itself is just pseudo-science.

In fact, as a measurement tool, MBTI has flaws in reliability and validity. In particular, human activities are always affected by different situations and mental states, which can reduce the MBTI to a toy tool for humans. However, these facts have not prevented many companies and individuals from using it as a **rough tool** to help hire employees or choose a career direction. In a sense, as a rough evaluation, it is also partially reasonable. 1) E/I (extraversion/introversion). On this dichotomy, no significant rules are observed, which may indicate that human relationship patterns do not apply to machines. This assertion makes sense because humans do not want AI to be overly social or shy. 2) S/N (sensing/intuition). Similarly, no discernible patterns have been identified in this dichotomy, nor have any plausible hypotheses been proposed to explain its association with LLMs. 3) T/F (thinking/feeling). We posit that the T-value metric is paramount for LLMs, given that GPT-4 and ChatGPT exhibit significantly higher T-values than other models. Furthermore, the utilization of mathematical corpora has been demonstrated to enhance the model's reasoning capabilities, resulting in a corresponding increase in the T-value metric. Thus, we advocate for the T-value metric as a crucial indicator of a model's proficiency. 4) J/P (judging/perceiving). After comparing the values of GPT-4 and ChatGPT, it is evident that GPT-4 has a higher J-value, which accurately reflects the planning abilities present in human personality. As a result, we believe that models with higher J-values possess more significant potential for task decomposition and path planning. In conclusion, the T/F and J/P dimensions hold significant value and can be considered reliable indicators for evaluating LLMs.
These dimensions can provide insights into various aspects, including knowledge distribution after pre-training, the ability to follow instructions, and more.

#### 4.4.2 What kind of MBTI type is best for LLMs?

For humans, each MBTI type is a unique personality. For LLMs, however, a type that combines proper knowledge, reasoning, and planning capabilities, e.g., INTJ (GPT-4), may be the best choice for machines serving humans. Of course, in some scenarios (such as role-playing apps), LLMs can adapt their personality to user expectations.

**Summary Q4:** Although MBTI is not a rigorous assessment, it may still serve as a rough indicator for LLMs.

## 5 Conclusion

In this work, we investigate the question: Do LLMs with human-like abilities exhibit human-like personalities? To address this question, we comprehensively examine the MBTI as a preliminary assessment tool for LLMs from various perspectives. After extensive experiments, our observations lead to several key conclusions: 1) LLMs exhibit diverse personalities; 2) LLMs that lack sufficient instruction tuning are resistant to changes of their MBTI type, but can be influenced by explicit and implicit prompts after tuning; 3) The type of training corpus can impact the MBTI type; 4) While MBTI is not a rigorous assessment, it can serve as a rough indicator. In the future, we aim to expand our research by integrating additional pre-training datasets. In this regard, we are particularly intrigued by tasks that enhance commonsense comprehension and reasoning abilities, such as math datasets.

### Limitations

With regard to personality indicators, future research on AGI could utilize a broader range of personality tests for LLMs; however, this is not explored in the present work. Due to resource limitations, our baselines are trained on models of around 10B parameters and 400M tokens. More intriguing findings could emerge with the use of larger models and corpora.
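For reference, the following minimal sketch (not the authors' evaluation code; the function name and the tie-breaking rule are illustrative assumptions) shows how the per-dichotomy answer counts reported in Tables 3, 4 and 6 map to a four-letter MBTI type: in each of the four pairs, the letter with the larger count is selected.

```python
# Minimal sketch: map per-dichotomy answer counts (as in Tables 3, 4 and 6)
# to a four-letter MBTI type. Ties are broken towards the first letter here;
# the actual scoring convention of the questionnaire may differ.
def mbti_type(scores: dict) -> str:
    """scores maps each letter 'E','I','S','N','T','F','J','P' to its count."""
    pairs = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]
    return "".join(a if scores[a] >= scores[b] else b for a, b in pairs)

# Example: the ChatGPT row of Table 3 (E=12, I=9, S=6, N=21, T=15, F=8, J=12, P=10).
chatgpt = {"E": 12, "I": 9, "S": 6, "N": 21, "T": 15, "F": 8, "J": 12, "P": 10}
print(mbti_type(chatgpt))  # -> ENTJ
```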
2301.11561
Control Scheme for Polarization Circulation Speed Meter Using a Dual-Retardation Waveplate
In interferometric gravitational wave detectors, quantum radiation pressure noise, which is a back action of the measurement, will limit their sensitivities at low frequencies. Speed meters are one of the solutions to reduce the back action noise and improve the sensitivities, and furthermore, they can surpass the standard quantum limit over a wide range of frequencies. The Polarization Circulation Speed Meter is the latest incarnation of the speed meter concept in the sense that it requires a slight modification in the conventional interferometer designs; however, its control scheme has not been developed. The main difficulty is the length and alignment control of the cavity formed by the polarization circulation mirror and the input test masses, whose round-trip phase shift should be kept to $\pi$. In this article, we propose a new control scheme using a dual-retardation waveplate, called Dual-Retardance Control (DRC). In addition, we compare the shot noise level of the DRC to another simpler scheme by dithering. Finally, we design the experimental setup for the demonstration of the DRC and show the expected results through the transfer function measurement.
Yohei Nishino, Tomotada Akutsu, Yoichi Aso, Takayuki Tomaru
2023-01-27T06:50:49Z
http://arxiv.org/abs/2301.11561v3
# Control Scheme for Polarization Circulation Speed Meter Using a Dual-Retardation Waveplate ###### Abstract In interferometric gravitational wave detectors, quantum radiation pressure noise, which is a back action of the measurement, limits their sensitivity at low frequencies. Speed meters are one of the techniques to reduce the back action noise and improve the sensitivity. Furthermore, a speed meter detector can surpass the standard quantum limit over a wide range of frequencies. The Polarization Circulation Speed Meter (PCSM) is the latest incarnation of the speed meter concept that requires only a modest modification to the conventional interferometer design. However, its control scheme has not been developed. The main difficulty is the length and alignment control of the cavity formed by the polarization circulation mirror and the input test masses, whose round-trip phase shift should be kept at \(\pi\). In this article, we propose a new control scheme for the PCSM using a dual-retardation waveplate, called Dual-Retardance Control (DRC). We also compare the shot noise level of the DRC to another simpler scheme by using mirror dithering. Finally, we design the experimental setup to demonstrate the feasibility of the DRC and show the expected results through transfer function measurements. ## I Introduction The sensitivity of the gravitational wave (GW) detectors is fundamentally limited by quantum noise. Especially at low frequencies, after the seismic noise and thermal noise are well suppressed and with the use of a high-power laser, it will be limited by _quantum radiation pressure noise_. This low-frequency-limiting noise gives rise to the standard quantum limit (SQL), which is one of the consequences of Heisenberg's uncertainty relation [1]. The SQL is a fundamental limit that we cannot overcome by conventional methods, and many techniques to beat it, so-called quantum non-demolition (QND) measurement, have been studied [1; 2]. Speed meters are one of the QND measurements. The concept was first proposed by Braginsky and Khalili [3], and many practical implementations have been investigated [4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. The amplitude fluctuations of vacuum fields entering from the anti-symmetric (AS) port of an interferometer are coupled with the pump laser and kick the mirrors randomly, which appears as the quantum radiation pressure noise [2]. In speed meters, the vacuum field interacts with the mirror twice with opposite signs. Taking into account the sloshing time \(\tau\), which is an interval of two measurements, the back-action force applied on the mirror is [14]: \[\hat{F}_{\rm b.a.}(\Omega)\simeq-i\Omega\tau\frac{2\hat{f}_{c}(\Omega)}{c} \tag{1}\] at low frequencies \(\Omega\ll 1/\tau\). \(\hat{f}_{c}\) is the fluctuation part of the circulating laser power via the coupling between vacuum fluctuation and the pump laser and \(c\) is the speed of light. The signal is proportional to the mean velocity (\(\bar{v}\)) at low frequencies \[\phi(t) \propto \hat{x}(t+\tau)-\hat{x}(t) \tag{2}\] \[\sim \tau\bar{v}, \tag{3}\] where \(\phi(t)\) is the phase modulation of light and \(x(t)\) is the displacement of the mirrors. Note that the velocity measurement reduces the amount of gravitational wave signal, but in terms of the signal-to-noise ratio, it shows better performance than position measurements at low frequencies. The advantage of speed meters is a broadband-sensitivity improvement at low frequencies. 
Also by combination with a balanced homodyne detection (BHD), it can go beyond the free mass SQL. It is worth noting that it does not need frequency-dependent homodyne angles, which means we do not need additional filter cavities [2]. All the noise-reduction processes happen inside the interferometer, so it is more robust to losses [15]. The Polarization Circulation Speed Meter (PCSM) is a new speed meter design proposed by Danilishin _et al_[11] (see Fig. 1). The PCSM only needs a small modification in the AS port; there is no need to modify the central interferometer. Under the current situation that all the large-scale GW detectors are based on the Dual-Recycled Fabry-Perot Michelson Interferometer (DRFPMI) and an assumption that it will not be largely changed soon, this design is the most promising candidate for the practical implementation of a speed meter. However, the control scheme has not been investigated yet. There are two main issues to be solved to achieve PCSM. The first issue is that DC component of differential arm signal is reduced to zero in an ideal speed meter, making it hard to control the interferometer. This is a generic problem in speed meters since they reduce the back action noise by deliberately decreasing the mirror motion signal. To obtain the DC signal for the differential arm (DARM) control, one needs to add loss to the interferometer that deteriorates its performance from the ideal case (detailed analysis has been done in ref. [16] in the case of the Sagnac-type speed meter). The DARM control for speed meters has already been demonstrated in the proof-of-principle experiment of the Sagnac speed meter in Glasgow group [17]. The second issue is inherent to the PCSM scheme, i.e. we need to keep the round-trip phase shift from the input test masses (ITMs) to the polarization circulation mirror (PCM) at \(\pi\) to flip the sign of the second interaction. In this paper, we focus on this issue and propose a new scheme to control the phase shift using a dual-retardation waveplate and an auxiliary laser. These components enable us to obtain a Pound-Drever-Hall (PDH) signal [18] of the cavity length formed by the PCM and ITMs (Polarization Circulation Cavity: PCC). The PDH method is commonly used in the GW detectors and gives us high stability of the PCC length/alignment control. Also, this scheme is well compatible with the balanced homodyne detection (BHD), which is a signal detection scheme planned to be used in the future. The outline of this paper is as follows: in Section II we show the details of the PCSM and the difficulties of its control. In Section III, we propose a new control scheme, DRC, and in Section IV we analyze the theoretical performance of the DRC and compare the shot noise levels of the control signals between the DRC and another candidate, the dithering control. In Section V, we show a possible experimental setup for the layout for the demonstration of the DRC. Finally, we give discussions in Section VI and conclusions in Section VII. ## II Background and issues In this section, we review the mechanism of the PCSM, whose detailed study is shown in Danilishin _et al._, 2018 [11], and explain the inherent difficulties in the PCC control. ### Pcsm The schematic design of the PCSM is shown in Fig. 1. The main interferometer is the same as the conventional Fabry-Perot Michelson style position meter, but the AS port has two polarization optics, a quarter-wave plate (QWP) and a polarization beam splitter (PBS). 
These components together with the PCM are collectively called the polarization circulator (PC). The cavity formed by the PCM and the input test masses (ITMs) is called the polarization circulation cavity (PCC). The linear \(p\)-polarized (\(p\)-pol) vacuum fluctuation entering from the AS port is converted into the left-polarization vacuum (\(l\)-pol, denoted by \(\hat{a}_{l}\)) by the QWP, couples with the pump laser and kicks the mirror randomly. Then the vacuum field (denoted by \(\hat{b}_{l}\)) returns to the AS port and is converted into \(s\)-polarization (\(s\)-pol). This \(s\)-pol beam is reflected by the PBS, thus making a round trip to the PCM. Then it is converted into right-polarization (\(r\)-pol) by the QWP before coming back to the BS as a field denoted by \(\hat{a}_{r}\). The \(r\)-pol beam kicks the mirror again, comes back to the AS port (denoted by \(\hat{b}_{r}\)), and finally goes through the PBS. The round-trip phase shift between the ITMs and the PCM is kept at \(\pi\), so the radiation pressure forces given by \(\hat{a}_{r}\) and \(\hat{a}_{l}\) have opposite signs and cancel each other out.

Figure 1: **Configuration of the PCSM [11].** The QWP converts the polarization state of the vacuum so that it experiences the interferometer twice. The PC is a set of the QWP, PBS, and PCM, and the PCC is a cavity formed by the PCM and the ITMs with the QWP and PBS inside. (E)ITMs stand for (end) input test masses.

### Difficulties in PCC control

The PDH method [18] is a commonly used scheme with which the distance between two mirrors can be stabilized on a nanometer scale. All the second-generation GW detectors make full use of this technique to control many degrees of freedom, including the signal recycling cavity (SRC). To control the SRC in the resonant sideband extraction configuration, radio frequency (RF) sidebands generated by an electro-optic modulator (EOM) are used to sense the length fluctuation of the SRC. However, the PCC cannot be locked in the same manner, because the IR beam can circulate inside the PCC twice at most due to the QWP and PBS. This means the finesse of the PCC is almost 0, which is a serious problem since one cannot effectively amplify signal sidebands. In short, one cannot use the PDH method. As shown above, the PCSM needs a control scheme for the PCC. One simple solution is modulating the PCM position to generate sidebands on the carrier and demodulating the output from the AS port. This is what we call 'dithering' (see Fig. 2). Detuning the arm cavities and leaking some amount of DC light (DC offset, see ref. [19]), the DC value of the output is zero if the round-trip phase shift in the PCC is kept at \(\pi\). If the position of the PCM is then shifted from the optimal position, non-zero DC signals appear. Taking a beat between the sidebands and the DC offset, one can obtain an error signal. This method is simple but has several problems. In the first place, mechanical modulation onto the PCM adds noise to the signal sidebands, since the modulation frequency of the mirror dithering is \(\sim 10\) kHz at most. Secondly, one cannot expect a high signal-to-noise ratio (SNR) in the error signal, as shown in Section IV.2. The amount of light reaching the AS port is limited, so DC offsets are needed to increase the SNR. However, in future GW detectors, one might not need DC offsets thanks to balanced homodyne detection (BHD), which is also critical for speed meters. To make full use of this advantage, a scheme without DC offsets is preferable.
Lastly, it is not sensitive to alignment fluctuations. For these reasons, we propose an alternative scheme for the PCC that can also yield alignment control signals. ## III Dual-retardance control ### Idea The main obstacle is that the QWP changes the polarization so that the PBS can transmit half of the IR light. If the waveplate does not change the polarization state in one-way transmission or _keep the state so that the PBS does not discard any light_, one can form a cavity with the PCM and ITMs. It can be achieved by a dual-retardance waveplate that works as a QWP for IR but as a HWP for a green (GR) laser. The retardance of a waveplate is described as: \[\phi_{\mathrm{ret}}=2\pi\frac{(n_{s}-n_{f})d}{\lambda_{0}}, \tag{4}\] where \(n_{s}\) and \(n_{f}\) are refractive indices along the slow and fast axes respectively, \(d\) is the thickness of the waveplate and \(\lambda_{0}\) is the wavelength of the light. In this simple assumption, when it works as a QWP at the wavelength of \(\lambda_{0}\), it should work as a HWP at the wavelength of \(\lambda_{0}/2\). A HWP does not change the polarization state by round-trip transmission (see Fig. 3). If we inject a laser with half the wavelength of the main interferometer beam from behind the PCM with \(s\)-pol, it can resonate inside the PCC. Practically, the refractive index has a wavelength dependence, so it is critical to manufacture the dual-retardation waveplate. We call this scheme the dual-retardance control (DRC). The DRC solves the issues discussed in the previous section. With DRC, we can extract a high-SNR error signal for the PCC without using mechanical modulation. We can also obtain alignment signals through wave-front sensing. ### Setup As shown in Fig. 3, the GR laser frequency (\(\omega_{\mathrm{GR}}\)) should be phase-locked to the main IR frequency (\(\omega_{0}\)), with a tunable frequency offset of \(\omega_{\mathrm{off}}\): \[\omega_{\mathrm{GR}}=2(\omega_{0}+\omega_{\mathrm{off}}). \tag{5}\] Also, the IR and GR beams have to be co-aligned. Then one needs an additional cavity outside the PCC to make them share the same beam path. For example, there is a ring cavity to co-align the IR and GR beam paths in Fig. 7. The arm cavity can also be used for the path-sharing process. The transmissivity of the BS for GR is set to \(\sim 1\) for simplicity. Even though the paths seem to completely overlap, the optical path length of the PCC for the IR (\(l_{\mathrm{PCC}}^{\mathrm{IR}}\)) may not exactly be the same as that for the GR (\(l_{\mathrm{PCC}}^{\mathrm{GR}}\)) due to the dispersion of materials: \[l_{\mathrm{PCC}}^{\mathrm{GR}}=l_{\mathrm{PCC}}^{\mathrm{IR}}+\delta l_{ \mathrm{PCC}}, \tag{6}\] where \(\delta l_{\mathrm{PCC}}\) the difference of the optical path lengths. Adding a frequency offset \(\omega_{\mathrm{off}}\), the round-trip phase shift in the PCC for the GR is: \[\phi_{\mathrm{GR}} =2\omega_{\mathrm{GR}}l_{\mathrm{PCC}}^{\mathrm{GR}}/c \tag{7}\] \[=2\left[2(\omega_{0}+\omega_{\mathrm{off}})+\delta\omega\right](l _{\mathrm{PCC}}^{\mathrm{IR}}+\delta l_{\mathrm{PCC}})/c\] (8) \[=\frac{4\omega_{0}l_{\mathrm{PCC}}^{\mathrm{IR}}}{c}+\frac{4 \omega_{0}\delta l_{\mathrm{PCC}}}{c}+\frac{4\omega_{\mathrm{off}}l_{\mathrm{ PCC}}^{\mathrm{GR}}}{c}+\frac{2\delta\omega l_{\mathrm{PCC}}^{\mathrm{GR}}}{c} \tag{9}\] The first term is a phase shift if there is no dispersion. The second term is a shift due to the dispersion, and the third term is a phase compensation by the frequency offset. 
The fourth term is a phase noise in the Phase Locked Loop (PLL), which results in the average PCC fluctuation (see \(\epsilon\) in Eq. (7) in ref. [11]). ### Lock acquisition In order to draw the PCC into the operational condition (\(\phi_{0}=\pi\)), we need to follow a set of certain steps. We call the procedure "lock acquisition". The conceptual figure of the lock acquisition is shown in Fig. 4. First, using the dithering method, the round-trip phase shift of the IR \(\phi_{\text{IR}}\) is locked to \(\pi\) (see the denotation (i) in Fig. 4). This is realized when the first term in Eq. (9) satisfies the following condition: \[2\phi_{\text{IR}}=\frac{4\omega_{0}l_{\text{PCC}}^{\text{IR}}}{c}\equiv 0\ (\text{mod}\ 2\pi). \tag{10}\] One needs to detune the arm cavities to obtain enough DC signals if necessary. The dithering signal is fed-back to the mechanical actuator on the PCM (a PZT, for example). Second, the GR beam is kept at the resonance of the PCC by adding the offset frequency (see the denotation (\(\bar{\text{n}}\)) in Fig. 4). It corresponds to the round trip phase shift for the GR in the PCC satisfying below: \[\phi_{\text{GR}}\equiv 0\ (\text{mod}\ 2\pi). \tag{11}\] The GR PDH signal is fed-back to the frequency actuator on the GR (an acousto-optic modulator, for example). Lastly, one can hand over the error signals to the GR PDH which is steeper than the dithering signal (see the denotation (\(\bar{\text{m}}\)) in Fig. 4). Given the absolute frequency of the main IR (\(\omega_{0}\)) and the optical path difference (\(\delta l_{\text{PCC}}\)) is stable enough, the round-trip phase fluctuations for the GR are proportional to the length fluctuation of the PCC. In this final stage, the GR PDH is fed-back to the PCM. Note that the last term in Eq. (9): \[\delta\phi_{\text{PCC}}=\frac{2\delta\omega l_{\text{PCC}}^{\text{GR}}}{c} \tag{12}\] contributes to the noise of the PCC length. After the handover, the dithering and DC offset can be removed. ## IV Theoretical performance ### Error signal In this section, we analyze the electric fields of a cavity with an HWP and PBS inside to derive the GR PDH signal. We define bases of \(p\)- and \(s\)-pol electric fields as: \[\mathbf{e}_{p}=\begin{pmatrix}1\\ 0\end{pmatrix},\ \mathbf{e}_{s}=\begin{pmatrix}0\\ 1\end{pmatrix}. \tag{13}\] Symbols used in the analysis are shown in Fig. 5. The reflectivity matrix of the PBS is: \[\hat{\rho}^{\text{GR}}=\begin{pmatrix}\sqrt{R_{p}^{\text{GR}}}&0\\ 0&\sqrt{R_{s}^{\text{GR}}}\end{pmatrix} \tag{14}\] Figure 3: **Conceptual illustration of the DRC.** We prepare a waveplate that works as a QWP for the carrier and HWP for GR. It changes the polarization from \(s\)-pol to \(p\)-pol or \(p\)-pol to \(s\)-pol, but it can be kept to only \(s\)-pol between the PCM and the HWP. Figure 2: **Conceptual illustration of the dithering control.** (a) describes the AS port of the PCSM. \(r\)-pol beam from the interferometer transmits through the PBS as \(p\)-pol as denoted by 1. \(l\)-pol beam is recycled by the PC and transmits through the PBS after the second circulation (see Section II.1). A local oscillator is connected to a PZT behind the PCM to modulate the position of the PCSM. It generates modulation sidebands around the carrier as denoted by 2. (b) is a phaser diagram for the PBS transmission. A sum of the carrier 1 and 2 has a phase component when the PCM is shifted from the best position. Taking an interference between the carrier and the generated sidebands, one can get an error signal of the PCC length. 
where \(R_{s}^{\rm GR}\) and \(R_{p}^{\rm GR}\) is the power reflectivity for \(s\)-pol and \(p\)-pol of the PBS. \(r_{0}^{\rm GR},t_{0}^{\rm GR}\) are the amplitude reflectivity and transmissivity of the PCM. The Jones matrix for the \(45^{\circ}\) rotated HWP can be written as: \[\hat{J}^{\rm GR}=\frac{1}{2}\begin{pmatrix}1+e^{-2i\delta\phi}&1-e^{-2i\delta \phi}\\ 1-e^{-2i\delta\phi}&1+e^{-2i\delta\phi}\end{pmatrix}, \tag{15}\] where \(\delta\phi\) is the retardation error. Relations between the fields are written as: \[\mathbf{E}_{1} =t_{0}^{\rm GR}\mathbf{E}_{0}+r_{0}^{\rm GR}\mathbf{E}_{3}, \tag{16}\] \[\mathbf{E}_{2} =e^{i\Phi/2}\hat{J}^{\rm GR}\hat{\rho}^{\rm GR}\mathbf{E}_{1},\] (17) \[\mathbf{E}_{3} =e^{i\Phi/2}\hat{\rho}^{\rm GR}\hat{J}^{\rm GR}\mathbf{E}_{2},\] (18) \[\mathbf{E}_{r} =-r_{0}^{\rm GR}\mathbf{E}_{0}+t_{0}^{\rm GR}\mathbf{E}_{3}, \tag{19}\] where \(\Phi\) is the round-trip phase shift in the PCC. Solving those equations, the reflectivity for the \(s\)-pol at the anti-reflection side of the PCM is: \[r_{s\to s}(\Phi^{{}^{\prime}})=\frac{E_{0,s}}{E_{\rm r,s}} \tag{20}\] \[=-r_{0}^{\rm GR}+\frac{(t_{0}^{\rm GR})^{2}(R_{s}^{\rm GR}\cos \delta\phi e^{i\Phi^{{}^{\prime}}}-r_{0}^{\rm GR}R_{p}^{\rm GR}R_{s}^{\rm GR}e^ {2i\Phi^{{}^{\prime}}})}{\det M} \tag{21}\] where \[\Phi^{{}^{\prime}}=\Phi-\delta\phi, \tag{22}\] and \[\det M=1-r_{0}^{\rm GR}(R_{s}^{\rm GR}+R_{p}^{\rm GR})\cos\delta \phi e^{i\Phi^{{}^{\prime}}}\] \[\qquad\qquad\qquad+(r_{0}^{\rm GR})^{2}R_{s}^{\rm GR}R_{p}^{\rm GR }e^{2i\Phi^{{}^{\prime}}}. \tag{23}\] Here we assumed the reflectivity of the ITMY and the transmissivity of the BS are \(\sim 1\) for simplicity. While optical components inside the PCC may have their own losses, here we assumed that all the loss is concentrated in the PCM (denoted \(\mathcal{L}_{\rm GR}\) in Fig. 6). We set \(R_{p}^{\rm GR}\) to 0, which means \(p\)-pol generated by the retardation error is discarded from the PBS. The imperfection of the \(s-\)pol reflectivity is counted as a loss on the PCM: \[\mathcal{L}_{s}^{\rm GR}=1-R_{s}^{\rm GR}. \tag{24}\] We show the imaginary part of Eq. (21) in Fig. 6 with various round trip losses, which decrease the slope of the error signals. ### Estimation of the shot noise level of the DRC We compare the shot noise levels of two methods, the dithering control and DRC. The detailed analysis is Figure 4: **Toy picture of the lock acquisition.** a) The DC output of the PBS transmission. b) GR intra-cavity power. c) The solid red line is the dithering signal by dithering and the solid green line is the GR PDH signal. After adding offsets, one can hand over the error signals to the GR PDH which is steeper than the dithering signal. Figure 5: **PCC as seen by the green field.** The HWP is represented in the Jones matrix, \(\hat{J}^{\rm GR}\). The reflectivity of the PBS is also represented in the reflectivity matrix, \(\hat{\rho}^{\rm GR}\). We assume the BS is transparent for the GR for simplicity Figure 6: **Imaginary part of the PCC reflectivity.** Red curves show the imaginary part of Eq. 21 with various round-trip losses with retardation error of \(\lambda_{0}/300\). The black curve is an error signal without any retardation error and loss. shown in Appendix A. Using Eq. (A16) and choosing realistic parameters (see Table 1), the ratio of the shot noises of the two methods becomes: \[\frac{S_{L}^{\rm{Dither}}}{S_{L}^{\rm{DRC}}}\sim 7.4\times 10^{4}. 
\tag{25}\] This large ratio comes from the advantages of using a cavity: the amplification of the phase change by a factor of the finesse of the cavity and the amount of the local oscillator power that can be used for control. ## V Experimental demonstration of DRC An experimental setup to demonstrate the DRC is shown in Fig. 7. Possible parameters for IR and GR optics are shown in Table 1 and 2, respectively. We aim to confirm the DRC works. The GR laser is generated by the second harmonic generation and phase-locked to the main IR laser. The basic design is a Fabry-Perot Michelson Interferometer (FPMI) with 15-cm-long rigid arm cavities with flat ITMs and curved ETMs. The FPMI part is controlled by the pre-modulation method as used in all the current GW detectors. The error signal of the PCC obtained by the GR PDH is fed back to the PCM. The GR laser frequency is tunable by changing the frequency offset in the PLL. A small fraction of the main IR is picked off after the EOM and injected from the AR side of the ETMY. This light gets phase-modulated through an EOM to generate sidebands that play the role of pseudo-GW signals. The expected transfer function from the phase modulation to the DARM output is shown in Fig. 8. Given the carrier is resonant in the arm cavities, the amplitude reflectivity of a single arm cavity can be written as: \[r(\Omega)\simeq\frac{\gamma_{1}-\gamma_{2}+i\Omega}{\gamma_{1}+\gamma_{2}-i \Omega}, \tag{26}\] where \(\Omega\) is the frequency of an audio sideband. \(\gamma_{1}\) and \(\gamma_{2}\) are defined as: \[\gamma_{1} \equiv\frac{cT_{\rm{ITM}}}{4l_{\rm{arm}}}\ (\text{cavity pole}), \tag{27}\] \[\gamma_{2} \equiv\frac{c\mathcal{L}_{\rm{arm}}}{4l_{\rm{arm}}}. \tag{28}\] \(T_{\rm{ITM}}\) is the power transmissivity of the input mirror. \(\mathcal{L}_{\rm{arm}}\) is the round-trip power loss of the arm cavity and \(l_{\rm{arm}}\) is the length of the arm cavity. Denoting the round-trip power loss in the PCC as \(\mathcal{L}_{\rm{PCC}}^{{}^{\prime}}\), the transfer function is proportional to: \[(\text{Output}) \propto\frac{1-\cos\delta\phi_{\rm{PCC}}(1-\mathcal{L}_{\rm{PCC}}^ {{}^{\prime}})r(\Omega)}{2}\] \[\simeq\frac{\gamma_{2}+\mathcal{L}_{\rm{PCC}}\gamma_{1}/2-i \Omega}{\gamma_{1}-i\Omega}, \tag{29}\] where \(\mathcal{L}_{\rm{PCC}}\) is the effective-total PCC loss including the round-trip phase fluctuation in the PCC \(\delta\phi_{\rm{PCC}}\): \[\mathcal{L}_{\rm{PCC}} \simeq\mathcal{L}_{\rm{PCC}}^{{}^{\prime}}+\frac{(\delta\phi_{ \rm{PCC}})^{2}}{2} \tag{30}\] \[=2(\mathcal{L}_{\rm{BS}} +\mathcal{L}_{\rm{QWP}}+\mathcal{L}_{\rm{PBS}}+T_{\rm{SPBS}}+R_{ \rm{PPBS}})\] \[+\mathcal{L}_{\rm{PCM}}+\mathcal{L}_{\rm{align}}+\mathcal{L}_{ \rm{mis}}+\frac{(\delta\phi_{\rm{PCC}})^{2}}{2}. \tag{31}\] In Eq. (30), we assume that \(\delta\phi_{\rm{PCC}}\) is so small that its cosine is approximated as: \[\cos(\delta\phi_{\rm{PCC}})\simeq 1-\frac{(\delta\phi_{\rm{PCC}})^{2}}{2}. \tag{32}\] Definitions of each term in Eq. (31) are shown in Table 3. Note that losses in the PCC optical path are doubled due to the polarization circulation, except for the PCM. The mode-mismatching due to the PCM misalignment and the Schnupp asymmetry is also counted as losses. The final term \(\delta\phi_{\rm{PCC}}\) is the length fluctuation of the PCC. Eq. 
(29) means the PCC loss generates a zero at: \[\gamma_{\rm{cut}} =\gamma_{2}+\frac{\mathcal{L}_{\rm{PCC}}\gamma_{1}}{2} \tag{33}\] \[=\frac{c}{4l_{\rm{arm}}}\left[\mathcal{L}_{\rm{arm}}+\frac{\pi \mathcal{L}_{\rm{PCC}}}{\mathcal{F}}\right], \tag{34}\] where \(\mathcal{F}\) is the finesse of the arm cavity: \[\mathcal{F}\equiv\frac{2\pi}{T_{\rm{ITM}}}. \tag{35}\] In Fig. 8, we show transfer functions of both lossless and loss-included cases. The cutoffs at low frequencies are generated by the losses. The \(\propto 1/f\) structure above the cavity pole is due to the first-order low-pass nature of the arm cavities. Note that even in the lossless case (gray line), we still see a cutoff. It is caused by the transmission of the ETM that is necessary to inject the artificially phase-modulated light. The \(\propto f\) structure in the transfer function will be observed if we realize the proposed experiment with the designed parameters. Through the measurement, we can evaluate the performance of the DRC. ## VI Discussion One of the potential issues is the long-term stability of the dispersion of the QWP. It might change due to the heat effect of the laser or environmental temperature fluctuation. Also, beam jitters might also be a source of the noise. It is necessary to test the stability of the PCC control and check how frequently the dithering control needs to be used to ensure the round trip phase of the PCC to be \(\pi\). From the perspective of practical implementation, the DRC might conflict with the lock acquisition scheme of the existing detectors. KAGRA, for example, injects auxiliary GR lasers from the center part of the interferometer. To avoid the GR leaking and resonating inside the arm cavity, the DRC sets the ITM transmissivity for the GR as small as possible. Hence it is necessary to find a compromise between them. ## VII Conclusion In this article, we propose a feasible control scheme for the PCSM using a dual-retardation waveplate and auxiliary laser. We name it DRC. The DRC makes it possible to control the PCC length and alignment. In the DRC, we can get error signals with a higher SNR than the dithering control. Also, the DRC is compatible with the BHD because we do not need the DC offset anymore after the full PCC lock is achieved. After the experimental demonstration of the DRC with rigid cavities, we will proceed to the fully-suspended systems to realize the PCSM in the future GW detectors such as the Einstein Telescope [20]. ###### Acknowledgements. We thank Stefan Danilishin, Marc Eisenmann, Kentaro Komori, and Kentaro Somiya for fruitful discussions. ## Appendix A Shot noise estimation In the case of the PDH method, the reflection power before demodulation can be written as [21]: \[P=P_{\mathrm{DC}}+D\delta l\sin\omega_{\mathrm{m}}t+\mathcal{O}(2\omega_{ \mathrm{m}}). \tag{1}\] \(\delta l\) is the length fluctuation of the cavity we want to control and \(\omega_{\mathrm{m}}\) is the frequency of sidebands generated by an EOM. \(D\) [W/m] corresponds to the slope of the error Figure 8: **Expected transfer functions for the DARM noise injection.** The red curve shows the transfer function when the PCC is assumed to have a loss. The gray curve shows the lossless case. Figure 7: **Design of an experimental setup for demonstrating the DRC.** The basic configuration is a FPMI with 15 cm arm cavities. The GR laser is phase-locked to the main IR laser and injected from the AR side of the PCM. 
signal, which is proportional to the carrier and sideband power \(P_{\rm c},P_{\rm s}\) and the imaginary part of the reflectivity \({\rm Im}[r(\omega)]\): \[D\propto\sqrt{P_{\rm c}P_{\rm s}}{\rm Im}[r(\omega)]. \tag{10}\] \(P_{\rm DC}\) is the DC power, which is the source of the shot noise. The shot noise can be written in the single-sided form as: \[S_{\rm shot}=\sqrt{2e\frac{e\eta P_{\rm DC}}{\hbar\omega_{0}}}\ [{\rm A}/\sqrt{{\rm Hz}}], \tag{11}\] where \(e\) is the elementary charge and \(\eta\) is the quantum efficiency of the photo detector [A/W]. Hence the shot-noise-equivalent length noise is: \[S_{L}=\frac{S_{\rm shot}}{D}. \tag{12}\] In the case of the DRC, one can calculate its shot noise level in the same manner. The imaginary part of the reflectivity \(r_{s\to s}\) can be expressed around \(\Phi^{{}^{\prime}}\) as: \[{\rm Im}[{\rm r}_{s\to s}(\Phi^{{}^{\prime}})]\Big{|}_{\Phi^{ \prime}=0} \simeq\left.\frac{{\rm d}{\rm Im}[r_{s\to s}(\Phi^{\prime})]}{{\rm d }\Phi^{\prime}}\right|_{\Phi^{\prime}=0}\times\delta\Phi^{{}^{\prime}} \tag{13}\] \[=\left.\frac{{\rm d}{\rm Im}[r_{s\to s}(\Phi^{\prime})]}{{\rm d }\Phi^{\prime}}\right|_{\Phi^{\prime}=0}\times\frac{4\omega_{0}\delta l_{\rm PCC }}{c}. \tag{14}\] The slope amplitude \(D^{\rm DRC}\) can be written as: \[D^{\rm DRC}\delta l_{\rm PCC} =4\sqrt{P_{c}P_{s}}\left.{\rm Im}[{\rm r}_{s\to s}(\Phi^{{}^{ \prime}})]\right|_{\Phi^{\prime}=0}\] \[=\frac{8\beta_{\rm m}\omega_{0}P_{\rm GR}}{c}\left.\frac{{\rm d} {\rm Im}[r_{s\to s}(\Phi^{\prime})]}{{\rm d}\Phi^{\prime}}\right|_{\Phi^{ \prime}=0}\delta l_{\rm PCC}\] \[=\frac{8\beta_{\rm m}\omega_{0}P_{\rm GR}\xi}{c}\delta l_{\rm PCC}, \tag{15}\] where \[\xi\equiv\left.\frac{{\rm d}{\rm Im}[r_{s\to s}(\Phi^{\prime})]}{{\rm d} \Phi^{\prime}}\right|_{\Phi^{\prime}=0}. \tag{16}\] \(\beta_{\rm m}\) is the modulation index of the EOM, \(P_{\rm GR}\) is the GR laser power. The DC power can be written as: \[P_{\rm DC}^{\rm DRC}=|r_{s\to s}(0)|^{2}P_{\rm GR}. \tag{17}\] Substituting Eq. (15) and (17), for (12), one can obtain the shot noise level of the DRC: \[S^{\rm DRC}=\sqrt{2e\frac{\epsilon\eta P_{\rm DC}^{\rm DRC}}{2\hbar\omega_{0} }}/D^{\rm DRC}. \tag{18}\] Also in the case of the dithering control, one can use the same approach as [21]. The slope amplitude \(D^{\rm Dither}\) is: \[D^{\rm Dither}\delta l_{\rm PCC} \sim 2J_{1}(\beta)P_{\rm c}{\rm Im}[1-e^{-i\delta\phi_{\rm PCC}}] \tag{19}\] \[=\frac{8\pi\beta}{\lambda_{0}}P_{c}\delta l_{\rm PCC}, \tag{20}\] where \(P_{c}\) is the carrier power leaking from the BS to the QWP by the DC offset, \(A\) is the amplitude of the PCM modulation, \(\lambda_{0}\) is the wavelength of the main laser, \(J_{n}\) is the \(n\)-th order Bessel functions and \(\beta\) is the modulation index. For the transformation from Eq. (19) to Eq. 
(20) we have used \[J_{1}(\beta) =\frac{\beta}{2}\equiv\frac{2\pi A}{\lambda_{0}},\] \[\delta\phi_{\rm PCC} =\frac{4\pi\delta l_{\rm PCC}}{\lambda_{0}}.\] \begin{table} \begin{tabular}{c c} \hline \hline Parameters & value & Note \\ \hline \(\lambda_{\rm GR}\) [nm] & 532 & Wavelength of the GR laser \\ \(P_{\rm GR}\) [mW] & 20 & GR laser intensity \\ \(T_{\rm PCM}\) & 0.01 & PCM transmissivity \\ \(T_{\rm ITM}\) [m] & \(<10\) ppm & ITM transmissivity \\ \(l_{\rm PCC}\) [m] & 0.332 & Length from the PCM to ITMY \\ \(\beta_{\rm m}\) & 0.2 & Modulation index1 \\ \(\delta\phi_{\rm rest}^{\rm GR}\) & \(2\pi\lambda_{\rm GR}/300\) & QWP retardation error for GR \\ \(\mathcal{L}_{\rm GR}\) & 3 \% & Total losses in the PCC \\ \(\mathcal{F}_{\rm GR}\) & 150 & Finesse of the PCC \\ \hline \hline \end{tabular} \end{table} Table 2: **Parameters for GR.** \begin{table} \begin{tabular}{c c} \hline \hline Parameters & value & Note \\ \hline \(\lambda_{0}\)[nm] & 1064 & Nd:YAG \\ \(P_{\rm b}\) [mW] & 50 & IR laser intensity \\ \(P_{\rm pick}\) [\(\mu\)W] & 125 & Pick-off laser intensity2 \\ \(T_{\rm ITM}\) & 0.004 & ITM transmissivity3 \\ \(T_{\rm TEM}\) & 30 ppm & ETM transmissivity \\ \(T_{\rm PCM}\) & 1 \% & PCM transmissivity \\ \(R_{\rm ITM}\) [m] & \(\infty\) & ITM radius of curvature \\ \(R_{\rm FTM}\) [m] & 1.5 & ETM radius of curvature \\ \(R_{\rm PCM}\) [m] & 1 & PCM radius of curvature \\ \(l_{\rm arm}\) [m] & 0.15 & Arm cavity length \\ \(l_{\rm michs}\) [m] & 0.075 & Length from the BS to ITMX \\ \(l_{\rm michs}\) [m] & 0.125 & Length from the BS to ITMY \\ \(l_{\rm PCC}\) [m] & 0.307 & Mean PCC length \\ \(f_{\rm m}\) [MHz] & 47.5 & RF sideband frequency3 \\ \(A\) [nm] & 0.1 & Modulation amplitude 4 \\ \(P_{\rm c}\) [\(\mu\)W] & 100 & DC offset power at the AS port5 \\ \hline \(\mathcal{F}\) & \(\sim 1500\) & Finesse \\ \(f_{c}\) & \(3.2\times 10^{5}\) [Hz] & Cavity pole \\ \(f_{\rm cut}\) & \(1.7\times 10^{4}\) [Hz] & Cutoff frequency \\ \hline \hline \end{tabular} \end{table} Table 1: **Parameters for IR used for the design of the experiment.** \(P_{\rm DC}^{\rm Dither}\) can be written as: \[P_{\rm DC}^{\rm Dither}=|1-F(\psi_{0})|^{2}P_{c}, \tag{11}\] where \(F\) is the arm cavity reflectivity: \[F(\psi_{0})=-r_{1}+\frac{t_{1}^{2}r_{1}e^{-i\psi_{0}}}{1-r_{1}r_{2}e^{-i\psi_{0}}} \tag{12}\] and \(\psi_{0}\) is the round-trip phase shift of the arm cavity for the IR. Substituting Eq. (10) and (11) for (12), one can obtain the shot noise level of the dithering control: \[S^{\rm Dither}=\sqrt{2e\frac{\epsilon\eta P_{\rm DC}^{\rm Dither}}{\hbar\omega_{ 0}}}/D^{\rm Dither}. \tag{13}\] The ratio of the two control methods can be written as: \[\frac{S_{L}^{\rm Dither}}{S_{L}^{\rm DRC}}=\frac{4\beta_{m}}{\beta}\xi\left| \frac{1-F(\psi_{0})}{r_{s\to s}(0)}\right|\sqrt{\frac{P_{\rm GR}}{P_{c}}}. \tag{14}\]
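The qualitative behaviour of the error signal in Fig. 6 can be reproduced directly from Eqs. (21)-(23). The short Python sketch below is not part of the original work; the PCM transmissivity, the retardation error of \(2\pi/300\), and the folding of all round-trip loss into \(R_{s}^{\rm GR}\) via Eq. (24) with \(R_{p}^{\rm GR}=0\) are illustrative assumptions. It evaluates \({\rm Im}[r_{s\to s}(\Phi^{\prime})]\) and its slope at the operating point for a few losses, showing how loss flattens the error signal.

```python
import numpy as np

def r_s_to_s(phi, T_pcm=0.01, delta_phi=2 * np.pi / 300, loss_s=0.0, R_p=0.0):
    """PCC reflectivity r_{s->s}(Phi') of Eq. (21) for round-trip phase phi [rad]."""
    r0 = np.sqrt(1.0 - T_pcm)   # amplitude reflectivity of the PCM (lossless PCM assumed)
    t0 = np.sqrt(T_pcm)         # amplitude transmissivity of the PCM
    R_s = 1.0 - loss_s          # s-pol power reflectivity, loss folded in via Eq. (24)
    e1, e2 = np.exp(1j * phi), np.exp(2j * phi)
    det_M = 1.0 - r0 * (R_s + R_p) * np.cos(delta_phi) * e1 + r0**2 * R_s * R_p * e2
    return -r0 + t0**2 * (R_s * np.cos(delta_phi) * e1 - r0 * R_p * R_s * e2) / det_M

phi = np.linspace(-0.05, 0.05, 1001)           # round-trip phase deviation Phi'
for loss in (0.0, 0.01, 0.03):
    err = np.imag(r_s_to_s(phi, loss_s=loss))  # PDH-like error signal (cf. Fig. 6)
    slope = np.gradient(err, phi)[phi.size // 2]
    print(f"s-pol round-trip loss {loss:.2f}: slope of Im[r] at Phi'=0 = {slope:.0f}")
```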
2307.15489
Uncertainty Quantification for Scale-Space Blob Detection
We consider the problem of blob detection for uncertain images, such as images that have to be inferred from noisy measurements. Extending recent work motivated by astronomical applications, we propose an approach that represents the uncertainty in the position and size of a blob by a region in a three-dimensional scale space. Motivated by classic tube methods such as the taut-string algorithm, these regions are obtained from level sets of the minimizer of a total variation functional within a high-dimensional tube. The resulting non-smooth optimization problem is challenging to solve, and we compare various numerical approaches for its solution and relate them to the literature on constrained total variation denoising. Finally, the proposed methodology is illustrated on numerical experiments for deconvolution and models related to astrophysics, where it is demonstrated that it allows to represent the uncertainty in the detected blobs in a precise and physically interpretable way.
Fabian Parzer, Clemens Kirisits, Otmar Scherzer
2023-07-28T11:34:43Z
http://arxiv.org/abs/2307.15489v1
# Uncertainty Quantification for Scale-Space Blob Detection ###### Abstract We consider the problem of blob detection for uncertain images, such as images that have to be inferred from noisy measurements. Extending recent work motivated by astronomical applications, we propose an approach that represents the uncertainty in the position and size of a blob by a region in a three-dimensional scale space. Motivated by classic tube methods such as the taut-string algorithm, these regions are obtained from level sets of the minimizer of a total variation functional within a high-dimensional tube. The resulting non-smooth optimization problem is challenging to solve, and we compare various numerical approaches for its solution and relate them to the literature on constrained total variation denoising. Finally, the proposed methodology is illustrated on numerical experiments for deconvolution and models related to astrophysics, where it is demonstrated that it allows to represent the uncertainty in the detected blobs in a precise and physically interpretable way. **Keywords: uncertainty quantification, blob detection, scale space, total variation regularization** ## 1 Introduction Blob detection is a classic task in computer vision. Here, we mean by a _blob_ a round structure with a roughly Gaussian intensity profile. In order to simultaneously estimate the position and size of such blobs, detection methods often rely on the scale-space representation of an image, which represents the image at different levels of smoothing, allowing to distinguish low-scale from high-scale structures. This approach is commonly refered to as scale-space blob detection. The most well-known example of this is the Laplacian-of-Gaussians (LoG) method [1], which is based on the premise that, for the Gaussian scale-space representation, the local extrema of the normalized Laplacian are good indicators for the position and size of a blob. In many applications - in particular in astronomy - the image of interest is not known a-priori but has to be reconstructed from noisy measurements. This means that the image comes with significant uncertainties, and it is important to take these uncertainties into account when performing blob detection. A particular example from astronomy is integrated-light stellar population recovery [2], where the problem is to detect stellar populations as blobs in a two-dimensional mass density that is reconstructed from an observed spectrum. For this problem, the present authors with co-authors have developed an uncertainty-aware version of the Laplacian-of-Gaussians blob detection method, ULoG [3]. ULoG was formulated as a tube method that detects significant blobs by computing a tube-shaped confidence region for the uncertain scale-space representation, and then solving a minimization problem designed to obtain a representative that exhibits the "least amount of blobs". While the results of the ULoG method were satisfying for this particular application, it only yielded a very rudimentary representation of the uncertainty with respect to the position and size of a blob. In this paper, we propose an improved method that aims to resolve this issue. The basic idea is to represent the uncertainty in a blob by a region in scale space which represents the possible variation in position and size. To obtain these regions, we formulate an optimization problem that enforces solutions with piecewise constant normalized Laplacian, from which the desired blob regions can be extracted as level sets. 
The formulation of the optimization problem uses ideas from total variation (TV) regularization, which is why we refer to the novel method as TV-ULoG. ### Contributions * We introduce the TV-ULoG method for blob detection with uncertainties. The proposed approach is flexible and can be adapted to a wide range of applications, in particular Bayesian imaging. We also discuss connections to the taut-string algorithm and constrained total-variation denoising. * We extensively discuss the numerical treatment of the resulting non-smooth, bound-constrained convex optimization problem. We compare approaches based on smoothing the dual or the primal problem, and an interior-point approach based on reformulating the optimization problem as SOCP. * Finally, we illustrate the TV-ULoG method on numerical test cases for astronomical imaging and deconvolution. ### Organization of the paper The paper is organized as follows: * In the remainder of this section, we review related work (Section 1.3) and introduce notation that is used throughout the paper (Section 1.4). * In Section 2, we recall the necessary prerequisites on scale-space blob detection. We focus on the Gaussian scale-space representation and the Laplacian-of-Gaussians blob detection method, which are fundamental for the rest of the paper. * In Section 3, we describe our tube-based approach for uncertainty-aware blob detection. After discussing scale-space aspects of uncertainty quantification and the ULoG method from our previous work, we derive the novel TV-ULoG method. * In Section 4, we discuss in detail the numerical implementation of TV-ULoG. The majority of the section is devoted to the solution of the resulting optimization problem. * In Section 5, we demonstrate the method on two numerical test cases. We also use these test cases to evaluate the performance of the proposed optimization methods. * The paper ends with a conclusion in Section 6. ### Related work We have based our work on the Laplacian-of-Gaussians method for scale-space blob detection [1] since it is well-known and its mathematical formulation is simple, making it easier to extend it to the case of uncertain images. Alternative methods for blob detection are discussed e.g. in the reviews [4, 5, 6]. Some general references on scale-space methods for image processing and computer vision are [7, 8, 9, 10]. Our work can be seen as an instance of a statistical scale space method [11], but this is not a focus of this paper. In particular the works [12, 13, 14, 15] are similar since they also study uncertain signals in scale space. However, our approach differs through its formulation as a tube method and the specific focus on blob detection. Another related line of work formulates significance tests for image structures as convex optimization problems [16, 17, 18]. This methodology relies on concentration inequalities [19] and is computationally very efficient, but does not automatically detect the position and scale of structures since the presence of a structure must be formulated as user-specified hypothesis. To our knowledge, before [3], the specific problem of uncertainty-aware blob detection has not been addressed previously. ### Notation * For \(n\in\mathbb{N}\), we define the discrete range \([n]:=\{1,\ldots,n\}\). * \(\mathbb{R}_{+}:=[0,\infty)\) denotes the non-negative real numbers * For a vector \(\mathbf{x}\in\mathbb{R}^{n}\), its Euclidean norm is denoted by \(\|\mathbf{x}\|:=\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\). 
* The convolution of two functions is denoted by \(f*g(\mathbf{x}):=\int f(\mathbf{y})g(\mathbf{x}-\mathbf{y})\,\mathrm{d}\mathbf{y}\). * The spatial Laplace operator is denoted by \(\Delta:=\partial_{x_{1}}^{2}+\partial_{x_{2}}^{2}\). * The probability distribution of a random element \(U\) is denoted by \(\mathbb{P}_{U}\)[20]. Its corresponding density function is denoted by \(p_{U}\), if it exists. Given another random element \(V\), the conditional distribution of \(U\), given \(V=v\), is denoted by \(\mathbb{P}_{U|V}(\cdot|v)\), with corresponding conditional density \(p_{U|V}(\cdot|v)\) (see also [21]). * Given functions \(u^{\mathrm{low}},u^{\mathrm{upp}}:D\to\mathbb{R}\) on a domain \(D\subset\mathbb{R}^{n}\), with \(u^{\mathrm{low}}\leq u^{\mathrm{upp}}\), we call the set of functions \[[u^{\mathrm{low}},u^{\mathrm{upp}}]:=\{u:D\to\mathbb{R}\ :\] (1) \[u^{\mathrm{low}}(\mathbf{x})\leq u(\mathbf{x})\leq u^{\mathrm{upp}}(\mathbf{ x})\ \forall\mathbf{x}\in D\}\] a _tube_. Similarly, given two vectors \(\mathbf{u}^{\mathrm{low}},\mathbf{u}^{\mathrm{upp}}\in\mathbb{R}^{n}\), we call the set of vectors \[[\mathbf{u}^{\mathrm{low}},\mathbf{u}^{\mathrm{upp}}]:=\{\mathbf{u}\in\mathbb{R}^{n}\ :\ u^{\mathrm{low}}_{i}\leq u _{i}\leq u^{\mathrm{upp}}_{i}\\ \forall i\in[n]\}\] (2) a _discrete tube_. This definition is straightforward to extend to higher-dimensional objects, such as discrete images. * We denote the characteristic function of a set \(C\) by \[\chi_{C}(\mathbf{v}):=\begin{cases}0,&\text{if }\mathbf{v}\in C,\\ \infty,&\text{otherwise}.\end{cases}\] * We let \(\mathbf{0}_{n}\in\mathbb{R}^{n}\) denote the zero-vector in \(\mathbb{R}^{n}\) and \(\mathbf{1}_{n}\in\mathbb{R}^{n}\) denote the vector with all entries equal to \(1\). Similarly, \(\mathbf{0}_{m\times n}\in\mathbb{R}^{m\times n}\) denotes the zero matrix and \(\mathbf{1}_{m\times n}\in\mathbb{R}^{m\times n}\) denotes the matrix with all entries equal to \(1\). Also, \(\mathbf{e}_{i}^{n}\in\mathbb{R}^{n}\) denotes the \(i\)-th basis vector in \(\mathbb{R}^{n}\), with entries \((\mathbf{e}_{i}^{n})_{j}=\delta_{ij}\). * Given a nonempty closed convex set \(C\subset\mathbb{R}^{d}\), \(P_{C}(x):=\operatorname*{argmin}_{\mathbf{y}\in C}\|\mathbf{y}-\mathbf{x}\|\) denotes the projection on \(C\). * For a set \(S\), \(2^{S}\) denotes its power set. ## 2 Scale-space blob detection In this section, we review the scale-space approach to blob detection that underlies the rest of this paper. We focus on the Gaussian scale-space representation, which we introduce in Section 2.1. Then, we review the classic Laplacian-of-Gaussians method for blob detection in Section 2.2. ### Scale-space representations In the computer vision literature, the mathematical theory of describing images at different scales is known as _scale space theory_[7]. A scale space representation of an image \(f:\mathbb{R}^{2}\to\mathbb{R}\) is a function \(u:\mathbb{R}^{2}\times\mathbb{R}_{+}\to\mathbb{R}\) which depends on an additional third parameter \(t\) that represents physical scale. The _Gaussian scale-space representation_ is the most studied example, due to its simple formulation and the fact that it is the unique linear scale space representation that satisfies a series of intuitive axioms that formalize the notion of scale [22, 23]. 
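As a quick numerical illustration (a sketch assuming NumPy and SciPy are available; it is not code from the paper), the Gaussian scale-space representation defined by the diffusion equation (3) below can be computed by smoothing the image with a Gaussian of standard deviation \(\sqrt{t}\); the scale-normalized Laplacian of Section 2.2 then acts as the blob detector. Reflective boundary handling is used here as a stand-in for the Neumann boundary condition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def scale_space(f, scales):
    """Gaussian scale-space u(., t): smoothing with standard deviation sqrt(t)."""
    return {t: gaussian_filter(f, sigma=np.sqrt(t), mode="reflect") for t in scales}

def normalized_laplacian(u, t):
    """Scale-normalized Laplacian t * Laplace(u), cf. Eq. (5) below."""
    return t * laplace(u, mode="reflect")

# Prototypical Gaussian blob of size s = 4 at the image centre (cf. Example 1 below).
s, n = 4.0, 65
yy, xx = np.mgrid[:n, :n] - n // 2
f = np.exp(-(xx**2 + yy**2) / (2 * s)) / (2 * np.pi * s)
u = scale_space(f, scales=[1, 2, 4, 8, 16])
t_best = min(u, key=lambda t: normalized_laplacian(u[t], t).min())
print("strongest (most negative) blob response at scale t =", t_best)  # expected: t = 4
```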
The Gaussian scale-space representation of an image \(f:D\to\mathbb{R}\) on a domain \(D\subset\mathbb{R}^{2}\) is defined as the solution \(u:D\times[0,\infty)\to\mathbb{R}\) of the diffusion equation \[\partial_{t}u(\mathbf{x},t) =\frac{1}{2}\Delta u(\mathbf{x},t), (\mathbf{x},t)\in D\times(0,\infty), \tag{3}\] \[u(\mathbf{x},0) =f(\mathbf{x}), \mathbf{x}\in D,\] \[\partial_{\nu}u(\mathbf{x},t) =0, (\mathbf{x},t)\in\partial D\times(0,\infty).\] Here, we imposed Neumann boundary conditions, following [24]. In the following, we will denote with \(\Phi\) the solution operator of (3) that maps an image \(f\) to its scale-space representation \(u\). One can show that \(\Phi\) is well-defined under suitable assumptions (see e.g. [25]). For the rest of this paper, we will not consider other scale-space representations. In particular, we will always mean the representation \(u\) defined in (3) when we write "the scale-space representation" of an image \(f\). ### Blob detection The scale-space representation of an image is often used to detect the position and size of features of interest. A particular example of this is the well-known Laplacian-of-Gaussians method for blob detection [1]. It is a special case of the differential-invariants approach for feature detection which we introduce next. #### 2.2.1 Feature detection from differential invariants Image features often correspond well to local extrema of _scale-normalized derivatives_ of the image's scale-space representation \(u\), i.e. combinations of the scaled partial derivatives \[\tilde{\partial}_{x_{i}}u(\mathbf{x},t):=\sqrt{t}\partial_{x_{i}}u(\mathbf{x},t),\quad i =1,2 \tag{4}\] (cf. [5]). The scale-normalization is necessary to achieve an intuitive scale-invariance property: A feature of scale \(t\) in the image \(f\) should correspond to a feature of scale \(s\cdot t\) in the rescaled image \(f_{s}(\mathbf{x}):=f(\mathbf{x}/\sqrt{s})\). E.g. zooming in by a factor 2 increases the scale of all features by a factor 4. #### 2.2.2 The Laplacians-of-Gaussians method The Laplacian-of-Gaussians method is a special case of the methodology described in Section 2.2.1 for blob detection. It uses the local minimizers of the scale-normalized Laplacian of \(u\) as indicators for blobs in an image, where the scale-normalized Laplacian is given by \[\tilde{\Delta}u(\mathbf{x},t):=(\tilde{\partial}_{x_{1}}^{2}+\tilde{\partial}_{x_ {2}}^{2})u(\mathbf{x},t)=t\Delta u(\mathbf{x},t), \tag{5}\] That is, a local minimizer or maximizer of \(\tilde{\Delta}u\) at \((\mathbf{x},t)\in D\times\mathbb{R}_{+}\) indicates the presence of a blob-like shape with center \(\mathbf{x}\) and scale \(t\). **Example 1**.: _To understand what is meant by "blob-like shape", let us define the prototypical blob with center \(\mathbf{m}\in\mathbb{R}^{2}\) and size \(s\) as the symmetrical Gaussian_ \[f(\mathbf{x}):=\frac{1}{2\pi s}\exp\left(-\frac{\left\|\mathbf{x}-\mathbf{m}\right\|^{2}} {2s}\right),\quad\mathbf{x}\in\mathbb{R}^{2}.\] _Its scale-space representation is_ \[u(\mathbf{x},t)=\frac{1}{2\pi(s+t)}\exp\left(-\frac{\left\|\mathbf{x}-\mathbf{m}\right\|^{ 2}}{2(s+t)}\right),\] _and the scale-normalized Laplacian is_ \[\tilde{\Delta}u(\mathbf{x},t)=t\frac{\left\|\mathbf{x}-\mathbf{m}\right\|^{2}-2(s+t)}{2(s +t)^{2}}u(\mathbf{x},t),\] _which has a unique local minimum at \((\mathbf{m},s)\). This means that the position and size of the prototypical Gaussian blob \(f\) are exactly recovered from the local minimizer of \(\tilde{\Delta}u\). 
Note that the normalization in (5) is important for detecting the scale, since the un-normalized Laplacian \(\Delta u\) has a unique local minimum at \((\mathbf{m},0)\), which does not indicate the blob size. The relation of the Laplace operator to blob detection is also discussed in [26, 27]._ The prototypical Gaussian blob also motivates the common visualization of the results of the Laplacian-of-Gaussians method, where a detected blob \((\mathbf{x},t)\) is visualized by a circle with center \(\mathbf{x}\) and radius proportional to \(\sqrt{t}\). Usually, the radius \(\sqrt{t}\) is used for one-dimensional signals and the radius \(\sqrt{2t}\) is used for two-dimensional signals (images). ## 3 Tube-based uncertainty quantification for blob detection The Laplacian-of-Gaussians method described in Section2.2 detects blobs from local minimizers of \(\tilde{\Delta}u\), where \(u\) is the Gaussian scale-space representation of \(f\) given by (3). The purpose of this paper is to extend this methodology to the case where the image \(f\) is uncertain, for example when it has to be estimated from noisy measurements. ### Incorporating uncertainty into scale-space methods Consider the problem of recovering an image of interest \(f^{*}:D\to\mathbb{R}\) (called the _ground truth_) from a noisy measurement \(y\) given by \[y=\mathcal{G}(f^{*})+w, \tag{6}\] where \(\mathcal{G}\) is an operator that represents the measurement process, and \(w\) is noise. The presence of the noise implies that any estimate of the image \(f^{*}\) comes with uncertainties. In the Bayesian approach [28, 29] to imaging, these uncertainties are taken into account by modelling \(f^{*}\), \(y\) and \(w\) as realizations of random quantities \(F\), \(Y\) and \(W\) that are related by \[Y=\mathcal{G}(F)+W. \tag{7}\] The assumed distribution \(\mathbb{P}_{F}\) of the random image \(F\) is called the _prior_, since it encodes a-priori assumptions on the unknown image. Using (7) and statistical assumptions on \(W\), one can define a likelihood in the form of a conditional probability distribution \(\mathbb{P}_{Y|F}\). Recall that, for given \(f\), \(\mathbb{P}_{Y|F}(\cdot|f)\) is a probability measure that represents the distribution of \(Y\) given \(F=f\). For the construction of conditional probability distributions on abstract spaces (e.g. infinite-dimensional function spaces), see for example [20, chapter 6] or [30, chapter 8.3]. Together, the prior and the likelihood determine the so-called _posterior_ distribution \(\mathbb{P}_{F|Y}\), which quantifies the uncertainty with respect to \(F\), conditional on observing \(Y\), through Bayes' rule (see e.g. [31, 32] for further reference). Treating the image of interest as random means that its scale-space representation also needs to be modelled as random object. Let \(\Phi\) denote the solution operator of the diffusion equation (3) that maps an image \(f\) to its corresponding scale-space representation \(u=\Phi f\). Then the scale-space representation of the random image \(F\) is the random function \(U=\Phi F\). The posterior distribution \(\mathbb{P}_{U|Y}\) of \(U\) is then determined by the posterior distribution \(\mathbb{P}_{F|Y}\) of \(F\) through the usual transformation rules: Given an observation \(Y=y\), the posterior probability that \(U\) lies in a set of functions \(A\) is given by \[\mathbb{P}_{U|Y}(A\ |\ y) =\mathbb{P}_{\Phi F|Y}(A\ |\ y)\] \[=\mathbb{P}_{F|Y}(\Phi^{-1}(A)\ |\ y). 
\tag{8}\] The problem of uncertainty-aware blob detection can then be rephrased as finding local minimizers of \(\tilde{\Delta}U\) for uncertain \(U\). Developing a practical method to solve this problem requires a suitable representation of the uncertainty encoded in the abstract posterior distribution \(\mathbb{P}_{U|Y}\). In this paper, we assume that the uncertainty with respect to \(U\) is represented by a _credible scale-space tube_. That is, we assume knowledge of functions \(u^{\text{low}},u^{\text{upp}}:D\times\mathbb{R}_{+}\to\mathbb{R}\) such that \[\mathbb{P}_{U|Y}([u^{\text{low}},u^{\text{upp}}]\ |\ y)\geq 1-\alpha, \tag{9}\] for a small credibility parameter \(\alpha\in(0,1)\). We restrict ourselves to tube-shaped regions for three main reasons: First, such tubes can be obtained in many applications (see Remark 1). Second, this choice leads to a relatively simple formulation of our method as a bound-constrained convex optimization problem (see Section 3.4.2 below). Finally, it is motivated by existing tube methods from density estimation, which we quickly review in Section 3.2 below. **Remark 1**.: _Since \(U\) depends linearly on \(F\), it is often straightforward to compute tubes that satisfy (9) using sampling-based inference (e.g. Markov chain Monte Carlo (MCMC) [33]) or approaches based on analytic approximations (e.g. variational inference [34]). We illustrate this in Appendix A, where we present a method for estimating the scale-space tube using MCMC samples. This method is used in the numerical examples of Section 5._ ### The scale-uncertainty tradeoff The structure of the tube \([u^{\mathrm{low}},u^{\mathrm{upp}}]\) will in general represent a tradeoff between scale and uncertainty in the image \(F\): At lower scales, the uncertainty is higher, since small-scale features are more difficult to detect from noisy observations. At higher scales, this local variability is filtered out, so the uncertainty decreases, but the ability to resolve finer details is lost. This phenomenon is a special case of the fundamental bias-variance tradeoff that applies to uncertain signals in general [35]. In the language of image processing, it corresponds to the simple intuition that coarse structures are easier to identify than fine details, given limited data. In particular, the bounds \(u^{\mathrm{low}}\) and \(u^{\mathrm{upp}}\) are in general not scale-space representations of corresponding images \(f^{\mathrm{low}},f^{\mathrm{upp}}\). This was also demonstrated in the previous work [3]. ### Tube methods in density estimation The task of estimating the local extrema of an uncertain signal has previously been studied in density estimation, where it is addressed using tube methods [36, 37, 38, 39]. For an unknown density \(g:D\to\mathbb{R}\), one considers a tube \([g^{\mathrm{low}},g^{\mathrm{upp}}]\) that is typically constructed as a corridor of fixed width around a noisy measurement of the density. The minimizer \(\bar{g}\in[g^{\mathrm{low}},g^{\mathrm{upp}}]\) of a tube-constrained optimization problem \[\min_{g}\ \mathcal{J}(g)\quad\mathrm{s.\ t.}\quad g^{\mathrm{low}}\leq g\leq g^{\mathrm{upp}}\] then serves as the representative with the "smallest amount of features" among all signals in the tube, where the "amount of features" is encoded in the choice of the cost function \(\mathcal{J}\). A prominent example is the one-dimensional taut-string algorithm [40, 41]. 
It uses the choice \(\mathcal{J}(g)=\int\sqrt{1+|g^{\prime}|^{2}}\), which is known to yield an estimate \(\bar{g}\) whose derivative \(\bar{g}^{\prime}\) is piecewise-constant and minimizes the number of local extrema amongst all functions with anti-derivative in the given tube [42]. ### Review of the ULoG method Motivated by these classic tube methods for density estimation, we define a suitable cost function \(\mathcal{J}(u)\) that serves as a proxy for the number of local extrema of \(\tilde{\Delta}u\), and then obtain a desired representative \(\bar{u}\) as the minimizer of the constrained optimization problem \[\min_{u} \mathcal{J}(u) \tag{10}\] \[\mathrm{s.\ t.} u^{\mathrm{low}}\leq u\leq u^{\mathrm{upp}}\ \mathrm{on}\ D\times\mathbb{R}_{+}.\] In previous work [3] we used the cost function \[\mathcal{J}(u)=\int_{D\times\mathbb{R}_{+}}(\tilde{\Delta}u)^{2}. \tag{11}\] The main difference to the method proposed in the present paper lies in the energy function: the new cost introduced in Section 3.4.2 also takes derivatives with respect to \(t\) into account (see (14) below), whereas with the quadratic cost (11) the optimization problem (10) decouples across scales (i.e. in the diffusion time \(t\)), leading to a family of bound-constrained linear least-squares problems that can be solved independently. In analogy to the Laplacian-of-Gaussians method, we then identified "significant blobs" as the local minimizers of \(\tilde{\Delta}\bar{u}\), where \(\bar{u}\) is the solution of (10) (see also Example 1). The resulting method was correspondingly called the "Uncertainty-aware Laplacian-of-Gaussians" (ULoG) method. The ULoG method showed satisfactory results in simulations, where minimizers of (10) with \(\mathcal{J}\) given by (11) typically exhibited a scale-normalized Laplacian \(\tilde{\Delta}\bar{u}\) that attained its local minima at distinct points which corresponded very well to significant blobs in the ground truth image (cf. [3]). However, the choice (11) was motivated more by computational simplicity than by theoretical considerations. Furthermore, a main limitation of this approach is that it only provides a limited view of the uncertainty of scale-space blobs, since the uncertainty with respect to a blob is represented by a single point \((\mathbf{x},t)\in D\times\mathbb{R}_{+}\). The scale parameter \(t\) is difficult to interpret, since it is related both to the expected physical size of the blob and to the uncertainty with respect to its center. That is, a large value of \(t\) can correspond either to the presence of a large, certain blob in \(f\) or to a small one with uncertain position. ### A total variation-based approach A remedy is to instead represent the uncertainty with respect to a blob by a three-dimensional connected _region_ in \(D\times\mathbb{R}_{+}\) that contains the uncertain blob with high probability. The geometry of this region then provides a more nuanced view of the possible variation in position and size. We again determine the desired regions using a tube-based approach. To this end, we consider a cost function that leads to minimizers of (10) with _piecewise-constant_ normalized Laplacian. In that case, the local minima of \(\tilde{\Delta}\bar{u}\) are attained on flat connected regions which can easily be extracted using thresholding. As cost function, we propose to use the total variation of \(\tilde{\Delta}u\) [43]. This can be motivated by the fact that in one dimension, minimizing the total variation of \(u^{\prime}\) within the tube \([u^{\mathrm{low}},u^{\mathrm{upp}}]\) leads to an estimate with properties very similar to those of the mode-minimizing taut-string estimate [44]. 
On the other hand, this approach is consistent with insights from image processing, where total variation regularization is a common tool to achieve piecewise-constant reconstructions [45]. #### 3.4.1 Outline of the method We start with a high-level description of the proposed method (see also the schematic overview in Figure 1). 1. We assume that we are given a prior distribution \(\mathbb{P}_{F}\) for the image of interest and a likelihood \(\mathbb{P}_{Y|F}\) (determined by the forward model (7) and statistical assumptions on the noise). For a concrete observation \(Y=y\), these determine a posterior distribution \(\mathbb{P}_{F|Y}(\cdot|y)\) through Bayes' rule (see Section 3.1), which in turn determines a posterior distribution \(\mathbb{P}_{U|Y}(\cdot|y)\) for the uncertain scale-space representation \(U\) (see (8)). 2. For a given credibility level \(\alpha\in(0,1)\), we compute a credible scale-space tube \([u^{\mathrm{low}},u^{\mathrm{upp}}]\) that satisfies (9), e.g. using the MCMC-based method described in Appendix A. 3. Next, we solve the tube-constrained total variation-based optimization problem formulated in Section 3.4.2 below. This yields a minimizer \(\bar{u}\in[u^{\mathrm{low}},u^{\mathrm{upp}}]\). 4. From the computed minimizer \(\bar{u}\), we extract the regions in \(D\times\mathbb{R}_{+}\) where \(\tilde{\Delta}\bar{u}\) attains its local minima. In practice this is done using the thresholding procedure described in Section 4.3. 5. The extracted blob regions can now be visualized using the method outlined in Section 4.4.

Figure 1: Overview of the proposed approach

Since this approach can be seen as a modification of the previous ULoG method (Section 3.3), we will refer to it as TV-ULoG. #### 3.4.2 Formulation of the optimization problem To prepare the precise mathematical formulation of the resulting optimization problem, we define the scale-normalized total variation of a function \(u:D\times\mathbb{R}_{+}\to\mathbb{R}\) by \[\mathrm{\tilde{TV}}(u):=\int_{D\times\mathbb{R}_{+}}\left\|\tilde{\nabla}_{\mathbf{x},t}u\right\|, \tag{12}\] where \[\tilde{\nabla}_{\mathbf{x},t}:=\begin{bmatrix}\tilde{\partial}_{x_{1}}\\ \tilde{\partial}_{x_{2}}\\ \tilde{\partial}_{t}\end{bmatrix} \tag{13}\] is the scale-normalized gradient operator. Here, \(\tilde{\partial}_{x_{1}}\) and \(\tilde{\partial}_{x_{2}}\) are the scale-normalized spatial derivatives defined in (4), while \(\tilde{\partial}_{t}\) is defined as \[\tilde{\partial}_{t}u(\mathbf{x},t):=t\partial_{t}u(\mathbf{x},t).\] Following the discussion at the start of this section, we suggest to use the choice \(\mathcal{J}(u)=\mathrm{\tilde{TV}}(\tilde{\Delta}u)\) in (10). To allow for a finite-difference discretization of the involved differential operators, we have to assume suitable boundary conditions. Our particular choice of Neumann boundary conditions on \(u\) and \(\tilde{\Delta}u\) was mostly motivated by ease of implementation (see Remark 3). 
In summary, we arrive at the following formulation: \[\begin{split}\min_{u}&\quad\mathrm{\tilde{TV}}(\tilde{\Delta}u)\\ \mathrm{s.\ t.}&\quad u^{\mathrm{low}}\leq u\leq u^{\mathrm{upp}}\text{ on }D\times\mathbb{R}_{+},\\ &\quad\partial_{\mathbf{\nu}}u(\mathbf{x},t)=0\text{ on }\partial D\times(0,\infty),\\ &\quad\partial_{\mathbf{\nu}}\tilde{\Delta}u(\mathbf{x},t)=0\text{ on }\partial D\times(0,\infty).\end{split} \tag{14}\] In Section 4.2, we discuss the discretization of (14) and consider various optimization algorithms for finding a minimizer of the resulting non-smooth convex optimization problem. Extraction and visualization of the desired regions from the computed minimizer are described in Section 4.3 and Section 4.4. **Remark 2**.: _The scale-normalization in (13) is necessary to achieve a scale-invariance property analogous to the one described in Section 2.2.1: If \(\bar{u}_{s}\) is a minimizer of the scaled problem_ \[\begin{split}\min_{u_{s}}&\quad\mathrm{\tilde{TV}}(\tilde{\Delta}u_{s})\\ \mathrm{s.\ t.}&\quad u_{s}^{\mathrm{low}}\leq u_{s}\leq u_{s}^{\mathrm{upp}}\text{ on }D\times\mathbb{R}_{+},\\ &\quad\partial_{\mathbf{\nu}}u_{s}(\mathbf{x},t)=0\text{ on }\partial D\times\mathbb{R}_{+},\end{split}\] _where_ \[u_{s}^{\mathrm{low}}(\mathbf{x},t)=u^{\mathrm{low}}(\sqrt{s}\mathbf{x},st),\] \[u_{s}^{\mathrm{upp}}(\mathbf{x},t)=u^{\mathrm{upp}}(\sqrt{s}\mathbf{x},st),\] _then \(\bar{u}\) given by_ \[\bar{u}(\mathbf{x},t)=\bar{u}_{s}(\mathbf{x}/\sqrt{s},t/s)\qquad\text{for all }(\mathbf{x},t)\in D\times\mathbb{R}_{+}\] _is a minimizer of the original problem (14) and vice versa. See also [1] for more motivation behind such scaling properties._ **Remark 3**.: _The Neumann boundary condition on \(u\) in (14) is motivated by the definition of the scale-space representation (3). The motivation for the second Neumann boundary condition on \(\tilde{\Delta}u\) is that it leads to the usual formula for the discrete total variation when combined with a forward-difference approximation (see Section 4.2)._ ## 4 Numerical implementation In this section we discuss how the TV-ULoG method is implemented in practice. In Section 4.1, we discuss the discretization of the optimization problem (14). Then, we present methods to solve the resulting discrete optimization problem in Section 4.2. Section 4.3 describes how the desired blob regions can be extracted from the computed minimizer, while Section 4.4 outlines possible visualizations of these regions. ### Discrete scale-space representation To discuss the numerical implementation of the TV-ULoG method, we assume that the image domain \(D\subset\mathbb{R}^{2}\) is rectangular and has been discretized into a grid \((\mathbf{x}_{ij})_{i\in[N_{1}],j\in[N_{2}]}\), with uniform grid size \(h_{1}>0\) in \(x_{1}\)-direction and uniform grid size \(h_{2}>0\) in \(x_{2}\)-direction, such that the discrete image of interest is given by a matrix \(\mathbf{f}\in\mathbb{R}^{N_{1}\times N_{2}}\). For the scale discretization, it was suggested in [1] to use discrete scales \(0<t_{1}<\ldots<t_{K}\) that are exponentially increasing, i.e. \[t_{k+1}=bt_{k},\quad k\in[K-1] \tag{15}\] for some \(b>1\). 
The discrete scale-space representation is then defined through a suitable discretization \(\mathbf{\Phi}:\mathbb{R}^{N_{1}\times N_{2}}\to\mathbb{R}^{N_{1}\times N_{2}\times K}\) of the solution operator \(\Phi\) of the diffusion equation (3). Then, the discrete scale-space representation of \(\mathbf{f}\) is given by the three-dimensional array \[\mathbf{u}:=\mathbf{\Phi}\mathbf{f}\in\mathbb{R}^{N_{1}\times N_{2}\times K}. \tag{16}\] In practice, the solution operator \(\mathbf{\Phi}\) is often implemented as a convolution with a suitable discrete convolution kernel (see e.g. [9]). Following the discussion in Section 3.1, we then assume access to a credible scale-space tube \([\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\) for the discrete scale-space representation \(\mathbf{U}=\mathbf{\Phi}\mathbf{F}\), such that, analogously to (9), there holds \[\mathbb{P}_{\mathbf{U}|\mathbf{Y}}([\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\ |\ \mathbf{y})\geq 1-\alpha,\] for given \(\alpha\in(0,1).\) How such a tube can be estimated in practice from MCMC samples is described in Appendix A. In Section 4.2, we present how, given \([\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\), the non-smooth optimization problem (14) can be solved numerically after discretization. Then, in Section 4.3, we describe a procedure that extracts the desired blob regions in scale space from the computed minimizer in a way that is robust against numerical errors. In Section 4.4, we discuss visualizations of these regions that meaningfully represent the uncertainty of the scale-space blobs. ### Numerical treatment of the optimization problem In order to solve the optimization problem (14) numerically, we have to define suitable discretizations for the differential operators in the objective. To discretize the scale-normalized Laplacian (5), we use the common central-difference scheme, i.e. we define \[(\tilde{\mathbf{\Delta}}\mathbf{u})_{i,j,k}:=t_{k}\left(\frac{u_{i+1,j,k}-2u_{i,j,k}+u_{i-1,j,k}}{h_{1}^{2}}+\frac{u_{i,j+1,k}-2u_{i,j,k}+u_{i,j-1,k}}{h_{2}^{2}}\right), \tag{17}\] where we mirror \(u\) at the boundaries of the index range, i.e. we set \[\begin{split} u_{0,j,k}&:=u_{2,j,k},\quad u_{N_{1}+1,j,k}:=u_{N_{1}-1,j,k},\\ u_{i,0,k}&:=u_{i,2,k},\quad u_{i,N_{2}+1,k}:=u_{i,N_{2}-1,k}\end{split} \tag{18}\] for all \((i,j,k)\in[N_{1}]\times[N_{2}]\times[K].\) This choice implements the Neumann boundary condition in (3) (see example 5.49 in [46]). For the total variation (12), we use an isotropic discretization. To this end, we define the forward differences in \(x_{1}\)- and \(x_{2}\)-direction by \[(\tilde{\mathbf{\nabla}}_{x_{1}}\mathbf{u})_{i,j,k}:=t_{k}^{1/2}\frac{u_{i+1,j,k}-u_{i,j,k}}{h_{1}},\qquad(\tilde{\mathbf{\nabla}}_{x_{2}}\mathbf{u})_{i,j,k}:=t_{k}^{1/2}\frac{u_{i,j+1,k}-u_{i,j,k}}{h_{2}},\] where we formally define \[\begin{split} u_{N_{1}+1,j,k}&:=u_{N_{1},j,k},\quad u_{i,N_{2}+1,k}:=u_{i,N_{2},k},\\ u_{i,j,K+1}&:=u_{i,j,K},\end{split}\] for all \((i,j,k)\in[N_{1}]\times[N_{2}]\times[K].\) Similar to (18), this choice implements the Neumann boundary condition on \(\tilde{\Delta}u\). 
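As a concrete illustration of the constructions introduced so far, the discrete scale-space representation (16) and the scale-normalized Laplacian (17)-(18) can be sketched in a few lines of Python. This is only an illustration using `scipy.ndimage` (the factor \(1/2\) in the diffusion equation (3) makes the Gaussian kernel standard deviation equal to \(\sqrt{t}\)); the grid and scale parameters are placeholders, and this is not the implementation from the accompanying repository.

```
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(f, t_min=1.0, t_max=900.0, K=16):
    """Discrete scale-space representation u[:, :, k] = (Phi f)(., t_k), cf. (15)-(16)."""
    b = (t_max / t_min) ** (1.0 / (K - 1))
    scales = t_min * b ** np.arange(K)                 # exponential grid t_{k+1} = b t_k
    u = np.stack([gaussian_filter(f, sigma=np.sqrt(t), mode="reflect")
                  for t in scales], axis=-1)
    return u, scales

def normalized_laplacian(u, scales, h1=1.0, h2=1.0):
    """Scale-normalized Laplacian (17), with mirrored boundary values as in (18)."""
    out = np.empty_like(u)
    for k, t in enumerate(scales):
        uk = u[:, :, k]
        up = np.pad(uk, 1, mode="reflect")             # ghost values u_0 := u_2 etc.
        d11 = (up[2:, 1:-1] - 2.0 * uk + up[:-2, 1:-1]) / h1**2
        d22 = (up[1:-1, 2:] - 2.0 * uk + up[1:-1, :-2]) / h2**2
        out[:, :, k] = t * (d11 + d22)
    return out
```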
For the non-uniform scale grid (15), the forward-difference approximation of the scale-normalized derivative \(\tilde{\partial}_{t}u(\mathbf{x},t)=t\partial_{t}u(\mathbf{x},t)\) is \[(\tilde{\mathbf{\nabla}}_{t}\mathbf{u})_{i,j,k}=t_{k}\frac{u_{i,j,k+1}-u_{i,j,k}}{t_{k+1}-t_{k}}=\frac{1}{b-1}(u_{i,j,k+1}-u_{i,j,k}).\] We combine the forward difference approximations in a discrete scale-space gradient \(\tilde{\mathbf{\nabla}}_{\mathbf{x},t}:\mathbb{R}^{N_{1}\times N_{2}\times K}\to\mathbb{R}^{N_{1}\times N_{2}\times K\times 3}\), given by \[(\tilde{\mathbf{\nabla}}_{\mathbf{x},t}\mathbf{u})_{i,j,k}:=\begin{bmatrix}(\tilde{\mathbf{\nabla}}_{x_{1}}\mathbf{u})_{i,j,k}\\ (\tilde{\mathbf{\nabla}}_{x_{2}}\mathbf{u})_{i,j,k}\\ (\tilde{\mathbf{\nabla}}_{t}\mathbf{u})_{i,j,k}\end{bmatrix}\in\mathbb{R}^{3}. \tag{19}\] With definition (19), we define the isotropic scale-normalized total variation of \(\mathbf{u}\in\mathbb{R}^{N_{1}\times N_{2}\times K}\) as \[\tilde{\mathbf{TV}}(\mathbf{u})=\sum_{k=1}^{K}\sum_{i=1}^{N_{1}}\sum_{j=1}^{N_{2}}\left\|(\tilde{\mathbf{\nabla}}_{\mathbf{x},t}\mathbf{u})_{i,j,k}\right\|. \tag{20}\] In summary, our discretization of (14) reads as \[\begin{split}\min_{\mathbf{u}\in\mathbb{R}^{N_{1}\times N_{2}\times K}}&\quad\tilde{\mathbf{TV}}(\tilde{\mathbf{\Delta}}\mathbf{u})\\ \text{s.t.}&\quad\mathbf{u}^{\text{low}}\leq\mathbf{u}\leq\mathbf{u}^{\text{upp}}.\end{split} \tag{21}\] This is a bound-constrained, non-smooth convex optimization problem. For ease of reference, we will also refer to (21) as the _TV-ULoG optimization problem_. To our knowledge, this particular problem has not been previously considered in the image processing literature. The closest candidate is the constrained total variation (CTV) denoising problem: Given a noisy image \(\mathbf{f}^{\delta}\in\mathbb{R}^{N_{1}\times N_{2}}\) and bounds \(\mathbf{f}^{\text{low}},\mathbf{f}^{\text{upp}}\in\mathbb{R}^{N_{1}\times N_{2}}\) with \(\mathbf{f}^{\text{low}}\leq\mathbf{f}^{\text{upp}}\), the discrete CTV-denoising problem is \[\begin{split}\min_{\mathbf{f}\in\mathbb{R}^{N_{1}\times N_{2}}}&\quad\left\|\mathbf{f}-\mathbf{f}^{\delta}\right\|^{2}+\lambda\mathbf{TV}(\mathbf{f})\\ \text{s. t.}&\quad\mathbf{f}^{\text{low}}\leq\mathbf{f}\leq\mathbf{f}^{\text{upp}}.\end{split} \tag{22}\] Here, \(\lambda>0\) is a tunable regularization parameter and \(\mathbf{TV}(\cdot)\) is the discrete total variation. The problem (22) has been considered e.g. by Beck and Teboulle in their seminal work [47], where they propose the fast gradient projection (FGP) method for its numerical solution. The FGP method can be summarized as applying Nesterov-accelerated projected gradient descent to the dual of (22). In Section 4.2.1, we show that the FGP method can be applied to a Nesterov-smoothed dual of (21). In Section 4.2.2, we describe an analogous method that instead applies FGP to the Nesterov-smoothed primal problem. However, as is illustrated further in Section 5, both methods converge very slowly for our particular problem. For this reason, we present a third alternative approach for the solution of (21) in Section 4.2.3. It formulates (21) as an equivalent second-order cone program (SOCP) and solves it with an interior-point method. This method is also motivated by analogous approaches for TV-minimization [48, 49]. 
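Before turning to the individual solvers, we note that the discrete objective of (21) is cheap to evaluate. Continuing the sketch above, the following routine applies the forward differences (19) and sums the pointwise norms as in (20); it is again only an illustration (assuming the exponential scale grid (15)), not the benchmarked implementation.

```
import numpy as np

def tv_of_normalized_laplacian(nlap, scales, h1=1.0, h2=1.0):
    """Discrete scale-normalized total variation (20) of `nlap` (shape (N1, N2, K))."""
    b = scales[1] / scales[0]                          # ratio of the exponential grid (15)
    # forward differences; repeating the last entry implements the Neumann condition
    d1 = np.diff(nlap, axis=0, append=nlap[-1:, :, :]) / h1
    d2 = np.diff(nlap, axis=1, append=nlap[:, -1:, :]) / h2
    dt = np.diff(nlap, axis=2, append=nlap[:, :, -1:]) / (b - 1.0)
    sqrt_t = np.sqrt(scales)[None, None, :]
    grad = np.stack([sqrt_t * d1, sqrt_t * d2, dt], axis=-1)   # cf. (19)
    return float(np.linalg.norm(grad, axis=-1).sum())
```

Evaluating this quantity on candidate arrays in the tube is useful for monitoring the progress of the solvers discussed next.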
#### Dual smoothing To prepare the discussion of the dual-smoothing approach, we recall the notion of Fenchel duality [50]: Let \(\mathbb{U},\mathbb{V}\) be finite-dimensional Euclidean spaces, \(\mathbf{A}:\mathbb{U}\rightarrow\mathbb{V}\) a linear function, and \(\phi:\mathbb{U}\rightarrow[-\infty,\infty]\), \(\psi:\mathbb{V}\rightarrow[-\infty,\infty]\) be proper convex functions. Then the Fenchel dual of the optimization problem \[\min_{\mathbf{u}\in\mathbb{U}}\left\{\phi(\mathbf{u})+\psi(\mathbf{A}\mathbf{u})\right\} \tag{23}\] is \[\max_{\mathbf{v}\in\mathbb{V}}\left\{-\phi^{*}(-\mathbf{A}^{*}\mathbf{v})-\psi^{*}(\mathbf{v}) \right\}, \tag{24}\] where \(\phi^{*}\) and \(\psi^{*}\) denote the convex conjugates of \(\phi\) and \(\psi\), respectively. Let now \(\mathbb{U}=\mathbb{R}^{N_{1}\times N_{2}\times K}\) and \(\mathbb{V}=\mathbb{R}^{N_{1}\times N_{2}\times K\times 3}\), such that (21) is of the form (23) with \[\phi(\mathbf{u}) :=\chi_{[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]}(\mathbf{u}),\qquad \mathbf{u}\in\mathbb{R}^{N_{1}\times N_{2}\times K}, \tag{25}\] \[\psi(\mathbf{v}) :=\sum_{i,j,k}\left\|\mathbf{v}_{i,j,k}\right\|,\qquad\mathbf{v}\in \mathbb{R}^{N_{1}\times N_{2}\times K\times 3},\] (26) \[\mathbf{A} :=\tilde{\mathbf{\nabla}}_{\mathbf{x},t}\tilde{\mathbf{\Delta}}:\mathbb{R}^{ N_{1}\times N_{2}\times K}\rightarrow\mathbb{R}^{N_{1}\times N_{2}\times K \times 3}. \tag{27}\] The next proposition gives an explicit expression for the dual of the TV-ULoG optimization problem (21). It is defined with reference to the convex set \[S:=\left\{\mathbf{v}\in\mathbb{R}^{N_{1}\times N_{2}\times K\times 3}\ :\ \|\mathbf{v}_{i,j,k}\|\leq 1 \text{ for all }i,j,k\right\}. \tag{28}\] **Proposition 1**.: _The Fenchel dual of (21) is_ \[\max_{\mathbf{v}\in S}\min_{\mathbf{w}\in[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]} \left\langle\mathbf{A}^{\top}\mathbf{v},\mathbf{w}\right\rangle, \tag{29}\] _where \(\mathbf{A}\) is as in (27)._ Proof.: Let \(\phi\) and \(\psi\) be given by (25) and (26), respectively. By example 4.2 in [51], we have \[\phi^{*}(\mathbf{v})=\max_{\mathbf{w}\in[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]} \left\langle\mathbf{w},\mathbf{v}\right\rangle. \tag{30}\] Furthermore, by theorem 4.12 and example 4.4.12 in [51], \[\psi^{*}(\mathbf{v})=\sum_{i,j,k}\chi_{B_{1}(0)}(\mathbf{v}_{i,j,k})\] \[=\chi_{S}(\mathbf{v}), \tag{31}\] where the set \(S\subset\mathbb{R}^{N_{1}\times N_{2}\times K\times 3}\) is given by (28). The proof follows if we plug (30) and (31) into (24). The dual problem (29) could be solved with projected subgradient methods [51]. However, a more efficient approach is to consider a smoothed approximation instead, since this allows to apply accelerated methods such as FGP: Given a convex optimization problem of the form \[\min_{\mathbf{u}\in C_{1}}\,\phi(\mathbf{u}),\text{ where }\phi(\mathbf{u})=\max_{\mathbf{w}\in C _{2}}\left\langle\mathbf{B}\mathbf{u},\mathbf{w}\right\rangle, \tag{32}\] where \(C_{1}\) and \(C_{2}\) are bounded closed convex sets, Nesterov [52] proposed to approximate the objective functional \(\phi\) by \[\phi_{\mu}(\mathbf{u}):=\max_{\mathbf{w}\in C_{2}}\left\{\left\langle\mathbf{B}\mathbf{u},\bm {w}\right\rangle-\frac{\mu}{2}\left\|\mathbf{w}\right\|^{2}\right\}. \tag{33}\] The associated optimization problem \[\min_{\mathbf{u}\in C_{1}}\phi_{\mu}(\mathbf{u})\] is called the Nesterov smoothing of (32). It is a convex problem with smooth objective, and can hence be solved fast using accelerated first-order methods. 
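As a simple illustration of (33) (a standard example, not taken from the cited works), consider the scalar case \(\phi(u)=|u|=\max_{|w|\leq 1}wu\), i.e. \(C_{2}=[-1,1]\). Then \[\phi_{\mu}(u)=\max_{|w|\leq 1}\Big\{wu-\frac{\mu}{2}w^{2}\Big\}=\begin{cases}\frac{u^{2}}{2\mu},&|u|\leq\mu,\\ |u|-\frac{\mu}{2},&|u|>\mu,\end{cases}\] so the kink of the absolute value is replaced by a quadratic of curvature \(1/\mu\), with the maximum attained at \(w=u/\mu\) respectively \(w=\operatorname{sign}(u)\). The blockwise analogue of this computation for the Euclidean norm yields the Huber function \(h_{\mu}\) used in the primal-smoothing approach of Section 4.2.2.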
The next proposition derives the Nesterov smoothing of the dual problem (29). The derivation is analogous to the proof of proposition 4.1 in [47]. For completeness, we have provided the proof below. **Proposition 2**.: _The Nesterov smoothing corresponding to (29) is, up to the positive factor \(\mu/2\) in the objective, given by_ \[\min_{\mathbf{v}\in S}\left\{\left\|\frac{1}{\mu}\mathbf{A}^{\top}\mathbf{v}\right\|^{2}-\left\|(I-P_{[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]})(\tfrac{1}{\mu}\mathbf{A}^{\top}\mathbf{v})\right\|^{2}\right\}. \tag{34}\] Proof.: First, (29) is equivalent to \[\min_{\mathbf{v}\in S}\max_{\mathbf{w}\in[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]}\left\langle-\mathbf{A}^{\top}\mathbf{v},\mathbf{w}\right\rangle.\] This is of the form (32) with \(C_{1}=S\), \(C_{2}=[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]\) and \(\mathbf{B}=-\mathbf{A}^{\top}\). Hence, its Nesterov smoothing is \[\min_{\mathbf{v}\in S}\phi_{\mu}(\mathbf{v}),\qquad\phi_{\mu}(\mathbf{v}):=\max_{\mathbf{w}\in[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]}\left\{\left\langle-\mathbf{A}^{\top}\mathbf{v},\mathbf{w}\right\rangle-\frac{\mu}{2}\left\|\mathbf{w}\right\|^{2}\right\}.\] By completing the square, one can show \[\phi_{\mu}(\mathbf{v})=\max_{\mathbf{w}\in[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]}\left\{\frac{\mu}{2}\left\|\frac{1}{\mu}\mathbf{A}^{\top}\mathbf{v}\right\|^{2}-\frac{\mu}{2}\left\|\mathbf{w}+\frac{1}{\mu}\mathbf{A}^{\top}\mathbf{v}\right\|^{2}\right\}. \tag{35}\] Recall the distance-minimizing property of the orthogonal projection: For a convex set \(C\), we have \[\min_{\mathbf{w}\in C}\left\|\mathbf{q}-\mathbf{w}\right\|^{2}=\left\|(I-P_{C})(\mathbf{q})\right\|^{2}.\] In particular, this means \[\min_{\mathbf{w}\in[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]}\left\|\mathbf{w}+\frac{1}{\mu}\mathbf{A}^{\top}\mathbf{v}\right\|^{2}=\left\|(I-P_{[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]})(-\tfrac{1}{\mu}\mathbf{A}^{\top}\mathbf{v})\right\|^{2}.\] Inserting this identity in (35) then yields \[\phi_{\mu}(\mathbf{v})=\frac{\mu}{2}\left(\left\|\frac{1}{\mu}\mathbf{A}^{\top}\mathbf{v}\right\|^{2}-\left\|(I-P_{[\mathbf{u}^{\text{low}},\mathbf{u}^{\text{upp}}]})(-\tfrac{1}{\mu}\mathbf{A}^{\top}\mathbf{v})\right\|^{2}\right).\] Replacing \(\mathbf{v}\) by \(-\mathbf{v}\), which leaves the feasible set \(S\) invariant, gives the form stated in the proposition. This finishes the proof. The smoothed problem (34) is now amenable to minimization with the fast gradient projection method given by Algorithm 1 (cf. [51]).

```
0: A convex function \(\phi:\mathbb{U}\rightarrow\mathbb{R}\) on a Euclidean space \(\mathbb{U}\), a convex set \(C\subset\mathbb{U}\), a steplength \(\beta>0\) and an initial guess \(\mathbf{u}_{0}\in\mathbb{U}\).
1: \(\mathbf{v}_{0}=\mathbf{u}_{0}\);
2: \(t_{0}=1\);
3: for k = 0,...,K do
4:     \(\mathbf{u}_{k+1}=P_{C}(\mathbf{v}_{k}-\beta\nabla\phi(\mathbf{v}_{k}))\);
5:     \(t_{k+1}=\frac{1}{2}(1+\sqrt{1+4t_{k}^{2}})\);
6:     \(\mathbf{v}_{k+1}=\mathbf{u}_{k+1}+\frac{t_{k}-1}{t_{k+1}}(\mathbf{u}_{k+1}-\mathbf{u}_{k})\);
7: endfor
```

**Algorithm 1** FGP **Remark 4**.: _Problem (34) is identical to the dual problem of CTV-denoising (see proposition 4.1 in [47]), except for the definition of the operator \(\mathbf{A}\). Indeed, problem (34) is the Fenchel dual of the following regularized version of problem (21),_ \[\min_{\mathbf{u}\in\mathbb{R}^{N_{1}\times N_{2}\times K}}\quad\tilde{\mathbf{TV}}(\tilde{\mathbf{\Delta}}\mathbf{u})+\frac{1}{2\mu}\left\|\mathbf{u}\right\|^{2}\] \[\quad s.t. 
\quad\mathbf{u}^{\rm low}\leq\mathbf{u}\leq\mathbf{u}^{\rm upp}.\] **Remark 5**.: _Finding a good value for the smoothing parameter \(\mu\) in (33) is difficult in practice. In [52], it is shown that, for all \(\mathbf{u}\in\mathbb{U}\),_ \[\phi_{\mu}(\mathbf{u})\leq\phi(\mathbf{u})\leq\phi_{\mu}(\mathbf{u})+\mu D_{2}, \tag{36}\] _where \(D_{2}=\max_{\mathbf{v}\in C_{2}}\frac{1}{2}\left\|\mathbf{v}\right\|^{2}\). This means that an accuracy of \(\epsilon\) in the objective is ensured if \(\mu\leq\epsilon/D_{2}\). However, the bound (36) is often too conservative and it can be advantageous to choose \(\mu>\epsilon/D_{2}\), since larger values of \(\mu\) allow for a larger stepsize and faster convergence._ #### 4.2.2 Primal smoothing As an alternative to the dual approach described in Section 4.2.1, we can also apply Nesterov-smoothing directly to the primal problem (21). **Proposition 3**.: _Let \(\mathbf{A}\) be given by (27) and define the Huber loss function [53]_ \[h_{\mu}:\mathbb{R}^{3}\to\mathbb{R},\quad h_{\mu}(\mathbf{v}):=\begin{cases}\left\|\mathbf{v}\right\|-\frac{\mu}{2},&\text{if }\left\|\mathbf{v}\right\|\geq\mu,\\ \frac{\left\|\mathbf{v}\right\|^{2}}{2\mu},&\text{otherwise}.\end{cases} \tag{37}\] _Then the Nesterov smoothing of the TV-ULoG optimization problem (21) is_ \[\min_{\mathbf{u}\in\mathbb{R}^{N_{1}\times N_{2}\times K}} \quad\sum_{i,j,k}h_{\mu}((\mathbf{A}\mathbf{u})_{i,j,k}), \tag{38}\] \[\quad s.t. \quad\mathbf{u}^{\rm low}\leq\mathbf{u}\leq\mathbf{u}^{\rm upp}.\] Proof.: Let \[\phi(\mathbf{u}):=\sum_{i,j,k}\left\|(\mathbf{A}\mathbf{u})_{i,j,k}\right\|, \tag{39}\] such that we can write the TV-ULoG optimization problem (21) as \[\min_{\mathbf{u}\in[\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]}\phi(\mathbf{u}). \tag{40}\] Using the identity \[\left\|\mathbf{v}\right\|=\max_{\mathbf{w}\in B_{1}(0)}\left\langle\mathbf{w},\mathbf{v}\right\rangle\] in (39) yields \[\phi(\mathbf{u})=\sum_{i,j,k}\max_{\mathbf{w}_{i,j,k}\in B_{1}(0)}\left\langle\mathbf{w}_{i,j,k},(\mathbf{A}\mathbf{u})_{i,j,k}\right\rangle=\max_{\mathbf{w}\in S}\left\langle\mathbf{w},\mathbf{A}\mathbf{u}\right\rangle,\] where \(S\) is again given by (28). Hence, (40) is of the form (32) with \(C_{1}=[\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\) and \(C_{2}=S\), which means that the Nesterov-smoothed objective is given by (cf. (33)) \[\phi_{\mu}(\mathbf{u})=\max_{\mathbf{w}\in S}\left(\left\langle\mathbf{w},\mathbf{A}\mathbf{u}\right\rangle-\frac{\mu}{2}\left\|\mathbf{w}\right\|^{2}\right)=\sum_{i,j,k}\max_{\mathbf{w}_{i,j,k}\in B_{1}(0)}\left(\left\langle\mathbf{w}_{i,j,k},(\mathbf{A}\mathbf{u})_{i,j,k}\right\rangle-\frac{\mu}{2}\left\|\mathbf{w}_{i,j,k}\right\|^{2}\right). \tag{41}\] It is straightforward to show (e.g. using Lagrange multipliers) that, for any vector \(\mathbf{v}\), \[\max_{\mathbf{w}\in B_{1}(0)}\left(\left\langle\mathbf{v},\mathbf{w}\right\rangle-\frac{\mu}{2}\left\|\mathbf{w}\right\|^{2}\right)=h_{\mu}(\mathbf{v}), \tag{42}\] where \(h_{\mu}\) is defined in (37). Using (42) in (41) proves the proposition. Problem (38) is a smooth optimization problem with bound constraints for which there exists a huge number of optimization methods, such as the previously introduced FGP method or the popular L-BFGS-B method [54, 55, 56]. #### 4.2.3 Interior-point method Many problems involving TV-regularization, in particular \(L^{1}\)-TV denoising and CTV-denoising, can be formulated equivalently as second-order cone programs (SOCP) and then be solved with interior-point methods [49]. 
This strategy has the advantage that it does not require additional smoothing. Interior-point methods are also known to be much more robust against ill-conditioning of the KKT system. They require the solution of a large linear system in every step, which is why they sometimes do not scale well to larger problems. However, if the special structure of the linear system can be exploited, interior-point methods can yield state-of-the-art performance [48]. We call an optimization problem a SOCP if it can be written in the following form [57]: \[\min_{\mathbf{v}\in\mathbb{R}^{n}} \quad\langle\mathbf{\xi},\mathbf{v}\rangle \tag{43}\] \[\mathrm{s.~{}t.} \quad\|\mathbf{B}_{i}\mathbf{v}+\mathbf{c}_{i}\|\leq\langle\mathbf{d}_{i},\mathbf{v }\rangle+\eta_{i},\quad i\in[m],\] \[\mathbf{H}\mathbf{v}=\mathbf{h},\] where \(\mathbf{\xi},\mathbf{d}_{i}\in\mathbb{R}^{n}\), \(\mathbf{B}_{i}\in\mathbb{R}^{n_{i}\times n}\), \(\mathbf{c}_{i}\in\mathbb{R}^{n_{i}}\), \(\eta_{i}\in\mathbb{R}\), \(\mathbf{H}\in\mathbb{R}^{p\times n}\), and \(\mathbf{h}\in\mathbb{R}^{p}\). The next proposition states how the TV-ULoG optimization problem (21) can be brought into the form (43). To prepare the proof, we bring the primal problem in flattened form so that we work with vectors instead of three-dimensional arrays. To this end, let \(N:=N_{1}\cdot N_{2}\cdot K\) and define the flattening operator \[\mathrm{flat}:\mathbb{R}^{N_{1}\times N_{2}\times K}\to \mathbb{R}^{N},\] \[\mathrm{flat}(\mathbf{u})_{\sigma(i,j,k)}=u_{i,j,k},\] where \[\sigma:[N_{1}]\times[N_{2}]\times[K]\to[N],\] \[\sigma(i,j,k)=iN_{2}K+jK+k\] is an enumeration of \([N_{1}]\times[N_{2}]\times[K]\). Let \[\mathbf{z}^{\mathrm{low}}:=\mathrm{flat}(\mathbf{u}^{\mathrm{low}}),\quad\mathbf{z}^{ \mathrm{upp}}:=\mathrm{flat}(\mathbf{u}^{\mathrm{upp}}). \tag{44}\] Then, one can find matrices \(\tilde{\mathbf{A}}_{1},\ldots,\tilde{\mathbf{A}}_{N}\in\mathbb{R}^{3\times N}\) such that \[(\tilde{\mathbf{A}}_{\sigma(i,j,k)}\mathrm{flat}(\mathbf{u}))_{r}=(\mathbf{A}\mathbf{u})_{i,j,k,r} \tag{45}\] for all \(\mathbf{u}\in\mathbb{R}^{N_{1}\times N_{2}\times K}\), \(r\in[3]\) and \((i,j,k)\in[N_{1}]\times[N_{2}]\times[K]\). Under these definitions, it is easy to see that the TV-ULoG optimization problem (21) is equivalent to \[\min_{\mathbf{z}\in\mathbb{R}^{N}} \quad\sum_{\ell=1}^{N}\left\|\tilde{\mathbf{A}}_{\ell}\mathbf{z}\right\| \tag{46}\] \[\mathrm{s.~{}t.} \quad\mathbf{z}^{\mathrm{low}}\leq\mathbf{z}\leq\mathbf{z}^{\mathrm{upp}},\] where \(\mathbf{z}=\mathrm{flat}(\mathbf{u})\). 
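For illustration, the flattened problem (46) can be handed almost verbatim to a conic modelling layer, which performs the reformulation into the standard form (43) (made explicit in Proposition 4 below) automatically. The sketch below uses CVXPY with the interior-point solver ECOS [64]; it is an illustration of the approach rather than the implementation benchmarked in Section 5, and it assumes that the blocks \(\tilde{\mathbf{A}}_{1},\ldots,\tilde{\mathbf{A}}_{N}\) have been stacked into one sparse matrix and that the tube bounds have been flattened as in (44).

```
import cvxpy as cp

def solve_tv_ulog_socp(A_stacked, z_low, z_upp, solver="ECOS"):
    """Solve min_z sum_l ||A_l z|| s.t. z_low <= z <= z_upp, cf. (46).

    `A_stacked` is a (3N x N) sparse matrix whose rows 3(l-1),...,3l-1
    (zero-based) form the block A_l; `z_low`, `z_upp` are the flattened
    tube bounds from (44).
    """
    N = z_low.size
    z = cp.Variable(N)
    # column l of the reshaped expression equals A_l z
    Az = cp.reshape(A_stacked @ z, (3, N), order="F")
    objective = cp.Minimize(cp.sum(cp.norm(Az, 2, axis=0)))
    constraints = [z >= z_low, z <= z_upp]
    cp.Problem(objective, constraints).solve(solver=solver)
    return z.value
```

A dedicated implementation that exploits the sparsity structure of the discretized operator inside the interior-point solver, as discussed above, will generally be faster than such a generic formulation.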
**Proposition 4**.: _The optimization problem (46) is equivalent to the SOCP (43), under the identifications_ \[n=2N,\quad m=3N,\quad\mathbf{\xi}=\begin{bmatrix}\mathbf{0}_{N}\\ \mathbf{1}_{N}\end{bmatrix}, \tag{47}\] \[\mathbf{B}_{\ell}=\mathbf{0}_{1\times n},\quad\mathbf{B}_{N+\ell}=\mathbf{0}_{1\times n},\quad\mathbf{B}_{2N+\ell}=\begin{bmatrix}\tilde{\mathbf{A}}_{\ell}~{}\mathbf{0}_{3\times N}\end{bmatrix},\] \[\mathbf{c}_{\ell}=0,\quad\mathbf{c}_{N+\ell}=0,\quad\mathbf{c}_{2N+\ell}=\mathbf{0}_{3},\] \[\mathbf{d}_{\ell}=\mathbf{e}_{\ell}^{n},\quad\mathbf{d}_{N+\ell}=-\mathbf{e}_{\ell}^{n},\quad\mathbf{d}_{2N+\ell}=\mathbf{e}_{N+\ell}^{n},\] \[\eta_{\ell}=-z_{\ell}^{\mathrm{low}},\quad\eta_{N+\ell}=z_{\ell}^{\mathrm{upp}},\quad\eta_{2N+\ell}=0,\quad\ell\in[N],\] \[\mathbf{H}=\mathbf{0}_{1\times n},\quad\mathbf{h}=0.\] _In particular, if \(\bar{\mathbf{v}}\in\mathbb{R}^{2N}\) is a minimizer of (43), then \(\bar{\mathbf{u}}\) given by_ \[\bar{u}_{i,j,k}=\bar{v}_{\sigma(i,j,k)}\] _is a minimizer of the TV-ULoG optimization problem (21)._ Proof.: Note that (46) is equivalent to \[\min_{\mathbf{z}\in\mathbb{R}^{N},\mathbf{q}\in\mathbb{R}^{N}} \quad\sum_{\ell=1}^{N}q_{\ell} \tag{48}\] \[\mathrm{s.~{}t.} \quad\mathbf{z}^{\mathrm{low}}\leq\mathbf{z}\leq\mathbf{z}^{\mathrm{upp}},\] \[\left\|\tilde{\mathbf{A}}_{\ell}\mathbf{z}\right\|\leq q_{\ell},\quad\ell\in[N].\] Let us combine the optimization variables \(\mathbf{z}\) and \(\mathbf{q}\) into a single vector \(\mathbf{v}=[\mathbf{z},\mathbf{q}]\in\mathbb{R}^{2N}\), such that (48) becomes \[\min_{\mathbf{v}\in\mathbb{R}^{2N}} \quad\sum_{\ell=1}^{N}v_{N+\ell}, \tag{49}\] \[\mathrm{s.~{}t.} z_{\ell}^{\mathrm{low}}\leq v_{\ell}\leq z_{\ell}^{\mathrm{upp}},\] \[\left\|\begin{bmatrix}\tilde{\mathbf{A}}_{\ell}~{}\mathbf{0}_{3\times N}\end{bmatrix}\mathbf{v}\right\|\leq v_{N+\ell},\quad\ell\in[N].\] It is now straightforward to check that if we plug the identifications (47) into the SOCP standard form (43), we obtain precisely (49). The resulting SOCP can then be solved efficiently with existing interior-point solvers (see also Section 5.3), exploiting the sparse structure of the discretized differential operator \(\mathbf{A}\). ### Extraction of blob regions Let \(\bar{\mathbf{u}}\) be a numerical solution of the TV-ULoG optimization problem (21). Ideally, \(\tilde{\mathbf{\Delta}}\bar{\mathbf{u}}\) is piecewise constant such that it attains its local minima on index sets \(\mathcal{M}_{1},\ldots,\mathcal{M}_{S}\subset[N_{1}]\times[N_{2}]\times[K]\) which quantify our uncertainty with respect to the blobs of the uncertain image (see Section 3). However, due to numerical errors, \(\tilde{\boldsymbol{\Delta}}\bar{\boldsymbol{u}}\) will typically only be approximately piecewise constant or exhibit artifacts such as staircasing. For this reason, we use the following thresholding procedure to extract the desired regions. 1. Let \(\boldsymbol{a}:=\tilde{\boldsymbol{\Delta}}\bar{\boldsymbol{u}}\in\mathbb{R}^{N_{1}\times N_{2}\times K}\). 2. Detect the local minimizers \(\boldsymbol{m}_{1},\ldots,\boldsymbol{m}_{S}\in[N_{1}]\times[N_{2}]\times[K]\) of \(\boldsymbol{a}\). 3. For each local minimizer \(m_{s}=(i_{s},j_{s},k_{s})\), detect the corresponding plateau, i.e. 
the largest connected component \(\mathcal{M}_{s}\subset[N_{1}]\times[N_{2}]\times[K]\) with \(m_{s}\in\mathcal{M}_{s}\) such that \[a_{i,j,k}\leq ra_{i_{s},j_{s},k_{s}}\quad\text{for all }(i,j,k)\in\mathcal{M}_{s}.\] Here, \(r\in(0,1)\) is a relative threshold that determines the size of the resulting regions. If we choose \(r\) closer to \(1\), the resulting regions will be tighter but the results of the method will be less robust against numerical errors. ### Visualization The result of the extraction step is a collection of regions \(\mathcal{M}_{1},\ldots,\mathcal{M}_{S}\subset[N_{1}]\times[N_{2}]\times[K]\). Since these are sets in a discrete three-dimensional space, they are difficult to visualize directly. The extent of the regions along the first two axes (the spatial domain) expresses uncertainty with respect to the position of the corresponding blob, while the extent along the third axis (the scale) corresponds to uncertainty in the size of the corresponding blob (see the middle panel in Figure 7 below). We suggest to visualize the uncertainty in scale and position by introducing two projections on the pixel grid \([N_{1}]\times[N_{2}]\) which are easy to visualize as images. The first projection is the direct projection on the image domain, \[\begin{split}&\Pi_{1}:2^{[N_{1}]\times[N_{2}]\times[K]}\to 2^{[N_{1}]\times[N_{2}]},\\ &\Pi_{1}(\mathcal{M}):=\left\{(i,j)\ :\ \exists k\in[K]:(i,j,k)\in\mathcal{M}\right\}.\end{split} \tag{50}\] The set \(\Pi_{1}(\mathcal{M})\) is a two-dimensional region that contains the centers of all blobs in \(\mathcal{M}\), and can thus be used to visualize the uncertainty in the center position of the uncertain blob. The second projection is motivated by the visualization for the discrete Laplacian-of-Gaussians method, where a point \((i,j,k)\in[N_{1}]\times[N_{2}]\times[K]\) is visualized by a (discretized) circle \(B_{\sqrt{2t_{k}}}(i,j)\). Taking the possibly different grid sizes \(h_{1}\) and \(h_{2}\) into account, the set \(B_{r}(i,j)\subset[N_{1}]\times[N_{2}]\) is defined as \[B_{r}(i,j)=\left\{(i^{\prime},j^{\prime})\ :\ \sqrt{h_{1}^{2}(i^{\prime}-i)^{2}+h_{2}^{2}(j^{\prime}-j)^{2}}\leq r\right\}.\] We then define the projection of a set \(\mathcal{M}\subset[N_{1}]\times[N_{2}]\times[K]\) onto the union over all circles corresponding to points in \(\mathcal{M}\) by \[\begin{split}&\Pi_{2}:2^{[N_{1}]\times[N_{2}]\times[K]}\to 2^{[N_{1}]\times[N_{2}]},\\ &\Pi_{2}(\mathcal{M}):=\bigcup_{(i,j,k)\in\mathcal{M}}B_{\sqrt{2t_{k}}}(i,j).\end{split} \tag{51}\] Together, \(\Pi_{1}(\mathcal{M})\) and \(\Pi_{2}(\mathcal{M})\) allow us to visualize the uncertainty in the blob center and the blob scale within a set \(\mathcal{M}\). An example of this visualization is provided in Figure 7 (middle panel). Figure 6 gives an example of how this visualization can be used on one-dimensional signals. ### Further remarks As mentioned in the beginning of Section 4.2, the TV-ULoG optimization problem (21) has many parallels to CTV-denoising. For example, [58] considers constrained-TV regularization and proposes a Nesterov smoothing approach similar to the one we describe in Section 4.2.2. The TV-ULoG optimization problem could also be attacked with ADMM, similar to the method presented in [59]. 
However, there are two significant differences between the TV-ULoG optimization problem and the CTV-denoising problem (22), apart from the higher dimensionality: First, the CTV-denoising problem depends through the TV-term on the first derivative of the image, while in the TV-ULoG optimization problem we take the total variation of the normalized Laplacian, which means that we depend on the third-order derivatives. The resulting discrete differential operator \(\boldsymbol{A}\) has a high condition number, which makes the TV-ULoG optimization problem more difficult to solve numerically. This problem is amplified by the fact that there is no denoising term in the TV-ULoG optimization problem (21), since this term otherwise has a regularizing effect. These differences could explain the problems of the proposed first-order methods we observed in our numerical experiments (see Section5.3). Alternatively, problem (21) could be solved with a semi-smooth Newton method [60], which is likely more accurate than the first-order methods. Such an approach would scale similar as the interior-point strategy. A final option that we did not investigate further are graph-cut methods [61, 62], which could potentially be very efficient for our approach, since the result of the TV-ULoG method should be robust against the quantization error. However, such a method would be more difficult to implement. ## 5 Numerical experiments In this section, we illustrate the tube-based blob detection methods introduced in this paper on two Bayesian inverse problems. We start with a one-dimensional problem which mostly serves didactical purposes, since the scale-space representation of a one-dimensional signal is two-dimensional and can therefore be visualized as an image. This allows us to further illustrate and discuss the ideas underlying our approach. As second example we consider a more challenging two-dimensional imaging problem from stellar dynamics since this application was a main motivation for the present work. The numerical experiments were implemented in Python 3.10. The source code is available from the GitHub repository github.com/FabianKP/tvulog, which also provides an exact list of the used packages. The reported computation times were measured on a PC with 64 GB RAM and 8 3.6-GHz Intel i7-7700 CPUs. ### One-dimensional deconvolution #### 5.1.1 Problem setup We tested the proposed method on the problem of blob detection in one-dimensional Bayesian deconvolution. That is, we considered the task of identifying blobs in a one-dimensional discrete signal, modelled as realization of a random vector \(\mathbf{F}\), using data \(\mathbf{Y}\) which is given by the noisy convolution \[\mathbf{Y}=\mathbf{GF}+\mathbf{W},\] where \(\mathbf{G}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) is a convolution operator, and \(\mathbf{W}\) is zero-mean Gaussian noise. While we described the proposed methodology only for the case of two-dimensional signals (images), the adaptation to one-dimensional signals is completely straightforward, and we skip it to avoid repetition. To define a full Bayesian model for the deconvolution problem, we assign a zero-mean Gaussian prior on \(\mathbf{F}\), \[p_{\mathbf{F}}=\mathcal{N}(\mathbf{0},\mathbf{\Sigma}), \tag{52}\] where \(\mathbf{\Sigma}\in\mathbb{R}^{N\times N}\) is the prior covariance matrix. In our experiments, we used a prior covariance \(\mathbf{\Sigma}\) corresponding to a Gaussian random Markov field prior. 
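The exact convolution kernel and Markov random field used in the experiments are not spelled out here; purely as an illustration, a forward operator \(\mathbf{G}\) with a Gaussian point-spread function and a simple first-order Gaussian Markov random field covariance \(\mathbf{\Sigma}\) could be set up as follows (all parameter values are placeholders, not the ones used in Section 5):

```
import numpy as np
from scipy.linalg import toeplitz

def gaussian_blur_operator(N, width=3.0):
    """Symmetric Toeplitz convolution matrix G with a Gaussian point-spread function."""
    col = np.exp(-0.5 * (np.arange(N) / width) ** 2)
    G = toeplitz(col)
    return G / G.sum(axis=1, keepdims=True)        # normalize each row

def gmrf_covariance(N, delta=1e-2):
    """Covariance of a first-order Gaussian Markov random field prior.

    The precision matrix is delta*I + D^T D, with D the forward-difference
    matrix; this is one common choice and not necessarily the one used in
    the experiments.
    """
    D = np.diff(np.eye(N), axis=0)
    precision = delta * np.eye(N) + D.T @ D
    return np.linalg.inv(precision)
```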
Assuming that the noise \(\mathbf{W}\) is uncorrelated with constant standard deviation \(\gamma\), we arrive at the likelihood \[p_{\mathbf{Y}|\mathbf{F}}(\cdot\mid\mathbf{f})=\mathcal{N}(\mathbf{G}\mathbf{f},\gamma^{2}\mathbf{I}),\quad\mathbf{f}\in\mathbb{R}^{N}. \tag{53}\] Combining (52) and (53) via Bayes theorem yields the posterior density \[p_{\mathbf{F}|\mathbf{Y}}(\mathbf{f}\mid\mathbf{y})\propto\exp\left(-\frac{1}{2\gamma^{2}}\left\|\mathbf{G}\mathbf{f}-\mathbf{y}\right\|^{2}-\frac{1}{2}\left\|\mathbf{\Sigma}^{-1/2}\mathbf{f}\right\|^{2}\right).\] #### 5.1.2 Simulation For our numerical experiment, we used a sinusoidal ground truth \(\mathbf{f}^{*}\in\mathbb{R}^{N}\) with \(N=200\) (see Figure 2), from which we simulated noisy data \(\mathbf{y}=\mathbf{G}\mathbf{f}^{*}+\mathbf{w}\), where \(\mathbf{w}\) was generated from \(\mathcal{N}(\mathbf{0},\gamma^{2}\mathbf{I})\) with \(\gamma=0.03\).

Figure 2: Setup of the deconvolution problem. Left panel: The ground truth \(\mathbf{f}^{*}\). Middle panel: The convolved ground truth \(\mathbf{G}\mathbf{f}^{*}\). Right panel: The noisy data \(\mathbf{y}=\mathbf{G}\mathbf{f}^{*}+\mathbf{w}^{*}\).

We then generated \(10\ 000\) MCMC samples using the Linear-RTO sampler provided by the CUQIpy Python package [63]. Using these MCMC samples, we computed a scale-space tube \([\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\) for the uncertain scale-space representation \(\mathbf{U}=\mathbf{\Phi}\mathbf{F}\) using the heuristic method described in Appendix A for the credibility parameter \(\alpha=0.05\) (corresponding to \(95\%\) credibility). We used exponentially increasing scales (cf. (15)) \[t_{k} =b^{k-1}t_{\rm min},\qquad k\in[K], \tag{54}\] \[b =\left(\frac{t_{\rm max}}{t_{\rm min}}\right)^{\frac{1}{K-1}},\] with \(K=30\), \(t_{\rm min}=1\) and \(t_{\rm max}=70^{2}\). The two-dimensional objects \(\mathbf{u}^{\rm low}\) and \(\mathbf{u}^{\rm upp}\) are visualized in Figure 3. Since it is hard to see the difference between the lower and upper bound with the naked eye, we have plotted a horizontal slice (that is, a slice for a fixed scale) in Figure 4. For reference, we also computed a point estimate for the signal of interest \(\mathbf{f}\) in the form of the posterior mean, given by \[\mathbf{f}^{\rm mean}=\frac{1}{S}\sum_{s=1}^{S}\mathbf{f}^{(s)},\] where \((\mathbf{f}^{(s)})_{s=1}^{S}\) are the computed MCMC samples. We denote the scale-space representation of \(\mathbf{f}^{\rm mean}\) by \(\mathbf{u}^{\rm mean}\). #### 5.1.3 Results To solve the optimization problem (21), we used the interior-point approach since it was by far the most efficient (see Section 5.3 below). The normalized Laplacian of the computed minimizer is plotted in Figure 5 (middle panel). Compared to the normalized Laplacian of the posterior mean (left panel), the scale-normalized Laplacian is approximately piecewise constant and attains local minima on four clearly separated regions, which were extracted using the thresholding procedure described in Section 4.3 (right panel). Since these regions are difficult to make out with the naked eye, a horizontal slice (that is, for fixed scale) is plotted in the second row of Figure 5. In Figure 6, we plot the extracted blob regions using the procedure described in Section 4.4, together with the posterior mean and the ground truth for comparison. The horizontal solid bars are obtained from the projection \(\Pi_{1}\) (50). 
That is, they indicate the intervals in which we expect the blob _centers_ to lie with \(95\%\)-confidence. Similarly, the dotted bars are obtained from the projection \(\Pi_{2}\) (51) and indicate the maximal extent of the uncertain blob. ### Integrated-light stellar population recovery #### 5.2.1 Problem setup Next, we revisit the problem of integrated-light stellar population recovery. This is a constrained linear imaging inverse problem of the form \[\mathbf{Y}=\mathbf{G}\mathbf{F}+\mathbf{W},\qquad\mathbf{F}\geq 0,\] Figure 4: Slice through the scale-space tube \([\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\) for a fixed scale. Figure 3: The lower and upper bound of the scale-space tube \([\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\) for the one-dimensional deconvolution problem. The scale-space representation of the posterior mean is plotted in the right panel for comparison. where \(\mathbf{F}\) is a two-dimensional non-negative density function (modelled as random image), \(\mathbf{Y}\) is a measured light spectrum and \(\mathbf{W}\) is zero-mean uncorrelated Gaussian noise. The observation operator \(\mathbf{G}:\mathbb{R}^{N_{1}\times N_{2}}\to\mathbb{R}^{M}\) is the discretization of an integral operator that models how the density \(\mathbf{F}\) influences the spectrum. We do not discuss the details of this problem and the Bayesian modelling and instead refer the reader to the previous work [3]. #### 5.2.2 Simulation For our numerical experiment, we simulated a realization \(\mathbf{y}\) of the noisy spectrum \(\mathbf{Y}\) from a ground truth \(\mathbf{F}=\mathbf{f}^{*}\) (Figure 7, top panel) and generated 10 000 posterior samples (after 5000 burn-in iterations) using the SVD-MCMC method described in [3, section 4]. As in the one-dimensional deconvolution example, we computed a scale-space tube \([\mathbf{u}^{\rm low},\mathbf{u}^{\rm upp}]\) using the method described in Appendix A for the credibility parameter \(\alpha=0.05\) and discrete scales given by (54) with \(K=16\), \(t_{\rm min}=1\) and \(t_{\rm max}=30^{2}\). For reference, we also computed a point estimate for the signal of interest \(\mathbf{F}\), this time in form of the maximum-a-posteriori estimate \(\mathbf{f}^{\rm MAP}\). #### 5.2.3 Results We computed a minimizer of the associated optimization problem (21) using the interior-point method, since it was again the most efficient (see also Section 5.3). Two blob regions were extracted Figure 5: The result of the TV-approach for the one-dimensional deconvolution problem. First row, left panel: the scale-normalized Laplacian of the posterior mean; middle panel: the scale-normalized Laplacian of the minimizer of (21); right panel: the blob regions in scale space extracted from \(\bar{\Delta}\bar{u}\). Second row: Corresponding horizontal slice for fixed scale \(t_{k}\), \(k=15\). Figure 6: Plot of the posterior mean \(\mathbf{f}^{\rm mean}\) for the one-dimensional deconvolution problem together with the uncertain blobs. The projected blob sets are visualized by blue horizontal bars, where the solid bar indicates the center projection and the dotted bar indicates the scale projection. The ground truth \(\mathbf{f}^{*}\) (dotted line) is also plotted for reference. and visualized using the procedures described in Section 4.3 and Section 4.4. In Figure 7 (middle panel), the projected blob regions are plotted together with the MAP estimate \(\mathbf{f}^{\rm MAP}\). 
#### 5.2.4 Comparison with ULoG The bottom panel of Figure 7 shows the result of the ULoG method (see Section 3.3). Recall that in the ULoG method, we compute a minimizer \(\bar{\mathbf{u}}\) of \[\min_{\mathbf{u}\in\mathbb{R}^{N_{1}\times N_{2}\times K}} \left\|\tilde{\mathbf{\Delta}}\mathbf{u}\right\|^{2}\] s.t. \[\mathbf{u}^{\rm low}\leq\mathbf{u}\leq\mathbf{u}^{\rm upp},\] and then determine the local minimum points of \(\tilde{\mathbf{\Delta}}\bar{\mathbf{u}}\). A local minimum point \((i,j,k)\) of \(\tilde{\mathbf{u}}\) is then visualized by a dashed blue circle with center \((i,j)\) and radius \(\sqrt{2t_{k}}\) (see also Example 1). In contrast, the proposed TV-ULoG method yields a representative \(\tilde{\mathbf{u}}\) such that \(\tilde{\Delta}\bar{\mathbf{u}}\) attains its local minima on connected regions (see also Section 3.4), which are visualized in the middle panel of Figure 7 using the method described in Section 4.4. We see that both methods detect two blobs with confidence. However, the projected blob regions obtained from TV-ULoG allow for a better localization of the blob center. More importantly, the projected regions have a clear interpretation: The inner regions contain the center of the uncertain blob with 95%-probability. The outer regions contain the corresponding blob circles. In contrast, the dashed circles provided by ULoG have a less clear interpretation since they mix scale and uncertainty information (see the discussion after (11)). ### Comparison of optimization methods Both for the one-dimensional deconvolution example and the stellar recovery example, we compared the performance of the optimization strategies proposed in Section 4.2 for the solution of the optimization problem (21), namely the dual-smoothing approach (Section 4.2.1), the primal-smoothing approach (Section 4.2.2) and the interior-point method (Section 4.2.3). Both for the dual- and primal-smoothing approach we used the FGP method (see Algorithm 1) to solve the smoothed optimization problem, while we used ECOS [64] for the interior-point approach. The results of our comparison are plotted in Figure 8 and Figure 9. In both cases, the interior-point approach is able to achieve much higher accuracy. For the deconvolution problem, the first-order methods take considerably longer and are not able to achieve a high precision. For the stellar recovery problem, the primal smoothing method converges faster to a low-accuracy solution, but is not able to further improve. Also, both the dual- and primal-smoothing approach have the additional disadvantage that Figure 7: Results of ULoG and TV-ULoG for integrated-light stellar population recovery. Top panel: Ground truth \(\mathbf{f}^{*}\) from which the mock data was generated. Middle panel: MAP estimate \(\mathbf{f}^{\rm MAP}\) together with projection of blob sets. The two projections \(\Pi_{1}\) and \(\Pi_{2}\) are indicated by solid and dashed blue lines, respectively. Bottom panel: Results of ULoG superimposed on the MAP estimate. they depend on the choice of a smoothing parameter \(\mu\), where a larger value of \(\mu\) corresponds to more smoothing but higher approximation error (see Remark5). In Figure10, we plotted the performance of the dual and primal smoothing approach on the stellar recovery problem for different choices of \(\mu\). The trade-off between speed of convergence and achieveable accuracy is clearly visible. If the smoothing parameter is chosen too small, the first-order methods do not converge in a practical amount of time. 
This was in particular the case for the choice of smoothing parameter suggested by the bound (36). We tested many different choices, but did not find a configuration for which the performance of the first-order methods was comparable to the interior-point method. Furthermore, we also tested other solvers for the smoothed optimization problem, such as the L-BFGS-B method, but the performance was similar to the FGP method. Since it is necessary to achieve very high precision in the objective function of (21) to obtain the desired piecewise-constant solutions, the interior-point method should be the method of choice, since it has the additional advantage that it does not require hand-tuning of a smoothing parameter. ## 6 Conclusion In this work, we have developed a novel approach for blob detection in uncertain images that represents the uncertainty in the detected blobs by regions in scale space. These regions are obtained from the minimizer of a non-smooth optimization problem. Using similarities to CTV-denoising, we proposed three approaches for the numerical solution of the discretized problem. We also described Figure 8: Comparison of the computation time for the different optimization methods for the deconvolution problem (see Section5.1). The objective is normalized so that the minimum is at 0 and the initial value is at 1. Figure 10: The performance of the dual- and primal-smoothing approach for the stellar recovery problem, for various choices of smoothing parameter \(\mu\). For each choice the plot shows the performance of 200 000 FGP iterations. Figure 9: Comparison of the computation time for the different optimization methods for the stellar recovery problem (see Section5.2). The objective is normalized so that the minimum is 0. how the scale space regions can be visualized on the image domain in an interpretable way. The proposed method was illustrated on two numerical examples - one-dimensional deconvolution and integrated-light stellar population recovery - where it yielded clear results that were consistent with the ground truth. We also evaluated the performance of the different optimization methods and observed that the interior-point method outperformed the other two approaches, assumedly because it is more robust against the ill-conditioning of the problem. Our proposed method is flexible since it only requires access to a tube in which the scale-space representation of the uncertain image lies with high probability. Such a tube can be computed for many applications, for example in the important special case of Bayesian imaging. Finally, the proposed methods are not specific to astronomical applications, although they were originally developed in that context. The methodology could be applied in any setting were blob detection in uncertain signals is relevant, for example in medical and geophysical imaging. AcknowledgmentsFP and OS were supported by the Austrian Science Fund (FWF), with SFB F68 "Tomography Across the Scales", project F6807-N36 (Tomography with Uncertainties). The financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association is gratefully acknowledged. The authors are grateful to Prashin Jethwa and Glenn van de Ven for the productive cooperation within the SFB. The authors would also like to thank Noemi Naujoks for feedback on the manuscript. 
## Appendix A Estimation of scale-space tubes from MCMC samples In this appendix, we describe how, given an observation \(\mathbf{Y}=\mathbf{y}\), we can estimate a credible scale-space tube \([\mathbf{u}^{\mathrm{low}},\mathbf{u}^{\mathrm{upp}}]\) using MCMC samples \(\mathbf{f}^{(1)},\ldots,\mathbf{f}^{(S)}\) from the posterior distribution \(\mathbb{P}_{\mathbf{F}|\mathbf{Y}}(\cdot\ |\ \mathbf{y})\). Recall that MCMC is a class of methods that produce approximate samples from a probability distribution using its density function [33]. Given MCMC samples, the standard way to estimate the posterior probability that \(\mathbf{F}\) is contained in a given set \(A\subset\mathbb{R}^{N_{1}\times N_{2}}\) is [65] \[\mathbb{P}_{\mathbf{F}|\mathbf{Y}}(A\ |\ \mathbf{y})\approx\frac{1}{S}\sum_{s=1}^{S} \mathbbm{1}_{A}(\mathbf{f}^{(s)}),\] (A1) where the indicator function \(\mathbbm{1}_{A}\) is defined as \[\mathbbm{1}_{A}(\mathbf{f}):=\begin{cases}1,&\text{if }\mathbf{f}\in A,\\ 0,&\text{otherwise}.\end{cases}\] For our purposes, we want to find the minimal-volume tube \([\mathbf{u}^{\mathrm{low}},\mathbf{u}^{\mathrm{upp}}]\) such that \[\mathbb{P}_{\mathbf{U}|\mathbf{Y}}([\mathbf{u}^{\mathrm{low}},\mathbf{u}^{\mathrm{upp}}]\ |\ \mathbf{y})\geq 1-\alpha.\] (A2) Recall that for a set \(A\subset\mathbb{R}^{N_{1}\times N_{2}\times K}\), \(\mathbb{P}_{\mathbf{U}|\mathbf{Y}}(A\ |\ \mathbf{y})\) is given by the pushforward \[\mathbb{P}_{\mathbf{U}|\mathbf{Y}}(A\ |\ \mathbf{y})=\mathbb{P}_{\mathbf{F}|\mathbf{Y}}(\mathbf{\Phi}^{- 1}(A)\ |\ \mathbf{y}),\] (A3) where \(\mathbf{\Phi}\) denotes the operator that maps an image \(\mathbf{f}\) to its discrete scale-space representation (cf. Section 4.1). Inserting (A3) in (A2) and using the approximation (A1), we obtain \[1-\alpha \leq\mathbb{P}_{\mathbf{F}|\mathbf{Y}}(\mathbf{\Phi}^{-1}([\mathbf{u}^{\mathrm{ low}},\mathbf{u}^{\mathrm{upp}}])\ |\ \mathbf{y})\] \[\approx\frac{1}{S}\sum_{s=1}^{S}\mathbbm{1}_{\Phi^{-1}([\mathbf{u}^{ \mathrm{low}},\mathbf{u}^{\mathrm{upp}}])}(\mathbf{f}^{(s)})\] \[=\frac{1}{S}\left|\left\{s\in[S]\ :\ \mathbf{\Phi}(\mathbf{f}^{(s)})\in[\mathbf{u}^{ \mathrm{low}},\mathbf{u}^{\mathrm{upp}}]\right\}\right|,\] where for a set \(A\), \(|A|\) denotes its cardinality. This leads to a condition on \([\mathbf{u}^{\mathrm{low}},\mathbf{u}^{\mathrm{upp}}]\) in terms of samples: \[\left|\left\{s\in[S]\ :\ \mathbf{\Phi}(\mathbf{f}^{(s)})\in[\mathbf{u}^{\mathrm{low}},\mathbf{u}^{ \mathrm{upp}}]\right\}\right|\geq(1-\alpha)\cdot S.\] (A4) Given samples \(\mathbf{f}^{(1)},\ldots,\mathbf{f}^{(S)}\) it thus remains to find the smallest-volume tube \([\mathbf{u}^{\mathrm{low}},\mathbf{u}^{\mathrm{upp}}]\) that satisfies (A4). However, as was already discussed in [3, section 5.3], solving this problem exactly is computationally infeasible even for moderately sized images. Instead, one has to use heuristic approaches that determine a small but not minimal tube that satisfies (A4). Our method is based on the idea of sorting the samples in descending order according to their probability, and then using bisection to find the smallest-volume tube that satisfies (A4) among the increasing sequence of tubes spanned by the ordered samples. In detail: Let \(p_{\mathbf{F}|\mathbf{Y}}\) denote the density function of \(\mathbb{P}_{\mathbf{F}|\mathbf{Y}}\). 
Using a sorting algorithm, we find a reordering \((i_{s})_{s=1}^{S}\) such that \[p_{\mathbf{F}|\mathbf{Y}}(\mathbf{f}^{(i_{1})}\ |\ \mathbf{y})\geq\ldots\geq p_{\mathbf{F}|\mathbf{Y}}( \mathbf{f}^{(i_{S})}\ |\ \mathbf{y}).\] (A5) Then, we compute for each sample \(\mathbf{f}^{(i)}\) its scale-space representation \(\mathbf{u}^{(i)}=\mathbf{\Phi}(\mathbf{f}^{(i)})\). For each \(s\in[S]\), the discrete scale-space representations \(\mathbf{u}^{(i_{1})},\ldots,\mathbf{u}^{(i_{s})}\) span a tube \([\mathbf{u}^{\mathrm{low},s},\mathbf{u}^{\mathrm{upp},s}]\) given by \[\begin{split}\mathbf{u}^{\mathrm{low},s}_{i,j,k}&:= \min\left\{u^{(i_{r})}_{ijk}\ :\ r\in[s]\right\},\\ \mathbf{u}^{\mathrm{upp},s}_{i,j,k}&:=\max\left\{u^{(i_ {r})}_{ijk}\ :\ r\in[s]\right\}.\end{split}\] (A6) (The tube \([\mathbf{u}^{\mathrm{low},s},\mathbf{u}^{\mathrm{upp},s}]\) is simply the smallest-volume tube that contains \(\mathbf{u}^{(i_{1})},\ldots,\mathbf{u}^{(i_{s})}\)). Since these tubes are monotonically increasing, the desired tube can be found very efficiently using bisection. The detailed pseudocode for this method is given in Algorithm 2. ``` 0: Samples \(\mathbf{f}^{(1)},\ldots,\mathbf{f}^{(S)}\) from \(p_{\mathbf{F}|\mathbf{Y}}(\cdot\ |\ \mathbf{y})\), a credibility parameter \(\alpha\in(0,1)\), and a maximum number of bisection steps \(M\in\mathbb{N}\). 1: Find a reordering \(i_{1},\ldots,i_{S}\) such that (A5) holds; 2: Compute \(\mathbf{u}^{(i)}=\mathbf{\Phi}\mathbf{f}^{(i)}\) for all \(i\in[S]\); 3:\(S_{\alpha}=\lceil(1-\alpha)S\rceil\); 4: Set \(T=[\mathbf{u}^{\mathrm{low},S_{\alpha}},\mathbf{u}^{\mathrm{upp},S_{\alpha}}]\) (cf. Equation A6); 5:\(K=\big{|}\big{\{}s\in[S]\ :\ \mathbf{u}^{(i_{s})}\in T\big{\}}\big{|}\); 6:if\(K=S_{\alpha}\)thenreturn\(T\) 7:else 8: Set \(K_{\min}=1\) and \(K_{\max}=S_{\alpha}\); 9:for\(m=1,\ldots,M\)do 10:if\(K>S_{\alpha}\)then 11:\(K_{\max}=K\); 12:\(K=\frac{1}{2}(K_{\min}+K)\); 13:elseif\(K<S_{\alpha}\)then 14:\(K_{\min}=K\); 15:\(K=\frac{1}{2}(K+K_{\max})\); 16:else 17: break; 18:endif 19: Set \(T=[\mathbf{u}^{\mathrm{low},K},\mathbf{u}^{\mathrm{upp},K}]\); 20:\(K=\big{|}\big{\{}s\in[S]\ :\ \mathbf{u}^{(i_{s})}\in T\big{\}}\big{|}\); 21:endfor 22: return \(T\); 23:endif ``` **Algorithm 2** Credible scale-space tubes from samples Note that this algorithm is different from the method used in our previous work [3, section 5.3], which in particular did not use evaluations of the posterior density function. In our numerical experiments, the new method performed better, in the sense that it yielded tubes that contained the same number of samples but had lower volume.
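As a compact illustration of Appendix A, the following sketch (ours, not the actual implementation) estimates a credible scale-space tube from samples: it orders the samples by posterior density, spans nested tubes with the highest-density samples, and bisects over how many of them to include until the tube contains at least \(\lceil(1-\alpha)S\rceil\) of all sample representations, i.e. satisfies condition (A4). The argument names are illustrative; `samples_u` stands for the scale-space representations \(\mathbf{\Phi}(\mathbf{f}^{(s)})\) and `log_densities` for the corresponding (possibly unnormalized) posterior log-densities.

```python
import numpy as np

def credible_tube(samples_u, log_densities, alpha=0.05):
    """Heuristic small-volume tube [u_low, u_upp] containing at least
    (1 - alpha) * S of the scale-space samples (cf. Algorithm 2)."""
    S = len(samples_u)
    S_alpha = int(np.ceil((1 - alpha) * S))
    order = np.argsort(log_densities)[::-1]      # samples by decreasing density, cf. (A5)
    U = np.stack([samples_u[i] for i in order])

    def tube(s):                                 # smallest tube spanned by the s best samples, cf. (A6)
        return U[:s].min(axis=0), U[:s].max(axis=0)

    def n_inside(lo, up):                        # how many of all S samples the tube contains
        return int(((U >= lo) & (U <= up)).reshape(S, -1).all(axis=1).sum())

    # the tubes are nested, so n_inside(tube(s)) is non-decreasing in s:
    # bisect for the smallest s whose tube already satisfies condition (A4)
    lo_s, hi_s = 1, S_alpha
    while lo_s < hi_s:
        mid = (lo_s + hi_s) // 2
        if n_inside(*tube(mid)) >= S_alpha:
            hi_s = mid
        else:
            lo_s = mid + 1
    return tube(lo_s)
```

Compared to Algorithm 2, this condensed variant bisects directly for the smallest admissible number of high-density samples; as noted above, the returned tube is small but not guaranteed to be of minimal volume.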
2303.08649
Optimization of patient-specific range modulators for conformal FLASH proton therapy
Purpose: A promising approach to enable FLASH conformal proton therapy is to passively degrade a single energy layer using a patient-specific range modulator. We propose an innovative method to directly optimize the geometrical characteristics of the range modulator and the treatment plan with respect to user defined constraints, similarly to state-of-the-art IMPT inverse planning. Methods: The kind of range modulators proposed in this study is a voxelized object which can be placed in the CT for dose computation, which simplifies the simulation pipeline. Both the geometrical characteristics of the range modulator and the weights of the PBS spots were directly optimized with respect to constraints on the dose using a first-order method. A modified Monte Carlo dose engine was used to provide an estimate of the gradient of the relaxed constraints with respect to the elevation values of the range modulator. Results: Assessed on a head and neck case, dose conformity logically appeared to be significantly degraded compared to IMPT. We then demonstrated that this degradation came mainly from the use of a large range shifter and therefore from physical limitations inherent in the passive degradation of beam energy. The geometry of the range modulator, on the other hand, was shown to be very close to being optimal. PBS dose rates were computed and discussed with respect to FLASH objectives. Conclusions: The voxelized range modulators optimized with the proposed method were proven to be optimal on a head and neck case characterized by two rather large volumes, with irregular contours and variable depths. The optimized geometry differed from conventional ridge filters as it was arbitrarily set by the optimizer. This kind of range modulators can be directly added in the CT for dose computation and is well suited for 3D printing.
Sylvain Deffet, Kevin Souris, Edmond Sterpin
2023-03-15T14:26:23Z
http://arxiv.org/abs/2303.08649v1
# Optimization of patient-specific range modulators for conformal FLASH proton therapy ###### Abstract **Background:** In proton therapy, current pencil beam scanning (PBS) systems cannot deliver intensity modulated proton therapy (IMPT) treatment with a FLASH dose rate. A promising approach to enable FLASH conformal proton therapy is to passively degrade a single energy layer using a patient-specific range modulator. This range modulator can be seen as a combination of a ridge filter and a range shifter to achieve both uniformity and conformality. Several studies have already proved the feasibility of this approach. However, in those published works, the optimization of the range modulators is more akin to dose mimicking as it is not performed with respect to the constraints used to design the original IMPT plan. In addition, a complex simulation pipeline with an external dose engine is required to deal with the parameterized geometries of the range modulators. **Purpose:** We propose an innovative method to directly optimize the geometrical characteristics of the range modulator and the treatment plan with respect to user defined constraints, similarly to state-of-the-art IMPT inverse planning. **Methods:** The kind of range modulators proposed in this study is a voxelized object which can be placed in the CT for dose computation, which simplifies the simulation pipeline. Both the geometrical characteristics of the range modulator and the weights of the PBS spots were directly optimized with respect to constraints on the dose using a first-order method. A modified Monte Carlo dose engine was used to provide an estimate of the gradient of the relaxed constraints with respect to the elevation values of the range modulator. **Results:** Assessed on a head and neck case, dose conformity logically appeared to be significantly degraded compared to IMPT. We then demonstrated that this degradation came mainly from the use of a large range shifter and therefore from physical limitations inherent in the passive degradation of beam energy. The geometry of the range modulator, on the other hand, was shown to be very close to being optimal. PBS dose rates were computed and discussed with respect to FLASH objectives. **Conclusions:** The voxelized range modulators optimized with the proposed method were proven to be optimal on a head and neck case characterized by two rather large volumes, with irregular contours and variable depths. The optimized geometry differed from conventional ridge filters as it was arbitrarily set by the optimizer. This kind of range modulators can be directly added in the CT for dose computation and is well suited for 3D printing. FLASH, FLASH Proton Therapy, Range Modulator, Ridge Filter ## 1 Introduction Recent studies have shown that the use of very high dose rates would improve the protection of healthy tissues without compromising tumor control[4, 3, 8]. However, the preclinical results are sometimes contradictory[17, 2] and the underlying mechanism is not well understood yet[11, 15, 6, 5]. Preliminary findings suggest that a dose rate of at least 40 Gy/s is a critical prerequisite for the manifestation of the FLASH effect[4]. Nevertheless, recent research indicates that this threshold may not be universal, and the degree of the effect may depend on the dose rate[9, 12]. It also seems that there could be a minimum dose threshold[12]. State-of-the-art proton therapy relies on pencil beam scanning (PBS) to achieve excellent dose conformity. 
However, current PBS systems cannot deliver an intensity modulated proton therapy (IMPT) treatment with a FLASH dose rate. On the one hand, changing the beam energy from one layer to the other takes about 1 second. Consequently, the time to deliver a field can be around 20 seconds. On the other hand, several fields are generally required and the treatment can be fractionated over several days. In the light of the above, current PBS systems must be adapted to enable FLASH treatments. The combined use of a single energy layer and a patient-specific range modulator would decrease the irradiation time and thus increase the dose rate while maintaining conformity, with minimal system modifications[12]. The 3D range modulator typically consists of a collection of pyramids of different heights which transforms the pristine Bragg peaks into spread-out ones. The base of the range modulator is shaped to act as a range compensator to match the distal contour of the tumor. It has been shown that this set-up could easily achieve dose rate of at least 40 Gy/s[18, 7]. To plan a conformal PBS FLASH treatment it is therefore necessary to optimize a patient-specific range modulator and the weights of the PBS spots. Several methods have already been proposed[13, 7, 18]. Although different, they are based on two common principles: 1. A Monte Carlo dose engine is used to create a dictionary of dose distributions from which the sizes of the slabs of the pyramids constituting the range modulator are computed. To this end, a reference IMPT plan is first optimized and the energy levels are converted into slabs of range shifter. 2. The optimized range modulator is placed between the nozzle exit and the CT to calculate the dose influence matrix required for spot weight optimization. These studies have proved the feasability of using a single energy layer with a range modulator to deliver a dose to a complex target volume. However, one can regret that the optimization of the range modulators is akin to dose mimicking as it is not performed with respect to the constraints of the original IMPT plan. In addition, this optimizations require a complex simulation pipeline with an external dose engine. We present a new approach which is different in two aspects. First, the modulator geometry and spot weights are optimized directly with respect to dose constraints similar to state-of-the-art IMPT inverse planning. We assume that the geometry of the range modulator and the weights of the PBS spots are intrinsically linked and benefit from not being optimized separately. In a way, this is similar to IMPT optimization where it is not appropriate to select the final entries of the dose influence matrix before optimizing the spot weights. The best solution can only be obtained when considering all possible combinations of range modulators and spot weights. Secondly, we do not use any priors on the geometry of the range modulator. In particular, we do not impose that it consists of pyramids. On the contrary, our range modulator has a pixelated geometry. It can be represented by an elevation map in raster form. Thus, the optimizer generates an optimal arbitrary geometry consisting eg. of truncated and non-symmetrical pyramids to better conform to the complex shape of the target volume. In addition, with such a voxelized geometry the range modulator can directly be added in the CT to perform dose calculations without having to resort to a tool supporting parameterized geometries. 
The approache presented in this paper was applied to a head and neck case and compared to state-of-the-art IMPT. ## 2 Methods ### Overview of the treatment plan optimization process In classical inverse planning, the parameters subject to optimization are the spot weights \(\mathbf{w}\). For conformal FLASH, we must also optimize the individual heights, named \(\mathbf{h}\), of the range modulator which is coded as an elevation map as shown on Fig. 1. Following a classical approach, the objectives penalize deviations of the dose from a reference value in a region of interest (ROI)[10]. For example, to enforce a minimum dose named \(D_{min}\), the following objective term may be minimized: \[f_{min}(\mathbf{w},\mathbf{h})=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}max_{\mathbf{w},\mathbf{h}}\left( 0,D_{min}-D_{i}(\mathbf{w},\mathbf{h})\right)^{2} \tag{1}\] where \(N_{r}=|ROI|\). A similar objective may be used to enforce a maximum dose named \(D_{max}\): \[f_{max}(\mathbf{w},\mathbf{h})=\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}max_{\mathbf{w},\mathbf{h}} \left(0,D_{i}(\mathbf{w},\mathbf{h})-D_{max}\right)^{2} \tag{2}\] Several objectives may be defined. Hence, the general treatment plan optimization problem takes the following form: \[min \sum_{i}f_{i}(\mathbf{w},\mathbf{h}) \tag{3}\] \[s.t. \mathbf{w}\geq 0\] \[\mathbf{h}\geq 0\] where \(f_{i}(\mathbf{w},\mathbf{h})\) are either of \(f_{min}(\mathbf{w},\mathbf{h})\) or \(f_{max}(\mathbf{w},\mathbf{h})\) -type. A common approach for the minimization of Eq. 3 with respect to \(\mathbf{w}\) involves a dose influence matrix[10]. Each column of this matrix is the dose contribution of one spot of the treatment plan. Hence, the total dose is computed as a weighted sum of the entries of the dose influence matrix. An analytical formulation of the derivative of the objective terms 1 and 2 can easily be calculated and fed to a first-order optimizer. On the contrary, the minimization of Eq. 3 with respect to \(\mathbf{h}\) cannot be computed using a dose influence matrix. Instead, we propose to compute the derivative of the objetive terms by finite difference as described in details in section 2.2 The optimization over \(\mathbf{w}\) and \(\mathbf{h}\) is done iteratively as depicted on Fig. 2. Typical layouts are square or hexagonal grids. The optimization of \(\mathbf{h}\) is nested in the optimization of \(\mathbf{w}\). In other words, at each iteration of the optimization of \(\mathbf{h}\), the spots weights \(\mathbf{w}\) are fully re-optimized based on the current value of \(\mathbf{h}\). This choice is a trade-off between the computation times of the two kinds of gradients. The evaluation of the gradient with respect to \(\mathbf{h}\) requires a full computation by finite difference for each update of \(\mathbf{h}\). Conversely, recomputing the gradient with respect to to \(\mathbf{w}\) for new values of \(\mathbf{w}\) is done based on the precalculated dose influence matrix which is must faster. ### Range modulator optimization For each iteration of the range modulator optimization, \(\mathbf{w}\) is fixed and \(\mathbf{h}\) remains the only variable to optimize on. Figure 1: A 3D range modulator can be represented as a 2D elevation map. It is split into ’towers’ and a range shifter. A minimum thickness of plain material is left attached to the ’towers’ accordingly to 3D printing specification. 
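To make the spot-weight side of problem (3) concrete, here is a minimal sketch (not the authors' code) of the objective terms (1) and (2) and their analytical gradients when the dose is the weighted sum of the columns of a dose influence matrix; the names `A` (dose influence matrix restricted to an ROI) and `w` are illustrative placeholders.

```python
import numpy as np

def f_min_and_grad(A, w, D_min):
    """Objective (1): mean squared underdosage over the ROI voxels, and its
    analytical gradient with respect to the spot weights w (dose = A @ w)."""
    d = A @ w
    deficit = np.maximum(0.0, D_min - d)
    return np.mean(deficit ** 2), -(2.0 / A.shape[0]) * (A.T @ deficit)

def f_max_and_grad(A, w, D_max):
    """Objective (2): mean squared overdosage over the ROI voxels, and its gradient."""
    d = A @ w
    excess = np.maximum(0.0, d - D_max)
    return np.mean(excess ** 2), (2.0 / A.shape[0]) * (A.T @ excess)
```

Summing such terms over the ROIs and feeding value and gradient to a first-order optimizer reproduces the classical spot-weight update; the derivative with respect to \(\mathbf{h}\) admits no such closed form, which is what motivates the finite-difference scheme described next.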
Figure 2: Schematics of the iterative optimization of the individual heights of the range modulator (RM) named \(\mathbf{h}\) and the PBS spot weights named \(\mathbf{w}\). The gradient with respect to \(\mathbf{h}\) of \(f_{min}(\mathbf{w},\mathbf{h})\) is: \[\nabla f_{min}=\frac{2}{N_{r}}\sum_{i=1}^{N_{r}}max_{\mathbf{h}}\left(0,D_{min}-D_{i} (\mathbf{w},\mathbf{h})\right)\frac{\delta D_{i}(\mathbf{w},\mathbf{h})}{\delta\mathbf{h}} \tag{4}\] A similar gradient may be derived for \(f_{max}\). In Eq. 4, the computation of \(\frac{\delta D_{i}(\mathbf{w},\mathbf{h})}{\delta\mathbf{h}}\) is a non trivial operation as the dose \(D_{i}(\mathbf{w},\mathbf{h})\) is usually computed with a Monte Carlo dose engine which does not provide any information of the variation of the particle range with respect to a small variation \(\delta E\) of its initial energy. Computing the gradient by finite difference by running a simulation for every element of the vector \(\delta\mathbf{h}\) may be intractable in terms of computation time but also in terms of memory because \(|\mathbf{h}|\) simulations would have to be performed and stored. Therefore, we have adapted our Monte Carlo dose engine to perform and store computation not per beamlet but per pixel of an initial fluence map. The derivative \(\frac{\delta D_{i}(\mathbf{w},\mathbf{h})}{\delta\mathbf{h}}\) is not directly estimated. Instead, we compute \(\frac{\delta D_{i}(\mathbf{w},\mathbf{h})}{\delta E}\) where \(\delta\mathbf{E}\) is the decrease in energy of the particles at the entrance of the range modulator caused by the increase in thickness \(\delta\mathbf{h}\). It turns out in conformal FLASH that both derivatives are proportional since a single energy layer is used. Specifically, the relationship between the change in energy and in range is given by \(\delta E=\delta h\times SP_{RM}(E)\), where \(SP_{RM}(E)\) is the stopping power of the material used for the range modulator. Our approach consists of providing a map of the fluence \(\mathbf{F}\) at the entrance of the range modulator to a Monte Carlo dose engine which was modified to perform and store a simulation independently for each pixel of the fluence map. Those developments are detailed in section 2.3. For each pixel \(h_{j}\) of the range modulator, we compute the dose \(D(\mathbf{w},\mathbf{h})\) for protons originating only at this location and with initial particles at nominal energy \(E\). The number of protons is proportional to \(F_{j}\). We then compute the dose \(D(\mathbf{w},\mathbf{h})\) in the exact same conditions but an initial energy of \(E-\Delta E\). The derivative of \(D_{i}(\mathbf{w},\mathbf{h})\) may then be approximated as: \[\frac{\delta D_{ij}(\mathbf{w},h_{j})}{\delta E} = \frac{\delta D_{i}(\mathbf{w},E_{j})}{\delta E_{j}} \tag{5}\] \[\approx \frac{D_{i}(\mathbf{w},E_{j})-D_{i}(\mathbf{w},E_{j}-\Delta E_{j})}{ \Delta E_{j}} \tag{6}\] Despite the modifications made in the dose calculator, the execution time may remain significant. To mitigate this, we propose a preliminary estimation of the objective function gradient using the continuous slowing down approximation (CSDA). 
Specifically, we estimate the dose in each voxel based on the energy loss calculated from a 3D map of relative proton stopping powers (RSPs) as follows: \[D_{x,y,z} = F_{x,y}SP_{w}\left(R(E_{0})-\int RSP_{x,y,z}dz\right) \tag{7}\] Here, \(z\) is assumed to be along the beam direction, \(SP_{w}(R(E))\) is the mass stopping power of water with respect to the range of protons having an energy equal to \(E\), and \(E_{0}\) is the nominal energy of the beam. While this approach neglects important phenomena like scattering and beam divergence, it can efficiently provide a rough estimate of \(\mathbf{h}\). It is worth noting that this simple analytical approach often yields satisfactory results for the values of \(\mathbf{h}\). ### Monte Carlo computations based on fluence map The Monte Carlo dose engine used in this study is MCsquare[14] which takes advantage of mutli-core CPU architectures. MCsquare has the ability to compute the dose independently for each beamlet and store it in a sparse matrix without significant increase of the computation time. We lacked this feature to store the dose associated to each pixel of the range modulator. MCsquare takes as input the location of the spot on the isocenter plane. To provide the fluence map to MCsquare, we first estimated the fluence at the entrance of the range modulator, given a model of the beam. Then, we projected the coordinates of the fluence values onto the isocenter plane given the distance between the steering magnets and the isocenter. Finally, the fluence map at isocenter was simply converted to the classical input taken by MCsquare. Some modifications were made to the code of MCsquare to bypass its default sampling scheme. By default, MCsquare samples the initial particle locations according to a probability density function of which the parameters are defined in a beam model given as input. This default behavior was changed so that the initial location would be exactly the position of the pixels of the fluence map projected onto the plane of the nozzle exit. In our modified MCsquare, the amount of particles to sample at each \((x,y)\) is now considered a deterministic variable corresponding to the pixels of the fluence map. The correlation between the particle direction and its location with respect to the center of the spot was also removed as such spots are not used anymore. Mathematically, MCsquare uses the following covariance matrix \(\Sigma^{2}\) to sample the particles at the nozzle exit, for each beamlet: \[\Sigma(z_{nozzle})=\begin{pmatrix}\sigma_{x}(z_{nozzle})&\rho_{x\theta}(z_{nozzle })\\ \rho_{x\theta}(z_{nozzle})&\sigma_{\theta}(z_{nozzle})\end{pmatrix} \tag{8}\] where \(\sigma_{x}(z_{nozzle})\) is the standard deviation for the particle location, \(\sigma_{\theta}(z_{nozzle})\) is the standard deviation for the particle direction and \(\rho_{x\theta}\) is the correlation between particle position and direction. A similar covariance matrix is used for the y axis, de facto assuming no correlation between x and y. However, when we provide the fluence at the nozzle exit, the initial location of the particle is now deterministic. It is constant for each pixel of the fluence map and therefore \(\sigma_{x}(z_{nozzle})\) is zero. Consequently, it does not make sense anymore to consider any correlation between the particle direction and its constant initial location. 
The covariance matrix implemented in MCsquare was thus changed to: \[\Sigma(z_{nozzle})=\begin{pmatrix}0&0\\ 0&\sigma_{\theta}(z_{nozzle})\end{pmatrix} \tag{9}\] ### Optimization of the PBS spot weights The optimization of the spot weights is done classically as in Barragan et al.[1]. A dose influence matrix is first computed using MCsquare. The optimization problem 3 is then solved by L-BFGS. ### Range modulator representation and insertion within the CT The range modulator is considered as a pixelated object which can be placed directly in the CT. The thickness in \((x,y)\) of the range modulator is encoded in a pixel value, as shown on Fig. 1. In 3D, the range modulator can be seen as a collection of 'towers' attached to a plain area which can be considered as a range shifter. Those two kinds of components can be made in different materials. Although the use of plastic material for the range modulator fabrication would be convenient, a higher density material like aluminum could be chosen for the range shifter instead. This would serve to improve the lateral dose uniformity by increasing the amount of lateral scattering. Splitting the range modulator into 'towers' and a range shifter is done dynamically at each iteration of the optimization. A minimum thickness of plain material is left attached to the 'towers' accordingly to 3D printing specifications which might be provided by the manufacturer. In this study this thickness was set to 5 mm. To avoid resampling artefacts of the range modulators, all computations were done on CTs resampled on the beams-eye views. The CT resolution was \(1\times 1\times 2\ \mathrm{mm^{3}}\). Calculated dose maps were then resampled back on the original CT. ### Experimental validation To evaluate our optimization method, we used it to design conformal FLASH plans on an head and neck case. These plans were compared to conventional IMPT. Three PTVs were drawn as extentions of the CTVs: one on the left side and two on the right side of the patient. The prescriptions were 54.25 Gy for the left PTV and 54.25 Gy and 70 Gy for the two PTVs on the right side of the patient. IMPT plans were optimized using RayStation 11b (RaySearch Laboratories, Stockholm, Sweden) considering four fields with gantry angles of around 60\({}^{\circ}\), 120\({}^{\circ}\), 240\({}^{\circ}\), and 300\({}^{\circ}\), and 10\({}^{\circ}\)couch rotation. A range shifter of 40 mm equivalent to water was used for all fields. As FLASH treatments will most likely be hypofractionated[16] and a long delay might be required between the delivery of each field, setup errors might be critical. Designing the range modulator within a robust optimization framework would be highly desirable. However, this must be considered in a second instance after that the dose conformity which can be achieved with a range modulator is proved to be sufficient. Consequently, in the absence of a robust framework, the optimization of each field was done so as to have a uniform dose for that field. Given the shape of the target volume, it is clear that 120\({}^{\circ}\)and 240\({}^{\circ}\)are not appropriate to have a uniform dose. Therefore, those beam angles were not considered for FLASH treatment optimization which suggests that a different approach will have to be taken when planning FLASH treatments or to select patients eligible for a FLASH treatment. A Python implementation of the aforementioned method was utilized to optimize FLASH plans. 
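As a rough, self-contained illustration of that spot-weight step (again a sketch, not the actual implementation), the non-negativity constraint and the dose objectives (1)-(2) can be handed to SciPy's L-BFGS-B solver; `A_roi`, `D_min` and `D_max` are illustrative placeholders for the ROI block of the dose influence matrix and the prescription bounds.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_spot_weights(A_roi, D_min, D_max, w0=None):
    """One spot-weight step of the alternating scheme: minimize the dose
    objectives over w >= 0 with L-BFGS-B, the ROI dose being A_roi @ w."""
    n_vox, n_spots = A_roi.shape
    w0 = np.ones(n_spots) if w0 is None else w0

    def objective(w):
        d = A_roi @ w
        under = np.maximum(0.0, D_min - d)   # underdosage term, cf. Eq. (1)
        over = np.maximum(0.0, d - D_max)    # overdosage term, cf. Eq. (2)
        value = (under @ under + over @ over) / n_vox
        grad = (2.0 / n_vox) * (A_roi.T @ (over - under))
        return value, grad

    result = minimize(objective, w0, jac=True, method="L-BFGS-B",
                      bounds=[(0.0, None)] * n_spots)
    return result.x
```

In the actual pipeline the dose influence matrix is produced by MCsquare for the current range modulator, so this inner optimization is re-run at every update of \(\mathbf{h}\).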
The maximum energy attainable from an IBA system, 226 MeV, was selected as the energy for the FLASH treatment. A spot size, characterized by a Gaussian distribution with standard deviations of \(\sigma_{x}=4.5\) mm and \(\sigma_{y}=5\) mm, was chosen. To account for the amplified scattering in the FLASH configuration, a spot spacing of 7 mm was employed in the present investigation. The materials used for the range modulator, the range shifter and the collimator were respectively: PMMA (density: \(1.2\ \mathrm{g/cm^{3}}\)), aluminum (density: \(2.7\ \mathrm{g/cm^{3}}\)) and tungsten (density: \(19.3\ \mathrm{g/cm^{3}}\)). ## 3 Results ### Dose conformity To evaluate our optimization method, we designed a conformal FLASH plan on an head and neck case. The target was composed of two rather large volumes, with irregular contours and variable depths. Fig. 3 shows the experimental setup and the dose distribution for the right part of the tumor. In this study, we mainly focused on the capabilities of the algorithm to give a uniform dose inside the target. We could not carry out an exhaustive dosimetric study as this head and neck case is not suitable for treatment with two opposite fields each giving a uniform dose. The combination of these fields would inevitably contribute to a superimposition of dose around the pharynx and therefore to a hot spot. In addition, a comprehensive dosimetric study should not only address differences in dose distributions but also take into account the benefits of using a high dose rate. A Comparison with IMPT is shown on Fig. 4 and 5. For the left PTV, the dose uniformity was close to IMPT. In the PTV we had \(D95=52.5\ \mathrm{Gy}\) and \(D5=57.9\ \mathrm{Gy}\) in FLASH, and \(D95=51.8\ \mathrm{Gy}\) and \(D5=57.1\ \mathrm{Gy}\) in IMPT. The dose was less uniform in the right PTV with \(D95=65.2\ \mathrm{Gy}\) and \(D5=74.5\ \mathrm{Gy}\) in FLASH, and \(D95=67.3\ \mathrm{Gy}\) and \(D5=71.4\ \mathrm{Gy}\) in IMPT. However, IMPT dose maps were obtained with two fields for each PTV whereas the FLASH treatment only had one field for each PTV. To determine whether the degradation in dose uniformity was caused by the use of the range modulator or rather by the use of a single field and a large range shifter, we removed the range modulator from the CT and designed an IMPT plan with a single field. Fig. 5 shows the experimental setup: the range modulator was removed but the range shifter was left at the exact same place in the CT. The spot spacing used for that IMPT plan was the same as that used to design the FLASH plan. In Fig. 5, the comparison of the dose profiles and the dose-volume histograms (DVHs) in the CTV shows that the design of the range modulator was actually very close to being optimal for these conditions, and that the dose degradation mainly came from the use of a single field and a large range modulator. D95s differ by \(0.1\ \mathrm{Gy}\) and D5s differ by \(0.7\ \mathrm{Gy}\). Figure 3: Experimental setup for the right part of the tumor. From nozzle to patient: range modulator (PMMA), range shifter (aluminium), and aperture (tungsten). Figure 4: (a) Dose distribution obtained with the FLASH plan, (b) dose profile corresponding to a line passing through the center of the PTV, (c) IMPT dose distribution, and (d) DVH comparison for the left PTV and a prescription of 54.25 Gy. 
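For reference, the D95 and D5 values quoted above are standard DVH metrics: Dx is the dose received by at least x% of the structure volume, which (assuming equal voxel volumes) equals the (100-x)-th percentile of the voxel doses inside the structure. A short sketch with illustrative names:

```python
import numpy as np

def dvh_metric(dose_in_roi, x):
    """Dx: dose received by at least x% of the ROI volume, i.e. the
    (100 - x)-th percentile of the voxel doses (equal voxel volumes assumed)."""
    return np.percentile(np.asarray(dose_in_roi), 100.0 - x)

# e.g. D95 and D5 inside a PTV mask:
# d95 = dvh_metric(dose[ptv_mask], 95); d5 = dvh_metric(dose[ptv_mask], 5)
```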
Figure 5: (a) Dose distribution obtained with the FLASH plan, (b) comparison of the corresponding DVH and that obtained in standard IMPT, (c) Dose distribution obtained with the single field IMPT plan and the same range shifter as for the FLASH plan, and (d) comparison of the corresponding DVH and that obtained in FLASH, in the CTV. ### Dose rate In the current study, the PTVs considered had prescription doses of 54.5 Gy and 70 Gy. However, to accurately compute the dose rate, it is imperative to consider the dose per fraction, which is likely to be around 8 Gy for hypoffractionated FLASH treatments. The spots were arranged in a conventional serpentine pattern, as depicted in Fig. 6c. The scanning speed was set at 8000 mm/s, and the current averaged on a pulse period was 500 nA at the nozzle output. To assess the efficacy of a FLASH treatment, it is pertinent to evaluate the dose rate in the organs at risk (OARs) and the healthy tissues surrounding the target volume. However, to facilitate a direct comparison, a unique volume surrounding the PTV was defined instead of distinct volumes for each OAR. To delineate this volume, we first computed an extension of the PTV by 20 mm, named PTV-ext, which excluded areas where the dose was lower than 1 Gy. The dose rate maps, along with the dose rate volume histograms (DRVHs), are presented in Fig. 6. It can be observed that the threshold of 40 Gy/s, commonly associated with the FLASH effect[4], is not achieved. Notably, the PBS dose rate is influenced by the length to be scanned in the primary scanning direction. Thus, the dose rate may potentially be enhanced through optimization of the scanning pattern. ## 4 Discussion The results presented in the previous section showed that the range modulators which were optimized for a complex head and neck case were optimal when considering dose uniformity and conformality. One of the novel aspects in the proposed approach is that the range modulator is voxelized which has several advantages. First, it can be placed in the planning CT for dose computation. This greatly simplifies the simulation workflow as an external Monte Carlo dose engine supporting parameterized geometries like Geant4 do not need to be used. In addition, the geometry of the range modulator is arbitrarily set by the optimizer to optimally conform to the complex Figure 6: (a) and (b) Dose rate distribution computed for a current of 320 nA, (c) map of the intensity of the PBS spots, and (d) dose rate volume histogram. shape of the target volume. The algorithm is free to determine the height of each individual pixel of the elevation map and this typically results in asymmetrical pyramids as can be seen on the figures presented in Section 3. Arbitrary geometry was not the sole improvement provided by the proposed method. Both the weights of the PBS spots and the individual heights of the range modulator were optimized directly under constraints. Thus, the range modulator was not optimized given spot weights but instead both spot weights and modulator elevations were updated at each iteration as dependent variables. This was possible thanks to the use of a fast Monte Carlo which could discriminate the contribution of each pixel of the range modulator. In addition to the advantages just mentioned above, the proposed voxelized range modulator has a rather coarse resolution compared to other works in which it is similar to a conventional ridge filter with a high number of pyramidal peaks of variable prominence. 
The proposed range modulator typically has a resolution of around \(1\times 1\times 1\,\mathrm{mm}^{3}\). This is an asset for 3D printing. Even though current printing techniques have a precision of much less than one millimeter, this is not necessarily desirable from a mechanical point of view, in particular regarding the geometrical integrity in the presence of vibration or any externally applied force, including gravity. In this first study, we compared a conformal FLASH plan to an IMPT plan optimized with a modern TPS considering robustness criteria. Conformal FLASH logically underperformed when compared to IMPT which comes from physical limitations, in particular the degradation of the distal fall off. However, we showed that the presented approach led to a dose distribution very close to that obtained with a single field IMPT and the same range shifter as that used for FLASH. The remaining slight difference could be explained by the fact that just as the range shifter degrades the sharpness of the Bragg peak, an additional degradation arises from the use of the range modulator. These observations makes it clear that the single field uniform dose approach considered in FLASH cannot compete with conventional IMPT in terms of dose conformity, and that should drive the selection of patient for FLASH treatments and the way that the plan would be designed. In addition, the optimization method could be integrated into a more complete approach, in particular with robustness criteria and a model of the FLASH effect on tumor control. ## 5 Conclusions A joint range modulator and treatment plan optimization method was proposed and validated on an head an neck case. The optimization is done directly under constraints, similarly to IMPT inverse planning. The range modulator has a voxilized geometry which has many advantages: a simpler dose calculation pipeline, a geometry which can be arbitrarily determined by the optimizer and which is well suited for 3D printing. ## 6 Acknowledgments This work was supported by the Walloon Region of Belgium through technology innovation partnership no. 8341 (EPT-1 - Emerging Proton Therapies Phase 1) co-led by MecaTech and BioWin clusters. ## 7 Conflict of Interest Statement Dr. Kevin Souris was an employee of Ion Beam Applications during the writing of this paper.
2310.08771
Complex dimensions for IFS with overlaps
The notion of complex dimension of a one-dimensional Cantor set $C=\bigcap_{n=1}^\infty C_n$ dates back decades. It is defined as the set of poles of the meromorphic $\zeta$-function $\zeta(s)=\sum_{n=1}^{\infty}d_j^s$, where $\Re s>0$, and $d_j$ is the length of the $j$th interval in $C_n$. Following the trend, I switch from sets to measures, which will allow me to generalize the construction to iterated function schemes that do not necessarily satisfy the Open Set Condition.
Nikita Sidorov
2023-10-12T23:34:44Z
http://arxiv.org/abs/2310.08771v3
# Complex dimensions for IFS with overlaps ###### Abstract. The notion of complex dimension of a one-dimensional Cantor set \(C=\bigcap_{n=1}^{\infty}C_{n}\) dates back decades [2]. It is defined as the set of poles of the meromorphic \(\zeta\)-function \(\zeta(s)=\sum_{n=1}^{\infty}d_{j}^{s}\), where \(\Re s>0\), and \(d_{j}\) is the length of the \(j\)th interval in \(C_{n}\). Following the trend, I switch from sets to measures, which will allow me to generalize the construction to iterated function schemes that do not necessarily satisfy the Open Set Condition. Key words and phrases:Bernoulli convolution 2010 Mathematics Subject Classification: 26D20 Let \(\{f_{1},\ldots,f_{m}\}\) be the iterated function scheme on \(\mathbb{R}\) with \(f_{i}(x)=\rho x+(1-\rho)a_{i}\), where \(\rho\in(0,1),a_{i}\in\mathbb{N},a_{1}=0<a_{2}<\cdots<a_{m}\). Let \(p_{1},\ldots,p_{m}\in(0,1)\) with \(p_{1}+\cdots+p_{m}=1\), and \(\mu=\mu(\rho,\{a_{i}\},\{p_{i}\})\) be the pushdown measure for this IFS. The support of \(\mu\) is a subset of \(I=[0,a_{m}]\). Assume \(\rho\) to be algebraic so, as is well known, \(\mu\) is exact-dimensional with dimension \(D\), say. Fix \(\varepsilon>0\) and \(n>1\). Let \(D_{n}\) be the disjoint union of intervals that form the set \[\left\{\sum_{i=1}^{n}b_{i}\rho^{i}:b_{i}\in\{a_{1},\ldots,a_{m}\}\right\}.\] It was shown in [4] that \(\#D_{n}\gg\rho^{-n}\). Construct \(\ell_{j}\) as follows: \(\ell_{j}=\mu(J_{j})\), where \(J_{j}\in D_{n}\) such that \[|\mu(J_{j})-\rho^{Dn}|<\varepsilon.\] Put \[\zeta(s,\varepsilon,n)=\sum_{1\leq j\leq\#D_{n}:|\mu(J_{j})-\rho^{Dn}|< \varepsilon}\ell_{j}^{s}.\] **Theorem 1** (H).: _The sequence \(\zeta(s,\varepsilon,n)\) converges as \((\varepsilon,n)\to(0,\infty)\) to a meromorphic function that_ * _is holomorphic on_ \(\{s\in\mathbb{C}:\Re s>1\}\)_;_ * _has the set of poles_ \(\{s\in\mathbb{C}:0<\Re s<1\}\)_._ Proof.: Fix \(s\) with \(\Re s>0,n\geq 1\) and assume \(k>0\). We observe that \[|\zeta(s,\varepsilon^{\prime},n+k)-\zeta(s,\varepsilon,n)| =\left|\sum_{1\leq j\leq\#D_{n+k}:|\mu(J_{j})-\rho^{D(n+k)}|< \varepsilon^{\prime}}\ell_{j}^{s}-\sum_{1\leq j\leq\#D_{n}:|\mu(J_{j})-\rho^{Dn }|<\varepsilon}\ell_{j}^{s}\right|\] \[\leq\left|\sum_{\#D_{n}+1\leq j\leq\#D_{n+k}}|\ell_{j}^{s}|\right| \ll\rho^{k}\cdot\max_{\#D_{n}+1\leq j\leq\#D_{n+k}}\ell_{j}^{\Re s+i\Im s}\] \[\ll\rho^{k}\cdot\rho^{(n+k)\Re s}\to 0,\quad k\to\infty.\] Hence there exists \[\lim_{(\varepsilon,n)\to(0,\infty)}\zeta(s,\varepsilon,n)=:\zeta(s).\] It's time to switch to Probability and use Chebyshev's inequality. Roughly speaking, for "most" \(j\) we have \(\mu(J_{j})\approx\rho^{Dn}\), the mean of this distribution, and the aforementioned inequality will allow me to estimate those indices \(j\) that yield values of \(\mu(J_{j})\) that fall sufficiently "far" from the mean value. The mean value of \(\mu(J_{j})\) is known to be of exponential order \(\rho^{Dn}\)[4]. Hence by Chebyshev's inequality, \[\sum_{j:|\mu(J_{j})-\rho^{Dn}|>\varepsilon}\mu(J_{j})<\varepsilon^{2},\] whence \[\sum_{j:|\mu(J_{j})-\rho^{Dn}|\leq\varepsilon}\ell_{j}\geq 1-\varepsilon^{2}.\] Since \(|\ell_{j}^{s}|=\ell_{j}^{\Re s}\), \[\left|\sum_{j:|\mu(J_{j})-\rho^{Dn}|<\varepsilon}\ell_{j}^{s}\right|=\left| \sum_{j:|\mu(J_{j})-\rho^{Dn}|<\varepsilon}\ell_{j}\cdot\ell_{j}^{s-1}\right| \geq(\min_{j}\ell_{j})^{\Re s-1}\cdot(1-\varepsilon^{2}).\] Thus, \[m^{-n(\Re s-1)}(1-\varepsilon)\asymp m^{-n(\Re s-1)}\to\infty\Leftrightarrow \Re s<1,\] since \(\min\ell_{j}\asymp m^{-n}\). 
Hence the set of poles of \(\zeta\) contains the vertical strip \(\{s:\Re s\in(0,1)\}\). On the other hand, if \(\Re s>1\), then trivially \[\left|\sum_{j:|\mu(J_{j})-\rho^{Dn}|<\varepsilon}\ell_{j}^{s}\right|\leq 1\] whence \(\zeta\) is analytic on \(\{s\in\mathbb{C}:\Re s>1\}\). What happens on the boundary? Let's see. Let \(s=1+it\). We have \[\left|\sum_{j:\left|\mu(J_{j})-\rho^{Dn}\right|<\varepsilon}\ell_{j}^{1+it} \right|=\left|\sum_{j:\left|\mu(J_{j})-\rho^{Dn}\right|<\varepsilon}\ell_{j} \cdot\ell_{j}^{it}\right|=\left|\sum_{j:\left|\mu(J_{j})-\rho^{Dn}\right|< \varepsilon}\ell_{j}\cdot\exp(it\log\ell_{j})\right|\] \[=\left|\sum_{j:\left|\mu(J_{j})-\rho^{Dn}\right|<\varepsilon}\ell_{j}\cdot(\cos (b\log\ell_{j})+i\sin(b\log\ell_{j}))\right|:=\left|F_{n}(t)\right|.\] Set \[F(t)=\lim_{n\to\infty}F_{n}(t).\] **Theorem 2** (F).: _The following properties are either trivial or can be easily proved:_ **1.**_\(\left|F(t)\right|<1\)._ **2.**_\(F\) is, generally, aperiodic, so not your run-of-the-mill trig sum._ **3.**_\(F\in C^{\infty}(\mathbb{R})\)._ _Remark 3_.: **1.** Theorem H yields a rather crude dichotomy compared to affine cases without overlaps from [3] (see below). Said that, there are no natural affine cases - I guess the set of numbers whose continued fraction expansion contains only 1s and 2s is the most natural one. The set of poles here is uniformly discrete, but their picture looks pretty random. No explicit coordinates of these poles are known. **2.** The history of the problem can be learned from Mark Pollicott's slides [3] from his talk at the One World Numeration seminar. I am grateful to Wolfgang Steiner for organizing this talk. **3.** I think it is unlikely that \(F\) real analytic. A uniformly convergent sequence of real analytic functions can converge to any continuous function - even a sequence of polynomials, according to the Weierstrass theorem whose proof by Sergei Bernstein used Chebyshev's inequality and served as an inspiration for my proof of Theorem H.
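As a purely numerical aside (not part of the argument above), the pushdown measure \(\mu\) and the cell masses \(\ell_j\) entering \(\zeta(s,\varepsilon,n)\) can be approximated by random iteration of the IFS. The sketch below uses a uniform partition of \([0,a_m]\) at scale \(\approx\rho^n\) as a crude stand-in for the intervals of \(D_n\) and omits the \(\varepsilon\)-window on \(\mu(J_j)\), which would require the exact dimension \(D\); all names and parameter values are illustrative.

```python
import numpy as np

def sample_pushdown_measure(rho, a, p, n_samples=100_000, depth=60, seed=0):
    """Approximate samples from the pushdown (self-similar) measure mu of the
    IFS f_i(x) = rho*x + (1-rho)*a_i chosen with probabilities p_i, obtained
    by random iteration ("chaos game"); depth iterations give error O(rho^depth)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float)
    x = np.zeros(n_samples)
    for _ in range(depth):
        x = rho * x + (1.0 - rho) * a[rng.choice(len(a), size=n_samples, p=p)]
    return x

# estimate mu(J) on a uniform partition of [0, a_m] at scale ~ rho^n (a stand-in
# for the intervals of D_n) and form the plain partial sum of ell_j^s over
# non-empty cells, without the epsilon-window selection used in the text
rho, a, p, n = 2 / 3, [0, 1], [0.5, 0.5], 8
x = sample_pushdown_measure(rho, a, p)
counts, _ = np.histogram(x, bins=int(np.ceil(a[-1] / rho**n)), range=(0.0, a[-1]))
ell = counts / counts.sum()
s = 0.5 + 2.0j
print(np.sum(ell[ell > 0].astype(complex) ** s))
```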
2301.01772
Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem
The Transformer architecture yields state-of-the-art results in many tasks such as natural language processing (NLP) and computer vision (CV), thanks to its ability to efficiently capture precise long-range dependency couplings between input sequences. With this advanced capability, however, the quadratic time complexity and high memory usage prevent the Transformer from dealing with the long time-series forecasting problem (LTFP). To address these difficulties: (i) we revisit the learned attention patterns of the vanilla self-attention and redesign the calculation of self-attention based on the Maximum Entropy Principle; (ii) we propose a new method to sparsify the self-attention, which can prevent the loss of the more important self-attention scores due to random sampling; (iii) we propose a Keys/Values Distilling method, motivated by the fact that a large amount of the features in the original self-attention map are redundant, which can further reduce the time and space complexity and make it possible to input longer time-series. Finally, we propose a method that combines the encoder-decoder architecture with seasonal-trend decomposition, i.e., using the encoder-decoder architecture to capture the more specific seasonal parts. A large number of experiments on several large-scale datasets show that our Infomaxformer is clearly superior to existing methods. We expect this to open up a new solution for the Transformer to solve LTFP, and to explore the ability of the Transformer architecture to capture much longer temporal dependencies.
Peiwang Tang, Xianchao Zhang
2023-01-04T14:08:21Z
http://arxiv.org/abs/2301.01772v1
# Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem ###### Abstract. The Transformer architecture yields state-of-the-art results in many tasks such as natural language processing (NLP) and computer vision (CV), thanks to its ability to efficiently capture precise long-range dependency couplings between input sequences. With this advanced capability, however, the quadratic time complexity and high memory usage prevent the Transformer from dealing with the long time-series forecasting problem (LTFP). To address these difficulties: (i) we revisit the learned attention patterns of the vanilla self-attention and redesign the calculation of self-attention based on the **Maximum Entropy Principle**; (ii) we propose a new method to sparsify the self-attention, which can prevent the loss of the more important self-attention scores due to random sampling; (iii) we propose a Keys/Values Distilling method, motivated by the fact that a large amount of the features in the original self-attention map are redundant, which can further reduce the time and space complexity and make it possible to input longer time-series. Finally, we propose a method that combines the encoder-decoder architecture with seasonal-trend decomposition, i.e., using the encoder-decoder architecture to capture the more specific seasonal parts. A large number of experiments on several large-scale datasets show that our Infomaxformer is clearly superior to existing methods. We expect this to open up a new solution for the Transformer to solve LTFP, and to explore the ability of the Transformer architecture to capture much longer temporal dependencies. Maximum Entropy, Transformer, Time-Series, Forecasting + Footnote †: (c): corresponding author
2302.01222
A novel automatic wind power prediction framework based on multi-time scale and temporal attention mechanisms
Wind energy is a widely distributed, renewable, and environmentally friendly energy source that plays a crucial role in mitigating global warming and addressing energy shortages. Nevertheless, wind power generation is characterized by volatility, intermittence, and randomness, which hinder its ability to serve as a reliable power source for the grid. Accurate wind power forecasting is crucial for developing a new power system that heavily relies on renewable energy sources. However, traditional wind power forecasting systems primarily focus on ultra-short-term or short-term forecasts, limiting their ability to address the diverse adjustment requirements of the power system simultaneously. To overcome these challenges, we propose an automatic framework capable of forecasting wind power across multiple time scales. The framework is based on the tree-structured Parzen estimator (TPE) and the temporal fusion transformer (TFT) and can provide ultra-short-term, short-term and medium-term wind power forecasts. Our approach employs the TFT for wind power forecasting and categorizes features based on their properties. Additionally, we introduce a generic algorithm to simultaneously fine-tune the hyperparameters of the decomposition method and the model. We evaluate the performance of our framework by conducting ablation experiments using three commonly used decomposition algorithms and six state-of-the-art models for multi-time-scale forecasting. The experimental results demonstrate that our proposed method considerably improves prediction accuracy on the public dataset Engie https://opendata-renewables.engie.com. Compared to the second-best state-of-the-art model, our approach exhibits a reduction of 31.75% and 28.74% in normalized mean absolute error (nMAE) for 24-hour forecasting, and 20.79% and 16.93% in nMAE for 48-hour forecasting, respectively.
Meiyu Jiang, Jun Shen, Xuetao Jiang, Lihui Luo, Rui Zhou, Qingguo Zhou
2023-02-02T17:03:08Z
http://arxiv.org/abs/2302.01222v5
# A novel framework for medium-term wind power prediction based on temporal attention mechanisms ###### Abstract Traditional energy sources such as coal, crude oil and natural gas are burned to generate electricity and they emit more than 75% of the world's greenhouse gases, with approximately 90% of the world's carbon dioxide emissions [1]. This is the leading cause of global climate change. Traditional energy sources have limited reserves and can be depleted if over-exploited [2]. Due to the non-renewable nature of traditional energy sources, the present energy industry is facing great challenges, whereas the prices of traditional energy sources are maintained at high levels and are gradually increasing. In order to handle the energy crisis, more and more countries choose the renewable energy as alternative energy sources. Renewable energies includes wind, solar, tidal and biomass, which has the advantages of being widely distributed, environment friendly and often recyclable. With the rapid development of wind power technology, converting wind energy into electricity is becoming more efficient and less costly. Wind power plays an essential role in the growth of global electricity generation and has received widespread attention worldwide. According to the Global Wind Report 2022 [3], the total installed capacity of wind power installations worldwide had been 837 GW in 2021, an increase of 12% compared to 2020. Newly installed wind installations increased by 93.6 GW, slightly more than the 93 GW increase in 2020. Among them, China has led the global wind energy industry since 2010, with newly installed wind power capacity increasing significantly over the last 11 years and remaining at the top of the world. According to the National Energy Administration (NEA), the total installed capacity of wind power in China is 52GW in 2021, with 47.5GW of new grid-connected capacity. The principle of wind power is that wind energy is first converted into mechanical energy and then into electrical energy. Wind energy is characterized by high uncertainty, discontinuity and frequent fluctuations. In other words, the wind turbine power generation is unstable and difficult to predict, affecting the scheduling and planning of power systems and often leading to power supply and demand imbalances. To solve these problems, it is necessary to have accurate forecasts of future wind power generation, which helps to reduce costs and increase power generation efficiency. Wind power technology is therefore becoming increasingly important in the field of renewable energy. Wind power forecasting models can be classified as ultra-short-term, short-term, medium-term and long-term depending on the time horizon [4; 5; 6], as shown in Figure 1. The division of time scales differs from literature to literature, and there are differences in the specific wind power generation applications. The ultra-short-term forecasting model facilitates real-time access to wind power generation. Short-term forecasting models are helpful for power companies to develop load dispatching plans, mitigate the impact of wind power grid integration on the entire grid and ensure the safe operation of the electricity market. Medium-term forecasting models facilitate the renewable energy trading, optimization of generation schedules, and circuit overhauls. Long-term forecasting models are employed for the maintenance planning, such as for the location of wind farms and the development of annual generation plans. 
Some literature suggests that the accuracy of predictions decreases as the time horizon increases [7, 4, 8]. Also, given the high instability of the wind, it is more difficult to predict if the time horizon chosen is too long. This paper focuses on medium-term wind power forecasting, which is the most challenging. ### Contribution and paper organization This paper focuses on medium-term wind power forecasting by solving the remaining problems with the recent deep learning based methods. Noticeably, as we will discuss in the next section, many frameworks or methods using signal decomposition algorithms and deep learning have been proposed by other scholars, some of which also use optimization algorithms to tune the results. Among them, they provide exhaustive experiments on selecting parameters for decomposition algorithms and deep learning models on specific datasets. However, the practicality of these configurations may be limited, requiring manual re-experimentation on new data. To tackle this situation, this paper proposes a novel medium-term forecasting framework by tree-structured parzen estimator (TPE), VMD and time fusion transformer (TFT), this framework defines the TPE-VMD-TFT method for accurate wind power forecasting. The main contributions of this study are as follows: 1. A novel medium-term prediction framework based on TPE and decomposition algorithms is proposed, which defines the TPE-VMD-TFT method to predict wind turbine's wind power generation 24-h and 48-h ahead. The method achieved the lowest normalized mean absolute error (nMAE) and normalized root mean square error (nRMSE) so far in the public dataset eigen. 2. The TPE decomposition tuning based on model (TPE-DTBM) algorithm is proposed that optimizes the parameters of the decomposition algorithm using the TPE algorithm, which can be generalized to other common decomposition algorithms and models for wind power forecasting. 3. The proposed framework and methodology are evaluated and analyzed from several aspects. We use nMAE, nRMSE and predictions distribution and to measure their performance on different time horizons. We evaluate both the accuracy and the stability of the proposed method. The other subsections of the paper are organised as the following: Section 3 describes the basic theory and signal decomposition algorithms involved in this paper. Section 4 demonstrates the validity of the proposed wind power forecasting method by using the Engie wind dataset provided by the France's national power corporation. Conclusions and future works are given in Section 5. ## 2 Literature review According to modeling theory, wind power models can be classified into four groups: physical models, traditional statistical models, artificial intelligence-based models and hybrid models [9]. The summary of selected wind power forecasting studies is shown in Table 1. Figure 1: Time-scale classification of wind power forecasting. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Classification & Author & Forecasted areas & Forecasting & Input variables & Forecasting Methods \\ \hline Physical models & Lazić et al. [10] & eastern Sweden & 48-h & wind speed, wind shear, turbulence, and air density & regional Eta model \\ Statistical models & Yatiyana et al. [11] & Western Australia & 6-h & wind speed and wind directions & ARIMA \\ & Firat et al. [12] & Netherlands & 24-h, 48-h, etc & wind speed & ICA and AR model \\ Artificial intelligence & Bilal et al. 
[13] & Senegal & 1-min & temperature, humidity, and wind direction & ANN \\ models & & & & wind speed, wind density, \\ & Jyothi et al. [14] & North India & 10-min & temperature, & AWNN \\ & & & and wind direction & \\ & & & the sine and cosine of wind direction, & \\ & & & wind speed, temperature, & \\ & & & atmospheric pressure, humidity, & \\ & & & and wind power & \\ & & & wind speed, wind direction, & \\ & & & temperature, air pressure, & \\ & & & air density, hour, month, & \\ & & & and wind power & \\ & & & wind speed, & \\ & & & & the sine and cosine of wind direction, & EEMD-BA-RGRU-CSO \\ & & & & and wind power & \\ & & & wind speed, wind direction, & \\ & & & & temperature, air pressure, & \\ & & & air density, hour, month, & \\ & & & and wind power & \\ & & & wind speed, & \\ Hybrid models & Shi et al. [20] & America & 7-h, 9h, etc & wind power & ARIMA-ANN model, \\ & & & & temperature, wind speed, & \\ & & & and wind power & \\ & & & & wind speed, wind direction, & \\ & & & & blade pitch angle, density, & RBF neural network \\ & & & & and rotor speed & \\ & & & zonal, meridional, wind speed, & \\ & & & & wind direction, and wind power & \\ & & & wind speed, wind direction, & \\ & & & temperature, and wind power & \\ \hline \hline \end{tabular} \end{table} Table 1: The summary of the selected wind power forecasting studies The physical approach does not require historical wind power generation data, but relies on the physical characteristics of the wind farm. Physical models can obtain predicted wind power generation values by predicting meteorological variables [25]. The meteorological variables herein usually refer to wind speeds, which can be derived from numerical weather prediction (NWP) or from the meteorological factors such as temperature, humidity and atmospheric pressure. The conversion of wind speed and wind power can be obtained from wind power curves. The wind power curve is a graph that shows the power of a wind turbine at different wind speeds, created by practitioners measuring wind speeds at wind farms, while further showing the wind turbine's power output at different wind speeds. Ackermann et al. [26] show that the wind turbine power is proportional to the third power of the wind speed in wind power curves. If there is a 10% error in the predicted wind speed, the error in the predicted wind turbine power will be 30%. Therefore, accurate prediction of wind speed is important for wind power forecasting. There have been many researches that use physical model-based solutions for wind power forecasting. For example, Lazi\(\acute{c}\) et al. [10] use the regional Eta model for wind farms to obtain predicted wind speed and then predict wind power based on the wind power curve. They first evaluate the application of the Eta model to wind power prediction problems and to validate it using data from the Gotland island power plant. The results show that the Eta model can be used as a meteorological driver for wind power prediction. The physical approach performs better in long-term wind power forecasting, while the statistical approach has advantages in short-term forecasting [27]. Statistical models require large amounts of data, such as historical power values of wind turbines, historical weather data and weather information to train a fitted model between input and output power. 
Statistical models can be classified into two types depending on the used parameters: parametric models like the time series models, and non-parametric models based on artificial intelligence such as artificial neural network (ANN) models and other data-driven models. Time series models are advantageous for short-term or ultra-short-term wind power forecasting, which could capture fluctuating variations in output power. Autoregressive integrated moving average (ARIMA) and autoregressive (AR) model are commonly used univariate time series forecasting methods. They have a short computation time and are suitable for smooth time series, which can lead to inaccurate predictions with volatile input data. Many methods based on traditional statistical models have been proposed to improve the accuracy of wind speed predictions, and then use the wind power curve to convert wind speed to wind power. Yatiyana et al. [11] proposed a statistical method for predicting wind speed and direction based on an ARIMA model. They used data from a specific site in western Australia for validation, with a time duration of 7 days and a resolution of one minute. The experimental results show that the error in predicting wind speed is less than 5%, and the error in predicting wind direction is less than 16%. Firat et al. [12] proposed a statistical method for wind speed prediction based on second order blind identification (SOBI) and AR model. SOBI is an independent component analysis (ICA) method that uses temporal structure to find hidden information or independent components in the input data, providing better prediction results than the direct wind speed prediction. Physical models require more detailed meteorological data and physical characteristics, which are more demanding on the data set. Moreover, wind turbines are complex and the wind-to-electricity conversion pattern is difficult to model. Traditional statistical models deal mainly with linear relationships, making it difficult to capture the characteristics between non-linear time-series wind power signal sequences. With the development of artificial intelligence techniques, many ANN-based methods have been proposed in the literature on wind power forecasting. Bilal et al. [13] proposed an ANN-based model for predicting wind turbine power. The input features are meteorological factors such as wind speed, wind direction, solar irradiance, temperature and humidity. The experimental results show that using meteorological factors as inputs to the ANN impacts the model's performance, with the most significant impact when wind direction and wind speed were used as feature inputs, and the accuracy of the prediction results was significantly improved. Jyothi et al. [14] used adaptive wavelet neural network (AWNN) for wind power generation. In addition to wind speed, wind direction and ambient temperature, wind density is added to the input characteristics, and the authors use morlet wavelets as motor wavelets. The AWNN was shown to be more effective than ANNs and the adaptive neuro-fuzzy inference system (ANFIS) for wind power prediction problems. The hybrid model combines different models that can improve forecasting accuracy by retaining each model's strengths and utilizing different aspects of data fluctuations in wind power. In the literature, a delicately designed hybrid model in time series forecasting tasks could capture linear and non-linear features of the time series data [28]. For example, Shi et al. 
[20] proposed two hybrid models, ARIMA-ANN and ARIMA-SVM, and comparing them with ARIMA, ANN or SVM models, shows that the hybrid models works better than single models. Therefore hybrid models are widely used for wind power forecasting. Wang et al. [21] proposed a cluster hybrid wind power forecasting model called PSO-SVM-ARMA. Shetty et al. [22] proposed the radial basis function neural network (RBFNN) model for wind power prediction. They use PSO to optimize the performance of the model and extreme learning machine (ELM) to improve the learning speed during training. Zameer et al. [23] propose a GPeANN model consisting of ANN and genetic programming for short-term wind power forecasting. Due to the high instability of meteorological features such as wind speed and wind direction, the predictions of individual models vary greatly. Experiments demonstrate that the proposed GPeANN model can generate a collective and robust decision space, which can avoid the above problems. Liu et al. [24] proposed a new wind power prediction method using the ANFIS by mixing three models, which are the back propagation neural network (BPNN), RBFNN and least squares support vector machine (LSSVM). In order to improve the accuracy of the prediction, a method based on the Pearson correlation coefficient (PCC) is used in data pre-processing. The results show that the proposed hybrid method works better than the individual models, regardless of the season. In recent years, deep neural networks (DNNs) have been widely used for wind power prediction, such as the convolutional neural network (CNN), long short-term memory (LSTM) and gated recurrent unit (GRU), etc. Some studies have shown that combining two or more DNNs can improve the prediction results. Yu et al. [15] proposed a spatiotemporal quantile regression (SQR) algorithm based on quantile regression (QR) and the hybrid neural network (HNN) for predicting regional wind power. HNN is a special neural network obtained by mixing CNN and LSTM. It combines the advantages of CNN and LSTM models. HNN uses CNN to extract spatio-temporal features from time series data and then feeds the features into the LSTM model. Niu et al. [16] proposed a sequence-to-sequence model based on the attention-based GRU (AGRU) for multi-step prediction of wind power by using multiple input and output strategies. The feature selection approach evaluates the importance of each input variable by combining the attention mechanism with the GRU model. The results show that wind speed and direction impact wind power forecasting to the most, followed by barometric pressure and seasonal variability. More recently, a hybrid wind power forecasting method called EEMD-BA-RGRU-CSO was proposed by Meng et al. [17]. This hybrid model consists of four different models used at different stages of the experiment. Ensemble empirical mode decomposition (EEMD) is used to decompose the wind turbine power data during data pre-processing, and then the bi-attention (BA) mechanism is used for feature selection. For prediction, the RGRU model using a combination of the residual network and GRU is proposed to extract the static and dynamic relationships between features. The performance of the RGRU model is optimized at training time by using the crisscross optimization algorithm (CSO). The experimental results show that the proposed hybrid method has better prediction results and is more stable than other models mentioned in the literature. 
In order to improve the accuracy of wind power prediction models, many studies have used data pre-processing techniques to reduce the instability of the raw input data to better extract features. For example, many works proposed to combine signal decomposition algorithms such as wavelet decomposition (WD), empirical mode decomposition (EMD), EEMD and variational mode decomposition (VMD) with machine learning models. Rayi et al. [18] used the VMD algorithm to decompose historical wind power data with non-linear and non-smooth characteristics. Then they built different forecasting models for each intrinsic mode function (IMF) obtained from the decomposition, effectively improving the predicted results. Duan et al. [19] used the vmd algorithm to decompose the wind power time series, so that the model could better extract local features. LSTM and deep belief networks based on PSO (PSO-DBN) were then used to construct a hybrid model for forecasting. Our work proposed in this paper will be tackling the issues that existing frameworks need extra data processing and lack the generality on heterogeneous data. We will detail our method in the next sections. ## 3 Methodology ### LSTM Hochreiter and Schmidhuber proposed the LSTM model, a particular type of recurrent neural network (RNN) model [29]. It not only learns contextual information and long-term dependencies stored in time series data, but also overcomes the gradient disappearance problem of RNN models. The structure of the LSTM cell is shown in Figure 2.The modular unit of the LSTM consists of three types of gates: input gate, output gate and forget gate. The three gates have different roles. The input gate controls the information input module cell and determines which of the current stream of information can be added to the internal state of the storage cell, which then updates the cell state. The forget gate determines which information will be discarded to preserve new information. The output gate acts in the hidden layer and controls whether the information is used as the output of the current LSTM. The gating mechanism can selectively discard irrelevant information and keep useful information, thus solving the problem of gradient dispersion in traditional RNNs. ### XGBRegressor Unlike ANN, LSTM and CNN, which belong to strong learner, the eXtreme Gradient Boosting (XGBoost) algorithm proposed by Chen et al. is a Boosting algorithm with multiple learners [30]. XGBoost is an improvement and excellent practice of the gradient boosting decision tree (GBDT) algorithm, which adds a regularisation term to the GBDT objective function that related to the Figure 2: LSTM network structure diagram. node partitioning difficulty factor and tree size. It controls the tree size to reduce the possibility of overfitting and speeding up the convergence of the algorithm. In addition, second-order Taylor expansions can make the loss function accurate and allow the function to converge in a precise direction. As one of the few integrated learning algorithms that can compete with strong learners, many researchers have conducted further studies such as missing value processing and feature importance analysis to the XGBoost algorithm, making it valuable for practical problems. XGBoost that handling regression problems is referred as the XGBRegressor. ### Tft TFT is a deep neural network architecture based on the attention mechanism which is proposed by Lim et al. and widely used in time series data prediction [31]. 
The model architecture of the TFT is shown in Figure 3. compared to other artificial intelligence-based models, the TFT model has a much-improved performance with greater interpretability. The TFT model classifies input features into different types, including static covariates, future inputs that can be speculated and time series data that are known in the past but not known in the future. It also captures the interactions between different types of input features in multi-horizon forecasting and estimates the importance of these features to the forecasting outcome. \[\hat{y}_{i}(q,t,\tau)=f_{q}(\tau,y_{i,t-\xi t},o_{i,t-\xi t},k_{i,t-\xi t+\tau}, s_{i}) \tag{1}\] The multi-step prediction function is defined as shown in Equation (1). In wind power forecast Figure 3: The model architecture of TFT. ing, \(i\) denotes a different wind farm, and \(t\) denotes a different point in time. \(s_{i}\) denotes a static variable that does not change over time, and \(y_{i,t}\) denotes the target variable for prediction. The time-related dynamic variables represented by \(x_{i,t}=\left[\mathbf{\sigma}_{i,t},\mathbf{\kappa}_{i,t}\right]\) can be divided into two categories. \(\mathbf{\sigma}_{i,t}\) indicates variables that change over time and are not known in advance, such as meteorological data (e.g., wind speed, wind direction, temperature, etc.). The variables indicated by \(k_{i,t}\) change over time but can be inferred from known conditions, such as weeks, months, seasons and special holidays, etc. In addition, \(f_{q}\) denotes the model used for forecasting and the function expresses the value of quantile \(q\) at time \(t\) for forecasting the future at step \(\tau\). Unlike classic regression models, TFT uses quantile loss instead of the traditional MSE loss. That allows the TFT to fit multiple target regions in high-dimensional space. According to the definition of quantile loss, the quantile \(q\) takes on a value between 0 and 1. Usually, when different \(q\) are chosen, there is an imbalance between positive and negative errors in the loss function. Therefore, quantile loss can avoid overfitting and underfitting the model and achieve quantile regression. The TFT model uses the quantile group \(Q\) and obtains the final loss by weighted summing all \(q\) in \(Q\). The TFT model consists of several modules with different purposes, which can extract features from various types of input data. (1) The function of gating mechanisms is to forget unnecessary parts, which not only simplifies the structure of the model, but also improves the performance of the model on different tasks. (2) Variable selection networks identify important input features at each time step, which can alleviate the problem of traditional DNNs' over-fitting and predicting features with irrelevant targets, and hence improve the ability of the model to adapt to different samples. (3) Static covariate encoders moderate the temporal dynamics modeling by encoding static features, as static information may be important for predicting the target, which helps to improve the prediction results. (4) Temporal processing enables the model to learn long and short-term time dependencies from future inputs obtained through speculation and from time series data that are known in the past but not in the future. This part consists of two modules, the sequence-sequence layer and the multi-headed attention layer, which perform local processing and learn long-term dependencies. 
(5) Prediction intervals show that quantile predictions are helpful in understanding the output distribution. ### Gating mechanisms In the field of time series forecasting, there are data sets of varying sizes and quality. In order to make the TFT model more generalizable and adaptable to realistic and complex scenarios, the gated residual network (GRN) is used to address the complexity of the input time series data. When encountering small and noisy datasets, there is no need for more complex models. GRN provides the flexibility to control the degree of non-linear transformation applied to improve model prediction. The GRN is defined as shown in Equation (2). Its input consists of two types, the primary input data \(a\) and the optional context vector \(c\). In Equation (4), ELU is the activation function which may take negative values. In addition, ELU allows the unit activation means to be closer to 0 than other linear unsaturated activation functions (such as ReLU) [32]. \[\text{GRN}_{a}(\boldsymbol{a},\boldsymbol{c})=\text{LayerNorm}\left( \boldsymbol{a}+\text{GLU}_{a}\left(\boldsymbol{\eta}_{1}\right)\right) \tag{2}\] \[\boldsymbol{\eta}_{1}=\boldsymbol{W}_{1,a}\boldsymbol{\eta}_{2}+\boldsymbol{b} _{1,a} \tag{3}\] \[\boldsymbol{\eta}_{2}=\text{ELU}\left(\boldsymbol{W}_{2,a}\boldsymbol{a}+ \boldsymbol{W}_{3,a}\boldsymbol{c}+\boldsymbol{b}_{2,a}\right) \tag{4}\] The GLU can control the degree of non-linear transformation and improve the model's flexibility. It is defined as shown in Equation (5). \(\boldsymbol{\gamma}\) denotes the input, \(W\) and \(b\) denote the weights and bias, and \(\sigma(.)\)denotes the sigmoid activation function. The output of the sigmoid function is between 0 and 1 and serves to perform feature selection. The GLU is a crucial part of the GRN. When the input is small-scale data, a simple non-linear transformation may be required when the GLU can achieve this by outputting a vector close to 0. \[\text{GLU}_{w}(\mathbf{\gamma})=\sigma\left(\mathbf{W}_{4,w}\mathbf{\gamma}+\mathbf{b}_{4,w} \right)\odot\left(W_{5,w}\mathbf{\gamma}+\mathbf{b}_{5,w}\right) \tag{5}\] #### 3.2.2 Variable selection networks When the model has multiple input variables, it is difficult to determine the importance of target vector in prediction. The TFT model proposes Variable selection networks for variable input selection. However, instead of directly discarding features that do not contribute to the prediction results, it weights the input variables. Higher weights indicate higher levels of importance. This method not only mitigates the negative impact of unimportant feature vectors, but also improves the performance of the model. \[\nu_{x_{t}}=\text{softmax}\left(GRN_{y_{t}}\left(\mathbf{\bar{z}}_{t},\mathbf{c}_{s} \right)\right) \tag{6}\] \[\widetilde{\xi}_{t}=\sum_{i=1}^{m_{s}}\nu_{x_{t}}^{(i)}\widetilde{\xi}_{t}^{( i)} \tag{7}\] \[\tilde{\xi}_{t}^{(i)}=\text{GRN}_{\tilde{\xi}^{(i)}}\left(\xi_{t}^{(i)}\right) \tag{8}\] Variable selection networks utilize GRN onto each feature individually, then concatenate the features before inputting to the GRN again. Softmax is also used to generate feature weights, assigning the resulting weights to different input variables. The way to obtain the weights is shown in Equation (6). Equation (7) represents the combination method for weighting the features, where \(\nu_{x_{t}}^{(i)}\) is the weight of the feature selection, and \(\tilde{\xi}_{t}^{(i)}\) is the feature after the GRN non-linear processing. 
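To make Equations (2)-(5) concrete, the following minimal PyTorch sketch implements the GLU and GRN blocks. It assumes a single width d for all inputs and outputs, and is a reading aid rather than the pytorch-forecasting implementation used later in the experiments.

```python
# A compact sketch of the GLU and GRN in Equations (2)-(5); dimension handling
# is simplified (a single width d) and the context vector c is optional.
import torch
import torch.nn as nn


class GLU(nn.Module):
    """Gated linear unit, Eq. (5): sigmoid gate elementwise-scaling a linear map."""

    def __init__(self, d: int):
        super().__init__()
        self.gate = nn.Linear(d, d)
        self.lin = nn.Linear(d, d)

    def forward(self, gamma):
        return torch.sigmoid(self.gate(gamma)) * self.lin(gamma)


class GRN(nn.Module):
    """Gated residual network, Eqs. (2)-(4): ELU block, GLU gate, residual + LayerNorm."""

    def __init__(self, d: int):
        super().__init__()
        self.w2 = nn.Linear(d, d)
        self.w3 = nn.Linear(d, d, bias=False)  # context projection, no extra bias
        self.w1 = nn.Linear(d, d)
        self.glu = GLU(d)
        self.norm = nn.LayerNorm(d)

    def forward(self, a, c=None):
        eta2 = self.w2(a) + (self.w3(c) if c is not None else 0.0)
        eta2 = nn.functional.elu(eta2)        # Eq. (4)
        eta1 = self.w1(eta2)                  # Eq. (3)
        return self.norm(a + self.glu(eta1))  # Eq. (2)
```

For instance, GRN(d=16)(torch.randn(8, 16), torch.randn(8, 16)) returns a gated, normalized residual update of the primary input conditioned on the context vector. A variable selection network (Equations (6)-(8)) then applies one such GRN per input variable and a softmax over a shared GRN output to obtain the variable weights.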
The formula for calculating \(\tilde{\xi}_{t}^{(i)}\) is represented in Equation (8). #### 3.2.3 Static covariate encoders TFT produces four output variables \(cs,cc,ch\) and \(ce\) using four different GRNs. (1) temporal variable selection (\(cs\)) as the input to Variable selection networks. (2) local processing of temporal features (\(cc,ch\)), input LSTM as initialization state. (3) enriching temporal features with static information (\(ce\)) and input into the Static Enrichment. #### 3.2.4 Interpretable multi-head attention The interpretable multi-head attention module of the TFT model is based on the transformer model of multi-head attention, as shown in Equations (9) and (10). In the transformer model, the multi-headed attention mechanism forms multiple subspaces by dividing the model into multiple heads. Each head learns different weights, which facilitates the model to learn different aspects of the features. Better predictions can be achieved by collecting these features, but they are challenging to interpret. To increase the interpretability, the TFT model modifies the \(\nu\) matrix of each head to share weights, while the \(Q\) and \(K\) matrices do not share weights as previously. \[InterpretableMultiHead(Q,K,V)=\tilde{H}\,W_{H} \tag{9}\] \[\begin{split}\tilde{H}&=\tilde{A}(Q,K)VW_{V}\\ &=\left\{\frac{1}{m_{H}}\sum_{h=1}^{m_{H}}A\left(QW_{Q}^{(h)},KW_{K }^{(h)}\right)\right\}VW_{V}\\ &=\left\{\frac{1}{m_{H}}\sum_{h=1}^{m_{H}}\text{Attention }\left(QW_{Q}^{(h)},KW_{K}^{(h)},VW_{V}\right)\right\}\end{split} \tag{10}\] ### Signal decomposition algorithms Wu and Huang developed an improved method for EMD, the EEMD algorithm [33]. EEMD takes advantage of white noise having a zero average value, and changes the extreme value point of the signal by adding Gaussian white noise several times during the EMD decomposition process.The VMD algorithm is a signal processing algorithm based on Wiener filtering, Hilbert transform and frequency mixing [34].Compared to the EMD algorithm, it has a sound mathematical foundation and can separate signals accurately and efficiently. In order to extract multiple seasonal cycles of a time series, K. Bandara et al. proposed the multiple seasonal-trend decomposition based on loess (MSTL) decomposition algorithm[35]. MSTL extracts the multiple seasonal components, trend and residuals of the time series. ### Optimize parameters with optuna There are many papers on automatic parameter tuning of decomposition algorithms in wind power forecasting research. For example, An et al. proposed a PVMD algorithm that uses PSO to optimize the critical parameters of VMD \([K,\alpha]\)[36]. Yu et al. use the whale algorithm to adaptively optimize the critical parameters of the VMD [37]. Li et al. use the flower pollination algorithm to optimize the parameters of the VMD automatically and specify the decomposition loss as the criterion for evaluating the optimal parameters [38]. In these articles, the optimization VMD algorithm is independent of the prediction model. The evaluation criteria guiding the tuning of the parameters might vary, resulting in the final optimal parameters not necessarily being suitable for the specific prediction model. Therefore, we propose an algorithm for parameter selection of the critical parameters of the decomposition algorithm forecasting models. Optuna is an open-source hyperparametric optimization framework that enables automatic and efficient tuning of machine learning and deep learning algorithms [39]. 
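As a preview of how this is used here, the sketch below shows how Optuna's TPE sampler might drive the critical VMD parameters \([K,\alpha]\), using the validation error of the forecaster as the objective. The functions decompose_with_vmd and fit_and_score, the placeholder data and the search ranges are illustrative stand-ins, not the released implementation.

```python
# A minimal sketch of TPE-based tuning of the VMD parameters [K, alpha] with
# Optuna. The two helper functions are placeholders for the actual VMD
# decomposition and the model training / validation steps.
import numpy as np
import optuna

train_series = np.random.rand(1000)  # placeholder for the hourly wind power series


def decompose_with_vmd(series, K, alpha):
    # Placeholder for the actual VMD call; ignores alpha and returns K pseudo-modes.
    return np.array_split(series, K)


def fit_and_score(imfs):
    # Placeholder for training the forecaster on the IMFs and returning validation nMAE.
    return float(np.mean([np.std(m) for m in imfs]))


def objective(trial):
    K = trial.suggest_int("K", 3, 30)               # number of modes
    alpha = trial.suggest_int("alpha", 100, 10000)  # bandwidth constraint
    imfs = decompose_with_vmd(train_series, K=K, alpha=alpha)
    return fit_and_score(imfs)


study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=50)
print(study.best_params)
```

In this sketch only the objective function has to be supplied; the search loop itself is handled by Optuna.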
It contains many tuning algorithms (such as grid search, stochastic search and Bayesian optimization algorithms) to find the optimal solution to the problem automatically. Thus optuna can be applied to the tuning of most machine learning models. Optuna offers two main samplers with different functions: the covariance matrix adaptation evolution strategy (CMA-ES) samples the relationships between parameters, and the TPE samples the parameters independently. The algorithm uses the TPE to optimize the parameters of the signal decomposition, the flowchart is shown in Figure 4. First, a signal decomposition algorithm is selected and the intervals of its critical parameters are estimated based on the study in [33, 34]. The TPE algorithm is then used to select the possible values of the critical parameters, input them into the prediction model for training, and calculate the prediction error. Then, it determines if the prediction error has changed. If the prediction error is reduced, then the optimal value of the parameter is updated, and the process moves on to the next step. If the prediction error has not changed or has improved, skip to the next step. In the end, it determines if the upper limit of the iteration has been reached. If not reached, then jump to the TPE optimizer and continue the loop to optimize the values of the parameters. If the iteration termination condition is met, the optimal value of the critical parameter is output. ## 4 Experiments, Data description and evaluation metrics ### Data description We use the dataset from the La Haute Borne wind farm in Meuse, France, with a longitude of 5.6013 E and a latitude of 48.4503 N. Meuse is located on the west coast of the ocean and has an oceanic climate with an average annual temperature of 11.36\({}^{\circ}\)C. There is narrow annual temperature range and few extremes of temperature, with warm summers and cool winters. It is rich in wind energy and has high wind speeds due to the influence of the Atlantic sea breeze. The dataset uses the Engie wind dataset provided by the electricity company in France [40]. The dataset is derived from the SCADA system and records daily electricity production from 2012 to 2018 for four wind power turbines rated at 2 MW. Data from 2012 to 2015 were used as training data, the data from 2016 as the validation set, and data from 2017 and 2018 were used as test data. Each wind turbine has three features: power, wind turbine and meteorology data. The power features include active power, reactive power and apparent power. The wind turbine data include a total of twenty-two features such as converter torque, generator converter speed and pitch angle, etc. Figure 4: Flow chart for optimizing VMD parameters based on the TPE. The meteorology features include outdoor temperature, wind speed, and absolute wind direction. Each feature has average, minimum, maximum and stand-deviation values. The data set has 0.02% to 0.05% missing cells and 0.65% to 0.88% missing records. The missing values are shorter than one day, so we fill them with near day in similar weather conditions. For the very rare outliers (e.g., minus 10 degrees never happens in Grand Est), we replace them with the corresponding extreme values. The interval between data readings is 10 minutes. Due to the high instability of wind data, a very high data density makes prediction more difficult. Therefore, we resampled the data and changed the resolution of the dataset to one hour. 
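A minimal pandas sketch of this preprocessing step is given below. The file name and column name are illustrative placeholders for the Engie SCADA export, and the gap-filling rule (copying the same hour of the previous day) is a simplification of the nearby-day rule described above.

```python
# Resample the 10-minute SCADA records to hourly resolution and patch short
# gaps, as described above. File and column names are illustrative placeholders.
import pandas as pd

df = pd.read_csv("la-haute-borne-scada.csv", parse_dates=["Date_time"])
df = df.set_index("Date_time").sort_index()

# Fill short gaps (< 1 day) with the same hour of the previous day, a simple
# stand-in for "a nearby day with similar weather conditions".
df = df.fillna(df.shift(freq="1D"))

# Aggregate the 10-minute readings to hourly means.
hourly = df.select_dtypes("number").resample("1H").mean()
```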
The correlation between wind power production and meteorological variables varies according to geographical locations. The correlation between wind power production and wind speed is more significant than other meteorological variables at the La Haute Borne wind farm. Figure 5 shows wind power and wind speed scatter plots from the raw data collected from the four wind farms. Figure 6 shows the scatter plot obtained by dividing the data according to the different months. These figures show a progressive increase in the cluster of points around the diagonal, indicating a Figure 5: Scatter plot of wind power and wind speed from raw data Figure 6: Scatter plot of wind power and wind speed according to different months ### Evaluation Metrics In order to evaluate the performance of the proposed method, the following two metrics of statistical prediction and actual value error were used: nMAE and nRMSE. They normalize loss as percentages to illustrate the error range, which allows comparison of different wind turbines. And nMAE use absolute error to prevent losses from offsetting, with low sensitivity to outliers. nMAE and nRMSE are defined as shown below. \[nMAE=\frac{1}{T\,y_{max}}\sum_{i=1}^{T}|y_{i}-\hat{y}_{i}| \tag{11}\] \[nRMSE=\sqrt{\frac{1}{T\,y_{max}^{2}}\sum_{i=1}^{T}(y_{i}-\hat{y}_{i})^{2}} \tag{12}\] Where \(T\) denotes the forecast length, \(y_{i}\) denotes the actual wind power at point \(i\), \(\hat{y}_{i}\) is the predicted value of wind power at point \(i\), and \(y_{max}\) denotes the maximum value of the actual wind power. In wind power forecasting studies, the units of RMSE and MAE can be kW, kWH and percentage, depending on the forecast object. In this paper, the units of each evaluation metric are in percentages to allow comparison with other wind power forecasting models. ### Signal decomposition algorithm to decompose wind power sequences Wind power generation data has intense instability and intermittent time series data. The regularity of the series is not always obvious and it is not easy to detect patterns through observation. Similarly, it is difficult for prediction models to obtain accurate predictions from the raw data. To solve this problem, many studies have used signal decomposition algorithms on wind power sequences to obtain a number of subsequences with stronger regularity. The sum of the subsequences is equal to the original sequence and they are more predictable. So signal decomposition algorithms are common in wind power prediction. We use EEMD to decompose the wind power series to obtain three IMFs (including trend and two subs) and a trend-residual, as shown in Figure 7. The decomposed wind power generation data is less volatile and random, and the IMFs have some regularity, which helps to improve the accuracy of wind power forecasting. The VMD decomposition algorithm is an effective method to solve the mode conflation problem that occurs in EEMD. The decomposition results are shown in Figure 8. The decomposition results in three IMFs around a central frequency and a residual, which adaptively separates the IMFs in different frequency bands. Unlike the EEMD and VMD algorithms, MSTL is a time series decomposition algorithm that can reflect the patterns and cycles of wind power generation over time. The MSTL method was used to decompose the original wind power sequence, and the results are shown in Figure 9. 
As can be seen from the figure, decomposing the original series yields a trend, two period series and a residual, which extracts the multiple seasonal trends of the series. The low-frequency components represent the original data's overall trend, and the high-frequency components represent the local trend. As can be seen, the trend of EEMD is very close to the original data, similar to the smoothed data of the original data. However, the other parts are more frequent. VMD's trend is not close to the original data but is lower in all parts of the frequency. MSTL only has a relatively low frequency for trend, the other components have a high frequency, and the residuals are very close to the original data. ### Experimental results and analysis The machine learning models adopted in this section were implemented by Python 3.8 and Torch 1.13.1, Pytorch-forecasting 0.10.2 and Pytorch-lightning 1.7.2. This paper uses historical data collected from the SCADA system of the Engie wind farm. All models use the "TimeSeriesDataSet" for data pre-processing and early stopping to prevent over-fitting of the models. The performance of the proposed new framework is verified by nMAE, nRMSE and stability analysis. The TFT model performs optimally under the same decomposition algorithm conditions in the proposed framework. Here we list the parameters derived for each decomposition algorithm in the framework of the used TFT model. The parameters used for TPE-EEMD are: trials are 93, noise_width is 0.05 and MAX_IMF is 13; The parameters used for TPE-VMD are: alpha is 5700 and K is 30; The parameters used for TPE-MSTL are : periods are 6 and 48. The other hyperparameters are: the hidden layers are set as 11, the learning rate is set to 0.004, and the batch size is 32. In addition, the optimal parameters for the decomposition algorithms of ANN, LSTM, CNN-LSTM, RNN-LSTM and XGB models are also derived from this framework. We do not detail the specific values since they are not the focus of this paper. Figure 8: VMD algorithm decomposes wind power data Figure 7: EEMD algorithm decomposes wind power data In this study, TPE-VMD-TFT outperforms other methods. We have chosen two interval lengths of 24h and 48h for the prediction time. Here we first compare TPE-VMD-TFT with the individual models, and then compare TPE-VMD-TFT and models based on the proposed framework. Next, we verified the validity by analyzing the performance of TPE-VMD-TFT in different months, seasons and years. #### 4.2.1 Comparison of TPE-VMD-TFT and individual models ANN, LSTM, CNN-LSTM, RNN-LSTM, XGB and TFT are commonly used for wind power prediction. Here we compare them with the proposed TPE-VMD-TFT method as in Table 2. In TPE-VMD-TFT, VMD optimizes the parameters by TFT, which uses the data decomposed by VMD for training and prediction. Individual models use the original wind power sequence data for training and prediction. As can be seen from the table, the nMAE and nRMSE of the TPE-VMD-TFT method are smaller on the wind power data from all four turbines compared to the individual models. In 24-h ahead wind power forecasting, the values of nMAE for the TPE-VMD-TFT method are 4.26%, 4.02%, 4.63% and 4.59%. The prediction results at wind turbine 1 are 70.29%, 67.08%, 66.80%, 66.54%, 66.27% and 64.29% lower than the prediction errors of the ANN, LSTM, CNN-LSTM, RNN-LSTM, XGB and TFT models. Similarly, its values for nRMSE were 6.59%, 6.21%, 6.78% and 6.75%. 
The prediction results at wind turbine 1 were 64.01%, 63.29%, 66.26%, 63.43%, 62.51% and 62.51% lower than the prediction errors of the individual models. The TPE-VMD-TFT method predicted 48-h ahead with nMAE values of 7.33%, 7.5%, 7.99% and 7.35%. The prediction results at wind turbine 1 were 60.21%, 56.93%, 51.04%, 51.62%, 51.46% and 54.36% lower than the prediction errors of the ANN, LSTM, CNN-LSTM, RNN-LSTM, XGB and TFT models. Similarly, its values for nRMSE were 10.87%, 10.78%, 11.59% and 10.83%. The prediction results at wind turbine 1 were 49.77%, 59.74%, 51.69%, 51.19%, 45.57% and 52.74% lower than the prediction errors of the individual models. Even when the forecasting model is the same, the TPE-VMD-TFT method was significantly better than the TFT alone. This may be because the TPE-VMD-TFT method uses an optimized VMD to decompose the wind power series, producing smoother IMFs compared to the original data (subsection 4.3) and reducing the impact of noise on the prediction results. Thus, regarding overall results, TPE-VMD-TFT significantly outperforms the other individual models. Figure 9: MSTL algorithm decomposes wind power data. We draw scatter plots to compare the prediction performance of the individual models and TPE-VMD-TFT on each day in the test set. Using the 24-h prediction as an example, Figure 10 represents the predicted and actual values for the TPE-VMD-TFT method and the other models on the four wind turbines. The horizontal axis represents the actual value of wind power, and the vertical axis represents the model's predicted value. The red diagonal indicates the ideal prediction, the yellow dots indicate the TPE-VMD-TFT method and the blue dots indicate the other models. As can be seen from the figure, the yellow points are mainly distributed around the diagonal. In contrast, the blue points are more dispersed, indicating that the proposed TPE-VMD-TFT method has a higher accuracy rate than the individual models. For the ANN and XGB models, we found that the points they predicted were located above the diagonal and closer to the origin of the coordinates, suggesting that they were biased towards higher predictions for lower power. For the LSTM, CNN-LSTM and RNN-LSTM models, we find that many prediction points are distributed in a region parallel to the horizontal axis, and the red dots indicate the best prediction points.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Wind Turbine 1} & \multicolumn{2}{c|}{\multirow{2}{*}{Model}} & \multicolumn{2}{c|}{Wind Turbine 2} \\ \cline{2-2} \cline{5-10} & \multicolumn{2}{c|}{24-h} & \multicolumn{2}{c|}{48-h} & \multirow{2}{*}{Model} & \multicolumn{2}{c|}{24-h} & \multicolumn{2}{c|}{48-h} \\ \cline{3-10} & nMAE & nRMSE & nMAE & nRMSE & & nMAE & nRMSE & nMAE & nRMSE \\ \hline ANN & 14.34\% & 18.31\% & 18.42\% & 21.64\% & ANN & 13.65\% & 17.31\% & 15.47\% & 18.95\% \\ \hline LSTM & 12.94\% & 17.95\% & 17.02\% & 27.00\% & LSTM & 11.89\% & 16.79\% & 16.11\% & 25.14\% \\ \hline CNN-LSTM & 12.83\% & 19.53\% & 14.97\% & 22.50\% & CNN-LSTM & 11.94\% & 17.33\% & 13.84\% & 20.82\% \\ \hline RNN-LSTM & 12.73\% & 18.02\% & 15.15\% & 22.27\% & RNN-LSTM & 12.20\% & 16.94\% & 14.01\% & 20.56\% \\ \hline XGB & 12.63\% & 17.58\% & 15.10\% & 19.97\% & XGB & 11.61\% & 16.31\% & 13.81\% & 18.45\% \\ \hline TFT & 11.93\% & 17.58\% & 16.06\% & 23.00\% & TFT & **11.41**\% & 16.74\% & **14.28**\% & 20.04\% \\ \hline TPE-VMD-TFT & 4.26\% & 6.59\% & 7.33\% & 10.87\% & TPE-VMD-TFT & 4.02\% & 6.21\% & 7.50\% & 10.78\% \\ \hline \multicolumn{10}{|c|}{Wind Turbine 3} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ \cline{3-10} Model & 24-h & \multicolumn{2}{c|}{48-h} & \multicolumn{2}{c|}{48-h} & \multicolumn{2}{c|}{Model} & \multicolumn{2}{c|}{24-h} & \multicolumn{2}{c|}{48-h} \\ \cline{3-10} & nMAE & nRMSE & nMAE & nRMSE & & nMAE & nRMSE & nMAE & nRMSE \\ \hline ANN & 16.11\% & 19.53\% & 20.77\% & 23.74\% & ANN & 17.42\% & 20.30\% & 19.55\% & 22.53\% \\ \hline LSTM & 15.36\% & 19.63\% & 19.91\% & 29.47\% & LSTM & 14.56\% & 18.51\% & 18.98\% & 27.49\% \\ \hline CNN-LSTM & 14.94\% & 19.65\% & 16.73\% & 23.62\% & CNN-LSTM & 14.05\% & 18.90\% & 17.80\% & 28.13\% \\ \hline RNN-LSTM & 13.87\% & 19.05\% & 16.96\% & 23.27\% & RNN-LSTM & 13.81\% & 18.28\% & 15.85\% & 22.48\% \\ \hline XGB & 14.73\% & 18.95\% & 17.33\% & 21.56\% & XGB & 13.47\% & 18.08\% & 15.60\% & 20.19\% \\ \hline TFT & 15.78\% & 20.20\% & 17.15\% & 23.51\% & TFT & 13.92\% & 19.05\% & 15.18\% & 22.46\% \\ \hline TPE-VMD-TFT & 4.63\% & 6.78\% & 7.99\% & 11.59\% & TPE-VMD-TFT & 4.59\% & 6.75\% & 7.35\% & 10.83\% \\ \hline \end{tabular} \end{table} Table 2: TPE-VMD-TFT method and other models for 24-h and 48-h ahead prediction error results. Figure 10: Scatter plot of the models’ predicted and actual values on four wind turbines. suggesting that they predict them as the same range of values for some point, regardless of the actual value. No significant aggregation areas emerged for the TFT model alone, and its points in high power were scattered. We can therefore see that ANN, LSTM, CNN-LSTM, RNN-LSTM and XGB produce certain fixed output patterns to fit the training metrics in the complex raw wind power data. However, the TFT and XGB model does not produce this pattern. In addition, we find that the individual models have very few points in the upper right region of the scatter plot, indicating that they are poor predictors of the peak power. For the TPE-VMD-TFT method, we find that the prediction points are almost evenly distributed on both sides of the ideal prediction, and there are also more points in the peak region, with less dispersion in the upper right point than in the lower left. 
By comparison, we can find that the TPE-VMD-TFT method has the best prediction at low power compared to the other individual models. Furthermore, at high power, although the prediction error is higher than at low power, it is a significant improvement over the other models. ### Comparison of TPE-VMD-TFT and models based on the proposed framework The proposed TPE-VMD-TFT performed much better than the other individual models in the previous subsection. At the same time, the decomposition algorithm significantly impacts the accuracy of the prediction. For example, the TFT in the previous subsection has a lower prediction accuracy than the TPE-VMD-TFT. Therefore, the algorithm used in our TPE-VMD-TFT can be generalized to other individual models so that they can also be trained and make predictions using the optimized decomposition algorithm. In this subsection, we compare and analyze TPE-VMD-TFT with other individual models based on the proposed framework. The comparison methods include all the decomposition algorithms and models mentioned earlier: TPE-EEMD-ANN, TPE-EEMD-LSTM, TPE-EEMD-CNN-LSTM, TPE-EEMD-RNN-LSTM, TPE-EEMD-XGB, TPE-EEMD-TFT, TPE-VMD-ANN, TPE-VMD-LSTM, TPE-VMD-CNN-LSTM, TPE-VMD-RNN-LSTM, TPE-VMD-XGB, TPE-MSTL-ANN, TPE-MSTL-LSTM, TPE-MSTL-CNN-LSTM, TPE-MSTL-RNN-LSTM, TPE-MSTL-XGB and TPE-MSTL-TFT. Table 3 shows the results of the TPE-VMD-TFT and other individual models based on the proposed framework for the 24-h and 48-h ahead wind power forecasts. The lowest values are obtained by TPE-VMD-TFT, and the TFT model performs best under each of the decomposition algorithms. This means the TPE-VMD-TFT method outperforms the TPE-EEMD-TFT and TPE-MSTL-TFT models on wind power data from all four turbines. Regarding wind turbine 1, the TPE-VMD-TFT method showed a 32.17% and 60.11% decrease in nMAE and a 35.07% and 56.96% decrease in nRMSE compared to the TPE-EEMD-TFT and TPE-MSTL-TFT models when predicting 24-h ahead. In predicting 48-h ahead, the nMAE of the TPE-VMD-TFT method decreased by 18.10% and 44.72%, and the value of nRMSE decreased by 15.21% and 42.43%, compared to the TPE-EEMD-TFT and TPE-MSTL-TFT models. The above analysis illustrates that there are differences in the predictive effectiveness of the three methods obtained by combining the TFT model with the decomposition algorithms under the proposed framework, with the TPE-VMD-TFT method having the lowest prediction error. For every individual model, the decomposition methods impact the proposed framework differently. The VMD decomposition algorithm performed better than EEMD and MSTL in predicting the 24-h ahead wind power generation results. The value of nMAE for the TPE-VMD-CNN-LSTM model on the wind turbine 1 data was 6.23%, a decrease of 27.56% and 44.03% compared to the TPE-EEMD-CNN-LSTM and TPE-MSTL-CNN-LSTM models. The value of nRMSE was 9.22%, a decrease of 24.98% and 42.77%. Most of the VMD-based methods performed better than their EEMD and MSTL counterparts in predicting 48-h ahead wind power, but there were discrepancies in the predictions of some of the models. For example, the TPE-VMD-LSTM model performs worse than the TPE-EEMD-LSTM model regarding nMAE and nRMSE values on the wind turbine 1 and 2 data. It is therefore difficult to determine in advance an optimal decomposition method for a given dataset.
Thus, our proposed algorithm can \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{model} & \multicolumn{8}{|c|}{24-h} & \multicolumn{8}{|c|}{48-h} \\ \cline{2-13} & \multicolumn{3}{|c|}{TPE-EMD} & \multicolumn{2}{|c|}{TPE-VMD} & \multicolumn{2}{|c|}{TPE-MSTL} & \multicolumn{2}{|c|}{TPE-EEMD} & \multicolumn{2}{|c|}{TPE-VMD} & \multicolumn{2}{|c|}{TPE-MSTL} \\ \cline{2-13} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} \\ \hline ANN & 7.57\% & 10.65\% & 7.27\% & 9.37\% & **10.34\%** & **12.75\%** & 16.55\% & 20.35\% & 15.97\% & 18.51\% & 17.28\% & 20.06\% \\ \hline LSTM & 7.75\% & 10.84\% & 5.98\% & 8.96\% & 11.66\% & 15.84\% & 16.34\% & 25.56\% & 17.61\% & 27.37\% & 17.61\% & 27.31\% \\ \hline CNN-LSTM & 8.60\% & 12.29\% & 6.23\% & 9.22\% & 11.13\% & 16.11\% & 14.85\% & 22.18\% & 14.66\% & 22.12\% & 14.87\% & 22.70\% \\ \hline RNN-LSTM & 8.75\% & 12.76\% & 7.07\% & 9.78\% & 12.05\% & 15.48\% & 15.62\% & 21.95\% & 15.20\% & 22.22\% & 15.11\% & 22.30\% \\ \hline XGB & 8.41\% & 12.34\% & 7.17\% & 10.87\% & 11.92\% & 16.46\% & 10.63\% & 14.98\% & 10.23\% & 14.80\% & 14.66\% & 19.37\% \\ \hline TFT & **6.82\%** & **10.15\%** & **4.26\%** & **6.59\%** & 10.68\% & 15.31\% & **8.95\%** & **12.82\%** & **7.33\%** & **10.87\%** & **13.19\%** & **19.41\%** \\ \hline \multicolumn{13}{|c|}{Wind Turbine 2} \\ \cline{2-13} & \multicolumn{3}{|c|}{24-h} & \multicolumn{8}{|c|}{48-h} \\ \cline{2-13} & \multicolumn{3}{|c|}{TPE-EMD} & \multicolumn{2}{|c|}{TPE-VMD} & \multicolumn{2}{|c|}{TPE-MSTL} & \multicolumn{2}{|c|}{TPE-EEMD} & \multicolumn{2}{|c|}{TPE-VMD} & \multicolumn{2}{|c|}{TPE-MSTL} \\ \cline{2-13} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} \\ \hline ANN & 7.72\% & 10.40\% & 6.27\% & 8.26\% & **9.39\%** & **11.45\%** & 13.82\% & 18.20\% & 10.38\% & 13.34\% & 16.02\% & 18.61\% \\ \hline LSTM & 7.46\% & 10.28\% & 6.08\% & 8.89\% & 10.27\% & 15.07\% & 15.92\% & 24.63\% & 17.68\% & 26.13\% & 18.32\% & 26.61\% \\ \hline CNN-LSTM & 7.84\% & 11.13\% & 5.73\% & 8.33\% & 10.97\% & 15.01\% & 13.91\% & 20.72\% & 13.54\% & 20.40\% & 13.83\% & 20.93\% \\ \hline RNN-LSTM & 8.39\% & 12.25\% & 5.91\% & 8.43\% & 12.27\% & 15.50\% & 14.47\% & 20.20\% & 14.12\% & 20.41\% & 14.47\% & 20.26\% \\ \hline XGB & 7.62\% & 11.29\% & 6.83\% & 10.21\% & 11.10\% & 15.49\% & **9.74\%** & **13.85\%** & 9.64\% & 13.71\% & 14.01\% & 18.49\% \\ \hline TFT & **6.55\%** & **9.55\%** & **4.02\%** & **6.21\%** & 9.41\% & 14.13\% & 9.89\% & 14.10\% & **7.50\%** & **10.78\%** & **12.12\%** & **17.35\%** \\ \hline \multicolumn{13}{|c|}{Wind Turbine 3} \\ \cline{2-13} & \multicolumn{3}{|c|}{24-h} & \multicolumn{8}{|c|}{48-h} \\ \cline{2-13} & \multicolumn{3}{|c|}{TPE-EMD} & \multicolumn{2}{|c|}{TPE-VMD} & \multicolumn{2}{|c|}{TPE-MSTL} & \multicolumn{2}{|c|}{TPE-EEMD} & \multicolumn{2}{|c|}{TPE-VMD} & \multicolumn{2}{|c|}{TPE-MSTL} \\ \cline{2-13} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & 
\multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} & \multicolumn{1}{|c|}{nMAE} & \multicolumn{1}{|c|}{nRMSE} \\ \hline ANN & 10.49\% & 13.16\% & 10.87\% & 12.91\% & 11.12\% & **13.58\%** & 22.43\% & 25.33\% & 17.71\% & 20.23\% & 19.44\% & 22.47\% \\ \hline LSTM & 9.29\% & 11.76\% & 7.74\% & 10.21\% & 11.99\% & 16.79\% & 19.75\% & 29.02\% & 19.66\% & 29.05\% & 20.03\% & 29.52\% \\ \hline CNN-LSTM & 9.23\% & 12.60\% & 8.00\% & 10.29\% & 12.90\% & 17.07\% & 16.04\% & 22.59\% & 16.50\% & 23.55\% & 16.63\% & 23.81\% \\ \hline RNN-LSTM & 9.36\% & 13.22\% & 9.08\% & 11.04\% & 13.26\% & 16.64\% & 17.82\% & 22.92\% & 17.24\% & 23.12\% & 17.07\% & 23.18\% \\ \hline XGB & 9.56\% & 13.01\% & 8.17\% & 11.36\% & 12.65\% & 16.93\% & 12.02\% & 15.80\% & 11.83\% & 15.56\% & 15.82\% & **20.18\%** \\ \hline TFT & **7.62\%** & **11.06\%** automatically find the most suitable parameters on other datasets and has a higher expectation of achieving high performance than other methods that fix the decomposition algorithm and model. From Table 3 and the above analysis, we find that TPE-VMD-TFT is the best of the adopted models. However, a direct comparison may not be intuitive due to the large variety of models and decomposition algorithms. Therefore, we individually compare TPE-VMD-TFT with each class of Figure 11: The scatter plot of the model’s 24-h ahead forecast results. Figure 12: The scatter plot of the model’s 48-h ahead forecast results. models, e.g., TPE-VMD-TFT and TPE-VMD-ANN, TPE-EEMD-ANN and TPE-MSTL-ANN. Scatter plots comparing the 24h and 48h results are shown in Figures 11 and 12. The horizon axis represents the model's nMAE value and the vertical axis represents the nRMSE value of the model. To simplify the description, the model formed by combining the proposed framework under an arbitrary decomposition algorithm is denoted as TPE-Decomp-Model. Such as TPE-Decomp-TFT denotes TPE-VMD-TFT, TPE-EEMD-TFT and TPE-MSTL-TFT. As can be seen in fugires, the blue points representing TPE-VMD-TFT are distributed near the origin of the scatter plot on all four wind turbine data, indicating that it predicts better than the other models formed by combining the proposed framework. This suggests that the TPE-VMD-TFT method has better prediction results on the more volatile wind power data. For the scatter plots of TPE-Decomp-LSTM, TPE-Decomp-RNN-LSTM and TPE-Decomp-CNN-LSTM, the distances between their points are similar under different decomposition algorithms, so we speculate that these models are not significantly affected by different decomposition algorithms. Furthermore, the points on them and TPE-VMD-TFT are distributed in the lower left and upper right corners, indicating a large gap between TPE-Decomp-LSTM, TPE-Decomp-RNN-LSTM, TPE-Decomp-CNN-LSTM and TPE-VMD-TFT. For TPE-Decomp-ANN, TPE-Decomp-XGB and TPE-Decomp-TFT, their point distributions are more dispersed along the diagonal and the general characteristics can be seen as: VMD is better than EEMD, and EEMD is better than MSTL. Therefore, decomposition algorithms have a major influence on the models, and the VMD perform best in most models. We list the improvement of nMAE and nRMSE of TPE-VMD-TFT in Table 4, compared with the second-best model. The second-best model is determined by each wind turbine in terms of nMAE or nRMSE. 
In the 24-h ahead forecasting, the nMAE of the TPE-VMD-TFT method is 28.76%, 29.84%, 39.24% and 29.17% lower than that of the second-best model for the four wind turbines, an average reduction of 31.75%. For nRMSE, the TPE-VMD-TFT approach achieves an average reduction of 28.74% over the second-best model. Similarly, in the 48-h ahead forecasting, the TPE-VMD-TFT method achieves average reductions of 20.79% and 16.93% over the second-best model in terms of nMAE and nRMSE, respectively.

Figures 13 and 14 show the monthly results of the TPE-VMD-TFT method for 24-h and 48-h ahead wind power forecasting. In the 24-h ahead results, the monthly nMAE values remain below 7% for all wind turbines, except for wind turbine 1, which has an nMAE value of 7.01% for December and less than 7% for all other months. Forecast errors are relatively low from April to September and relatively high from January to March and from October to December. The four wind turbines have nMAE values of 2.59%, 2.49%, 3.21% and 3.08% in August, lower than in the other months of the year. In the 48-h ahead wind power forecasts, the nMAE values for the four wind turbines range from 4% to 13% throughout the year. Forecast errors for December are relatively high, with nMAE values above 11%, while the errors from March to September are relatively low, with nMAE values below 8%.

Figure 13: Monthly results of 24-h ahead forecasting based on the TPE-VMD-TFT method (nMAE).

Figure 14: Monthly results of 48-h ahead forecasting based on the TPE-VMD-TFT method (nMAE).

Figure 15 shows the results of the TPE-VMD-TFT method for 24-h and 48-h ahead wind power forecasting by season. The horizontal axis represents the four seasons, and the vertical axis represents the prediction error. Considering the geographical location of the wind farm, March, April and May are classified as spring; June, July and August as summer; September, October and November as autumn; and December, January and February as winter. When predicting 24-h ahead, the errors for all four wind turbines are relatively low in all seasons, with nMAE values ranging from 3% to 6%. The lowest nMAE values all occur in summer, at 3.3%, 3.16%, 4.06% and 3.84%, and the highest all occur in winter, at 5.81%, 5.41%, 5.78% and 5.85%. When predicting 48-h ahead, the nMAE values for the four wind turbines range from 5% to 11% across all seasons, with relatively similar errors in spring, summer and autumn and relatively high errors in winter.

Figure 15: Seasonal results of 24-h and 48-h ahead forecasting based on the TPE-VMD-TFT method (nMAE).

Figure 16 shows the results of the TPE-VMD-TFT method for 24-h and 48-h ahead wind power forecasting by year. The horizontal axis represents the different years, and the vertical axis represents the forecast error. For the 24-h ahead results, all four wind turbines have lower prediction errors in 2017 than in 2018, and wind turbine 2 shows the largest difference, at 3.99% versus 6.49%. When forecasting 48-h ahead, the errors for 2017 and 2018 are relatively similar: wind turbine 4 has almost the same error in both years, while wind turbine 1 shows the largest difference, at 7.26% versus 9.63%.

Figure 16: Annual results of 24-h and 48-h ahead forecasting based on the TPE-VMD-TFT method (nMAE).

From the above analysis, the prediction accuracy of the TPE-VMD-TFT method is relatively high across different months, seasons and years, indicating that the TPE-DTBM algorithm proposed in this paper is effective in improving the prediction performance of the TFT model.
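The monthly, seasonal and annual groupings behind Figures 13-16 amount to tagging each test-set timestamp and averaging the absolute errors within each group before normalizing. A minimal pandas sketch is given below; the rated-capacity normalization, the column names and the synthetic data are assumptions made for illustration, while the season mapping follows the classification stated above.

```python
import numpy as np
import pandas as pd

rated_capacity = 2050.0  # assumed normalization constant (kW)

# Illustrative test set: hourly timestamps with actual and predicted power.
idx = pd.date_range("2017-01-01", periods=8832, freq="h")
rng = np.random.default_rng(1)
actual = rng.uniform(0.0, rated_capacity, idx.size)
df = pd.DataFrame({"actual": actual,
                   "predicted": actual + rng.normal(0.0, 100.0, idx.size)},
                  index=idx)
df["abs_err"] = (df["actual"] - df["predicted"]).abs()

# Months mapped to seasons as stated in the text.
season_of = {12: "winter", 1: "winter", 2: "winter",
             3: "spring", 4: "spring", 5: "spring",
             6: "summer", 7: "summer", 8: "summer",
             9: "autumn", 10: "autumn", 11: "autumn"}

# Grouped nMAE values (in %), as plotted in Figures 13-16.
monthly_nmae = 100.0 * df["abs_err"].groupby(df.index.month).mean() / rated_capacity
seasonal_nmae = 100.0 * df["abs_err"].groupby(df.index.month.map(season_of)).mean() / rated_capacity
yearly_nmae = 100.0 * df["abs_err"].groupby(df.index.year).mean() / rated_capacity
print(monthly_nmae, seasonal_nmae, yearly_nmae, sep="\n\n")
```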
Figure 17 shows box plots of the prediction errors of all wind power forecasting models. The horizontal axis indicates the models involved, and the vertical axis indicates the distribution of their prediction errors. Since the test set contains 8832 time points, even an outlier rate of 2%-10% would yield at least 170 outliers per method, making the outliers difficult to plot clearly. Therefore, we still use Q1, Q2 and Q3 as the lower box edge, the median and the upper box edge, but take the minimum and maximum values of the data as the lower and upper ends of the whiskers. The blue boxes in the figure indicate the error distributions of the individual models, the yellow boxes the models based on the TPE-MSTL algorithm, the green boxes the models based on the TPE-EEMD algorithm, and the red boxes the models based on the TPE-VMD algorithm. As can be seen from the figure, the medians of the blue boxes are mostly higher and their boxes are longer, indicating that the individual models are both less accurate and less stable. The red boxes mostly have lower medians and shorter boxes than the boxes of the other colors, indicating that the models based on the TPE-VMD algorithm have better predictive performance and are more stable than those based on the TPE-EEMD and TPE-MSTL algorithms. The TPE-VMD-TFT method has the lowest median and the shortest box, indicating that it performs best among all the compared models.

Figure 17: Box plot of the prediction errors of the models for 24-h and 48-h ahead forecasting.

To visualize the performance of the proposed TPE-VMD-TFT method on the wind power data, Figure 18 shows its 24-h and 48-h ahead wind power predictions. The black and red lines indicate the actual and predicted values of wind power, respectively. The two lines overlap more often when predicting 24-h ahead, indicating that the predicted values are closer to the actual values. Although the prediction performance of the TPE-VMD-TFT method decreases when predicting 48-h ahead, it remains reasonable, because predicting wind power 48-h ahead is more complex than predicting 24-h ahead, with greater instability and uncertainty in the data. In summary, for the more volatile wind power data, the prediction curves of the TPE-VMD-TFT method are closer to the original wind power series, yielding higher accuracy than the individual models. Decomposing the wind power series with the TPE-VMD framework reduces the influence of noise on the prediction results, effectively extracts features for the TFT model, and improves the model's prediction performance, as expected.

Figure 18: 24-h and 48-h ahead prediction results of the TPE-VMD-TFT method.
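As described above for Figure 17, the whiskers of the box plots are drawn at the minimum and maximum of the errors rather than at the usual 1.5 IQR positions, so no separate outlier markers are needed. In matplotlib this corresponds to setting the whisker percentiles to 0 and 100, as in the minimal sketch below; the error samples and model names are illustrative assumptions, not data from this work.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Illustrative absolute-error samples (in %) for a few of the compared models.
errors = {
    "TFT": rng.gamma(2.0, 2.5, 8832),
    "TPE-MSTL-TFT": rng.gamma(2.0, 2.0, 8832),
    "TPE-EEMD-TFT": rng.gamma(2.0, 1.5, 8832),
    "TPE-VMD-TFT": rng.gamma(2.0, 1.0, 8832),
}

fig, ax = plt.subplots()
# whis=(0, 100) places the whiskers at the data minimum and maximum, so the
# 8832-point test set produces no separate outlier markers (cf. Figure 17).
ax.boxplot(list(errors.values()), labels=list(errors.keys()), whis=(0, 100))
ax.set_ylabel("Prediction error (%)")
ax.tick_params(axis="x", labelrotation=30)
plt.tight_layout()
plt.show()
```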
## 5 Conclusion

Accurate wind power forecasting plays an essential role in power system scheduling and planning. However, wind power forecasting is challenging because of its high uncertainty, discontinuity and violent fluctuations. In this paper, we proposed a novel medium-term forecasting framework based on TPE and decomposition algorithms, which defines the TPE-VMD-TFT method for predicting wind power from wind turbines. Much of the literature relies on the empirical selection of the hyperparameters of the decomposition algorithm, which can lead to sub-optimal or inferior predictions. To alleviate these problems, we proposed the TPE-DTBM algorithm. The TPE-DTBM algorithm is not only the critical component of the TPE-VMD-TFT approach, but is also generalizable to other common decomposition algorithms and models for wind power forecasting. We conducted experiments on the benchmark Engie wind dataset from the French electricity company and used nMAE and nRMSE to evaluate the performance of the proposed method.

For 24-h ahead wind power prediction, the nMAE values of our proposed method were 4.26%, 4.02%, 4.63% and 4.59% for the four wind turbines, and the nRMSE values were 6.59%, 6.21%, 6.78% and 6.75%, respectively. For 48-h ahead prediction, the nMAE values were 7.33%, 7.5%, 7.99% and 7.35%, and the nRMSE values were 10.87%, 10.78%, 11.59% and 10.83%. Compared to other individual time series prediction models (e.g., ANN, LSTM, CNN-LSTM, RNN-LSTM and XGB), the nMAE values were reduced by more than 50% and the nRMSE values by more than 40%. The proposed method also outperforms wind power forecasting methods that obtain data features through other decomposition algorithms. The TPE-VMD-TFT method uses the TPE-DTBM algorithm to optimize the parameters of the VMD used to decompose the wind power series, which effectively extracts the characteristics of the wind power and reduces the noise in the historical data, significantly improving the prediction accuracy.

To further analyze and verify the validity of the TPE-VMD-TFT method, the forecasts were grouped by month, season and year. The experiments show that the proposed method achieves high prediction accuracy in different months, seasons and years, indicating that the TPE-DTBM algorithm proposed in this paper effectively improves the prediction performance of the TFT model. A box plot of the prediction errors of the wind power forecasting models demonstrates the higher stability of the proposed method compared to the other models. For the wind power forecasting problem, the novel medium-term forecasting framework proposed in this paper and the TPE-VMD-TFT method defined from it therefore offer superior forecasting performance and generalization capability.

However, some more challenging issues still need to be investigated. For example, datasets from different geographical locations are not considered, although meteorological conditions differ from region to region and lead to differences in prediction accuracy. In addition, the proposed framework can be extended to other energy forecasting areas, such as photovoltaic power and wind speed prediction, which is the direction of our future efforts.

## Acknowledgments

TBD

Data Availability Statement. TBD